Workaround for slow demo preview image loading in mddocs (#11412)
* Small tests for demo video workaround
* Small fix
* Add workaround for langchain-chatchat demo video
* Small fix
* Small fix
* Update for other demo videos in quickstart
* Add missing for text-generation-webui quickstart
This commit is contained in:

parent 5e823ef2ce
commit ecb9efde65

11 changed files with 85 additions and 13 deletions
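Every hunk below applies the same substitution: a bare markdown link to the demo `.mp4` (whose preview can load slowly in mddocs) is replaced with an HTML table showing a clickable `.png` preview image plus a plain-text fallback link. A minimal sketch of the pattern, where `DEMO` is a placeholder for each page's asset name under `https://llm-assets.readthedocs.io/en/latest/_images/`:

```html
<!-- Before: bare link to the video; the preview can load slowly in mddocs -->
<!-- [](https://llm-assets.readthedocs.io/en/latest/_images/DEMO.mp4) -->

<!-- After: a static PNG preview that links to the MP4, plus a text fallback.
     "DEMO" is a placeholder; each quickstart substitutes its own asset name. -->
<table width="100%">
  <tr>
    <td>
      <a href="https://llm-assets.readthedocs.io/en/latest/_images/DEMO.mp4">
        <img src="https://llm-assets.readthedocs.io/en/latest/_images/DEMO.png"/>
      </a>
    </td>
  </tr>
  <tr>
    <td align="center">
      You could also click
      <a href="https://llm-assets.readthedocs.io/en/latest/_images/DEMO.mp4">here</a>
      to watch the demo video.
    </td>
  </tr>
</table>
```

The langchain-chatchat page differs only in showing two side-by-side previews (English and 简体中文) in a single table.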
				
			
@@ -4,7 +4,14 @@
 
 See the demo of finetuning LLaMA2-7B on Intel Arc GPU below.
 
-[](https://llm-assets.readthedocs.io/en/latest/_images/axolotl-qlora-linux-arc.mp4)
+<table width="100%">
+  <tr>
+    <td><a href="https://llm-assets.readthedocs.io/en/latest/_images/axolotl-qlora-linux-arc.mp4"><img src="https://llm-assets.readthedocs.io/en/latest/_images/axolotl-qlora-linux-arc.png"/></a></td>
+  </tr>
+  <tr>
+    <td align="center">You could also click <a href="https://llm-assets.readthedocs.io/en/latest/_images/axolotl-qlora-linux-arc.mp4">here</a> to watch the demo video.</td>
+  </tr>
+</table>
 
 ## Quickstart
 
@@ -4,7 +4,7 @@
 
 *See the demos of running LLaMA2-7B (English) and ChatGLM-3-6B (Chinese) on an Intel Core Ultra laptop below.*
 
-<table border="1" width="100%">
+<table width="100%">
   <tr>
     <td align="center" width="50%">English</td>
     <td align="center" width="50%">简体中文</td>
@@ -12,7 +12,10 @@
   <tr>
     <td><a href="https://llm-assets.readthedocs.io/en/latest/_images/langchain-chatchat-en.mp4"><img src="https://llm-assets.readthedocs.io/en/latest/_images/langchain-chatchat-en.png"/></a></td>
     <td><a href="https://llm-assets.readthedocs.io/en/latest/_images/langchain-chatchat-cn.mp4"><img src="https://llm-assets.readthedocs.io/en/latest/_images/langchain-chatchat-cn.png"/></a></td>
-</tr>
+  </tr>
+  <tr>
+    <td colspan="2" align="center">You could also click <a href="https://llm-assets.readthedocs.io/en/latest/_images/langchain-chatchat-en.mp4">English</a> or <a href="https://llm-assets.readthedocs.io/en/latest/_images/langchain-chatchat-cn.mp4">简体中文</a> to watch the demo videos.</td>
+  </tr>
 </table>
 
 > [!NOTE]
@@ -5,7 +5,14 @@
 
 Below is a demo of using `Continue` with [CodeQWen1.5-7B](https://huggingface.co/Qwen/CodeQwen1.5-7B-Chat) running on Intel A770 GPU. This demo illustrates how a programmer used `Continue` to find a solution for the [Kaggle's _Titanic_ challenge](https://www.kaggle.com/competitions/titanic/), which involves asking `Continue` to complete the code for model fitting, evaluation, hyper parameter tuning, feature engineering, and explain generated code.
 
-[](https://llm-assets.readthedocs.io/en/latest/_images/continue_demo_ollama_backend_arc.mp4)
+<table width="100%">
+  <tr>
+    <td><a href="https://llm-assets.readthedocs.io/en/latest/_images/continue_demo_ollama_backend_arc.mp4"><img src="https://llm-assets.readthedocs.io/en/latest/_images/continue_demo_ollama_backend_arc.png"/></a></td>
+  </tr>
+  <tr>
+    <td align="center">You could also click <a href="https://llm-assets.readthedocs.io/en/latest/_images/continue_demo_ollama_backend_arc.mp4">here</a> to watch the demo video.</td>
+  </tr>
+</table>
 
 ## Quickstart
 
@@ -6,8 +6,14 @@
 
 *See the demo of a RAG workflow in Dify running LLaMA2-7B on Intel A770 GPU below.*
 
-[](https://llm-assets.readthedocs.io/en/latest/_images/dify-rag-small.mp4)
-
+<table width="100%">
+  <tr>
+    <td><a href="https://llm-assets.readthedocs.io/en/latest/_images/dify-rag-small.mp4"><img src="https://llm-assets.readthedocs.io/en/latest/_images/dify-rag-small.png"/></a></td>
+  </tr>
+  <tr>
+    <td align="center">You could also click <a href="https://llm-assets.readthedocs.io/en/latest/_images/dify-rag-small.mp4">here</a> to watch the demo video.</td>
+  </tr>
+</table>
 
 ## Quickstart
 
@@ -6,7 +6,14 @@ Now, you can easily run Llama 3 on Intel GPU using `llama.cpp` and `Ollama` with
 
 See the demo of running Llama-3-8B-Instruct on Intel Arc GPU using `Ollama` below.
 
-[](https://llm-assets.readthedocs.io/en/latest/_images/ollama-llama3-linux-arc.mp4)
+<table width="100%">
+  <tr>
+    <td><a href="https://llm-assets.readthedocs.io/en/latest/_images/ollama-llama3-linux-arc.mp4"><img src="https://llm-assets.readthedocs.io/en/latest/_images/ollama-llama3-linux-arc.png"/></a></td>
+  </tr>
+  <tr>
+    <td align="center">You could also click <a href="https://llm-assets.readthedocs.io/en/latest/_images/ollama-llama3-linux-arc.mp4">here</a> to watch the demo video.</td>
+  </tr>
+</table>
 
 ## Quick Start
 This quickstart guide walks you through how to run Llama 3 on Intel GPU using `llama.cpp` / `Ollama` with IPEX-LLM.
@@ -4,7 +4,14 @@
 
 See the demo of running LLaMA2-7B on Intel Arc GPU below.
 
-[](https://llm-assets.readthedocs.io/en/latest/_images/llama-cpp-arc.mp4)
+<table width="100%">
+  <tr>
+    <td><a href="https://llm-assets.readthedocs.io/en/latest/_images/llama-cpp-arc.mp4"><img src="https://llm-assets.readthedocs.io/en/latest/_images/llama-cpp-arc.png"/></a></td>
+  </tr>
+  <tr>
+    <td align="center">You could also click <a href="https://llm-assets.readthedocs.io/en/latest/_images/llama-cpp-arc.mp4">here</a> to watch the demo video.</td>
+  </tr>
+</table>
 
 > [!NOTE]
 > `ipex-llm[cpp]==2.5.0b20240527` is consistent with [c780e75](https://github.com/ggerganov/llama.cpp/commit/c780e75305dba1f67691a8dc0e8bc8425838a452) of llama.cpp.
@@ -4,7 +4,14 @@
 
 See the demo of running LLaMA2-7B on Intel Arc GPU below.
 
-[](https://llm-assets.readthedocs.io/en/latest/_images/ollama-linux-arc.mp4)
+<table width="100%">
+  <tr>
+    <td><a href="https://llm-assets.readthedocs.io/en/latest/_images/ollama-linux-arc.mp4"><img src="https://llm-assets.readthedocs.io/en/latest/_images/ollama-linux-arc.png"/></a></td>
+  </tr>
+  <tr>
+    <td align="center">You could also click <a href="https://llm-assets.readthedocs.io/en/latest/_images/ollama-linux-arc.mp4">here</a> to watch the demo video.</td>
+  </tr>
+</table>
 
 > [!NOTE]
 > `ipex-llm[cpp]==2.5.0b20240527` is consistent with [v0.1.34](https://github.com/ollama/ollama/releases/tag/v0.1.34) of ollama.
@@ -4,7 +4,14 @@
 
 *See the demo of running Mistral:7B on Intel Arc A770 below.*
 
-[](https://llm-assets.readthedocs.io/en/latest/_images/open_webui_demo.mp4)
+<table width="100%">
+  <tr>
+    <td><a href="https://llm-assets.readthedocs.io/en/latest/_images/open_webui_demo.mp4"><img src="https://llm-assets.readthedocs.io/en/latest/_images/open_webui_demo.png"/></a></td>
+  </tr>
+  <tr>
+    <td align="center">You could also click <a href="https://llm-assets.readthedocs.io/en/latest/_images/open_webui_demo.mp4">here</a> to watch the demo video.</td>
+  </tr>
+</table>
 
 ## Quickstart
 
@@ -4,7 +4,14 @@
 
 *See the demo of privateGPT running Mistral:7B on Intel Arc A770 below.*
 
-[](https://llm-assets.readthedocs.io/en/latest/_images/PrivateGPT-ARC.mp4)
+<table width="100%">
+  <tr>
+    <td><a href="https://llm-assets.readthedocs.io/en/latest/_images/PrivateGPT-ARC.mp4"><img src="https://llm-assets.readthedocs.io/en/latest/_images/PrivateGPT-ARC.png"/></a></td>
+  </tr>
+  <tr>
+    <td align="center">You could also click <a href="https://llm-assets.readthedocs.io/en/latest/_images/PrivateGPT-ARC.mp4">here</a> to watch the demo video.</td>
+  </tr>
+</table>
 
 ## Quickstart
 
@@ -5,7 +5,14 @@
 
 *See the demo of ragflow running Qwen2:7B on Intel Arc A770 below.*
 
-[](https://llm-assets.readthedocs.io/en/latest/_images/ragflow-record.mp4)
+<table width="100%">
+  <tr>
+    <td><a href="https://llm-assets.readthedocs.io/en/latest/_images/ragflow-record.mp4"><img src="https://llm-assets.readthedocs.io/en/latest/_images/ragflow-record.png"/></a></td>
+  </tr>
+  <tr>
+    <td align="center">You could also click <a href="https://llm-assets.readthedocs.io/en/latest/_images/ragflow-record.mp4">here</a> to watch the demo video.</td>
+  </tr>
+</table>
 
 ## Quickstart
 
@@ -4,7 +4,14 @@ The [oobabooga/text-generation-webui](https://github.com/oobabooga/text-generati
 
 See the demo of running LLaMA2-7B on an Intel Core Ultra laptop below.
 
-[](https://llm-assets.readthedocs.io/en/latest/_images/webui-mtl.mp4)
+<table width="100%">
+  <tr>
+    <td><a href="https://llm-assets.readthedocs.io/en/latest/_images/webui-mtl.mp4"><img src="https://llm-assets.readthedocs.io/en/latest/_images/webui-mtl.png"/></a></td>
+  </tr>
+  <tr>
+    <td align="center">You could also click <a href="https://llm-assets.readthedocs.io/en/latest/_images/webui-mtl.mp4">here</a> to watch the demo video.</td>
+  </tr>
+</table>
 
 ## Quickstart
 This quickstart guide walks you through setting up and using the [Text Generation WebUI](https://github.com/intel-analytics/text-generation-webui) with `ipex-llm`.