From ecb9efde6552e0ee9e22f0f1179022f0fb232df2 Mon Sep 17 00:00:00 2001
From: Yuwen Hu <54161268+Oscilloscope98@users.noreply.github.com>
Date: Mon, 24 Jun 2024 16:17:50 +0800
Subject: [PATCH] Workaround for slow-loading demo preview images in mddocs
 (#11412)

* Small tests for demo video workaround
* Small fix
* Add workaround for langchain-chatchat demo video
* Small fix
* Small fix
* Update for other demo videos in quickstart
* Add missing workaround for text-generation-webui quickstart
---
 docs/mddocs/Quickstart/axolotl_quickstart.md         |  9 ++++++++-
 docs/mddocs/Quickstart/chatchat_quickstart.md        |  7 +++++--
 docs/mddocs/Quickstart/continue_quickstart.md        |  9 ++++++++-
 docs/mddocs/Quickstart/dify_quickstart.md            | 10 ++++++++--
 .../Quickstart/llama3_llamacpp_ollama_quickstart.md  |  9 ++++++++-
 docs/mddocs/Quickstart/llama_cpp_quickstart.md       |  9 ++++++++-
 docs/mddocs/Quickstart/ollama_quickstart.md          |  9 ++++++++-
 .../Quickstart/open_webui_with_ollama_quickstart.md  |  9 ++++++++-
 docs/mddocs/Quickstart/privateGPT_quickstart.md      |  9 ++++++++-
 docs/mddocs/Quickstart/ragflow_quickstart.md         |  9 ++++++++-
 docs/mddocs/Quickstart/webui_quickstart.md           |  9 ++++++++-
 11 files changed, 85 insertions(+), 13 deletions(-)

diff --git a/docs/mddocs/Quickstart/axolotl_quickstart.md b/docs/mddocs/Quickstart/axolotl_quickstart.md
index f0a2dfa1..c0654cd2 100644
--- a/docs/mddocs/Quickstart/axolotl_quickstart.md
+++ b/docs/mddocs/Quickstart/axolotl_quickstart.md
@@ -4,7 +4,14 @@
 
 See the demo of finetuning LLaMA2-7B on Intel Arc GPU below.
 
-[![Demo video](https://llm-assets.readthedocs.io/en/latest/_images/axolotl-qlora-linux-arc.png)](https://llm-assets.readthedocs.io/en/latest/_images/axolotl-qlora-linux-arc.mp4)
+<table width="100%">
+  <tr>
+    <td><a href="https://llm-assets.readthedocs.io/en/latest/_images/axolotl-qlora-linux-arc.mp4"><img src="https://llm-assets.readthedocs.io/en/latest/_images/axolotl-qlora-linux-arc.png"/></a></td>
+  </tr>
+  <tr>
+    <td>You could also click <a href="https://llm-assets.readthedocs.io/en/latest/_images/axolotl-qlora-linux-arc.mp4">here</a> to watch the demo video.</td>
+  </tr>
+</table>
 
 ## Quickstart
 
diff --git a/docs/mddocs/Quickstart/chatchat_quickstart.md b/docs/mddocs/Quickstart/chatchat_quickstart.md
index 70fc62fd..217d199c 100644
--- a/docs/mddocs/Quickstart/chatchat_quickstart.md
+++ b/docs/mddocs/Quickstart/chatchat_quickstart.md
@@ -4,7 +4,7 @@
 
 *See the demos of running LLaMA2-7B (English) and ChatGLM-3-6B (Chinese) on an Intel Core Ultra laptop below.*
 
-<table>
+<table width="100%">
@@ -12,7 +12,10 @@
   <tr>
     <td align="center">English</td>
     <td align="center">简体中文</td>
   </tr>
-</table>
+  <tr>
+    <td colspan="2">You could also click English or 简体中文 to watch the demo videos.</td>
+  </tr>
+</table>
 
 > [!NOTE]
diff --git a/docs/mddocs/Quickstart/continue_quickstart.md b/docs/mddocs/Quickstart/continue_quickstart.md
index cd1f42cd..9bfbd1b1 100644
--- a/docs/mddocs/Quickstart/continue_quickstart.md
+++ b/docs/mddocs/Quickstart/continue_quickstart.md
@@ -5,7 +5,14 @@
 
 Below is a demo of using `Continue` with [CodeQWen1.5-7B](https://huggingface.co/Qwen/CodeQwen1.5-7B-Chat) running on Intel A770 GPU. This demo illustrates how a programmer used `Continue` to find a solution for the [Kaggle's _Titanic_ challenge](https://www.kaggle.com/competitions/titanic/), which involves asking `Continue` to complete the code for model fitting, evaluation, hyper parameter tuning, feature engineering, and explain generated code.
 
-[![Demo video](https://llm-assets.readthedocs.io/en/latest/_images/continue_demo_ollama_backend_arc.png)](https://llm-assets.readthedocs.io/en/latest/_images/continue_demo_ollama_backend_arc.mp4)
+<table width="100%">
+  <tr>
+    <td><a href="https://llm-assets.readthedocs.io/en/latest/_images/continue_demo_ollama_backend_arc.mp4"><img src="https://llm-assets.readthedocs.io/en/latest/_images/continue_demo_ollama_backend_arc.png"/></a></td>
+  </tr>
+  <tr>
+    <td>You could also click <a href="https://llm-assets.readthedocs.io/en/latest/_images/continue_demo_ollama_backend_arc.mp4">here</a> to watch the demo video.</td>
+  </tr>
+</table>
 
 ## Quickstart
 
diff --git a/docs/mddocs/Quickstart/dify_quickstart.md b/docs/mddocs/Quickstart/dify_quickstart.md
index 7e0e9c03..d507f9bd 100644
--- a/docs/mddocs/Quickstart/dify_quickstart.md
+++ b/docs/mddocs/Quickstart/dify_quickstart.md
@@ -6,8 +6,14 @@
 
 *See the demo of a RAG workflow in Dify running LLaMA2-7B on Intel A770 GPU below.*
 
-[![Demo video](https://llm-assets.readthedocs.io/en/latest/_images/dify-rag-small.png)](https://llm-assets.readthedocs.io/en/latest/_images/dify-rag-small.mp4)
-
+<table width="100%">
+  <tr>
+    <td><a href="https://llm-assets.readthedocs.io/en/latest/_images/dify-rag-small.mp4"><img src="https://llm-assets.readthedocs.io/en/latest/_images/dify-rag-small.png"/></a></td>
+  </tr>
+  <tr>
+    <td>You could also click <a href="https://llm-assets.readthedocs.io/en/latest/_images/dify-rag-small.mp4">here</a> to watch the demo video.</td>
+  </tr>
+</table>
 
 ## Quickstart
 
diff --git a/docs/mddocs/Quickstart/llama3_llamacpp_ollama_quickstart.md b/docs/mddocs/Quickstart/llama3_llamacpp_ollama_quickstart.md
index 61309160..8ab22500 100644
--- a/docs/mddocs/Quickstart/llama3_llamacpp_ollama_quickstart.md
+++ b/docs/mddocs/Quickstart/llama3_llamacpp_ollama_quickstart.md
@@ -6,7 +6,14 @@
 Now, you can easily run Llama 3 on Intel GPU using `llama.cpp` and `Ollama` with IPEX-LLM.
 
 See the demo of running Llama-3-8B-Instruct on Intel Arc GPU using `Ollama` below.
 
-[![Demo video](https://llm-assets.readthedocs.io/en/latest/_images/ollama-llama3-linux-arc.png)](https://llm-assets.readthedocs.io/en/latest/_images/ollama-llama3-linux-arc.mp4)
+<table width="100%">
+  <tr>
+    <td><a href="https://llm-assets.readthedocs.io/en/latest/_images/ollama-llama3-linux-arc.mp4"><img src="https://llm-assets.readthedocs.io/en/latest/_images/ollama-llama3-linux-arc.png"/></a></td>
+  </tr>
+  <tr>
+    <td>You could also click <a href="https://llm-assets.readthedocs.io/en/latest/_images/ollama-llama3-linux-arc.mp4">here</a> to watch the demo video.</td>
+  </tr>
+</table>
 
 ## Quick Start
 This quickstart guide walks you through how to run Llama 3 on Intel GPU using `llama.cpp` / `Ollama` with IPEX-LLM.
diff --git a/docs/mddocs/Quickstart/llama_cpp_quickstart.md b/docs/mddocs/Quickstart/llama_cpp_quickstart.md
index 300b275e..1297f474 100644
--- a/docs/mddocs/Quickstart/llama_cpp_quickstart.md
+++ b/docs/mddocs/Quickstart/llama_cpp_quickstart.md
@@ -4,7 +4,14 @@
 
 See the demo of running LLaMA2-7B on Intel Arc GPU below.
 
-[![Demo video](https://llm-assets.readthedocs.io/en/latest/_images/llama-cpp-arc.png)](https://llm-assets.readthedocs.io/en/latest/_images/llama-cpp-arc.mp4)
+<table width="100%">
+  <tr>
+    <td><a href="https://llm-assets.readthedocs.io/en/latest/_images/llama-cpp-arc.mp4"><img src="https://llm-assets.readthedocs.io/en/latest/_images/llama-cpp-arc.png"/></a></td>
+  </tr>
+  <tr>
+    <td>You could also click <a href="https://llm-assets.readthedocs.io/en/latest/_images/llama-cpp-arc.mp4">here</a> to watch the demo video.</td>
+  </tr>
+</table>
 
 > [!NOTE]
 > `ipex-llm[cpp]==2.5.0b20240527` is consistent with [c780e75](https://github.com/ggerganov/llama.cpp/commit/c780e75305dba1f67691a8dc0e8bc8425838a452) of llama.cpp.
diff --git a/docs/mddocs/Quickstart/ollama_quickstart.md b/docs/mddocs/Quickstart/ollama_quickstart.md
index 4760f6a2..98a8be98 100644
--- a/docs/mddocs/Quickstart/ollama_quickstart.md
+++ b/docs/mddocs/Quickstart/ollama_quickstart.md
@@ -4,7 +4,14 @@
 
 See the demo of running LLaMA2-7B on Intel Arc GPU below.
 
-[![Demo video](https://llm-assets.readthedocs.io/en/latest/_images/ollama-linux-arc.png)](https://llm-assets.readthedocs.io/en/latest/_images/ollama-linux-arc.mp4)
+<table width="100%">
+  <tr>
+    <td><a href="https://llm-assets.readthedocs.io/en/latest/_images/ollama-linux-arc.mp4"><img src="https://llm-assets.readthedocs.io/en/latest/_images/ollama-linux-arc.png"/></a></td>
+  </tr>
+  <tr>
+    <td>You could also click <a href="https://llm-assets.readthedocs.io/en/latest/_images/ollama-linux-arc.mp4">here</a> to watch the demo video.</td>
+  </tr>
+</table>
 
 > [!NOTE]
 > `ipex-llm[cpp]==2.5.0b20240527` is consistent with [v0.1.34](https://github.com/ollama/ollama/releases/tag/v0.1.34) of ollama.
diff --git a/docs/mddocs/Quickstart/open_webui_with_ollama_quickstart.md b/docs/mddocs/Quickstart/open_webui_with_ollama_quickstart.md
index d143e7a6..6981b464 100644
--- a/docs/mddocs/Quickstart/open_webui_with_ollama_quickstart.md
+++ b/docs/mddocs/Quickstart/open_webui_with_ollama_quickstart.md
@@ -4,7 +4,14 @@
 
 *See the demo of running Mistral:7B on Intel Arc A770 below.*
 
-[![Demo video](https://llm-assets.readthedocs.io/en/latest/_images/open_webui_demo.png)](https://llm-assets.readthedocs.io/en/latest/_images/open_webui_demo.mp4)
+<table width="100%">
+  <tr>
+    <td><a href="https://llm-assets.readthedocs.io/en/latest/_images/open_webui_demo.mp4"><img src="https://llm-assets.readthedocs.io/en/latest/_images/open_webui_demo.png"/></a></td>
+  </tr>
+  <tr>
+    <td>You could also click <a href="https://llm-assets.readthedocs.io/en/latest/_images/open_webui_demo.mp4">here</a> to watch the demo video.</td>
+  </tr>
+</table>
 
 ## Quickstart
 
diff --git a/docs/mddocs/Quickstart/privateGPT_quickstart.md b/docs/mddocs/Quickstart/privateGPT_quickstart.md
index c5fb068f..b95599d5 100644
--- a/docs/mddocs/Quickstart/privateGPT_quickstart.md
+++ b/docs/mddocs/Quickstart/privateGPT_quickstart.md
@@ -4,7 +4,14 @@
 
 *See the demo of privateGPT running Mistral:7B on Intel Arc A770 below.*
 
-[![Demo video](https://llm-assets.readthedocs.io/en/latest/_images/PrivateGPT-ARC.png)](https://llm-assets.readthedocs.io/en/latest/_images/PrivateGPT-ARC.mp4)
+<table width="100%">
+  <tr>
+    <td><a href="https://llm-assets.readthedocs.io/en/latest/_images/PrivateGPT-ARC.mp4"><img src="https://llm-assets.readthedocs.io/en/latest/_images/PrivateGPT-ARC.png"/></a></td>
+  </tr>
+  <tr>
+    <td>You could also click <a href="https://llm-assets.readthedocs.io/en/latest/_images/PrivateGPT-ARC.mp4">here</a> to watch the demo video.</td>
+  </tr>
+</table>
 
 ## Quickstart
 
diff --git a/docs/mddocs/Quickstart/ragflow_quickstart.md b/docs/mddocs/Quickstart/ragflow_quickstart.md
index 8e71c221..254aa372 100644
--- a/docs/mddocs/Quickstart/ragflow_quickstart.md
+++ b/docs/mddocs/Quickstart/ragflow_quickstart.md
@@ -5,7 +5,14 @@
 
 *See the demo of ragflow running Qwen2:7B on Intel Arc A770 below.*
 
-[![Demo video](https://llm-assets.readthedocs.io/en/latest/_images/ragflow-record.png)](https://llm-assets.readthedocs.io/en/latest/_images/ragflow-record.mp4)
+<table width="100%">
+  <tr>
+    <td><a href="https://llm-assets.readthedocs.io/en/latest/_images/ragflow-record.mp4"><img src="https://llm-assets.readthedocs.io/en/latest/_images/ragflow-record.png"/></a></td>
+  </tr>
+  <tr>
+    <td>You could also click <a href="https://llm-assets.readthedocs.io/en/latest/_images/ragflow-record.mp4">here</a> to watch the demo video.</td>
+  </tr>
+</table>
 
 ## Quickstart
 
diff --git a/docs/mddocs/Quickstart/webui_quickstart.md b/docs/mddocs/Quickstart/webui_quickstart.md
index e5e486ee..2775605f 100644
--- a/docs/mddocs/Quickstart/webui_quickstart.md
+++ b/docs/mddocs/Quickstart/webui_quickstart.md
@@ -4,7 +4,14 @@
 The [oobabooga/text-generation-webui](https://github.com/oobabooga/text-generation-webui)
 
 See the demo of running LLaMA2-7B on an Intel Core Ultra laptop below.
 
-[![Demo video](https://llm-assets.readthedocs.io/en/latest/_images/webui-mtl.png)](https://llm-assets.readthedocs.io/en/latest/_images/webui-mtl.mp4)
+<table width="100%">
+  <tr>
+    <td><a href="https://llm-assets.readthedocs.io/en/latest/_images/webui-mtl.mp4"><img src="https://llm-assets.readthedocs.io/en/latest/_images/webui-mtl.png"/></a></td>
+  </tr>
+  <tr>
+    <td>You could also click <a href="https://llm-assets.readthedocs.io/en/latest/_images/webui-mtl.mp4">here</a> to watch the demo video.</td>
+  </tr>
+</table>
 
 ## Quickstart
 This quickstart guide walks you through setting up and using the [Text Generation WebUI](https://github.com/intel-analytics/text-generation-webui) with `ipex-llm`.
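Every file in this patch receives the same mechanical rewrite: a markdown `[![Demo video](poster)](video)` preview, whose large poster image can stall page load, becomes an HTML table with an explicit click-through link. The sketch below shows how such a rewrite could be scripted; `rewrite_demo_links`, the regex, and the exact table markup are illustrative assumptions, not tooling shipped with this patch.

```python
import re

# Match the markdown "poster image linking to a video" pattern:
# [![Demo video](POSTER_URL)](VIDEO_URL)
PATTERN = re.compile(
    r"\[!\[Demo video\]\((?P<poster>\S+?)\)\]\((?P<video>\S+?)\)"
)

# Hypothetical replacement markup mirroring the tables added in this patch:
# a clickable poster image plus a plain-text fallback link.
TEMPLATE = """<table width="100%">
  <tr>
    <td><a href="{video}"><img src="{poster}"/></a></td>
  </tr>
  <tr>
    <td>You could also click <a href="{video}">here</a> to watch the demo video.</td>
  </tr>
</table>"""


def rewrite_demo_links(markdown: str) -> str:
    """Replace each markdown demo-video preview with the HTML table form."""
    return PATTERN.sub(lambda m: TEMPLATE.format(**m.groupdict()), markdown)
```

Because the substitution only fires on the exact preview pattern, running it over a quickstart file would leave all other markdown untouched.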