From 54f9d07d8f7e9defe5639b818b7e47c0fbf6c6fd Mon Sep 17 00:00:00 2001 From: Yuwen Hu <54161268+Oscilloscope98@users.noreply.github.com> Date: Fri, 21 Jun 2024 13:27:43 +0800 Subject: [PATCH] Further mddocs fixes (#11386) * Update mddocs for ragflow quickstart * Fixes for docker guides mddocs * Further fixes --- .../DockerGuides/docker_cpp_xpu_quickstart.md | 10 +- .../docker_pytorch_inference_gpu.md | 21 ++-- .../docker_run_pytorch_inference_in_vscode.md | 18 ++- .../mddocs/DockerGuides/docker_windows_gpu.md | 6 +- .../Overview/KeyFeatures/inference_on_gpu.md | 2 +- docs/mddocs/Quickstart/install_linux_gpu.md | 4 +- docs/mddocs/Quickstart/install_windows_gpu.md | 8 +- .../mddocs/Quickstart/llama_cpp_quickstart.md | 8 +- .../Quickstart/privateGPT_quickstart.md | 8 +- docs/mddocs/Quickstart/ragflow_quickstart.md | 119 ++++++------------ 10 files changed, 82 insertions(+), 122 deletions(-) diff --git a/docs/mddocs/DockerGuides/docker_cpp_xpu_quickstart.md b/docs/mddocs/DockerGuides/docker_cpp_xpu_quickstart.md index 85a6b9ad..5efb9e93 100644 --- a/docs/mddocs/DockerGuides/docker_cpp_xpu_quickstart.md +++ b/docs/mddocs/DockerGuides/docker_cpp_xpu_quickstart.md @@ -26,8 +26,7 @@ docker pull intelanalytics/ipex-llm-inference-cpp-xpu:latest Choose one of the following methods to start the container: -
-
For Linux:
+- For **Linux users**:

   To map the `xpu` into the container, you need to specify `--device=/dev/dri` when booting the container. Select the device you are running (device type: Max, Flex, Arc, or iGPU), and change `/path/to/models` to mount your models. `bench_model` is used for quick benchmarking; if you want to benchmark, make sure the model is placed under `/path/to/models`.

@@ -47,10 +46,8 @@ Choose one of the following methods to start the container:
      --shm-size="16g" \
      $DOCKER_IMAGE
   ```
-
-
-
For Windows:
+- For **Windows users**:

   To map the `xpu` into the container, you need to specify `--device=/dev/dri` when booting the container. Change `/path/to/models` to mount your models, then add `--privileged` and map `/usr/lib/wsl` into the container.

@@ -72,9 +69,6 @@ Choose one of the following methods to start the container:
      --shm-size="16g" \
      $DOCKER_IMAGE
   ```
-

----

After the container is booted, you can get into the container through `docker exec`.

diff --git a/docs/mddocs/DockerGuides/docker_pytorch_inference_gpu.md b/docs/mddocs/DockerGuides/docker_pytorch_inference_gpu.md
index a4199b54..f08dca12 100644
--- a/docs/mddocs/DockerGuides/docker_pytorch_inference_gpu.md
+++ b/docs/mddocs/DockerGuides/docker_pytorch_inference_gpu.md
@@ -18,8 +18,7 @@ docker pull intelanalytics/ipex-llm-xpu:latest

Start ipex-llm-xpu Docker Container. Choose one of the following commands to start the container:

-
-For Linux: +- For **Linux users**: ```bash export DOCKER_IMAGE=intelanalytics/ipex-llm-xpu:latest @@ -35,10 +34,8 @@ Start ipex-llm-xpu Docker Container. Choose one of the following commands to sta -v $MODEL_PATH:/llm/models \ $DOCKER_IMAGE ``` -
-
-
For Windows WSL:
+- For **Windows WSL users**:

   ```bash
   #!/bin/bash
@@ -57,14 +54,12 @@ Start ipex-llm-xpu Docker Container. Choose one of the following commands to sta
   -v /usr/lib/wsl:/usr/lib/wsl \
   $DOCKER_IMAGE
   ```
-
-

----

Access the container:
```
docker exec -it $CONTAINER_NAME bash
```
+
To verify the device is successfully mapped into the container, run `sycl-ls` to check the result. On a machine with Arc A770, the sampled output is:

```bash
@@ -107,7 +102,6 @@
source ipex-llm-init --gpu --device 
python run.py
```

-
**Result Interpretation**

After the benchmarking is completed, you can obtain a CSV result file under the current folder. You can mainly look at the results of columns `1st token avg latency (ms)` and `2+ avg latency (ms/token)` for the benchmark results. You can also check whether the column `actual input/output tokens` is consistent with the column `input/output tokens` and whether the parameters you specified in `config.yaml` have been successfully applied in the benchmarking.

@@ -135,10 +129,11 @@ Here is a demonstration:

We provide several PyTorch examples where you can apply IPEX-LLM INT4 optimizations on models on Intel GPUs. For example, if your model is Llama-2-7b-chat-hf and mounted on /llm/models, you can navigate to the /examples/llama2 directory and execute the following command to run the example:
-  ```bash
-  cd /examples/
-  python ./generate.py --repo-id-or-model-path /llm/models/Llama-2-7b-chat-hf --prompt PROMPT --n-predict N_PREDICT
-  ```
+
+```bash
+cd /examples/
+python ./generate.py --repo-id-or-model-path /llm/models/Llama-2-7b-chat-hf --prompt PROMPT --n-predict N_PREDICT
+```

Arguments info:

diff --git a/docs/mddocs/DockerGuides/docker_run_pytorch_inference_in_vscode.md b/docs/mddocs/DockerGuides/docker_run_pytorch_inference_in_vscode.md
index 8652d396..c6db03e9 100644
--- a/docs/mddocs/DockerGuides/docker_run_pytorch_inference_in_vscode.md
+++ b/docs/mddocs/DockerGuides/docker_run_pytorch_inference_in_vscode.md
@@ -51,8 +51,7 @@ docker pull intelanalytics/ipex-llm-xpu:latest

Start ipex-llm-xpu Docker Container. Choose one of the following commands to start the container:

-
-For Linux: +- For **Linux users**: ```bash @@ -69,10 +68,8 @@ Start ipex-llm-xpu Docker Container. Choose one of the following commands to sta -v $MODEL_PATH:/llm/models \ $DOCKER_IMAGE ``` -
-
-
For Windows WSL:
+- For **Windows WSL users**:

   ```bash
   #!/bin/bash
@@ -91,9 +88,7 @@ Start ipex-llm-xpu Docker Container. Choose one of the following commands to sta
   -v /usr/lib/wsl:/usr/lib/wsl \
   $DOCKER_IMAGE
   ```
-
----

## Run/Develop Pytorch Examples

@@ -108,10 +103,11 @@ Now you are in a running Docker Container, Open folder `/ipex-llm/python/llm/exa

In this folder, we provide several PyTorch examples where you can apply IPEX-LLM INT4 optimizations on models on Intel GPUs. For example, if your model is Llama-2-7b-chat-hf and mounted on /llm/models, you can navigate to the llama2 directory and execute the following command to run the example:
-  ```bash
-  cd 
-  python ./generate.py --repo-id-or-model-path /llm/models/Llama-2-7b-chat-hf --prompt PROMPT --n-predict N_PREDICT
-  ```
+
+```bash
+cd 
+python ./generate.py --repo-id-or-model-path /llm/models/Llama-2-7b-chat-hf --prompt PROMPT --n-predict N_PREDICT
+```

Arguments info:

diff --git a/docs/mddocs/DockerGuides/docker_windows_gpu.md b/docs/mddocs/DockerGuides/docker_windows_gpu.md
index 0fd9a965..71a69403 100644
--- a/docs/mddocs/DockerGuides/docker_windows_gpu.md
+++ b/docs/mddocs/DockerGuides/docker_windows_gpu.md
@@ -35,9 +35,9 @@ Follow the instructions in [this guide](https://docs.microsoft.com/en-us/windows

#### Enable Docker integration with WSL2

Open **Docker desktop**, and select `Settings`->`Resources`->`WSL integration`->turn on `Ubuntu` button->`Apply & restart`.
-
-
-
+
+
+

> [!TIP]

diff --git a/docs/mddocs/Overview/KeyFeatures/inference_on_gpu.md b/docs/mddocs/Overview/KeyFeatures/inference_on_gpu.md
index 126dc2af..4ce0df60 100644
--- a/docs/mddocs/Overview/KeyFeatures/inference_on_gpu.md
+++ b/docs/mddocs/Overview/KeyFeatures/inference_on_gpu.md
@@ -30,7 +30,7 @@ You could choose to use [PyTorch API](./optimize_model.md) or [`transformers`-st
     model = model.to('xpu') # Important after obtaining the optimized model
 ```

-  > **Tip**"
+  > **Tip**:
   >
   > When running LLMs on Intel iGPUs for Windows users, we recommend setting `cpu_embedding=True` in the `optimize_model` function. This will allow the memory-intensive embedding layer to utilize the CPU instead of iGPU.
   >
diff --git a/docs/mddocs/Quickstart/install_linux_gpu.md b/docs/mddocs/Quickstart/install_linux_gpu.md
index d4442b0b..afb64e6f 100644
--- a/docs/mddocs/Quickstart/install_linux_gpu.md
+++ b/docs/mddocs/Quickstart/install_linux_gpu.md
@@ -284,7 +284,9 @@ Now let's play with a real LLM. We'll be using the [phi-1.5](https://huggingface
     print(output_str)
   ```

-  > **Note**: When running LLMs on Intel iGPUs with limited memory size, we recommend setting `cpu_embedding=True` in the `from_pretrained` function.
+  > **Note**:
+  >
+  > When running LLMs on Intel iGPUs with limited memory size, we recommend setting `cpu_embedding=True` in the `from_pretrained` function.
   > This will allow the memory-intensive embedding layer to utilize the CPU instead of GPU.

- Step 5. Run `demo.py` within the activated Python environment using the following command:

diff --git a/docs/mddocs/Quickstart/install_windows_gpu.md b/docs/mddocs/Quickstart/install_windows_gpu.md
index feddbed2..eb7fa9f9 100644
--- a/docs/mddocs/Quickstart/install_windows_gpu.md
+++ b/docs/mddocs/Quickstart/install_windows_gpu.md
@@ -102,7 +102,9 @@ You can verify if `ipex-llm` is successfully installed following below steps.
   torch.Size([1, 1, 40, 40])
   ```

-  > **Tip**: If you encounter any problem, please refer to [here](../Overview/install_gpu.md#troubleshooting) for help.
+  > **Tip**:
+  >
+  > If you encounter any problem, please refer to [here](../Overview/install_gpu.md#troubleshooting) for help.

- To exit the Python interactive shell, simply press Ctrl+Z then press Enter (or input `exit()` then press Enter).
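The XPU sanity check that this guide runs interactively can also be scripted. The following is a minimal sketch rather than part of the patched docs: it assumes the Python environment prepared earlier in the guide is active with `ipex-llm[xpu]` installed, and it reuses the tensor shapes behind the `torch.Size([1, 1, 40, 40])` output quoted above.

```bash
# Hedged sketch: one-shot check that the 'xpu' device is usable, without
# opening an interactive Python shell. intel_extension_for_pytorch (installed
# as a dependency of ipex-llm[xpu]) registers the 'xpu' device with PyTorch.
python -c "import torch; import intel_extension_for_pytorch as ipex; \
t1 = torch.randn(1, 1, 40, 128).to('xpu'); \
t2 = torch.randn(1, 1, 128, 40).to('xpu'); \
print(torch.matmul(t1, t2).shape)"
# Expected output: torch.Size([1, 1, 40, 40])
```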
@@ -239,7 +241,9 @@ Now let's play with a real LLM. We'll be using the [Qwen-1.8B-Chat](https://hugg
      output_str = tokenizer.decode(output[0], skip_special_tokens=True)
      print(output_str)
   ```
-  > **Note**: Please note that the repo id on ModelScope may be different from Hugging Face for some models.
+  > **Note**:
+  >
+  > For some models, the repo id on ModelScope may be different from the one on Hugging Face.

> [!NOTE]
> When running LLMs on Intel iGPUs with limited memory size, we recommend setting `cpu_embedding=True` in the `from_pretrained` function.

diff --git a/docs/mddocs/Quickstart/llama_cpp_quickstart.md b/docs/mddocs/Quickstart/llama_cpp_quickstart.md
index 455d96f4..300b275e 100644
--- a/docs/mddocs/Quickstart/llama_cpp_quickstart.md
+++ b/docs/mddocs/Quickstart/llama_cpp_quickstart.md
@@ -135,7 +135,9 @@ Before running, you should download or copy community GGUF model to your current
      ./main -m mistral-7b-instruct-v0.1.Q4_K_M.gguf -n 32 --prompt "Once upon a time, there existed a little girl who liked to have adventures. She wanted to go to places and meet new people, and have fun" -t 8 -e -ngl 33 --color
   ```

-  > **Note**: For more details about meaning of each parameter, you can use `./main -h`.
+  > **Note**:
+  >
+  > For more details about the meaning of each parameter, you can use `./main -h`.

- For **Windows users**:

@@ -145,7 +147,9 @@

      main -m mistral-7b-instruct-v0.1.Q4_K_M.gguf -n 32 --prompt "Once upon a time, there existed a little girl who liked to have adventures. She wanted to go to places and meet new people, and have fun" -t 8 -e -ngl 33 --color
   ```

-  > **Note**: For more details about meaning of each parameter, you can use `main -h`.
+  > **Note**:
+  >
+  > For more details about the meaning of each parameter, you can use `main -h`.

#### Sample Output
```
diff --git a/docs/mddocs/Quickstart/privateGPT_quickstart.md b/docs/mddocs/Quickstart/privateGPT_quickstart.md
index 2e2f18f4..c5fb068f 100644
--- a/docs/mddocs/Quickstart/privateGPT_quickstart.md
+++ b/docs/mddocs/Quickstart/privateGPT_quickstart.md
@@ -72,7 +72,9 @@ Run below commands to start the service in another terminal:
      PGPT_PROFILES=ollama make run
   ```

-  > **Note**: Setting `PGPT_PROFILES=ollama` will load the configuration from `settings.yaml` and `settings-ollama.yaml`.
+  > **Note**:
+  >
+  > Setting `PGPT_PROFILES=ollama` will load the configuration from `settings.yaml` and `settings-ollama.yaml`.

- For **Windows users**:

@@ -82,7 +84,9 @@

      make run
   ```

-  > **Note**: Setting `PGPT_PROFILES=ollama` will load the configuration from `settings.yaml` and `settings-ollama.yaml`.
+  > **Note**:
+  >
+  > Setting `PGPT_PROFILES=ollama` will load the configuration from `settings.yaml` and `settings-ollama.yaml`.

Upon successful deployment, you will see logs in the terminal similar to the following:

diff --git a/docs/mddocs/Quickstart/ragflow_quickstart.md b/docs/mddocs/Quickstart/ragflow_quickstart.md
index 161831d9..ec3f755b 100644
--- a/docs/mddocs/Quickstart/ragflow_quickstart.md
+++ b/docs/mddocs/Quickstart/ragflow_quickstart.md
@@ -5,8 +5,7 @@

*See the demo of ragflow running Qwen2:7B on Intel Arc A770 below.*

-
-
+[![Demo video](https://llm-assets.readthedocs.io/en/latest/_images/ragflow-record.png)](https://llm-assets.readthedocs.io/en/latest/_images/ragflow-record.mp4)

## Quickstart

@@ -17,64 +16,47 @@
- Disk >= 50 GB
- Docker >= 24.0.0 & Docker Compose >= v2.26.1

-
### 1. Install and Start `Ollama` Service on Intel GPU

Follow the steps in [Run Ollama with IPEX-LLM on Intel GPU Guide](./ollama_quickstart.md) to install and run Ollama on Intel GPU. Ensure that `ollama serve` is running correctly and can be accessed through a local URL (e.g., `http://127.0.0.1:11434`) or a remote URL (e.g., `http://your_ip:11434`).

+> [!IMPORTANT]
+> If `RAGFlow` is not deployed on the same machine where Ollama is running (which means `RAGFlow` needs to connect to a remote Ollama service), you must configure the Ollama service to accept connections from any IP address. To achieve this, set or export the environment variable `OLLAMA_HOST=0.0.0.0` before executing the command `ollama serve`.

-
-```eval_rst
-.. important::
-
-   If the `RAGFlow` is not deployed on the same machine where Ollama is running (which means `RAGFlow` needs to connect to a remote Ollama service), you must configure the Ollama service to accept connections from any IP address. To achieve this, set or export the environment variable `OLLAMA_HOST=0.0.0.0` before executing the command `ollama serve`.
-
-.. tip::
-
-   If your local LLM is running on Intel Arc™ A-Series Graphics with Linux OS (Kernel 6.2), it is recommended to additionaly set the following environment variable for optimal performance before executing `ollama serve`:
-
-   .. code-block:: bash
-
-      export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
-```

+> [!TIP]
+> If your local LLM is running on Intel Arc™ A-Series Graphics with Linux OS (Kernel 6.2), it is recommended to additionally set the following environment variable for optimal performance before executing `ollama serve`:
+>
+> ```bash
+> export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
+> ```

### 2. Pull Model

Now we need to pull a model for RAG using Ollama. Here we use the [Qwen/Qwen2-7B](https://huggingface.co/Qwen/Qwen2-7B) model as an example. Open a new terminal window and run the following command to pull [`qwen2:latest`](https://ollama.com/library/qwen2).

+- For **Linux users**:
+
+  ```bash
+  export no_proxy=localhost,127.0.0.1
+  ./ollama pull qwen2:latest
+  ```
+
+- For **Windows users**:
+
+  Please run the following command in Miniforge or Anaconda Prompt.
+
+  ```cmd
+  set no_proxy=localhost,127.0.0.1
+  ollama pull qwen2:latest
+  ```

-```eval_rst
-.. tabs::
-   .. tab:: Linux
-
-      .. code-block:: bash
-
-         export no_proxy=localhost,127.0.0.1
-         ./ollama pull qwen2:latest
-
-   .. tab:: Windows
-
-      Please run the following command in Miniforge or Anaconda Prompt.
-
-      .. code-block:: cmd
-
-         set no_proxy=localhost,127.0.0.1
-         ollama pull qwen2:latest
-
-.. seealso::
-
-   Besides Qwen2, there are other LLM models you might want to explore, such as Llama3, Phi3, Mistral, etc. You can find all available models in the `Ollama model library <https://ollama.com/library>`_. Simply search for the model, pull it in a similar manner, and give it a try.
-```

+> [!TIP]
+> Besides Qwen2, there are other LLM models you might want to explore, such as Llama3, Phi3, Mistral, etc. You can find all available models in the [Ollama model library](https://ollama.com/library). Simply search for the model, pull it in a similar manner, and give it a try.

### 3. Start `RAGFlow` Service

-
-```eval_rst
-.. note::
-
-   The steps in section 3 is verified on Linux system only.
-```
-
+> [!NOTE]
+> The steps in section 3 are verified on Linux systems only.

#### 3.1 Download `RAGFlow`

@@ -110,12 +92,8 @@ vm.max_map_count=262144

Build the pre-built Docker images and start up the server:

-```eval_rst
-.. 
note:: - - Running the following commands automatically downloads the *dev* version RAGFlow Docker image. To download and run a specified Docker version, update `RAGFLOW_VERSION` in **docker/.env** to the intended version, for example `RAGFLOW_VERSION=v0.7.0`, before running the following commands. -``` - +> [!NOTE] +> Running the following commands automatically downloads the *dev* version RAGFlow Docker image. To download and run a specified Docker version, update `RAGFLOW_VERSION` in **docker/.env** to the intended version, for example `RAGFLOW_VERSION=v0.7.0`, before running the following commands. ```bash $ export no_proxy=localhost,127.0.0.1 @@ -124,12 +102,8 @@ $ chmod +x ./entrypoint.sh $ docker compose up -d ``` - -```eval_rst -.. note:: - - The core image is about 9 GB in size and may take a while to load. -``` +> [!NOTE] +> The core image is about 9 GB in size and may take a while to load. Check the server status after having the server up and running: @@ -153,18 +127,13 @@ Upon successful deployment, you will see logs in the terminal similar to the fol INFO:werkzeug:Press CTRL+C to quit ``` - You can now open a browser and access the RAGflow web portal. With the default settings, simply enter `http://IP_OF_YOUR_MACHINE` (without the port number), as the default HTTP serving port `80` can be omitted. If RAGflow is deployed on the same machine as your browser, you can also access the web portal at `http://127.0.0.1` or `http://localhost`. ### 4. Using `RAGFlow` -```eval_rst -.. note:: - - For detailed information about how to use RAGFlow, visit the README of `RAGFlow official repository `_. - -``` +> [!NOTE] +> For detailed information about how to use RAGFlow, visit the README of [RAGFlow official repository](https://github.com/infiniflow/ragflow). #### Log-in @@ -195,11 +164,8 @@ If the connection is successful, you will see the model listed down **Show more -```eval_rst -.. note:: - - If you want to use an Ollama server hosted at a different URL, simply update the **Ollama Base URL** to the new URL and press the **OK** button again to re-confirm the connection to Ollama. -``` +> [!NOTE] +> If you want to use an Ollama server hosted at a different URL, simply update the **Ollama Base URL** to the new URL and press the **OK** button again to re-confirm the connection to Ollama. #### Create Knowledge Base @@ -248,24 +214,19 @@ Start new conversations by clicking **Chat** in the top navbar. On the left side, create a conversation by clicking **Create an Assistant**. Under **Assistant Setting**, give it a name and select your knowledge bases. - - - + + + Next, go to **Model Setting**, choose your model added by Ollama, and disable the **Max Tokens** toggle. Finally, click **OK** to start. -```eval_rst -.. tip:: +> [!TIP] +> Enabling the **Max Tokens** toggle may result in very short answers. - Enabling the **Max Tokens** toggle may result in very short answers. -``` - - - - - -
+ + + Input your questions into the **Message Resume Assistant** textbox at the bottom, and click the button on the right to get responses.
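If the assistant never answers, it is worth confirming that RAGFlow can actually reach the Ollama service before debugging anything else. The following check is a hedged sketch rather than part of the patched docs: it assumes Ollama's default port `11434`, and `your_ip` is a placeholder for the address entered as the **Ollama Base URL**.

```bash
# List the models known to the Ollama server from the machine running RAGFlow.
# A successful reply is JSON that should include the model pulled earlier
# (e.g. "qwen2:latest"); a connection error points back to the
# OLLAMA_HOST=0.0.0.0 setup described in section 1.
curl http://your_ip:11434/api/tags
```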