Further mddocs fixes (#11386)
* Update mddocs for ragflow quickstart
* Fixes for docker guides mddocs
* Further fixes
This commit is contained in:
parent b30bf7648e
commit 54f9d07d8f
10 changed files with 82 additions and 122 deletions

@@ -26,8 +26,7 @@ docker pull intelanalytics/ipex-llm-inference-cpp-xpu:latest

Choose one of the following methods to start the container:

<details>
<summary>For <strong>Linux</strong>:</summary>
- For **Linux users**:

To map the `xpu` into the container, you need to specify `--device=/dev/dri` when booting the container. Select the device you are running (device type: Max, Flex, Arc, or iGPU), and change `/path/to/models` to mount the models. `bench_model` is used for quick benchmarking; if you want to benchmark, make sure the model is under `/path/to/models`.
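For orientation, a minimal Linux launch along these lines is sketched below; the image tag matches the one pulled above, while the container name, mount target, and the `DEVICE`/`bench_model` environment variables are illustrative assumptions rather than the guide's exact command:

```bash
# Minimal sketch: container name, mount target, and the DEVICE/bench_model
# environment variables are assumptions; see the guide's full command.
export DOCKER_IMAGE=intelanalytics/ipex-llm-inference-cpp-xpu:latest
export CONTAINER_NAME=ipex-llm-inference-cpp-xpu-container

sudo docker run -itd \
    --net=host \
    --device=/dev/dri \
    -v /path/to/models:/models \
    -e DEVICE=Arc \
    -e bench_model="model.gguf" \
    --name=$CONTAINER_NAME \
    --shm-size="16g" \
    $DOCKER_IMAGE
```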

@@ -47,10 +46,8 @@ Choose one of the following methods to start the container:

--shm-size="16g" \
$DOCKER_IMAGE
```
</details>

<details>
<summary>For <strong>Windows</strong>:</summary>
- For **Windows users**:

To map the `xpu` into the container, you need to specify `--device=/dev/dri` when booting the container, and change `/path/to/models` to mount the models. Then add `--privileged` and map `/usr/lib/wsl` into the container.
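A corresponding Windows sketch, under the same assumptions as the Linux one above, adds the `--privileged` flag and the `/usr/lib/wsl` mount described here:

```bash
# Minimal sketch for Windows (WSL2): names and paths are placeholders,
# --privileged and the /usr/lib/wsl mount are the additions called out in the text.
export DOCKER_IMAGE=intelanalytics/ipex-llm-inference-cpp-xpu:latest
export CONTAINER_NAME=ipex-llm-inference-cpp-xpu-container

sudo docker run -itd \
    --net=host \
    --device=/dev/dri \
    --privileged \
    -v /path/to/models:/models \
    -v /usr/lib/wsl:/usr/lib/wsl \
    --name=$CONTAINER_NAME \
    --shm-size="16g" \
    $DOCKER_IMAGE
```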

@@ -72,9 +69,6 @@ Choose one of the following methods to start the container:

--shm-size="16g" \
$DOCKER_IMAGE
```
</details>

---

After the container is booted, you can get into the container through `docker exec`.
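For example (assuming the container name chosen when the container was started):

```bash
docker exec -it $CONTAINER_NAME bash
```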

@@ -18,8 +18,7 @@ docker pull intelanalytics/ipex-llm-xpu:latest

Start ipex-llm-xpu Docker Container. Choose one of the following commands to start the container:

<details>
<summary>For <strong>Linux</strong>:</summary>
- For **Linux users**:

```bash
export DOCKER_IMAGE=intelanalytics/ipex-llm-xpu:latest

@@ -35,10 +34,8 @@ Start ipex-llm-xpu Docker Container. Choose one of the following commands to sta

-v $MODEL_PATH:/llm/models \
$DOCKER_IMAGE
```
</details>

<details>
<summary>For <strong>Windows WSL</strong>:</summary>
- For **Windows WSL users**:

```bash
#!/bin/bash

@@ -57,14 +54,12 @@ Start ipex-llm-xpu Docker Container. Choose one of the following commands to sta

-v /usr/lib/wsl:/usr/lib/wsl \
$DOCKER_IMAGE
```
</details>

---

Access the container:
```
docker exec -it $CONTAINER_NAME bash
```

To verify the device is successfully mapped into the container, run `sycl-ls` to check the result. On a machine with an Arc A770, the sample output is:

```bash

@@ -107,7 +102,6 @@ source ipex-llm-init --gpu --device <value>

python run.py
```

**Result Interpretation**

After the benchmarking is completed, you can find a CSV result file under the current folder. The columns `1st token avg latency (ms)` and `2+ avg latency (ms/token)` contain the main benchmark results. You can also check whether the column `actual input/output tokens` is consistent with the column `input/output tokens`, and whether the parameters you specified in `config.yaml` were successfully applied in the benchmarking.

@@ -135,10 +129,11 @@ Here is a demonstration:

We provide several PyTorch examples showing how to apply IPEX-LLM INT4 optimizations to models on Intel GPUs.

For example, if your model is Llama-2-7b-chat-hf and mounted on /llm/models, you can navigate to the /examples/llama2 directory and execute the following command to run the example:
```bash
cd /examples/<model_dir>
python ./generate.py --repo-id-or-model-path /llm/models/Llama-2-7b-chat-hf --prompt PROMPT --n-predict N_PREDICT
```

```bash
cd /examples/<model_dir>
python ./generate.py --repo-id-or-model-path /llm/models/Llama-2-7b-chat-hf --prompt PROMPT --n-predict N_PREDICT
```

Arguments info:

@@ -51,8 +51,7 @@ docker pull intelanalytics/ipex-llm-xpu:latest

Start ipex-llm-xpu Docker Container. Choose one of the following commands to start the container:

<details>
<summary>For <strong>Linux</strong>:</summary>
- For **Linux users**:

```bash

@@ -69,10 +68,8 @@ Start ipex-llm-xpu Docker Container. Choose one of the following commands to sta

-v $MODEL_PATH:/llm/models \
$DOCKER_IMAGE
```
</details>

<details>
<summary>For <strong>Windows WSL</strong>:</summary>
- For **Windows WSL users**:

```bash
#!/bin/bash

@@ -91,9 +88,7 @@ Start ipex-llm-xpu Docker Container. Choose one of the following commands to sta

-v /usr/lib/wsl:/usr/lib/wsl \
$DOCKER_IMAGE
```
</details>

---

## Run/Develop Pytorch Examples

@@ -108,10 +103,11 @@ Now you are in a running Docker Container, Open folder `/ipex-llm/python/llm/exa

In this folder, we provide several PyTorch examples showing how to apply IPEX-LLM INT4 optimizations to models on Intel GPUs.

For example, if your model is Llama-2-7b-chat-hf and mounted on /llm/models, you can navigate to the llama2 directory and execute the following command to run the example:
```bash
cd <model_dir>
python ./generate.py --repo-id-or-model-path /llm/models/Llama-2-7b-chat-hf --prompt PROMPT --n-predict N_PREDICT
```

```bash
cd <model_dir>
python ./generate.py --repo-id-or-model-path /llm/models/Llama-2-7b-chat-hf --prompt PROMPT --n-predict N_PREDICT
```

Arguments info:

@@ -35,9 +35,9 @@ Follow the instructions in [this guide](https://docs.microsoft.com/en-us/windows

#### Enable Docker integration with WSL2

Open **Docker desktop**, and select `Settings`->`Resources`->`WSL integration`->turn on `Ubuntu` button->`Apply & restart`.
<a href="https://llm-assets.readthedocs.io/en/latest/_images/docker_desktop_new.png">
<img src="https://llm-assets.readthedocs.io/en/latest/_images/docker_desktop_new.png" width=100%; />
</a>
<a href="https://llm-assets.readthedocs.io/en/latest/_images/docker_desktop_new.png">
<img src="https://llm-assets.readthedocs.io/en/latest/_images/docker_desktop_new.png" width=100%; />
</a>

> [!TIP]

@@ -30,7 +30,7 @@ You could choose to use [PyTorch API](./optimize_model.md) or [`transformers`-st

model = model.to('xpu') # Important after obtaining the optimized model
```

> **Tip**"
> **Tip**:
>
> When running LLMs on Intel iGPUs for Windows users, we recommend setting `cpu_embedding=True` in the `optimize_model` function. This will allow the memory-intensive embedding layer to utilize the CPU instead of iGPU.
>
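As a hedged sketch of that tip (assuming a model has already been loaded with Hugging Face `transformers`; `optimize_model` and `cpu_embedding` are the names given above):

```python
# Sketch: PyTorch API path with the embedding layer kept on CPU.
# Assumes `model` is a transformers model loaded beforehand.
from ipex_llm import optimize_model

model = optimize_model(model, cpu_embedding=True)
model = model.to('xpu')
```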

@@ -284,7 +284,9 @@ Now let's play with a real LLM. We'll be using the [phi-1.5](https://huggingface

print(output_str)
```

> **Note**: When running LLMs on Intel iGPUs with limited memory size, we recommend setting `cpu_embedding=True` in the `from_pretrained` function.
> **Note**:
>
> When running LLMs on Intel iGPUs with limited memory size, we recommend setting `cpu_embedding=True` in the `from_pretrained` function.
> This will allow the memory-intensive embedding layer to utilize the CPU instead of GPU.
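A minimal sketch of such a `from_pretrained` call is shown below; the repo id, `load_in_4bit=True`, and `trust_remote_code=True` are illustrative assumptions, while `cpu_embedding=True` is the option named in the note:

```python
# Sketch: transformers-style API with INT4 load and the embedding layer on CPU.
from ipex_llm.transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-1_5",   # assumed repo id, matching the example model above
    load_in_4bit=True,
    cpu_embedding=True,
    trust_remote_code=True,
)
model = model.to('xpu')
```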

- Step 5. Run `demo.py` within the activated Python environment using the following command:

@@ -102,7 +102,9 @@ You can verify if `ipex-llm` is successfully installed following below steps.

torch.Size([1, 1, 40, 40])
```

> **Tip**: If you encounter any problem, please refer to [here](../Overview/install_gpu.md#troubleshooting) for help.
> **Tip**:
>
> If you encounter any problem, please refer to [here](../Overview/install_gpu.md#troubleshooting) for help.

- To exit the Python interactive shell, simply press Ctrl+Z then press Enter (or input `exit()` then press Enter).

@@ -239,7 +241,9 @@ Now let's play with a real LLM. We'll be using the [Qwen-1.8B-Chat](https://hugg

output_str = tokenizer.decode(output[0], skip_special_tokens=True)
print(output_str)
```
> **Note**: Please note that the repo id on ModelScope may be different from Hugging Face for some models.
> **Note**:
>
> Please note that the repo id on ModelScope may be different from Hugging Face for some models.

> [!NOTE]
> When running LLMs on Intel iGPUs with limited memory size, we recommend setting `cpu_embedding=True` in the `from_pretrained` function.

@@ -135,7 +135,9 @@ Before running, you should download or copy community GGUF model to your current

./main -m mistral-7b-instruct-v0.1.Q4_K_M.gguf -n 32 --prompt "Once upon a time, there existed a little girl who liked to have adventures. She wanted to go to places and meet new people, and have fun" -t 8 -e -ngl 33 --color
```

> **Note**: For more details about meaning of each parameter, you can use `./main -h`.
> **Note**:
>
> For more details about meaning of each parameter, you can use `./main -h`.

- For **Windows users**:

@@ -145,7 +147,9 @@ Before running, you should download or copy community GGUF model to your current

main -m mistral-7b-instruct-v0.1.Q4_K_M.gguf -n 32 --prompt "Once upon a time, there existed a little girl who liked to have adventures. She wanted to go to places and meet new people, and have fun" -t 8 -e -ngl 33 --color
```

> **Note**: For more details about meaning of each parameter, you can use `main -h`.
> **Note**:
>
> For more details about meaning of each parameter, you can use `main -h`.

#### Sample Output
```

@@ -72,7 +72,9 @@ Run below commands to start the service in another terminal:

PGPT_PROFILES=ollama make run
```

> **Note**: Setting `PGPT_PROFILES=ollama` will load the configuration from `settings.yaml` and `settings-ollama.yaml`.
> **Note**:
>
> Setting `PGPT_PROFILES=ollama` will load the configuration from `settings.yaml` and `settings-ollama.yaml`.

- For **Windows users**:

@@ -82,7 +84,9 @@ Run below commands to start the service in another terminal:

make run
```

> **Note**: Setting `PGPT_PROFILES=ollama` will load the configuration from `settings.yaml` and `settings-ollama.yaml`.
> **Note**:
>
> Setting `PGPT_PROFILES=ollama` will load the configuration from `settings.yaml` and `settings-ollama.yaml`.

Upon successful deployment, you will see logs in the terminal similar to the following:

@@ -5,8 +5,7 @@

*See the demo of RAGFlow running Qwen2:7B on Intel Arc A770 below.*

<video src="https://llm-assets.readthedocs.io/en/latest/_images/ragflow-record.mp4" width="100%" controls></video>

[](https://llm-assets.readthedocs.io/en/latest/_images/ragflow-record.mp4)

## Quickstart

@@ -17,64 +16,47 @@

- Disk >= 50 GB
- Docker >= 24.0.0 & Docker Compose >= v2.26.1

### 1. Install and Start `Ollama` Service on Intel GPU

Follow the steps in [Run Ollama with IPEX-LLM on Intel GPU Guide](./ollama_quickstart.md) to install and run Ollama on Intel GPU. Ensure that `ollama serve` is running correctly and can be accessed through a local URL (e.g., `http://127.0.0.1:11434`) or a remote URL (e.g., `http://your_ip:11434`).
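As a quick sanity check (a sketch assuming the default port `11434`), the root endpoint reports whether the service is up:

```bash
# Should print "Ollama is running" if the service is reachable
curl http://127.0.0.1:11434
```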

> [!IMPORTANT]
> If the `RAGFlow` is not deployed on the same machine where Ollama is running (which means `RAGFlow` needs to connect to a remote Ollama service), you must configure the Ollama service to accept connections from any IP address. To achieve this, set or export the environment variable `OLLAMA_HOST=0.0.0.0` before executing the command `ollama serve`.

```eval_rst
.. important::

If the `RAGFlow` is not deployed on the same machine where Ollama is running (which means `RAGFlow` needs to connect to a remote Ollama service), you must configure the Ollama service to accept connections from any IP address. To achieve this, set or export the environment variable `OLLAMA_HOST=0.0.0.0` before executing the command `ollama serve`.

.. tip::

If your local LLM is running on Intel Arc™ A-Series Graphics with Linux OS (Kernel 6.2), it is recommended to additionally set the following environment variable for optimal performance before executing `ollama serve`:

.. code-block:: bash

export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
```
> [!TIP]
> If your local LLM is running on Intel Arc™ A-Series Graphics with Linux OS (Kernel 6.2), it is recommended to additionally set the following environment variable for optimal performance before executing `ollama serve`:
>
> ```bash
> export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
> ```

### 2. Pull Model

Now we need to pull a model for RAG using Ollama. Here we use the [Qwen/Qwen2-7B](https://huggingface.co/Qwen/Qwen2-7B) model as an example. Open a new terminal window and run the following command to pull [`qwen2:latest`](https://ollama.com/library/qwen2).

- For **Linux users**:

```eval_rst
.. tabs::
.. tab:: Linux
```bash
export no_proxy=localhost,127.0.0.1
./ollama pull qwen2:latest
```

.. code-block:: bash
- For **Windows users**:

export no_proxy=localhost,127.0.0.1
./ollama pull qwen2:latest
Please run the following command in Miniforge or Anaconda Prompt.

.. tab:: Windows
```cmd
set no_proxy=localhost,127.0.0.1
ollama pull qwen2:latest
```

Please run the following command in Miniforge or Anaconda Prompt.

.. code-block:: cmd

set no_proxy=localhost,127.0.0.1
ollama pull qwen2:latest

.. seealso::

Besides Qwen2, there are other LLM models you might want to explore, such as Llama3, Phi3, Mistral, etc. You can find all available models in the `Ollama model library <https://ollama.com/library>`_. Simply search for the model, pull it in a similar manner, and give it a try.
```
> [!TIP]
> Besides Qwen2, there are other LLM models you might want to explore, such as Llama3, Phi3, Mistral, etc. You can find all available models in the [Ollama model library](https://ollama.com/library). Simply search for the model, pull it in a similar manner, and give it a try.

### 3. Start `RAGFlow` Service

```eval_rst
.. note::

The steps in section 3 are verified on Linux systems only.
```

> [!NOTE]
> The steps in section 3 are verified on Linux systems only.

#### 3.1 Download `RAGFlow`

@@ -110,12 +92,8 @@ vm.max_map_count=262144

Build the pre-built Docker images and start up the server:

```eval_rst
.. note::

Running the following commands automatically downloads the *dev* version RAGFlow Docker image. To download and run a specified Docker version, update `RAGFLOW_VERSION` in **docker/.env** to the intended version, for example `RAGFLOW_VERSION=v0.7.0`, before running the following commands.
```

> [!NOTE]
> Running the following commands automatically downloads the *dev* version RAGFlow Docker image. To download and run a specified Docker version, update `RAGFLOW_VERSION` in **docker/.env** to the intended version, for example `RAGFLOW_VERSION=v0.7.0`, before running the following commands.
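For illustration, pinning the version would be a one-line change in `docker/.env` (using the example version mentioned above):

```bash
# docker/.env — pin RAGFlow to a specific release instead of the dev tag
RAGFLOW_VERSION=v0.7.0
```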

```bash
$ export no_proxy=localhost,127.0.0.1

@@ -124,12 +102,8 @@ $ chmod +x ./entrypoint.sh

$ docker compose up -d
```

```eval_rst
.. note::

The core image is about 9 GB in size and may take a while to load.
```

> [!NOTE]
> The core image is about 9 GB in size and may take a while to load.

Check the server status after having the server up and running:

@@ -153,18 +127,13 @@ Upon successful deployment, you will see logs in the terminal similar to the fol

INFO:werkzeug:Press CTRL+C to quit
```

You can now open a browser and access the RAGFlow web portal. With the default settings, simply enter `http://IP_OF_YOUR_MACHINE` (without the port number), as the default HTTP serving port `80` can be omitted. If RAGFlow is deployed on the same machine as your browser, you can also access the web portal at `http://127.0.0.1` or `http://localhost`.

### 4. Using `RAGFlow`

```eval_rst
.. note::

For detailed information about how to use RAGFlow, visit the README of the `RAGFlow official repository <https://github.com/infiniflow/ragflow>`_.
```

> [!NOTE]
> For detailed information about how to use RAGFlow, visit the README of the [RAGFlow official repository](https://github.com/infiniflow/ragflow).

#### Log-in

@@ -195,11 +164,8 @@ If the connection is successful, you will see the model listed down **Show more

<img src="https://llm-assets.readthedocs.io/en/latest/_images/ragflow-add-ollama2.png" width="100%" />
</a>

```eval_rst
.. note::

If you want to use an Ollama server hosted at a different URL, simply update the **Ollama Base URL** to the new URL and press the **OK** button again to re-confirm the connection to Ollama.
```

> [!NOTE]
> If you want to use an Ollama server hosted at a different URL, simply update the **Ollama Base URL** to the new URL and press the **OK** button again to re-confirm the connection to Ollama.

#### Create Knowledge Base

@@ -248,24 +214,19 @@ Start new conversations by clicking **Chat** in the top navbar.

On the left side, create a conversation by clicking **Create an Assistant**. Under **Assistant Setting**, give it a name and select your knowledge bases.

<a href="https://llm-assets.readthedocs.io/en/latest/_images/ragflow-chat.png" target="_blank">
<img src="https://llm-assets.readthedocs.io/en/latest/_images/ragflow-chat.png" width="100%" />
</a>
<a href="https://llm-assets.readthedocs.io/en/latest/_images/ragflow-chat.png" target="_blank">
<img src="https://llm-assets.readthedocs.io/en/latest/_images/ragflow-chat.png" width="100%" />
</a>

Next, go to **Model Setting**, choose your model added by Ollama, and disable the **Max Tokens** toggle. Finally, click **OK** to start.

```eval_rst
.. tip::
> [!TIP]
> Enabling the **Max Tokens** toggle may result in very short answers.

Enabling the **Max Tokens** toggle may result in very short answers.
```

<a href="https://llm-assets.readthedocs.io/en/latest/_images/ragflow-chat2.png" target="_blank">
<img src="https://llm-assets.readthedocs.io/en/latest/_images/ragflow-chat2.png" width="100%" />
</a>

<br/>
<a href="https://llm-assets.readthedocs.io/en/latest/_images/ragflow-chat2.png" target="_blank">
<img src="https://llm-assets.readthedocs.io/en/latest/_images/ragflow-chat2.png" width="100%" />
</a>

Input your questions into the **Message Resume Assistant** textbox at the bottom, and click the button on the right to get responses.