Small update for GPU configuration related doc (#10770)

* Small doc fix for dGPU type name

* Further fixes

* Further fix

* Small fix
Yuwen Hu 2024-04-15 18:43:29 +08:00 committed by GitHub
parent 73a67804a4
commit 1abd77507e
4 changed files with 17 additions and 6 deletions


@@ -135,7 +135,7 @@ Please also set the following environment variable if you would like to run LLMs
set SYCL_CACHE_PERSISTENT=1
set BIGDL_LLM_XMX_DISABLED=1
-.. tab:: Intel Arc™ A-Series
+.. tab:: Intel Arc™ A-Series Graphics
.. code-block:: cmd
@@ -581,6 +581,12 @@ To use GPU acceleration on Linux, several environment variables are required or
```
```eval_rst
.. note::
For **the first time** that **each model** runs on Intel iGPU/Intel Arc™ A300-Series or Pro A60, it may take several minutes to compile.
```
### Known issues
#### 1. Potential suboptimal performance with Linux kernel 6.2.0


@@ -67,7 +67,7 @@ IPEX-LLM currently supports the Ubuntu 20.04 operating system and later, and sup
sudo tee /etc/apt/sources.list.d/intel-gpu-jammy.list
```
-> <img src="https://llm-assets.readthedocs.io/en/latest/_images/wget.png" width=100%; />
+<img src="https://llm-assets.readthedocs.io/en/latest/_images/wget.png" width=100%; />
* Install drivers
@@ -89,7 +89,7 @@ IPEX-LLM currently supports the Ubuntu 20.04 operating system and later, and sup
sudo reboot
```
-> <img src="https://llm-assets.readthedocs.io/en/latest/_images/gawk.png" width=100%; />
+<img src="https://llm-assets.readthedocs.io/en/latest/_images/gawk.png" width=100%; />
* Configure permissions
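For reference, a minimal Python sketch along these lines can confirm that the GPU render nodes are visible and accessible after the driver install, reboot, and permission steps above; the `/dev/dri` node names and the render-group hint are assumptions that vary by system.

```python
# Hedged sanity check: after installing the driver, rebooting, and configuring
# permissions, the Intel GPU normally exposes render nodes under /dev/dri.
# Node names (renderD128, renderD129, ...) differ per machine.
import glob
import os

nodes = glob.glob("/dev/dri/renderD*")
if not nodes:
    print("No render nodes found - the GPU driver may not be loaded yet.")
for node in nodes:
    ok = os.access(node, os.R_OK | os.W_OK)
    status = "accessible" if ok else "no permission (check your render group membership)"
    print(f"{node}: {status}")
```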
@@ -229,6 +229,12 @@ To use GPU acceleration on Linux, several environment variables are required or
```
```eval_rst
.. seealso::
Please refer to `this guide <../Overview/install_gpu.html#id5>`_ for more details regarding runtime configuration.
```
## A Quick Example
Now let's play with a real LLM. We'll be using the [phi-1.5](https://huggingface.co/microsoft/phi-1_5) model, a 1.3 billion parameter LLM, for this demonstration. Follow the steps below to set up and run the model, and observe how it responds to the prompt "What is AI?".
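For reference, the quick example described above can be sketched roughly as follows. This is only an illustration, assuming an ipex-llm release that exposes the `ipex_llm.transformers` import path (older releases used `bigdl.llm.transformers`) and that the runtime environment above is already configured.

```python
# Minimal sketch of the phi-1.5 "What is AI?" demo; exact APIs may differ
# between ipex-llm releases.
import torch
from transformers import AutoTokenizer
from ipex_llm.transformers import AutoModelForCausalLM

model_path = "microsoft/phi-1_5"

# Load the model with 4-bit weight compression and move it to the Intel GPU.
model = AutoModelForCausalLM.from_pretrained(
    model_path, load_in_4bit=True, trust_remote_code=True
).to("xpu")
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

prompt = "What is AI?"
input_ids = tokenizer.encode(prompt, return_tensors="pt").to("xpu")
with torch.inference_mode():
    output = model.generate(input_ids, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```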


@@ -125,7 +125,7 @@ You can verify if `ipex-llm` is successfully installed by following the steps below.
```eval_rst
.. seealso::
-For other Intel dGPU Series, please refer to `this guide <https://ipex-llm.readthedocs.io/en/latest/doc/LLM/Overview/install_gpu.html#runtime-configuration>`_ for more details regarding runtime configuration.
+For other Intel dGPU Series, please refer to `this guide <../Overview/install_gpu.html#runtime-configuration>`_ for more details regarding runtime configuration.
```
### Step 2: Run Python Code
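For reference, one hedged way to carry out the verification step mentioned above is a short snippet like the one below; it assumes `intel_extension_for_pytorch` (which registers the `xpu` device in PyTorch) is installed alongside ipex-llm.

```python
# Sketch of an installation check: confirms PyTorch can see the Intel GPU.
import torch
import intel_extension_for_pytorch as ipex  # noqa: F401  # registers the 'xpu' device

if torch.xpu.is_available():
    print("XPU device:", torch.xpu.get_device_name(0))
else:
    print("No XPU device visible - revisit the driver and oneAPI setup.")
```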


@@ -59,11 +59,10 @@ Configure oneAPI variables by running the following command in **Anaconda Prompt
```
```cmd
call "C:\Program Files (x86)\Intel\oneAPI\setvars.bat"
+set SYCL_CACHE_PERSISTENT=1
```
If you're running on iGPU, set additional environment variables by running the following commands:
```cmd
-set SYCL_CACHE_PERSISTENT=1
set BIGDL_LLM_XMX_DISABLED=1
```
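For reference, a launcher-style Python sketch of the configuration above is shown below; `use_igpu` and `demo.py` are hypothetical placeholders, and it assumes the oneAPI variables were already set via `setvars.bat` in the current shell.

```python
# Hedged sketch of launching an ipex-llm script on Windows with the variables above:
# SYCL_CACHE_PERSISTENT applies to every supported GPU type, while
# BIGDL_LLM_XMX_DISABLED is only needed on an Intel iGPU.
import os
import subprocess

use_igpu = True  # set to False on an Intel Arc(TM) A-Series Graphics dGPU

env = dict(os.environ)
env["SYCL_CACHE_PERSISTENT"] = "1"
if use_igpu:
    env["BIGDL_LLM_XMX_DISABLED"] = "1"

# demo.py is a placeholder for your own ipex-llm script.
subprocess.run(["python", "demo.py"], env=env, check=True)
```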