Enhance LLM GPU installation document (#9828)

* Improve gpu install doc

* Add troubleshooting - setvars.sh not done properly.

* Further improvements

* 2024.x.x -> 2024.0

* Fixes

* Fix Install BigDL-LLM From Wheel : bigdl-llm[xpu_2.0]

* Remove "export USE_XETLA=OFF" for Max GPU
Cheen Hau, 俊豪 2024-01-09 16:30:50 +08:00 committed by GitHub
parent aebed4b7bc
commit b2aa267f50
2 changed files with 31 additions and 16 deletions


@@ -84,9 +84,9 @@ set BIGDL_LLM_XMX_DISABLED=1
The first time that **each model** runs on **iGPU**, it may take several minutes to compile.
```
### Troubleshooting
<!-- ### Troubleshooting -->
todo
<!-- todo -->
## Linux
@@ -114,10 +114,10 @@ BigDL-LLM for GPU supports on Linux has been verified on:
.. tabs::
.. tab:: PyTorch 2.1
To enable BigDL-LLM for Intel GPUs with PyTorch 2.1, here're several prerequisite steps for tools installation and environment preparation:
To enable BigDL-LLM for Intel GPUs with PyTorch 2.1, here are several prerequisite steps for tools installation and environment preparation:
* Step 1: Install Intel GPU Driver version >= stable_775_20_20231219. Highly recommend installing the latest version of intel-i915-dkms using apt.
* Step 1: Install Intel GPU Driver version >= stable_775_20_20231219. We highly recommend installing the latest version of intel-i915-dkms using apt.
.. seealso::
@@ -195,7 +195,7 @@ We recommend using [miniconda](https://docs.conda.io/en/latest/miniconda.html) t
### Install BigDL-LLM From Wheel
If you encounter network issues when installing IPEX, you can also install BigDL-LLM dependencies for Intel XPU from source achieves. First you need to download and install torch/torchvision/ipex from wheels listed below before installing `bigdl-llm`.
If you encounter network issues when installing IPEX, you can also install BigDL-LLM dependencies for Intel XPU from source archives. First you need to download and install torch/torchvision/ipex from wheels listed below before installing `bigdl-llm`.
```eval_rst
.. tabs::
@@ -218,7 +218,7 @@ If you encounter network issues when installing IPEX, you can also install BigDL
pip install intel_extension_for_pytorch-2.1.10+xpu-cp39-cp39-linux_x86_64.whl
# install bigdl-llm for Intel GPU
pip install --pre --upgrade bigdl-llm[xpu_2.1]
pip install --pre --upgrade bigdl-llm[xpu]
.. tab:: PyTorch 2.0
@@ -239,7 +239,7 @@ If you encounter network issues when installing IPEX, you can also install BigDL
pip install intel_extension_for_pytorch-2.0.110+xpu-cp39-cp39-linux_x86_64.whl
# install bigdl-llm for Intel GPU
pip install --pre --upgrade bigdl-llm[xpu]
pip install --pre --upgrade bigdl-llm[xpu_2.0]
```
@@ -255,9 +255,10 @@ To use GPU acceleration on Linux, several environment variables are required or
.. code-block:: bash
# configures OneAPI environment variables
# Required step. Configure OneAPI environment variables
source /opt/intel/oneapi/setvars.sh
# Recommended Environment Variables
export USE_XETLA=OFF
export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
@@ -267,9 +268,10 @@ To use GPU acceleration on Linux, several environment variables are required or
.. code-block:: bash
# configures OneAPI environment variables
# Required step. Configure OneAPI environment variables
source /opt/intel/oneapi/setvars.sh
# Recommended Environment Variables
export LD_PRELOAD=${LD_PRELOAD}:${CONDA_PREFIX}/lib/libtcmalloc.so
export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
export ENABLE_SDP_FUSION=1
@@ -280,15 +282,15 @@ To use GPU acceleration on Linux, several environment variables are required or
### Known issues
#### 1. Ubuntu 22.04 and Linux kernel 6.2.0 may cause performance bad (driver version < stable_775_20_20231219)
#### 1. Potential suboptimal performance with Linux kernel 6.2.0
For driver version < stable_775_20_20231219, the performance on Linux kernel 6.2.0 is worse than Linux kernel 5.19.0. You can use `sudo apt update && sudo apt install -y intel-i915-dkms intel-fw-gpu` to install the latest driver to solve this issue (need reboot OS).
For Ubuntu 22.04 and driver version < stable_775_20_20231219, performance on Linux kernel 6.2.0 is worse than on Linux kernel 5.19.0. You can install the latest driver with `sudo apt update && sudo apt install -y intel-i915-dkms intel-fw-gpu` to solve this issue (a reboot is required).
Tip: you can check the installed intel-i915-dkms version with `sudo apt list --installed | grep intel-i915-dkms`; it should be the latest available and >= `1.23.9.11.231003.15+i19-1`.
#### 2. Driver installation meet unmet dependencies: intel-i915-dkms
#### 2. Driver installation unmet dependencies error: intel-i915-dkms
The last apt install command of the driver installation may get following error:
The last apt install command of the driver installation may produce the following error:
```
The following packages have unmet dependencies:
@@ -296,8 +298,22 @@ The following packages have unmet dependencies:
Conflicts: intel-platform-vsec-dkms
```
You can use `sudo apt install -y intel-i915-dkms intel-fw-gpu` to instead. As the intel-platform-cse-dkms and intel-platform-vsec-dkms are already provided by intel-i915-dkms.
You can run `sudo apt install -y intel-i915-dkms intel-fw-gpu` instead, as intel-platform-cse-dkms and intel-platform-vsec-dkms are already provided by intel-i915-dkms.
### Troubleshooting
todo
#### 1. Cannot open shared object file: No such file or directory
Errors where a libmkl shared library cannot be found, for example:
```
OSError: libmkl_intel_lp64.so.2: cannot open shared object file: No such file or directory
```
```
Error: libmkl_sycl_blas.so.4: cannot open shared object file: No such file or directory
```
The reason for such errors is that oneAPI has not been initialized properly before running BigDL-LLM code or before importing the IPEX package.
* Step 1: Make sure you source the setvars.sh of the oneAPI Base Toolkit before running BigDL-LLM code.
* Step 2: Make sure you install matching versions of BigDL-LLM/PyTorch/IPEX and the oneAPI Base Toolkit: BigDL-LLM with PyTorch 2.1 should be used with oneAPI Base Toolkit version 2024.0, and BigDL-LLM with PyTorch 2.0 with oneAPI Base Toolkit version 2023.2.
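The version-matching rule in Step 2 can be sketched as a small check (a hypothetical helper, not part of BigDL-LLM; the mapping simply restates the pairings above):
```python
# Hypothetical helper (not part of BigDL-LLM): map an installed PyTorch
# version to the oneAPI Base Toolkit release the doc above pairs it with.
REQUIRED_ONEAPI = {
    "2.1": "2024.0",  # bigdl-llm[xpu] is built against PyTorch 2.1
    "2.0": "2023.2",  # bigdl-llm[xpu_2.0] is built against PyTorch 2.0
}

def required_oneapi(torch_version: str) -> str:
    """Return the expected oneAPI Base Toolkit version for a torch version string."""
    # Keep only the major.minor series, e.g. "2.1.0a0+cxx11.abi" -> "2.1"
    series = ".".join(torch_version.split(".")[:2])
    if series not in REQUIRED_ONEAPI:
        raise ValueError(f"No known oneAPI pairing for PyTorch {series}")
    return REQUIRED_ONEAPI[series]

print(required_oneapi("2.1.0a0+cxx11.abi"))  # -> 2024.0
```
If the printed pairing does not match the oneAPI version you sourced via setvars.sh, reinstall the matching toolkit before retrying.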


@@ -1,6 +1,5 @@
source /opt/intel/oneapi/setvars.sh
export LD_PRELOAD=${LD_PRELOAD}:${CONDA_PREFIX}/lib/libtcmalloc.so
export USE_XETLA=OFF
export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
export ENABLE_SDP_FUSION=1