BigDL-LLM INT4 Optimization for Large Language Model on Intel GPUs

You can use BigDL-LLM to run almost every Hugging Face Transformers model with INT4 optimizations on your laptop with Intel GPUs. Moreover, you can also use the optimize_model API to accelerate general PyTorch models on Intel GPUs.
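A minimal sketch of what this looks like in practice, assuming bigdl-llm is installed with XPU support and an Intel GPU is available; the model path and prompt below are placeholders, not part of this README:

```python
# Sketch: INT4 inference with BigDL-LLM on an Intel GPU ("xpu" device).
# Assumptions: bigdl-llm[xpu] is installed and an XPU device is visible;
# model_path and the prompt are placeholders.
import torch
import intel_extension_for_pytorch as ipex  # registers the 'xpu' device
from bigdl.llm.transformers import AutoModelForCausalLM
from transformers import AutoTokenizer

model_path = "meta-llama/Llama-2-7b-chat-hf"  # any verified model above

# load_in_4bit=True applies BigDL-LLM's INT4 optimization at load time
model = AutoModelForCausalLM.from_pretrained(model_path,
                                             load_in_4bit=True,
                                             trust_remote_code=True)
model = model.to("xpu")

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
input_ids = tokenizer.encode("What is AI?", return_tensors="pt").to("xpu")

with torch.inference_mode():
    output = model.generate(input_ids, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

For a general PyTorch model that is not a Transformers checkpoint, the optimize_model API plays the same role: import it with `from bigdl.llm import optimize_model`, wrap the loaded model with `model = optimize_model(model)`, and then move it to "xpu".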

Verified models

Model Example
Baichuan link
Baichuan2 link
ChatGLM2 link
Chinese Llama2 link
Falcon link
GPT-J link
InternLM link
LLaMA 2 link
MPT link
Qwen link
StarCoder link
Whisper link

Verified Hardware Platforms

  • Intel Arc™ A-Series Graphics
  • Intel Data Center GPU Flex Series

To apply Intel GPU acceleration, there are several steps for tools installation and environment preparation.

Step 1, only Linux is supported for now; Ubuntu 22.04 is preferred.

Step 2, please refer to our driver installation instructions for general purpose GPU capabilities.

Note: IPEX 2.0.110+xpu requires Intel GPU driver version Stable 647.21.

Step 3, you also need to download and install the Intel® oneAPI Base Toolkit. oneMKL and the DPC++ compiler are required; the other components are optional.
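With the driver and oneAPI Base Toolkit in place, BigDL-LLM itself is installed from pip. A typical setup, assuming conda is available (the wheel index URL reflects the BigDL-LLM XPU packaging at the time of writing and may change):

```shell
# Create an isolated Python environment (assumption: conda is installed)
conda create -n llm python=3.9 -y
conda activate llm

# Install BigDL-LLM with XPU (Intel GPU) support; the extra index hosts
# the matching IPEX 2.0.110+xpu wheels
pip install --pre --upgrade bigdl-llm[xpu] -f https://developer.intel.com/ipex-whl-stable-xpu
```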

Note: IPEX 2.0.110+xpu requires Intel® oneAPI Base Toolkit version >= 2023.2.0.

Best Known Configuration on Linux

For better performance, it is recommended to set the following environment variables on Linux:

export USE_XETLA=OFF
export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
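Putting the runtime environment together, a typical shell session before running any example sources oneAPI and then sets the variables above; the setvars.sh path assumes a default oneAPI install under /opt/intel/oneapi:

```shell
# Load oneMKL and the DPC++ runtime (default oneAPI install path assumed)
source /opt/intel/oneapi/setvars.sh

# Recommended performance settings from this README
export USE_XETLA=OFF
export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1

# Quick sanity check that PyTorch can see the XPU device
python -c "import torch; import intel_extension_for_pytorch; print(torch.xpu.is_available())"
```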