
BigDL-LLM Transformers INT4 Optimization for Large Language Model on Intel GPUs

You can use BigDL-LLM to run almost any Hugging Face Transformers model with INT4 optimizations on your laptop with an Intel GPU. This directory contains example scripts to help you quickly get started using BigDL-LLM to run some popular open-source models in the community. Each model has its own dedicated folder, where you can find detailed instructions on how to install and run it.
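The typical pattern shared by these examples can be sketched as follows. This is a minimal, hedged sketch: it assumes `bigdl-llm[xpu]` is installed and an Intel GPU is available, and the model path and prompt are placeholders rather than part of this README:

```python
# Minimal sketch of INT4 inference on an Intel GPU with BigDL-LLM.
# Assumes `bigdl-llm[xpu]` is installed; model path and prompt are placeholders.
import torch
from bigdl.llm.transformers import AutoModelForCausalLM
from transformers import AutoTokenizer

model_path = "meta-llama/Llama-2-7b-chat-hf"  # placeholder: any supported model
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# load_in_4bit=True applies BigDL-LLM's INT4 optimization at load time
model = AutoModelForCausalLM.from_pretrained(model_path,
                                             load_in_4bit=True,
                                             trust_remote_code=True)
model = model.to('xpu')  # move the optimized model to the Intel GPU

with torch.inference_mode():
    input_ids = tokenizer.encode("What is AI?", return_tensors="pt").to('xpu')
    output = model.generate(input_ids, max_new_tokens=32)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Each model folder contains a complete, tested variant of this pattern with model-specific details.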

Verified Hardware Platforms

  • Intel Arc™ A-Series Graphics
  • Intel Data Center GPU Flex Series
  • Intel Data Center GPU Max Series

To apply Intel GPU acceleration, there are several steps for tools installation and environment preparation.

Step 1, only Linux is supported for now; Ubuntu 22.04 is preferred.

Step 2, please refer to our driver installation guide for general-purpose GPU capabilities.

Note: IPEX 2.0.110+xpu requires the Intel GPU Driver version to be Stable 647.21.

Step 3, you also need to download and install the Intel® oneAPI Base Toolkit. oneMKL and the DPC++ compiler are required; the other components are optional.

Note: IPEX 2.0.110+xpu requires the Intel® oneAPI Base Toolkit version to be >= 2023.2.0.
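After installation, the oneAPI environment is typically activated by sourcing its setup script in each shell before running an example. The path below is the toolkit's default install prefix and is an assumption; adjust it if you installed elsewhere:

```shell
# Default oneAPI install prefix; adjust if you installed to a custom location
source /opt/intel/oneapi/setvars.sh
```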

Best Known Configuration on Linux

For better performance, it is recommended to set the following environment variables on Linux:

export USE_XETLA=OFF
export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1