ipex-llm/python/llm/example/GPU/PyTorch-Models/Model

Model example folders in this directory: aquila2, baichuan, baichuan2, chatglm2, chatglm3, codellama, distil-whisper, dolly-v1, dolly-v2, flan-t5, llama2, llava, mistral, phi-1_5, qwen-vl, replit, starcoder

BigDL-LLM INT4 Optimization for Large Language Model on Intel GPUs

You can use the optimize_model API to accelerate general PyTorch models on Intel GPUs. This directory contains example scripts to help you quickly get started using BigDL-LLM to run some popular open-source models in the community. Each model has its own dedicated folder, where you can find detailed instructions on how to install and run it.
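For reference, the typical flow looks like the minimal sketch below. It assumes bigdl-llm with XPU support and intel_extension_for_pytorch are already installed; the model ID, prompt, and generation arguments are placeholders, and each model folder contains the verified script for that model.

from transformers import AutoModelForCausalLM, AutoTokenizer
import intel_extension_for_pytorch as ipex  # imported for its side effect: registers the 'xpu' device
from bigdl.llm import optimize_model

# Placeholder model ID; replace with the model you actually want to run
model_path = "meta-llama/Llama-2-7b-chat-hf"
model = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True, low_cpu_mem_usage=True)
model = optimize_model(model)   # apply BigDL-LLM INT4 optimization
model = model.to('xpu')         # move the optimized model to the Intel GPU

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
input_ids = tokenizer.encode("What is AI?", return_tensors="pt").to('xpu')
output = model.generate(input_ids, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))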

Verified Hardware Platforms

  • Intel Arc™ A-Series Graphics
  • Intel Data Center GPU Flex Series
  • Intel Data Center GPU Max Series

To apply Intel GPU acceleration, there are several steps for tools installation and environment preparation.

Step 1, only Linux is supported for now; Ubuntu 22.04 is preferred.

Step 2, please refer to our driver installation instructions for general purpose GPU capabilities.

Note: IPEX 2.0.110+xpu requires the Intel GPU driver version to be Stable 647.21.

Step 3, you also need to download and install the Intel® oneAPI Base Toolkit. oneMKL and the DPC++ compiler are required; the other components are optional.

Note: IPEX 2.0.110+xpu requires Intel® oneAPI Base Toolkit version >= 2023.2.0.

Best Known Configuration on Linux

For better performance, it is recommended to set the following environment variables on Linux:

export USE_XETLA=OFF
export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
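
Once the driver, oneAPI Base Toolkit, and environment variables are in place, a quick sanity check can confirm that the Intel GPU is visible. This is a minimal sketch; it assumes PyTorch and intel_extension_for_pytorch for XPU are already installed in your Python environment.

import torch
import intel_extension_for_pytorch as ipex  # imported for its side effect: registers the 'xpu' device

if torch.xpu.is_available():
    # Expect the name of your Arc / Flex / Max device if the setup succeeded
    print("XPU device:", torch.xpu.get_device_name(0))
else:
    print("No XPU device detected; re-check the driver and oneAPI setup")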