
BigDL-LLM Examples on Intel GPU

This folder contains examples of running BigDL-LLM on Intel GPU:

  • HF-Transformers-AutoModels: running any Hugging Face Transformers model on BigDL-LLM (using the standard AutoModel APIs)
  • QLoRA-FineTuning: running QLoRA fine-tuning using BigDL-LLM on Intel GPUs
  • vLLM-Serving: running the vLLM serving framework on Intel GPUs (with BigDL-LLM low-bit optimized models)
  • Deepspeed-AutoTP: running distributed inference using DeepSpeed AutoTP (with BigDL-LLM low-bit optimized models) on Intel GPUs
  • PyTorch-Models: running any PyTorch model on BigDL-LLM (with "one-line code change")

System Support

Hardware:

  • Intel Arc™ A-Series Graphics
  • Intel Data Center GPU Flex Series
  • Intel Data Center GPU Max Series

Operating System:

  • Ubuntu 20.04 or later (Ubuntu 22.04 is preferred)

Requirements

To apply Intel GPU acceleration, there are several steps for tools installation and environment preparation. See the GPU installation guide for more details.

Step 1: refer to our driver installation guide for general-purpose GPU capabilities.

Note: IPEX 2.0.110+xpu requires Intel GPU driver version Stable 647.21.

Step 2: download and install the Intel® oneAPI Base Toolkit. oneMKL and the DPC++ compiler are required; the other components are optional.

Note: IPEX 2.0.110+xpu requires Intel® oneAPI Base Toolkit version 2023.2.0.
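After installing the toolkit, its environment must be loaded into the current shell before running any example. A minimal sketch, assuming the toolkit's default installation path (/opt/intel/oneapi):

```shell
# Load the oneAPI environment into the current shell
# (default install location assumed; adjust the path if you installed elsewhere)
source /opt/intel/oneapi/setvars.sh

# Optional sanity check: sycl-ls ships with the DPC++ compiler and
# should list your Intel GPU as an available Level Zero device
sycl-ls
```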

Best Known Configuration on Linux

For better performance, it is recommended to set the following environment variables on Linux:

export USE_XETLA=OFF
export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
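Taken together, a typical environment preparation session might look like the following. This is a sketch under assumptions: the conda environment name (llm) and Python version are hypothetical choices, not part of this README, and the pip index URL is the one commonly used for the IPEX xpu wheels referenced above.

```shell
# Hypothetical end-to-end setup; names below are illustrative assumptions
conda create -n llm python=3.9 -y
conda activate llm

# Install BigDL-LLM with XPU (Intel GPU) support
pip install --pre --upgrade bigdl-llm[xpu] -f https://developer.intel.com/ipex-whl-stable-xpu

# Load the oneAPI environment and apply the recommended runtime settings
source /opt/intel/oneapi/setvars.sh
export USE_XETLA=OFF
export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
```

With the environment prepared this way, the examples in the subfolders above can be run from the same shell session.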