
BigDL-LLM Transformers INT4 Optimization for Large Language Models on Intel GPUs

You can use BigDL-LLM to run almost any Hugging Face Transformers model with INT4 optimizations on Intel GPUs. This directory contains example scripts to help you quickly get started using BigDL-LLM to run some popular open-source models in the community. Each model has its own dedicated folder, where you can find detailed instructions on how to install and run it. Examples are currently provided for aquila, aquila2, baichuan, baichuan2, bluelm, chatglm2, chatglm3, chinese-llama2, codellama, codeshell, distil-whisper, dolly-v1, dolly-v2, falcon, flan-t5, gpt-j, internlm, llama2, mistral, mixtral, mpt, phi-1_5, qwen, qwen-vl, replit, starcoder, vicuna, voiceassistant, whisper, and yi.
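
For reference, below is a minimal sketch of the pattern the generate.py scripts in these folders follow: load a Hugging Face model through bigdl.llm.transformers with load_in_4bit=True and move it to the 'xpu' device. The model path, prompt and generation arguments here are placeholders; each model's folder documents its exact prompt format and options.

import torch
import intel_extension_for_pytorch as ipex  # needed so PyTorch can use the 'xpu' device
from transformers import AutoTokenizer
from bigdl.llm.transformers import AutoModelForCausalLM

model_path = "meta-llama/Llama-2-7b-chat-hf"  # placeholder: local path or Hugging Face repo id

# Load the model with BigDL-LLM INT4 optimizations, then move it to the Intel GPU
model = AutoModelForCausalLM.from_pretrained(model_path,
                                             load_in_4bit=True,
                                             trust_remote_code=True)
model = model.to('xpu')

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

prompt = "[INST] What is AI? [/INST]"  # simplified Llama-2 chat format; see the llama2 folder for the full template
with torch.inference_mode():
    input_ids = tokenizer.encode(prompt, return_tensors="pt").to('xpu')
    output = model.generate(input_ids, max_new_tokens=32)
    torch.xpu.synchronize()
    print(tokenizer.decode(output[0], skip_special_tokens=True))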

Verified Hardware Platforms

  • Intel Arc™ A-Series Graphics
  • Intel Data Center GPU Flex Series
  • Intel Data Center GPU Max Series

To apply Intel GPU acceleration, there are several steps for tool installation and environment preparation. See the GPU installation guide for more details.

Step 1, only Linux is supported for now; Ubuntu 22.04 is preferred.

Step 2, please refer to our driver installation for general purpose GPU capabilities.

Note: IPEX 2.0.110+xpu requires Intel GPU driver version Stable 647.21.

Step 3, you also need to download and install the Intel® oneAPI Base Toolkit. oneMKL and the DPC++ compiler are required; the other components are optional.

Note: IPEX 2.0.110+xpu requires Intel® oneAPI Base Toolkit version 2023.2.0.
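
After installation, a quick sanity check like the following (a sketch assuming the bigdl-llm[xpu] package and its IPEX dependency have already been pip-installed; it is not part of the official guide) can confirm the toolchain versions match the notes above:

import torch
import intel_extension_for_pytorch as ipex

# The XPU build of IPEX should report a version like '2.0.110+xpu'
print("torch:", torch.__version__)
print("ipex :", ipex.__version__)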

Best Known Configuration on Linux

For better performance, it is recommended to set the following environment variables on Linux:

export USE_XETLA=OFF
export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
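
With the driver, oneAPI environment and the variables above in place, a short check such as the one below (a sketch assuming PyTorch and IPEX from bigdl-llm[xpu] are installed) confirms that the Intel GPU is visible to PyTorch before running any example:

import torch
import intel_extension_for_pytorch as ipex  # registers the 'xpu' device with PyTorch

print("XPU available:", torch.xpu.is_available())
if torch.xpu.is_available():
    print("Device name :", torch.xpu.get_device_name(0))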