
BigDL-LLM Transformers INT4 Optimization for Large Language Model on Intel® Arc™ A-Series Graphics

You can use BigDL-LLM to run almost any Hugging Face Transformers model with INT4 optimizations on a laptop with Intel® Arc™ A-Series Graphics. This directory contains example scripts to help you quickly get started with BigDL-LLM on some popular open-source models in the community. Each model has its own dedicated folder, where you can find detailed instructions on how to install and run it.
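
To illustrate what these example scripts do, below is a minimal sketch of loading a Hugging Face checkpoint with BigDL-LLM INT4 optimizations and moving it to the Arc GPU. The model path and prompt are placeholders, and the exact import paths and arguments may differ slightly from the scripts in each model folder:

import torch
import intel_extension_for_pytorch as ipex  # importing IPEX makes the 'xpu' device available to PyTorch
from bigdl.llm.transformers import AutoModelForCausalLM
from transformers import AutoTokenizer

# Placeholder checkpoint path: any local or Hugging Face Hub model directory
model_path = "meta-llama/Llama-2-7b-chat-hf"

# Load the model with BigDL-LLM INT4 optimizations, then move it to the Arc GPU
model = AutoModelForCausalLM.from_pretrained(model_path, load_in_4bit=True, trust_remote_code=True)
model = model.to('xpu')
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)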

To apply Intel® Arc™ A-Series Graphics acceleration, several steps are needed for tool installation and environment preparation.

Step 1: Only Linux is supported for now; Ubuntu 22.04 is preferred.
Step 2: Please refer to our driver installation instructions for general purpose GPU capabilities.
Step 3: Download and install the Intel® oneAPI Base Toolkit. oneMKL and the DPC++ compiler are required; the other components are optional.

Best Known Configuration on Linux

For better performance, it is recommended to set the following environment variables on Linux:

export USE_XETLA=OFF
export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
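
With the environment prepared, a typical inference run looks roughly like the sketch below (continuing from the loading sketch above; the prompt and generation parameters are only examples, and each model folder's script may differ):

# Run generation on the Arc GPU; inference_mode avoids autograd overhead
with torch.inference_mode():
    input_ids = tokenizer.encode("What is AI?", return_tensors="pt").to('xpu')
    output = model.generate(input_ids, max_new_tokens=32)
    print(tokenizer.decode(output[0].cpu(), skip_special_tokens=True))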