Llama2
In this directory, you will find examples of how you could use the IPEX-LLM optimize_model API to accelerate Llama2 models. For illustration purposes, we utilize meta-llama/Llama-2-7b-chat-hf, meta-llama/Llama-2-13b-chat-hf and meta-llama/Llama-2-70b-chat-hf as reference Llama2 models.
Requirements
To run these examples with IPEX-LLM on Intel GPUs, we have some recommended requirements for your machine; please refer to here for more information.
Example 1 - Basic Version: Predict Tokens using generate() API
In the example generate.py, we show a basic use case for a Llama2 model to predict the next N tokens using generate() API, with IPEX-LLM INT4 optimizations on Intel GPUs.
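Conceptually, the flow inside generate.py looks roughly like the sketch below (a minimal illustration, not the script itself; the model path, dtype and token count are placeholders):
# Minimal sketch of the Example 1 flow (illustrative; not the full generate.py)
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from ipex_llm import optimize_model

model_path = "meta-llama/Llama-2-7b-chat-hf"          # or a local checkpoint folder
model = LlamaForCausalLM.from_pretrained(model_path, torch_dtype="auto", low_cpu_mem_usage=True)
model = optimize_model(model)                          # apply IPEX-LLM low-bit (INT4 by default) optimizations
model = model.to("xpu")                                # run on the Intel GPU

tokenizer = LlamaTokenizer.from_pretrained(model_path)
input_ids = tokenizer("What is AI?", return_tensors="pt").input_ids.to("xpu")
with torch.inference_mode():
    output = model.generate(input_ids, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))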
1. Install
1.1 Installation on Linux
We suggest using conda to manage the environment:
conda create -n llm python=3.11
conda activate llm
# below command will install intel_extension_for_pytorch==2.1.10+xpu as default
pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
1.2 Installation on Windows
We suggest using conda to manage the environment:
conda create -n llm python=3.11 libuv
conda activate llm
# below command will use pip to install the Intel oneAPI Base Toolkit 2024.0
pip install dpcpp-cpp-rt==2024.0.2 mkl-dpcpp==2024.0.0 onednn==2024.0.0
# below command will install intel_extension_for_pytorch==2.1.10+xpu as default
pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
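Optionally (this check is not part of the original instructions), you can verify that PyTorch can see the Intel GPU; on Linux, source the oneAPI environment from the next step first:
python -c "import torch; import intel_extension_for_pytorch as ipex; print(torch.xpu.is_available())"
This should print True when the driver and runtime are set up correctly.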
2. Configure oneAPI environment variables for Linux
Note
Skip this step if you are running on Windows.
This is a required step on Linux for APT- or offline-installed oneAPI. Skip this step for pip-installed oneAPI.
source /opt/intel/oneapi/setvars.sh
3. Runtime Configurations
For optimal performance, it is recommended to set several environment variables. Please check out the suggestions based on your device.
3.1 Configurations for Linux
For Intel Arc™ A-Series Graphics and Intel Data Center GPU Flex Series
export USE_XETLA=OFF
export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
export SYCL_CACHE_PERSISTENT=1
For Intel Data Center GPU Max Series
export LD_PRELOAD=${LD_PRELOAD}:${CONDA_PREFIX}/lib/libtcmalloc.so
export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
export SYCL_CACHE_PERSISTENT=1
export ENABLE_SDP_FUSION=1
Note: libtcmalloc.so can be installed by conda install -c conda-forge -y gperftools=2.10.
For Intel iGPU
export SYCL_CACHE_PERSISTENT=1
export BIGDL_LLM_XMX_DISABLED=1
3.2 Configurations for Windows
For Intel iGPU
set SYCL_CACHE_PERSISTENT=1
set BIGDL_LLM_XMX_DISABLED=1
For Intel Arc™ A-Series Graphics
set SYCL_CACHE_PERSISTENT=1
Note
The first time each model runs on an Intel iGPU/Intel Arc™ A300-Series or Pro A60, it may take several minutes to compile.
4. Running examples
python ./generate.py --prompt 'What is AI?'
In the example, several arguments can be passed to satisfy your requirements:
--repo-id-or-model-path REPO_ID_OR_MODEL_PATH: argument defining the Hugging Face repo id for the Llama2 model (e.g. meta-llama/Llama-2-7b-chat-hf and meta-llama/Llama-2-13b-chat-hf) to be downloaded, or the path to the Hugging Face checkpoint folder. The default is 'meta-llama/Llama-2-7b-chat-hf'.
--prompt PROMPT: argument defining the prompt to be inferred (with integrated prompt format for chat). The default is 'What is AI?'.
--n-predict N_PREDICT: argument defining the max number of tokens to predict. The default is 32.
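For example, to run the 13B chat model with a longer generation (the values below are only illustrative):
python ./generate.py --repo-id-or-model-path meta-llama/Llama-2-13b-chat-hf --prompt 'What is AI?' --n-predict 64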
Sample Output
meta-llama/Llama-2-7b-chat-hf
Inference time: xxxx s
-------------------- Output --------------------
### HUMAN:
What is AI?
### RESPONSE:
AI is a field of computer science that focuses on creating intelligent machines that can perform tasks that typically require human intelligence, such as understanding natural language,
meta-llama/Llama-2-13b-chat-hf
Inference time: xxxx s
-------------------- Output --------------------
### HUMAN:
What is AI?
### RESPONSE:
AI, or artificial intelligence, refers to the ability of machines to perform tasks that would typically require human intelligence, such as learning, problem-solving,
Example 2 - Low Memory Version: Predict Tokens using generate() API
If you're not able to load the full 4-bit model (e.g. meta-llama/Llama-2-70b-chat-hf) on one GPU as shown in Example 1, you may try this example instead.
In low_memory_generate.py, we show a way to load very large models with a very low GPU memory footprint. However, this can be much slower than the standard approach. The implementation is adapted from here.
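The core idea can be sketched as follows (a simplified illustration; split_weights and load_layer are hypothetical names, not the script's actual API): the full state dict is split into per-layer files once, and during inference each layer's weights are loaded on demand into a bounded cache, so only a limited number of layers occupy memory at any time.
# Illustrative sketch of the per-layer split + bounded cache idea (not the actual low_memory_generate.py)
import os
from functools import lru_cache
import torch

def split_weights(checkpoint_path: str, out_dir: str) -> None:
    # Split a full state dict into one file per decoder layer so each layer
    # can later be loaded independently.
    os.makedirs(out_dir, exist_ok=True)
    state_dict = torch.load(checkpoint_path, map_location="cpu")
    per_layer = {}
    for name, tensor in state_dict.items():
        layer_key = ".".join(name.split(".")[:3])      # e.g. "model.layers.0"
        per_layer.setdefault(layer_key, {})[name] = tensor
    for layer_key, weights in per_layer.items():
        torch.save(weights, os.path.join(out_dir, f"{layer_key}.pt"))

@lru_cache(maxsize=200)                                # plays the role of --max-cache-num
def load_layer(out_dir: str, layer_key: str) -> dict:
    # Load one layer's weights on demand; the least recently used entries are
    # evicted automatically once the cache is full.
    return torch.load(os.path.join(out_dir, f"{layer_key}.pt"), map_location="cpu")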
1. Environment setup
Please refer to Example 1 for more information.
2. Run
python ./low_memory_generate.py --split-weight --splitted-weights-path ${SPLITTED_WEIGHTS_PATH}
In the example, besides the arguments in Example 1, several other arguments can be passed to satisfy your requirements:
--splitted-weights-path: argument defining the folder for saving per-layer weights.
--split-weight: argument defining whether to split the weights by layer. If this argument is enabled, per-layer weights will be generated and saved to --splitted-weights-path. It only needs to be enabled once for the same model.
--max-cache-num: argument defining the maximum number of per-layer weights kept in the cache. You can adjust this argument based on your GPU memory. The default is 200. For meta-llama/Llama-2-70b-chat-hf, GPU peak memory is around 3G when it is set to 0 and 15G when it is set to 200.
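For example, once the weights have been split, later runs can reuse them with a smaller cache (the values below are only illustrative):
python ./low_memory_generate.py --repo-id-or-model-path meta-llama/Llama-2-70b-chat-hf --splitted-weights-path ${SPLITTED_WEIGHTS_PATH} --max-cache-num 100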