Run Large Language Model on Intel NPU

In this directory, you will find examples of how to apply IPEX-LLM INT4 or INT8 optimizations to LLM models on Intel NPUs. See the table below for verified models.

Verified Models

Model      Model Link
Llama2     meta-llama/Llama-2-7b-chat-hf
Llama3     meta-llama/Meta-Llama-3-8B-Instruct
Chatglm3   THUDM/chatglm3-6b
Chatglm2   THUDM/chatglm2-6b
Qwen2      Qwen/Qwen2-7B-Instruct, Qwen/Qwen2-1.5B-Instruct
MiniCPM    openbmb/MiniCPM-2B-sft-bf16
Phi-3      microsoft/Phi-3-mini-4k-instruct
Stablelm   stabilityai/stablelm-zephyr-3b
Baichuan2  baichuan-inc/Baichuan2-7B-Chat
Deepseek   deepseek-ai/deepseek-coder-6.7b-instruct
Mistral    mistralai/Mistral-7B-Instruct-v0.1

0. Requirements

To run these examples with IPEX-LLM on Intel NPUs, make sure to install the newest Intel NPU driver. Go to https://www.intel.com/content/www/us/en/download/794734/intel-npu-driver-windows.html to download and unzip the driver. Then open Device Manager, find Neural processors -> Intel(R) AI Boost, right-click it and select Update Driver, and manually point to the folder you just unzipped.

1. Install

1.1 Installation on Windows

We suggest using conda to manage the environment:

conda create -n llm python=3.10
conda activate llm

# install ipex-llm with 'npu' option
pip install --pre --upgrade ipex-llm[npu]
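
After installation, you can optionally sanity-check the environment with a minimal import. This is only a sketch; it assumes the NPU-enabled model class is exposed as ipex_llm.transformers.npu_model.AutoModelForCausalLM, the class used by the example scripts in this directory.

# Minimal sanity check (sketch): confirm ipex-llm and its NPU model class import cleanly.
# Assumption: the example scripts use ipex_llm.transformers.npu_model.AutoModelForCausalLM.
import ipex_llm
from ipex_llm.transformers.npu_model import AutoModelForCausalLM

print("ipex-llm imported:", ipex_llm.__name__)
print("NPU AutoModelForCausalLM available:", AutoModelForCausalLM is not None)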

2. Runtime Configurations

For optimal performance, it is recommended to set several environment variables. Please check out the suggestions based on your device.

2.1 Configurations for Windows

Note

For optimal performance, we recommend running code in conhost rather than Windows Terminal:

  • Press Win+R and input conhost, then press Enter to launch conhost.
  • Run the following command to use conda in conhost. Replace <your conda install location> with your conda install location.
call <your conda install location>\Scripts\activate

The following environment variable is required:

set BIGDL_USE_NPU=1
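
If you launch the example from a wrapper script instead of the console, the same variable can be set from Python. This is only a sketch; it assumes the flag is read when ipex-llm initializes, so it is set before any ipex-llm import.

import os

# Set the required NPU flag before importing ipex-llm (assumption: the flag is
# read when the library initializes, so it must be set before the import).
os.environ["BIGDL_USE_NPU"] = "1"

from ipex_llm.transformers.npu_model import AutoModelForCausalLM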

3. Run models

In the example generate.py, we show a basic use case of a Llama2 model predicting the next N tokens using the generate() API, with IPEX-LLM INT4 optimizations on Intel NPUs.

python ./generate.py
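
At a high level, the script loads the model through IPEX-LLM's NPU-enabled AutoModelForCausalLM with a low-bit format and then calls generate(). The snippet below is a simplified sketch of that flow, not a copy of generate.py; the import path and the load_in_low_bit argument reflect how the example scripts in this directory load models, and the prompt handling is abbreviated.

import torch
from transformers import AutoTokenizer
from ipex_llm.transformers.npu_model import AutoModelForCausalLM

model_path = "meta-llama/Llama-2-7b-chat-hf"

# Load the model with low-bit weights targeting the Intel NPU.
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    load_in_low_bit="sym_int8",   # or "sym_int4"
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

prompt = "Once upon a time, there existed a little girl who liked to have adventures."
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Predict the next N tokens on the NPU.
with torch.inference_mode():
    output = model.generate(input_ids, max_new_tokens=32)

print(tokenizer.decode(output[0], skip_special_tokens=False))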

Arguments info:

  • --repo-id-or-model-path REPO_ID_OR_MODEL_PATH: argument defining the Hugging Face repo id of the Llama2 model (e.g. meta-llama/Llama-2-7b-chat-hf) to be downloaded, or the path to a Hugging Face checkpoint folder. It defaults to 'meta-llama/Llama-2-7b-chat-hf'; for more verified models, see the Verified Models list above.
  • --prompt PROMPT: argument defining the prompt to be inferred (with integrated prompt format for chat). It defaults to 'Once upon a time, there existed a little girl who liked to have adventures. She wanted to go to places and meet new people, and have fun'.
  • --n-predict N_PREDICT: argument defining the maximum number of tokens to predict. It defaults to 32.
  • --load_in_low_bit: argument defining the load_in_low_bit format used. It defaults to sym_int8; sym_int4 can also be used.

Sample Output

meta-llama/Llama-2-7b-chat-hf

Inference time: xxxx s
-------------------- Output --------------------
<s> Once upon a time, there existed a little girl who liked to have adventures. She wanted to go to places and meet new people, and have fun. But her parents were always telling her to stay at home and be careful. They were worried about her safety, and they didn't want her to
--------------------------------------------------------------------------------
done

4. Run Optimized Models (Experimental)

The examples below show how to run the optimized model implementations on Intel NPU. The supported models depend on your NPU driver version:

  • Driver version 32.0.100.2625: Llama2-7B, Qwen2-1.5B, Qwen2-7B, MiniCPM-1B, Baichuan2-7B
  • Driver version 32.0.101.2715: Llama3-8B, MiniCPM-2B

Run Models

# to run Llama-2-7b-chat-hf
python llama.py

# to run Meta-Llama-3-8B-Instruct (LNL driver version: 32.0.101.2715)
python llama.py --repo-id-or-model-path meta-llama/Meta-Llama-3-8B-Instruct

# to run Qwen2-1.5B-Instruct
python qwen2.py

# to run Qwen2-7B-Instruct
python qwen2.py --repo-id-or-model-path Qwen/Qwen2-7B-Instruct

# to run MiniCPM-1B-sft-bf16
python minicpm.py

# to run MiniCPM-2B-sft-bf16 (LNL driver version: 32.0.101.2715)
python minicpm.py --repo-id-or-model-path openbmb/MiniCPM-2B-sft-bf16

# to run Baichuan2-7B-Chat
python baichuan2.py

Arguments info:

  • --repo-id-or-model-path REPO_ID_OR_MODEL_PATH: argument defining the Hugging Face repo id of the model (e.g. meta-llama/Llama-2-7b-chat-hf) to be downloaded, or the path to a Hugging Face checkpoint folder. It defaults to 'meta-llama/Llama-2-7b-chat-hf'.
  • --prompt PROMPT: argument defining the prompt to be inferred (with integrated prompt format for chat). It defaults to 'What is AI?'.
  • --n-predict N_PREDICT: argument defining the maximum number of tokens to predict. It defaults to 32.
  • --max-output-len MAX_OUTPUT_LEN: defines the maximum sequence length for both input and output tokens. It defaults to 1024.
  • --max-prompt-len MAX_PROMPT_LEN: defines the maximum number of tokens that the input prompt can contain. It defaults to 512.
  • --disable-transpose-value-cache: disables the optimization of transposing the value cache (see the sketch after this list for how these options map onto model loading).
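
For reference, the sketch below shows roughly how these command-line options map onto model loading in the optimized examples (llama.py, qwen2.py, etc.). The keyword names (optimize_model, max_output_len, max_prompt_len, transpose_value_cache) are assumptions based on the example scripts and may change between releases; treat this as an illustration rather than a stable API.

import torch
from ipex_llm.transformers.npu_model import AutoModelForCausalLM

# Sketch: how the optimized examples roughly load a model on the Intel NPU.
# Keyword names below are assumptions taken from the example scripts.
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf",
    torch_dtype=torch.float16,
    trust_remote_code=True,
    attn_implementation="eager",
    load_in_low_bit="sym_int4",
    optimize_model=True,          # enable the optimized NPU implementation
    max_output_len=1024,          # --max-output-len
    max_prompt_len=512,           # --max-prompt-len
    transpose_value_cache=True,   # set to False for --disable-transpose-value-cache
)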

Troubleshooting

Output Problem

If you encounter output problems, please try disabling the optimization of transposing the value cache with one of the following commands:

# to run Llama-2-7b-chat-hf
python llama.py --disable-transpose-value-cache

# to run Meta-Llama-3-8B-Instruct (LNL driver version: 32.0.101.2715)
python llama.py --repo-id-or-model-path meta-llama/Meta-Llama-3-8B-Instruct --disable-transpose-value-cache

# to run Qwen2-1.5B-Instruct
python qwen2.py --disable-transpose-value-cache

# to run MiniCPM-1B-sft-bf16
python minicpm.py --disable-transpose-value-cache

# to run MiniCPM-2B-sft-bf16 (LNL driver version: 32.0.101.2715)
python minicpm.py --repo-id-or-model-path openbmb/MiniCPM-2B-sft-bf16 --disable-transpose-value-cache

High CPU Utilization

You can reduce CPU utilization by setting the environment variable IPEX_LLM_CPU_LM_HEAD=0 (i.e. run set IPEX_LLM_CPU_LM_HEAD=0 before launching the script).

Sample Output

meta-llama/Llama-2-7b-chat-hf

Inference time: xxxx s
-------------------- Input --------------------
<s><s> [INST] <<SYS>>

<</SYS>>

What is AI? [/INST]
-------------------- Output --------------------
<s><s> [INST] <<SYS>>

<</SYS>>

What is AI? [/INST]  AI (Artificial Intelligence) is a field of computer science and engineering that focuses on the development of intelligent machines that can perform tasks