	Qwen
In this directory, you will find examples of how to apply BigDL-LLM INT4 optimizations to Qwen models on Intel GPUs. For illustration purposes, we use Qwen-7B-Chat as a reference Qwen model.
0. Requirements
To run these examples with BigDL-LLM on Intel GPUs, we have some recommended requirements for your machine; please refer to here for more information.
Example: Predict Tokens using generate() API
In the example generate.py, we show a basic use case for a Qwen model to predict the next N tokens using generate() API, with BigDL-LLM INT4 optimizations on Intel GPUs.
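The core pattern behind the example is small. Below is a minimal sketch of it, assuming the bigdl.llm.transformers drop-in API and the chat prompt format shown in the Sample Output section; the actual generate.py may differ in details such as argument parsing, timing, and warm-up runs:
import torch
import intel_extension_for_pytorch as ipex  # registers the 'xpu' device
from transformers import AutoTokenizer
from bigdl.llm.transformers import AutoModelForCausalLM

model_path = 'Qwen/Qwen-7B-Chat'
# load_in_4bit=True applies BigDL-LLM INT4 optimizations at load time
model = AutoModelForCausalLM.from_pretrained(model_path,
                                             load_in_4bit=True,
                                             trust_remote_code=True)
model = model.to('xpu')
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# chat prompt format as shown in the Sample Output section below
prompt = '<human>AI是什么? <bot>'
input_ids = tokenizer.encode(prompt, return_tensors='pt').to('xpu')
with torch.inference_mode():
    output = model.generate(input_ids, max_new_tokens=32)
    torch.xpu.synchronize()  # wait for the GPU to finish before decoding
print(tokenizer.decode(output[0], skip_special_tokens=True))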
1. Install
1.1 Installation on Linux
We suggest using conda to manage the environment:
conda create -n llm python=3.9
conda activate llm
# the below command will install intel_extension_for_pytorch==2.1.10+xpu by default
pip install --pre --upgrade bigdl-llm[xpu] -f https://developer.intel.com/ipex-whl-stable-xpu
pip install tiktoken einops transformers_stream_generator  # additional packages required for Qwen-7B-Chat to conduct generation
1.2 Installation on Windows
We suggest using conda to manage the environment:
conda create -n llm python=3.9 libuv
conda activate llm
# the below command will install intel_extension_for_pytorch==2.1.10+xpu by default
pip install --pre --upgrade bigdl-llm[xpu] -f https://developer.intel.com/ipex-whl-stable-xpu
pip install tiktoken einops transformers_stream_generator  # additional packages required for Qwen-7B-Chat to conduct generation
2. Configure OneAPI environment variables
2.1 Configurations for Linux
source /opt/intel/oneapi/setvars.sh
2.2 Configurations for Windows
call "C:\Program Files (x86)\Intel\oneAPI\setvars.bat"
Note: Please make sure you are using CMD (Anaconda Prompt if using conda) to run the command, as PowerShell is not supported.
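Optionally, once the OneAPI environment is configured, you can confirm that an Intel GPU is visible to the runtime with sycl-ls (shipped with the oneAPI DPC++ runtime); the exact device naming below is indicative:
sycl-ls
# an Intel GPU typically appears as an [ext_oneapi_level_zero:gpu:...] entry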
3. Runtime Configurations
For optimal performance, it is recommended to set several environment variables. Please check out the suggestions based on your device.
3.1 Configurations for Linux
For Intel Arc™ A-Series Graphics and Intel Data Center GPU Flex Series
export USE_XETLA=OFF
export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
For Intel Data Center GPU Max Series
export LD_PRELOAD=${LD_PRELOAD}:${CONDA_PREFIX}/lib/libtcmalloc.so
export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
export ENABLE_SDP_FUSION=1
Note: libtcmalloc.so can be installed by conda install -c conda-forge -y gperftools=2.10.
3.2 Configurations for Windows
For Intel iGPU
set SYCL_CACHE_PERSISTENT=1
set BIGDL_LLM_XMX_DISABLED=1
For Intel Arc™ A300-Series or Pro A60
set SYCL_CACHE_PERSISTENT=1
For other Intel dGPU Series
There is no need to set further environment variables.
Note: The first time each model runs on an Intel iGPU, Intel Arc™ A300-Series or Pro A60 GPU, it may take several minutes to compile.
4. Running examples
python ./generate.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --prompt PROMPT --n-predict N_PREDICT
Arguments info:
--repo-id-or-model-path REPO_ID_OR_MODEL_PATH: argument defining the huggingface repo id for the Qwen model (e.g. Qwen/Qwen-7B-Chat) to be downloaded, or the path to the huggingface checkpoint folder. It defaults to 'Qwen/Qwen-7B-Chat'.
--prompt PROMPT: argument defining the prompt to be inferred (with integrated prompt format for chat). It defaults to 'AI是什么?'.
--n-predict N_PREDICT: argument defining the max number of tokens to predict. It defaults to 32.
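For example, to run the example with the default model, prompt, and token count passed explicitly:
python ./generate.py --repo-id-or-model-path Qwen/Qwen-7B-Chat --prompt 'AI是什么?' --n-predict 32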
Sample Output
Qwen/Qwen-7B-Chat
Inference time: xxxx s
-------------------- Prompt --------------------
<human>AI是什么? <bot>
-------------------- Output --------------------
<human>AI是什么? <bot>AI,即人工智能,是指计算机科学的一个分支,它企图创造能够完成任务的智能机器,这些任务通常需要人类智能才能完成。
Inference time: xxxx s
-------------------- Prompt --------------------
<human>What is AI? <bot>
-------------------- Output --------------------
<human>What is AI? <bot>AI, or artificial intelligence, refers to the ability of a machine or computer program to perform tasks that typically require human intelligence, such as visual perception, speech recognition