LLM: add aquila2 model example (#9356)
parent 1420e45cc0
commit e6b6afa316
9 changed files with 518 additions and 0 deletions
@@ -151,6 +151,7 @@ Over 20 models have been optimized/verified on `bigdl-llm`, including *LLaMA/LLa
| Qwen       | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/qwen)      | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/qwen)       |
| Qwen-VL    | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/qwen-vl)   | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/qwen-vl)    |
| Aquila     | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/aquila)    | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/aquila)     |
| Aquila2    | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/aquila2)   | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/aquila2)    |
| MOSS       | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/moss)      |                                                                            |
| Whisper    | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/whisper)   | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/whisper)    |
| Phi-1_5    | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/phi-1_5)   | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/phi-1_5)    |

@@ -0,0 +1,68 @@
# Aquila2

In this directory, you will find examples of how you could apply BigDL-LLM INT4 optimizations on Aquila2 models. For illustration purposes, we utilize the [BAAI/AquilaChat2-7B](https://huggingface.co/BAAI/AquilaChat2-7B) as a reference Aquila2 model.

> **Note**: If you want to download the Hugging Face *Transformers* model, please refer to [here](https://huggingface.co/docs/hub/models-downloading#using-git).
>
> BigDL-LLM optimizes the *Transformers* model in INT4 precision at runtime, and thus no explicit conversion is needed.

## Requirements
To run these examples with BigDL-LLM, we have some recommended requirements for your machine; please refer to [here](../README.md#recommended-requirements) for more information.

## Example: Predict Tokens using `generate()` API
In the example [generate.py](./generate.py), we show a basic use case for an Aquila2 model to predict the next N tokens using the `generate()` API, with BigDL-LLM INT4 optimizations.
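At its core, the example is a one-step 4-bit load followed by a standard `generate()` call. The condensed sketch below mirrors what [generate.py](./generate.py) does, using this example's default model and prompt; see the full script for argument parsing and timing.

```python
from bigdl.llm.transformers import AutoModelForCausalLM
from transformers import AutoTokenizer

# one-step load: linear layers are converted to INT4 at load time
model = AutoModelForCausalLM.from_pretrained("BAAI/AquilaChat2-7B",
                                             load_in_4bit=True,
                                             trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("BAAI/AquilaChat2-7B", trust_remote_code=True)

# wrap the prompt in Aquila2's chat format, then generate as usual
input_ids = tokenizer.encode("<|startofpiece|>AI是什么?<|endofpiece|>", return_tensors="pt")
output = model.generate(input_ids, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```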
### 1. Install
We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).

After installing conda, create a Python environment for BigDL-LLM:
```bash
conda create -n llm python=3.9 # recommend using Python 3.9
conda activate llm

pip install --pre --upgrade bigdl-llm[all] # install the latest bigdl-llm nightly build with the 'all' option
```

### 2. Run
After setting up the Python environment, you could run the example by following the steps below.

> **Note**: When loading the model in 4-bit, BigDL-LLM converts linear layers in the model into INT4 format. In theory, an *X*B model saved in 16-bit requires approximately 2*X* GB of memory for loading, and ~0.5*X* GB of memory for further inference. For the 7B model used here, that works out to roughly 14 GB to load and about 3.5 GB for inference.
>
> Please select the appropriate size of the Aquila2 model based on the capabilities of your machine.

#### 2.1 Client
On client Windows machines, it is recommended to run directly with full utilization of all cores:
```powershell
python ./generate.py --prompt 'AI是什么?'
```
More information about arguments can be found in the [Arguments Info](#23-arguments-info) section. The expected output can be found in the [Sample Output](#24-sample-output) section.

#### 2.2 Server
For optimal performance on servers, it is recommended to set several environment variables (refer to [here](../README.md#best-known-configuration-on-linux) for more information), and run the example with all the physical cores of a single socket.

E.g. on Linux,
```bash
# set BigDL-Nano env variables
source bigdl-nano-init

# e.g. for a server with 48 cores per socket
export OMP_NUM_THREADS=48
numactl -C 0-47 -m 0 python ./generate.py --prompt 'AI是什么?'
```
More information about arguments can be found in the [Arguments Info](#23-arguments-info) section. The expected output can be found in the [Sample Output](#24-sample-output) section.

#### 2.3 Arguments Info
In the example, several arguments can be passed to satisfy your requirements:

- `--repo-id-or-model-path`: str, argument defining the huggingface repo id for the Aquila2 model to be downloaded, or the path to the huggingface checkpoint folder. It defaults to `'BAAI/AquilaChat2-7B'`.
- `--prompt`: str, argument defining the prompt to be inferred (with integrated prompt format for chat). It defaults to `'AI是什么?'`.
- `--n-predict`: int, argument defining the max number of tokens to predict. It defaults to `32`.

#### 2.4 Sample Output
#### [BAAI/AquilaChat2-7B](https://huggingface.co/BAAI/AquilaChat2-7B)
```log
Inference time: xxxx s
-------------------- Prompt --------------------
<|startofpiece|>AI是什么?<|endofpiece|>
-------------------- Output --------------------
<|startofpiece|>AI是什么?<|endofpiece|>人工智能(Artificial Intelligence,简称AI)是计算机科学中一个极为重要的研究领域,旨在让计算机模仿人类的智能,包括学习、推理、识别物体
```

@@ -0,0 +1,68 @@
#
# Copyright 2016 The BigDL Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

import torch
import time
import argparse

from bigdl.llm.transformers import AutoModelForCausalLM
from transformers import AutoTokenizer

# you could tune the prompt based on your own model;
# here the prompt format follows https://huggingface.co/BAAI/AquilaChat2-7B/tree/main/predict.py
AQUILA2_PROMPT_FORMAT = "<|startofpiece|>{prompt}<|endofpiece|>"

if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Predict Tokens using `generate()` API for Aquila2 model')
    parser.add_argument('--repo-id-or-model-path', type=str, default="BAAI/AquilaChat2-7B",
                        help='The huggingface repo id for the Aquila2 model to be downloaded'
                             ', or the path to the huggingface checkpoint folder')
    parser.add_argument('--prompt', type=str, default="AI是什么?",
                        help='Prompt to infer')
    parser.add_argument('--n-predict', type=int, default=32,
                        help='Max tokens to predict')

    args = parser.parse_args()
    model_path = args.repo_id_or_model_path

    # Load the model in 4 bit,
    # which converts the relevant layers in the model into INT4 format
    model = AutoModelForCausalLM.from_pretrained(model_path,
                                                 load_in_4bit=True,
                                                 trust_remote_code=True)

    # Load tokenizer
    tokenizer = AutoTokenizer.from_pretrained(model_path,
                                              trust_remote_code=True)

    # Generate predicted tokens
    with torch.inference_mode():
        # e.g. "<|startofpiece|>AI是什么?<|endofpiece|>"
        prompt = AQUILA2_PROMPT_FORMAT.format(prompt=args.prompt)
        input_ids = tokenizer.encode(prompt, return_tensors="pt")
        st = time.time()
        # if your selected model is capable of utilizing previous key/value attentions
        # to enhance decoding speed, but has `"use_cache": false` in its model config,
        # it is important to set `use_cache=True` explicitly in the `generate` function
        # to obtain optimal performance with BigDL-LLM INT4 optimizations
        output = model.generate(input_ids,
                                max_new_tokens=args.n_predict)
        end = time.time()
        output_str = tokenizer.decode(output[0], skip_special_tokens=True)
        print(f'Inference time: {end-st} s')
        print('-'*20, 'Prompt', '-'*20)
        print(prompt)
        print('-'*20, 'Output', '-'*20)
        print(output_str)
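A possible follow-up that this example does not cover: `from_pretrained(..., load_in_4bit=True)` re-converts the FP16 checkpoint on every run. bigdl-llm also ships `save_low_bit`/`load_low_bit` helpers for persisting the converted weights; the sketch below assumes those helpers are available in your installed version (check the bigdl-llm docs for the exact API), and the local save path is purely illustrative.

```python
# Sketch only: assumes bigdl-llm's save_low_bit/load_low_bit helpers
from bigdl.llm.transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("BAAI/AquilaChat2-7B",
                                             load_in_4bit=True,
                                             trust_remote_code=True)
model.save_low_bit('./aquila2-chat-7b-int4')  # persist the INT4 weights once

# later runs can load the INT4 weights directly, skipping the conversion
model = AutoModelForCausalLM.load_low_bit('./aquila2-chat-7b-int4',
                                          trust_remote_code=True)
```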

@@ -0,0 +1,59 @@
# Aquila2
In this directory, you will find examples of how you could use the BigDL-LLM `optimize_model` API to accelerate Aquila2 models. For illustration purposes, we utilize the [BAAI/AquilaChat2-7B](https://huggingface.co/BAAI/AquilaChat2-7B) as a reference Aquila2 model.
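The API itself is a single call wrapped around a stock Hugging Face model, as the sketch below (condensed from [generate.py](./generate.py) in this directory) shows; it applies BigDL-LLM's INT4 optimization to the loaded model.

```python
from transformers import AutoModelForCausalLM
from bigdl.llm import optimize_model

# load the stock Hugging Face model first...
model = AutoModelForCausalLM.from_pretrained("BAAI/AquilaChat2-7B",
                                             trust_remote_code=True,
                                             torch_dtype='auto',
                                             low_cpu_mem_usage=True)
# ...then one line enables BigDL-LLM optimization
model = optimize_model(model)
```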
					
## Requirements
To run these examples with BigDL-LLM, we have some recommended requirements for your machine; please refer to [here](../README.md#recommended-requirements) for more information.

## Example: Predict Tokens using `generate()` API
In the example [generate.py](./generate.py), we show a basic use case for an Aquila2 model to predict the next N tokens using the `generate()` API, with BigDL-LLM INT4 optimizations.
### 1. Install
We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).

After installing conda, create a Python environment for BigDL-LLM:
```bash
conda create -n llm python=3.9 # recommend using Python 3.9
conda activate llm

pip install --pre --upgrade bigdl-llm[all] # install the latest bigdl-llm nightly build with the 'all' option
```

### 2. Run
After setting up the Python environment, you could run the example by following the steps below.

#### 2.1 Client
On client Windows machines, it is recommended to run directly with full utilization of all cores:
```powershell
python ./generate.py --prompt 'What is AI?'
```
More information about arguments can be found in the [Arguments Info](#23-arguments-info) section. The expected output can be found in the [Sample Output](#24-sample-output) section.

#### 2.2 Server
For optimal performance on servers, it is recommended to set several environment variables (refer to [here](../README.md#best-known-configuration-on-linux) for more information), and run the example with all the physical cores of a single socket.

E.g. on Linux,
```bash
# set BigDL-Nano env variables
source bigdl-nano-init

# e.g. for a server with 48 cores per socket
export OMP_NUM_THREADS=48
numactl -C 0-47 -m 0 python ./generate.py --prompt 'What is AI?'
```
More information about arguments can be found in the [Arguments Info](#23-arguments-info) section. The expected output can be found in the [Sample Output](#24-sample-output) section.

#### 2.3 Arguments Info
In the example, several arguments can be passed to satisfy your requirements:

- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the huggingface repo id for the Aquila2 model to be downloaded, or the path to the huggingface checkpoint folder. It defaults to `'BAAI/AquilaChat2-7B'`.
- `--prompt PROMPT`: argument defining the prompt to be inferred (with integrated prompt format for chat). It defaults to `'AI是什么?'`.
- `--n-predict N_PREDICT`: argument defining the max number of tokens to predict. It defaults to `32`.

#### 2.4 Sample Output
#### [BAAI/AquilaChat2-7B](https://huggingface.co/BAAI/AquilaChat2-7B)
```log
Inference time: xxxx s
-------------------- Prompt --------------------
<|startofpiece|>AI是什么?<|endofpiece|>
-------------------- Output --------------------
<|startofpiece|>AI是什么?<|endofpiece|>人工智能(Artificial Intelligence,简称AI)是计算机科学中一个极为重要的研究领域,旨在让计算机模仿人类的智能,包括学习、推理、识别物体
```

@@ -0,0 +1,64 @@
#
# Copyright 2016 The BigDL Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

import torch
import time
import argparse

from transformers import AutoModelForCausalLM, AutoTokenizer
from bigdl.llm import optimize_model

# you could tune the prompt based on your own model;
# here the prompt format follows https://huggingface.co/BAAI/AquilaChat2-7B/tree/main/predict.py
AQUILA2_PROMPT_FORMAT = "<|startofpiece|>{prompt}<|endofpiece|>"

if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Predict Tokens using `generate()` API for Aquila2 model')
    parser.add_argument('--repo-id-or-model-path', type=str, default="BAAI/AquilaChat2-7B",
                        help='The huggingface repo id for the Aquila2 model to be downloaded'
                             ', or the path to the huggingface checkpoint folder')
    parser.add_argument('--prompt', type=str, default="AI是什么?",
                        help='Prompt to infer')
    parser.add_argument('--n-predict', type=int, default=32,
                        help='Max tokens to predict')

    args = parser.parse_args()
    model_path = args.repo_id_or_model_path

    # Load model
    model = AutoModelForCausalLM.from_pretrained(model_path,
                                                 trust_remote_code=True,
                                                 torch_dtype='auto',
                                                 low_cpu_mem_usage=True)

    # Enable BigDL-LLM optimization on the model with only one line
    model = optimize_model(model)

    # Load tokenizer
    tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

    # Generate predicted tokens
    with torch.inference_mode():
        prompt = AQUILA2_PROMPT_FORMAT.format(prompt=args.prompt)
        input_ids = tokenizer.encode(prompt, return_tensors="pt")
        st = time.time()
        output = model.generate(input_ids,
                                max_new_tokens=args.n_predict)
        end = time.time()
        output_str = tokenizer.decode(output[0], skip_special_tokens=True)
        print(f'Inference time: {end-st} s')
        print('-'*20, 'Output', '-'*20)
        print(output_str)

@@ -0,0 +1,57 @@
# Aquila2

In this directory, you will find examples of how you could apply BigDL-LLM INT4 optimizations on Aquila2 models. For illustration purposes, we utilize the [BAAI/AquilaChat2-7B](https://huggingface.co/BAAI/AquilaChat2-7B) as a reference Aquila2 model.

> **Note**: If you want to download the Hugging Face *Transformers* model, please refer to [here](https://huggingface.co/docs/hub/models-downloading#using-git).
>
> BigDL-LLM optimizes the *Transformers* model in INT4 precision at runtime, and thus no explicit conversion is needed.

## Requirements
To run these examples with BigDL-LLM, we have some recommended requirements for your machine; please refer to [here](../README.md#recommended-requirements) for more information.

## Example: Predict Tokens using `generate()` API
In the example [generate.py](./generate.py), we show a basic use case for an Aquila2 model to predict the next N tokens using the `generate()` API, with BigDL-LLM INT4 optimizations.

### 1. Install
We suggest using conda to manage the environment:
```bash
conda create -n llm python=3.9
conda activate llm
# below command will install intel_extension_for_pytorch==2.0.110+xpu as default
# you can install specific ipex/torch versions for your needs
pip install --pre --upgrade bigdl-llm[xpu] -f https://developer.intel.com/ipex-whl-stable-xpu
```
### 2. Configure OneAPI environment variables
```bash
source /opt/intel/oneapi/setvars.sh
```

### 3. Run

For optimal performance on Arc, it is recommended to set several environment variables.

```bash
export USE_XETLA=OFF
export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
```

```bash
python ./generate.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --prompt PROMPT --n-predict N_PREDICT
```

#### Arguments Info
In the example, several arguments can be passed to satisfy your requirements:

- `--repo-id-or-model-path`: str, argument defining the huggingface repo id for the Aquila2 model to be downloaded, or the path to the huggingface checkpoint folder. It defaults to `'BAAI/AquilaChat2-7B'`.
- `--prompt`: str, argument defining the prompt to be inferred (with integrated prompt format for chat). It defaults to `'AI是什么?'`.
- `--n-predict`: int, argument defining the max number of tokens to predict. It defaults to `32`.

#### Sample Output
#### [BAAI/AquilaChat2-7B](https://huggingface.co/BAAI/AquilaChat2-7B)
```log
Inference time: xxxx s
-------------------- Prompt --------------------
<|startofpiece|>AI是什么?<|endofpiece|>
-------------------- Output --------------------
<|startofpiece|>AI是什么?<|endofpiece|>人工智能(Artificial Intelligence,简称AI)是计算机科学中一个极为重要的研究领域,旨在让计算机模仿人类的智能,包括学习、推理、识别物体
```

@@ -0,0 +1,73 @@
#
# Copyright 2016 The BigDL Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

import torch
import intel_extension_for_pytorch as ipex
import time
import argparse

from bigdl.llm.transformers import AutoModelForCausalLM
from transformers import AutoTokenizer

# you could tune the prompt based on your own model;
# here the prompt format follows https://huggingface.co/BAAI/AquilaChat2-7B/tree/main/predict.py
AQUILA2_PROMPT_FORMAT = "<|startofpiece|>{prompt}<|endofpiece|>"

if __name__ == '__main__':
    parser = argparse.ArgumentParser(
        description='Predict Tokens using `generate()` API for Aquila2 model')
    parser.add_argument('--repo-id-or-model-path', type=str, default="BAAI/AquilaChat2-7B",
                        help='The huggingface repo id for the Aquila2 model to be downloaded'
                             ', or the path to the huggingface checkpoint folder')
    parser.add_argument('--prompt', type=str, default="AI是什么?",
                        help='Prompt to infer')
    parser.add_argument('--n-predict', type=int, default=32,
                        help='Max tokens to predict')

    args = parser.parse_args()
    model_path = args.repo_id_or_model_path

    # Load the model in 4 bit,
    # which converts the relevant layers in the model into INT4 format
    model = AutoModelForCausalLM.from_pretrained(model_path,
                                                 load_in_4bit=True,
                                                 trust_remote_code=True)
    # Move the model to the Intel GPU
    model = model.to('xpu')

    # Load tokenizer
    tokenizer = AutoTokenizer.from_pretrained(model_path,
                                              trust_remote_code=True)

    # Generate predicted tokens
    with torch.inference_mode():
        prompt = AQUILA2_PROMPT_FORMAT.format(prompt=args.prompt)
        input_ids = tokenizer.encode(prompt, return_tensors="pt").to('xpu')
        st = time.time()
        # if your selected model is capable of utilizing previous key/value attentions
        # to enhance decoding speed, but has `"use_cache": false` in its model config,
        # it is important to set `use_cache=True` explicitly in the `generate` function
        # to obtain optimal performance with BigDL-LLM INT4 optimizations
        output = model.generate(input_ids,
                                max_new_tokens=args.n_predict)
        torch.xpu.synchronize()
        end = time.time()
        output = output.cpu()
        output_str = tokenizer.decode(output[0], skip_special_tokens=True)
        print(f'Inference time: {end - st} s')
        print('-' * 20, 'Prompt', '-' * 20)
        print(prompt)
        print('-' * 20, 'Output', '-' * 20)
        print(output_str)
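One thing to note about this script: it times a single `generate()` call right after moving the model to `'xpu'`, so the measured time includes first-run warmup overhead. The GPU PyTorch-Models example later in this commit runs one untimed warmup generation first; the same pattern could be applied here:

```python
# warmup pattern borrowed from the PyTorch-Models GPU example in this commit:
# run one untimed generation first, then time the second one
with torch.inference_mode():
    _ = model.generate(input_ids, max_new_tokens=args.n_predict)  # warmup
    torch.xpu.synchronize()

    st = time.time()
    output = model.generate(input_ids, max_new_tokens=args.n_predict)
    torch.xpu.synchronize()
    end = time.time()
```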

@@ -0,0 +1,54 @@
# Aquila2
In this directory, you will find examples of how you could use the BigDL-LLM `optimize_model` API to accelerate Aquila2 models. For illustration purposes, we utilize the [BAAI/AquilaChat2-7B](https://huggingface.co/BAAI/AquilaChat2-7B) as a reference Aquila2 model.

## Requirements
To run these examples with BigDL-LLM on Intel GPUs, we have some recommended requirements for your machine; please refer to [here](../README.md#recommended-requirements) for more information.

## Example: Predict Tokens using `generate()` API
In the example [generate.py](./generate.py), we show a basic use case for an Aquila2 model to predict the next N tokens using the `generate()` API, with BigDL-LLM INT4 optimizations on Intel GPUs.
### 1. Install
We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).

After installing conda, create a Python environment for BigDL-LLM:
```bash
conda create -n llm python=3.9 # recommend using Python 3.9
conda activate llm

# below command will install intel_extension_for_pytorch==2.0.110+xpu as default
# you can install specific ipex/torch versions for your needs
pip install --pre --upgrade bigdl-llm[xpu] -f https://developer.intel.com/ipex-whl-stable-xpu
```

### 2. Configure OneAPI environment variables
```bash
source /opt/intel/oneapi/setvars.sh
```

### 3. Run

For optimal performance on Arc, it is recommended to set several environment variables.

```bash
export USE_XETLA=OFF
export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
```

```bash
python ./generate.py --prompt 'AI是什么?'
```

In the example, several arguments can be passed to satisfy your requirements:

- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the huggingface repo id for the Aquila2 model to be downloaded, or the path to the huggingface checkpoint folder. It defaults to `'BAAI/AquilaChat2-7B'`.
- `--prompt PROMPT`: argument defining the prompt to be inferred (with integrated prompt format for chat). It defaults to `'AI是什么?'`.
- `--n-predict N_PREDICT`: argument defining the max number of tokens to predict. It defaults to `32`.

#### Sample Output
#### [BAAI/AquilaChat2-7B](https://huggingface.co/BAAI/AquilaChat2-7B)
```log
Inference time: xxxx s
-------------------- Prompt --------------------
<|startofpiece|>AI是什么?<|endofpiece|>
-------------------- Output --------------------
<|startofpiece|>AI是什么?<|endofpiece|>人工智能(Artificial Intelligence,简称AI)是计算机科学中一个极为重要的研究领域,旨在让计算机模仿人类的智能,包括学习、推理、识别物体
```

@@ -0,0 +1,74 @@
#
# Copyright 2016 The BigDL Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

import torch
import intel_extension_for_pytorch as ipex
import time
import argparse

from transformers import AutoModelForCausalLM, AutoTokenizer
from bigdl.llm import optimize_model

# you could tune the prompt based on your own model;
# here the prompt format follows https://huggingface.co/BAAI/AquilaChat2-7B/tree/main/predict.py
AQUILA2_PROMPT_FORMAT = "<|startofpiece|>{prompt}<|endofpiece|>"

if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Predict Tokens using `generate()` API for Aquila2 model')
    parser.add_argument('--repo-id-or-model-path', type=str, default="BAAI/AquilaChat2-7B",
                        help='The huggingface repo id for the Aquila2 model to be downloaded'
                             ', or the path to the huggingface checkpoint folder')
    parser.add_argument('--prompt', type=str, default="AI是什么?",
                        help='Prompt to infer')
    parser.add_argument('--n-predict', type=int, default=32,
                        help='Max tokens to predict')

    args = parser.parse_args()
    model_path = args.repo_id_or_model_path

    # Load model
    model = AutoModelForCausalLM.from_pretrained(model_path,
                                                 trust_remote_code=True,
                                                 torch_dtype='auto',
                                                 low_cpu_mem_usage=True)

    # Enable BigDL-LLM optimization on the model with only one line
    model = optimize_model(model)

    # Move the model to the Intel GPU
    model = model.to('xpu')

    # Load tokenizer
    tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

    # Generate predicted tokens
    with torch.inference_mode():
        prompt = AQUILA2_PROMPT_FORMAT.format(prompt=args.prompt)
        input_ids = tokenizer.encode(prompt, return_tensors="pt").to('xpu')
        # the ipex model needs a warmup run so that the timed run below is accurate
        output = model.generate(input_ids,
                                max_new_tokens=args.n_predict)

        # start inference
        st = time.time()
        output = model.generate(input_ids,
                                max_new_tokens=args.n_predict)
        torch.xpu.synchronize()
        end = time.time()
        output = output.cpu()
        output_str = tokenizer.decode(output[0], skip_special_tokens=True)
        print(f'Inference time: {end-st} s')
        print('-'*20, 'Output', '-'*20)
        print(output_str)