Add Llama3.1 example (#11689)
* Add Llama3.1 example for Linux Arc and Windows MTL
* Adjust compatibilities: transformers changed to 4.43.1
* Update index.rst
* Update README.md
* Update index.rst
* Update index.rst
* Update index.rst
parent 6e3ce28173
commit 5079ed9e06

5 changed files with 398 additions and 0 deletions
@@ -247,6 +247,7 @@ Over 50 models have been optimized/verified on `ipex-llm`, including *LLaMA/LLaM
| LLaMA *(such as Vicuna, Guanaco, Koala, Baize, WizardLM, etc.)* | [link1](python/llm/example/CPU/Native-Models), [link2](python/llm/example/CPU/HF-Transformers-AutoModels/Model/vicuna) | [link](python/llm/example/GPU/HuggingFace/LLM/vicuna) |
| LLaMA 2    | [link1](python/llm/example/CPU/Native-Models), [link2](python/llm/example/CPU/HF-Transformers-AutoModels/Model/llama2) | [link](python/llm/example/GPU/HuggingFace/LLM/llama2)  |
| LLaMA 3    | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/llama3) | [link](python/llm/example/GPU/HuggingFace/LLM/llama3)  |
| LLaMA 3.1  | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/llama3.1) | [link](python/llm/example/GPU/HuggingFace/LLM/llama3.1)  |
| ChatGLM    | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/chatglm)   |    |
| ChatGLM2   | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/chatglm2)  | [link](python/llm/example/GPU/HuggingFace/LLM/chatglm2)   |
| ChatGLM3   | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/chatglm3)  | [link](python/llm/example/GPU/HuggingFace/LLM/chatglm3)   |

86  python/llm/example/CPU/HF-Transformers-AutoModels/Model/llama3.1/README.md  Normal file
@@ -0,0 +1,86 @@
# Llama3.1
In this directory, you will find examples of how you can apply IPEX-LLM INT4 optimizations on Llama3.1 models. For illustration purposes, we use [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) as a reference Llama3.1 model.

## 0. Requirements
To run these examples with IPEX-LLM, we have some recommended requirements for your machine; please refer to [here](../README.md#recommended-requirements) for more information.

## Example: Predict Tokens using `generate()` API
In the example [generate.py](./generate.py), we show a basic use case for a Llama3.1 model to predict the next N tokens using the `generate()` API, with IPEX-LLM INT4 optimizations.
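
The core pattern in [generate.py](./generate.py) is sketched below (a simplified version, without the Llama3.1 chat prompt formatting used in the full script): load the model with `load_in_4bit=True` through `ipex_llm.transformers.AutoModelForCausalLM`, then call the standard Hugging Face `generate()` API.

```python
# Simplified sketch of generate.py (prompt formatting omitted); the model id below is the default
import torch
from ipex_llm.transformers import AutoModelForCausalLM
from transformers import AutoTokenizer

model_path = "meta-llama/Meta-Llama-3.1-8B-Instruct"  # or a local checkpoint folder

# load_in_4bit=True applies the IPEX-LLM INT4 optimizations to the linear layers
model = AutoModelForCausalLM.from_pretrained(model_path,
                                             load_in_4bit=True,
                                             optimize_model=True,
                                             trust_remote_code=True,
                                             use_cache=True)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

with torch.inference_mode():
    input_ids = tokenizer.encode("What is AI?", return_tensors="pt")
    output = model.generate(input_ids, max_new_tokens=32)
    print(tokenizer.decode(output[0], skip_special_tokens=False))
```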
### 1. Install
We suggest using conda to manage the environment:

On Linux:

```bash
conda create -n llm python=3.11
conda activate llm

# install ipex-llm with 'all' option
pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu

# transformers>=4.43.1 is required for Llama3.1 with IPEX-LLM optimizations
pip install transformers==4.43.1
pip install trl
```

On Windows:

```cmd
conda create -n llm python=3.11
conda activate llm

pip install --pre --upgrade ipex-llm[all]

# transformers>=4.43.1 is required for Llama3.1 with IPEX-LLM optimizations
pip install transformers==4.43.1
pip install trl
```

### 2. Run
```
python ./generate.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --prompt PROMPT --n-predict N_PREDICT
```

Arguments info:
- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the Hugging Face repo id for the Llama3.1 model (e.g. `meta-llama/Meta-Llama-3.1-8B-Instruct`) to be downloaded, or the path to the Hugging Face checkpoint folder. It defaults to `'meta-llama/Meta-Llama-3.1-8B-Instruct'`.
- `--prompt PROMPT`: argument defining the prompt to be inferred (with the integrated prompt format for chat). It defaults to `'What is AI?'`.
- `--n-predict N_PREDICT`: argument defining the max number of tokens to predict. It defaults to `32`.

> **Note**: When loading the model in 4-bit, IPEX-LLM converts the linear layers in the model into INT4 format. In theory, an *X*B model saved in 16-bit will require approximately 2*X* GB of memory for loading, and ~0.5*X* GB of memory for further inference.
>
> Please select the appropriate size of the Llama3.1 model based on the capabilities of your machine.
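>
> For example, by this estimate the 8B Instruct model saved in 16-bit would need roughly 16 GB of memory for loading and around 4 GB for INT4 inference.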

#### 2.1 Client
On a client Windows machine, it is recommended to run directly with full utilization of all cores:
```cmd
python ./generate.py
```

#### 2.2 Server
For optimal performance on a server, it is recommended to set several environment variables (refer to [here](../README.md#best-known-configuration-on-linux) for more information), and run the example with all the physical cores of a single socket.

E.g. on Linux,
```bash
# set IPEX-LLM env variables
source ipex-llm-init

# e.g. for a server with 48 cores per socket
export OMP_NUM_THREADS=48
numactl -C 0-47 -m 0 python ./generate.py
```

#### 2.3 Sample Output
#### [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct)
```log
Inference time: xxxx s
-------------------- Prompt --------------------
<|begin_of_text|><|start_header_id|>user<|end_header_id|>

What is AI?<|eot_id|><|start_header_id|>assistant<|end_header_id|>


-------------------- Output (skip_special_tokens=False) --------------------
<|begin_of_text|><|begin_of_text|><|start_header_id|>user<|end_header_id|>

What is AI?<|eot_id|><|start_header_id|>assistant<|end_header_id|>

Artificial intelligence (AI) refers to the simulation of human intelligence in machines that are programmed to think and learn like humans. The term may also be applied to
```

81  python/llm/example/CPU/HF-Transformers-AutoModels/Model/llama3.1/generate.py  Normal file
@@ -0,0 +1,81 @@
#
# Copyright 2016 The BigDL Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

import torch
import time
import argparse

from ipex_llm.transformers import AutoModelForCausalLM
from transformers import AutoTokenizer

# you could tune the prompt based on your own model,
# here the prompt tuning refers to https://llama.meta.com/docs/model-cards-and-prompt-formats/llama3_1
DEFAULT_SYSTEM_PROMPT = """\
"""

def get_prompt(user_input: str, chat_history: list[tuple[str, str]],
               system_prompt: str) -> str:
    prompt_texts = [f'<|begin_of_text|>']

    if system_prompt != '':
        prompt_texts.append(f'<|start_header_id|>system<|end_header_id|>\n\n{system_prompt}<|eot_id|>')

    for history_input, history_response in chat_history:
        prompt_texts.append(f'<|start_header_id|>user<|end_header_id|>\n\n{history_input.strip()}<|eot_id|>')
        prompt_texts.append(f'<|start_header_id|>assistant<|end_header_id|>\n\n{history_response.strip()}<|eot_id|>')

    prompt_texts.append(f'<|start_header_id|>user<|end_header_id|>\n\n{user_input.strip()}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n')
    return ''.join(prompt_texts)

if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Predict Tokens using `generate()` API for Llama3.1 model')
    parser.add_argument('--repo-id-or-model-path', type=str, default="meta-llama/Meta-Llama-3.1-8B-Instruct",
                        help='The huggingface repo id for the Llama3.1 model (e.g. `meta-llama/Meta-Llama-3.1-8B-Instruct`) to be downloaded'
                             ', or the path to the huggingface checkpoint folder')
    parser.add_argument('--prompt', type=str, default="What is AI?",
                        help='Prompt to infer')
    parser.add_argument('--n-predict', type=int, default=32,
                        help='Max tokens to predict')

    args = parser.parse_args()
    model_path = args.repo_id_or_model_path

    # Load the model in 4-bit,
    # which converts the relevant layers in the model into INT4 format
    model = AutoModelForCausalLM.from_pretrained(model_path,
                                                 load_in_4bit=True,
                                                 optimize_model=True,
                                                 trust_remote_code=True,
                                                 use_cache=True)

    # Load tokenizer
    tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

    # Generate predicted tokens
    with torch.inference_mode():
        prompt = get_prompt(args.prompt, [], system_prompt=DEFAULT_SYSTEM_PROMPT)
        input_ids = tokenizer.encode(prompt, return_tensors="pt")
        st = time.time()
        output = model.generate(input_ids,
                                max_new_tokens=args.n_predict)
        end = time.time()
        output_str = tokenizer.decode(output[0], skip_special_tokens=False)
        print(f'Inference time: {end-st} s')
        print('-'*20, 'Prompt', '-'*20)
        print(prompt)
        print('-'*20, 'Output (skip_special_tokens=False)', '-'*20)
        print(output_str)

140  python/llm/example/GPU/HuggingFace/LLM/llama3.1/README.md  Normal file
@@ -0,0 +1,140 @@
# Llama3.1
In this directory, you will find examples of how you can apply IPEX-LLM INT4 optimizations on Llama3.1 models on [Intel GPUs](../../../README.md). For illustration purposes, we use [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) as a reference Llama3.1 model.

## 0. Requirements
To run these examples with IPEX-LLM on Intel GPUs, we have some recommended requirements for your machine; please refer to [here](../../../README.md#requirements) for more information.

## Example: Predict Tokens using `generate()` API
In the example [generate.py](./generate.py), we show a basic use case for a Llama3.1 model to predict the next N tokens using the `generate()` API, with IPEX-LLM INT4 optimizations on Intel GPUs.
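
The core flow in [generate.py](./generate.py) is sketched below (a simplified version, without the Llama3.1 chat prompt formatting used in the full script): load the model with `load_in_4bit=True`, move it to the Intel GPU with `.to('xpu')`, and run a warmup `generate()` call before measuring inference time.

```python
# Simplified sketch of generate.py (prompt formatting omitted); the model id below is the default
import time
import torch
from ipex_llm.transformers import AutoModelForCausalLM
from transformers import AutoTokenizer

model_path = "meta-llama/Meta-Llama-3.1-8B-Instruct"

model = AutoModelForCausalLM.from_pretrained(model_path,
                                             load_in_4bit=True,
                                             optimize_model=True,
                                             trust_remote_code=True,
                                             use_cache=True)
model = model.half().to('xpu')  # run the INT4-optimized model on the Intel GPU
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

with torch.inference_mode():
    input_ids = tokenizer.encode("What is AI?", return_tensors="pt").to('xpu')
    model.generate(input_ids, max_new_tokens=32)   # warmup run so the timing below is accurate
    st = time.time()
    output = model.generate(input_ids, max_new_tokens=32)
    torch.xpu.synchronize()                        # wait for the GPU to finish before timing
    end = time.time()
    print(f'Inference time: {end - st} s')
    print(tokenizer.decode(output[0].cpu(), skip_special_tokens=False))
```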
### 1. Install
#### 1.1 Installation on Linux
We suggest using conda to manage the environment:
```bash
conda create -n llm python=3.11
conda activate llm
# below command will install intel_extension_for_pytorch==2.1.10+xpu as default
pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/

# transformers>=4.43.1 is required for Llama3.1 with IPEX-LLM optimizations
pip install transformers==4.43.1
pip install trl
```

#### 1.2 Installation on Windows
We suggest using conda to manage the environment:
```bash
conda create -n llm python=3.11 libuv
conda activate llm

# below command will install intel_extension_for_pytorch==2.1.10+xpu as default
pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/

# transformers>=4.43.1 is required for Llama3.1 with IPEX-LLM optimizations
pip install transformers==4.43.1
pip install trl
```

### 2. Configure OneAPI environment variables for Linux

> [!NOTE]
> Skip this step if you are running on Windows.

This is a required step on Linux for APT or offline-installed oneAPI. Skip this step for pip-installed oneAPI.

```bash
source /opt/intel/oneapi/setvars.sh
```

### 3. Runtime Configurations
For optimal performance, it is recommended to set several environment variables. Please check out the suggestions based on your device.
#### 3.1 Configurations for Linux
<details>

<summary>For Intel Arc™ A-Series Graphics and Intel Data Center GPU Flex Series</summary>

```bash
export USE_XETLA=OFF
export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
export SYCL_CACHE_PERSISTENT=1
```

</details>

<details>

<summary>For Intel Data Center GPU Max Series</summary>

```bash
export LD_PRELOAD=${LD_PRELOAD}:${CONDA_PREFIX}/lib/libtcmalloc.so
export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
export SYCL_CACHE_PERSISTENT=1
export ENABLE_SDP_FUSION=1
```
> Note: `libtcmalloc.so` can be installed with `conda install -c conda-forge -y gperftools=2.10`.
</details>

<details>

<summary>For Intel iGPU</summary>

```bash
export SYCL_CACHE_PERSISTENT=1
export BIGDL_LLM_XMX_DISABLED=1
```

</details>

#### 3.2 Configurations for Windows
<details>

<summary>For Intel iGPU</summary>

```cmd
set SYCL_CACHE_PERSISTENT=1
set BIGDL_LLM_XMX_DISABLED=1
```

</details>

<details>

<summary>For Intel Arc™ A-Series Graphics</summary>

```cmd
set SYCL_CACHE_PERSISTENT=1
```

</details>

> [!NOTE]
> The first time each model runs on an Intel iGPU/Intel Arc™ A300-Series or Pro A60, it may take several minutes to compile.

### 4. Running examples

```
python ./generate.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --prompt PROMPT --n-predict N_PREDICT
```

Arguments info:
- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the Hugging Face repo id for the Llama3.1 model (e.g. `meta-llama/Meta-Llama-3.1-8B-Instruct`) to be downloaded, or the path to the Hugging Face checkpoint folder. It defaults to `'meta-llama/Meta-Llama-3.1-8B-Instruct'`.
- `--prompt PROMPT`: argument defining the prompt to be inferred (with the integrated prompt format for chat). It defaults to `'What is AI?'`.
- `--n-predict N_PREDICT`: argument defining the max number of tokens to predict. It defaults to `32`.

#### Sample Output
#### [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct)
```log
Inference time: xxxx s
-------------------- Prompt --------------------
<|begin_of_text|><|start_header_id|>user<|end_header_id|>

What is AI?<|eot_id|><|start_header_id|>assistant<|end_header_id|>


-------------------- Output (skip_special_tokens=False) --------------------
<|begin_of_text|><|begin_of_text|><|start_header_id|>user<|end_header_id|>

What is AI?<|eot_id|><|start_header_id|>assistant<|end_header_id|>

AI, or Artificial Intelligence, refers to the development of computer systems that can perform tasks that typically require human intelligence, such as:

1. **Learning**: AI
```

90  python/llm/example/GPU/HuggingFace/LLM/llama3.1/generate.py  Normal file
@@ -0,0 +1,90 @@
#
# Copyright 2016 The BigDL Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

import torch
import time
import argparse

from ipex_llm.transformers import AutoModelForCausalLM
from transformers import AutoTokenizer

# you could tune the prompt based on your own model,
# here the prompt tuning refers to https://llama.meta.com/docs/model-cards-and-prompt-formats/llama3_1
DEFAULT_SYSTEM_PROMPT = """\
"""

def get_prompt(user_input: str, chat_history: list[tuple[str, str]],
               system_prompt: str) -> str:
    prompt_texts = [f'<|begin_of_text|>']

    if system_prompt != '':
        prompt_texts.append(f'<|start_header_id|>system<|end_header_id|>\n\n{system_prompt}<|eot_id|>')

    for history_input, history_response in chat_history:
        prompt_texts.append(f'<|start_header_id|>user<|end_header_id|>\n\n{history_input.strip()}<|eot_id|>')
        prompt_texts.append(f'<|start_header_id|>assistant<|end_header_id|>\n\n{history_response.strip()}<|eot_id|>')

    prompt_texts.append(f'<|start_header_id|>user<|end_header_id|>\n\n{user_input.strip()}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n')
    return ''.join(prompt_texts)

if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Predict Tokens using `generate()` API for Llama3.1 model')
    parser.add_argument('--repo-id-or-model-path', type=str, default="meta-llama/Meta-Llama-3.1-8B-Instruct",
                        help='The huggingface repo id for the Llama3.1 model (e.g. `meta-llama/Meta-Llama-3.1-8B-Instruct`) to be downloaded'
                             ', or the path to the huggingface checkpoint folder')
    parser.add_argument('--prompt', type=str, default="What is AI?",
                        help='Prompt to infer')
    parser.add_argument('--n-predict', type=int, default=32,
                        help='Max tokens to predict')

    args = parser.parse_args()
    model_path = args.repo_id_or_model_path

    # Load the model in 4-bit,
    # which converts the relevant layers in the model into INT4 format
    # When running LLMs on Intel iGPUs on Windows, we recommend setting `cpu_embedding=True` in the from_pretrained function.
    # This will allow the memory-intensive embedding layer to utilize the CPU instead of the iGPU.
    model = AutoModelForCausalLM.from_pretrained(model_path,
                                                 load_in_4bit=True,
                                                 optimize_model=True,
                                                 trust_remote_code=True,
                                                 use_cache=True)
    model = model.half().to('xpu')

    # Load tokenizer
    tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

    # Generate predicted tokens
    with torch.inference_mode():
        prompt = get_prompt(args.prompt, [], system_prompt=DEFAULT_SYSTEM_PROMPT)
        input_ids = tokenizer.encode(prompt, return_tensors="pt").to('xpu')
        # ipex_llm model needs a warmup run; after that, the measured inference time is accurate
        output = model.generate(input_ids,
                                max_new_tokens=args.n_predict)

        # start inference
        st = time.time()
        output = model.generate(input_ids,
                                max_new_tokens=args.n_predict)
        torch.xpu.synchronize()
        end = time.time()
        output = output.cpu()
        output_str = tokenizer.decode(output[0], skip_special_tokens=False)
        print(f'Inference time: {end-st} s')
        print('-'*20, 'Prompt', '-'*20)
        print(prompt)
        print('-'*20, 'Output (skip_special_tokens=False)', '-'*20)
        print(output_str)