Add CPU and GPU example for MiniCPM (#11202)
* Change installation address: replace the former address "https://docs.conda.io/en/latest/miniconda.html#" with the new address "https://conda-forge.org/download/" (63 occurrences under python\llm\example)
* Change prompt: replace "Anaconda Prompt" with "Miniforge Prompt" (1 occurrence)
* Create and update the minicpm model example
* Update the minicpm model example under GPU/PyTorch-Models
* Update README and generate.py: switch to `prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=False)` and delete `pip install transformers==4.37.0`
* Update comments for generate.py in the minicpm GPU example
* Add CPU example for MiniCPM
* Update the minicpm README for CPU
* Update READMEs for MiniCPM and Llama3
* Update the README for Llama3 CPU PyTorch
* Update and fix comments for MiniCPM
parent a27a559650
commit bfa1367149

13 changed files with 710 additions and 2 deletions
README.md

@@ -207,6 +207,7 @@ Over 50 models have been optimized/verified on `ipex-llm`, including *LLaMA/LLaM
 | CodeGemma | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/codegemma) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/codegemma) |
 | Command-R/cohere | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/cohere) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/cohere) |
 | CodeGeeX2 | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/codegeex2) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/codegeex2) |
+| MiniCPM | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/minicpm) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/minicpm) |
 
 ## Get Support
 - Please report a bug or raise a feature request by opening a [Github Issue](https://github.com/intel-analytics/ipex-llm/issues)
@@ -618,6 +618,13 @@ Verified Models
          <td>
            <a href="https://github.com/intel-analytics/ipex-llm/tree/main/python/llm/example/GPU/HF-Transformers-AutoModels/Model/codegeex2">link</a></td>
        </tr>
+       <tr>
+         <td>MiniCPM</td>
+         <td>
+           <a href="https://github.com/intel-analytics/ipex-llm/tree/main/python/llm/example/CPU/HF-Transformers-AutoModels/Model/minicpm">link</a></td>
+         <td>
+           <a href="https://github.com/intel-analytics/ipex-llm/tree/main/python/llm/example/GPU/HF-Transformers-AutoModels/Model/minicpm">link</a></td>
+       </tr>
      </tbody>
    </table>
 
python/llm/example/CPU/HF-Transformers-AutoModels/Model/minicpm/README.md (new file)

@@ -0,0 +1,71 @@
# MiniCPM
In this directory, you will find examples of how to apply IPEX-LLM INT4 optimizations on MiniCPM models. For illustration purposes, we use [openbmb/MiniCPM-2B-sft-bf16](https://huggingface.co/openbmb/MiniCPM-2B-sft-bf16) as a reference MiniCPM model.

## 0. Requirements
To run these examples with IPEX-LLM, we have some recommended requirements for your machine; please refer to [here](../README.md#recommended-requirements) for more information.

## Example: Predict Tokens using `generate()` API
In the example [generate.py](./generate.py), we show a basic use case for a MiniCPM model to predict the next N tokens using the `generate()` API, with IPEX-LLM INT4 optimizations.
### 1. Install
We suggest using conda to manage the environment:

On Linux:

```bash
conda create -n llm python=3.11
conda activate llm

# install ipex-llm with 'all' option
pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
```
On Windows:

```cmd
conda create -n llm python=3.11
conda activate llm

pip install --pre --upgrade ipex-llm[all]
```

### 2. Run
```
python ./generate.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --prompt PROMPT --n-predict N_PREDICT
```

Arguments info:
- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the huggingface repo id for the MiniCPM model (e.g. `openbmb/MiniCPM-2B-sft-bf16`) to be downloaded, or the path to the huggingface checkpoint folder. The default is `'openbmb/MiniCPM-2B-sft-bf16'`.
- `--prompt PROMPT`: argument defining the prompt to be inferred (with integrated prompt format for chat). The default is `'What is AI?'`.
- `--n-predict N_PREDICT`: argument defining the max number of tokens to predict. The default is `32`.
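
For instance, the defaults above can be spelled out explicitly; every value below is just the documented default, so this is equivalent to running `python ./generate.py` with no arguments:

```bash
python ./generate.py --repo-id-or-model-path openbmb/MiniCPM-2B-sft-bf16 --prompt "What is AI?" --n-predict 32
```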

> **Note**: When loading the model in 4-bit, IPEX-LLM converts linear layers in the model into INT4 format. In theory, a *X*B model saved in 16-bit requires approximately 2*X* GB of memory for loading, and ~0.5*X* GB of memory for further inference.
>
> Please select the appropriate size of the MiniCPM model based on the capabilities of your machine.

#### 2.1 Client
On a client Windows machine, it is recommended to run directly with full utilization of all cores:
```cmd
python ./generate.py
```

#### 2.2 Server
For optimal performance on servers, it is recommended to set several environment variables (refer to [here](../README.md#best-known-configuration-on-linux) for more information), and run the example with all the physical cores of a single socket.

E.g. on Linux,
```bash
# set IPEX-LLM env variables
source ipex-llm-init

# e.g. for a server with 48 cores per socket
export OMP_NUM_THREADS=48
numactl -C 0-47 -m 0 python ./generate.py
```

#### 2.3 Sample Output
#### [openbmb/MiniCPM-2B-sft-bf16](https://huggingface.co/openbmb/MiniCPM-2B-sft-bf16)
```log
Inference time: xxxx s
-------------------- Prompt --------------------
<用户>what is AI?<AI>
-------------------- Output --------------------
<s> <用户>what is AI?<AI> AI, or Artificial Intelligence, refers to the simulation of human intelligence in machines that are programmed to think and learn like humans. It is a broad field of computer
```
python/llm/example/CPU/HF-Transformers-AutoModels/Model/minicpm/generate.py (new file)

@@ -0,0 +1,72 @@
#
# Copyright 2016 The BigDL Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

import torch
import time
import argparse

from ipex_llm.transformers import AutoModelForCausalLM
from transformers import AutoTokenizer


if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Predict Tokens using `generate()` API for MiniCPM model')
    parser.add_argument('--repo-id-or-model-path', type=str, default="openbmb/MiniCPM-2B-sft-bf16",
                        help='The huggingface repo id for the MiniCPM model to be downloaded'
                             ', or the path to the huggingface checkpoint folder')
    parser.add_argument('--prompt', type=str, default="What is AI?",
                        help='Prompt to infer')
    parser.add_argument('--n-predict', type=int, default=32,
                        help='Max tokens to predict')

    args = parser.parse_args()
    model_path = args.repo_id_or_model_path

    # Load the model in 4 bit,
    # which converts the relevant layers in the model into INT4 format
    model = AutoModelForCausalLM.from_pretrained(model_path,
                                                 load_in_4bit=True,
                                                 optimize_model=True,
                                                 trust_remote_code=True,
                                                 use_cache=True)

    # Load tokenizer
    tokenizer = AutoTokenizer.from_pretrained(model_path,
                                              trust_remote_code=True)

    # Generate predicted tokens
    with torch.inference_mode():

        # here the prompt formatting refers to: https://huggingface.co/openbmb/MiniCPM-2B-sft-bf16/blob/79fbb1db171e6d8bf77cdb0a94076a43003abd9e/modeling_minicpm.py#L1320
        chat = [
            { "role": "user", "content": args.prompt },
        ]
        # for this model the template renders the chat as '<用户>{prompt}<AI>' (see the sample output in the README)
        prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=False)
        input_ids = tokenizer.encode(prompt, return_tensors="pt")

        # start inference
        st = time.time()

        output = model.generate(input_ids,
                                do_sample=False,
                                max_new_tokens=args.n_predict)
        end = time.time()
        output_str = tokenizer.decode(output[0], skip_special_tokens=False)
        print(f'Inference time: {end-st} s')
        print('-'*20, 'Prompt', '-'*20)
        print(prompt)
        print('-'*20, 'Output', '-'*20)
        print(output_str)
@@ -76,6 +76,7 @@ In the example, several arguments can be passed to satisfy your requirements:
 #### 2.4 Sample Output
 #### [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct)
 ```log
+Inference time: xxxx s
 -------------------- Prompt --------------------
 <|user|>
 What is AI?<|end|>
@@ -66,7 +66,7 @@ In the example, several arguments can be passed to satisfy your requirements:
 - `--prompt PROMPT`: argument defining the prompt to be inferred (with integrated prompt format for chat). The default is `'What is AI?'`.
 - `--n-predict N_PREDICT`: argument defining the max number of tokens to predict. The default is `32`.
 
-#### 2.3 Sample Output
+#### 2.4 Sample Output
 #### [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
 ```log
 Inference time: xxxx s
python/llm/example/CPU/PyTorch-Models/Model/minicpm/README.md (new file)

@@ -0,0 +1,74 @@
# MiniCPM
In this directory, you will find examples of how to use the IPEX-LLM `optimize_model` API to accelerate MiniCPM models. For illustration purposes, we use [openbmb/MiniCPM-2B-sft-bf16](https://huggingface.co/openbmb/MiniCPM-2B-sft-bf16) as a reference MiniCPM model.

## Requirements
To run these examples with IPEX-LLM, we have some recommended requirements for your machine; please refer to [here](../README.md#recommended-requirements) for more information.

## Example: Predict Tokens using `generate()` API
In the example [generate.py](./generate.py), we show a basic use case for a MiniCPM model to predict the next N tokens using the `generate()` API, with IPEX-LLM INT4 optimizations.
### 1. Install
We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).

After installing conda, create a Python environment for IPEX-LLM:

On Linux:

```bash
conda create -n llm python=3.11 # Python 3.11 is recommended
conda activate llm

# install the latest ipex-llm nightly build with 'all' option
pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
```

On Windows:

```cmd
conda create -n llm python=3.11
conda activate llm

pip install --pre --upgrade ipex-llm[all]
```

### 2. Run
```
python ./generate.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --prompt PROMPT --n-predict N_PREDICT
```

Arguments info:
- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the huggingface repo id for the MiniCPM model (e.g. `openbmb/MiniCPM-2B-sft-bf16`) to be downloaded, or the path to the huggingface checkpoint folder. The default is `'openbmb/MiniCPM-2B-sft-bf16'`.
- `--prompt PROMPT`: argument defining the prompt to be inferred (with integrated prompt format for chat). The default is `'What is AI?'`.
- `--n-predict N_PREDICT`: argument defining the max number of tokens to predict. The default is `32`.
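
For example, to point the script at a local checkpoint folder instead of the Hugging Face repo id (the path below is a placeholder; substitute your own download location):

```bash
python ./generate.py --repo-id-or-model-path ./MiniCPM-2B-sft-bf16 --prompt "What is AI?" --n-predict 32
```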

> **Note**: When loading the model in 4-bit, IPEX-LLM converts linear layers in the model into INT4 format. In theory, a *X*B model saved in 16-bit requires approximately 2*X* GB of memory for loading, and ~0.5*X* GB of memory for further inference.
>
> Please select the appropriate size of the MiniCPM model based on the capabilities of your machine.

#### 2.1 Client
On client Windows machines, it is recommended to run directly with full utilization of all cores:
```cmd
python ./generate.py --prompt 'What is AI?'
```

#### 2.2 Server
For optimal performance on servers, it is recommended to set several environment variables (refer to [here](../README.md#best-known-configuration-on-linux) for more information), and run the example with all the physical cores of a single socket.

E.g. on Linux,
```bash
# set IPEX-LLM env variables
source ipex-llm-init

# e.g. for a server with 48 cores per socket
export OMP_NUM_THREADS=48
numactl -C 0-47 -m 0 python ./generate.py --prompt 'What is AI?'
```

#### 2.3 Sample Output
#### [openbmb/MiniCPM-2B-sft-bf16](https://huggingface.co/openbmb/MiniCPM-2B-sft-bf16)
```log
Inference time: xxxx s
-------------------- Prompt --------------------
<用户>what is AI?<AI>
-------------------- Output --------------------
<s> <用户>what is AI?<AI> AI, or Artificial Intelligence, refers to the simulation of human intelligence in machines that are programmed to think and learn like humans. It is a broad field of computer
```
python/llm/example/CPU/PyTorch-Models/Model/minicpm/generate.py (new file)

@@ -0,0 +1,74 @@
#
# Copyright 2016 The BigDL Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

import torch
import time
import argparse

from transformers import AutoTokenizer, AutoModelForCausalLM
from ipex_llm import optimize_model


if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Predict Tokens using `generate()` API for MiniCPM model')
    parser.add_argument('--repo-id-or-model-path', type=str, default="openbmb/MiniCPM-2B-sft-bf16",
                        help='The huggingface repo id for the MiniCPM model to be downloaded'
                             ', or the path to the huggingface checkpoint folder')
    parser.add_argument('--prompt', type=str, default="What is AI?",
                        help='Prompt to infer')
    parser.add_argument('--n-predict', type=int, default=32,
                        help='Max tokens to predict')

    args = parser.parse_args()
    model_path = args.repo_id_or_model_path

    # Load model
    model = AutoModelForCausalLM.from_pretrained(model_path,
                                                 trust_remote_code=True,
                                                 torch_dtype='auto',
                                                 low_cpu_mem_usage=True,
                                                 use_cache=True)

    # Enable IPEX-LLM optimization on the model with only one line
    model = optimize_model(model)

    # Load tokenizer
    tokenizer = AutoTokenizer.from_pretrained(model_path,
                                              trust_remote_code=True)

    # Generate predicted tokens
    with torch.inference_mode():

        # here the prompt formatting refers to: https://huggingface.co/openbmb/MiniCPM-2B-sft-bf16/blob/79fbb1db171e6d8bf77cdb0a94076a43003abd9e/modeling_minicpm.py#L1320
        chat = [
            { "role": "user", "content": args.prompt },
        ]
        # for this model the template renders the chat as '<用户>{prompt}<AI>' (see the sample output in the README)
        prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=False)
        input_ids = tokenizer.encode(prompt, return_tensors="pt")

        # start inference
        st = time.time()

        output = model.generate(input_ids,
                                do_sample=False,
                                max_new_tokens=args.n_predict)
        end = time.time()
        output_str = tokenizer.decode(output[0], skip_special_tokens=False)
        print(f'Inference time: {end-st} s')
        print('-'*20, 'Prompt', '-'*20)
        print(prompt)
        print('-'*20, 'Output', '-'*20)
        print(output_str)
@@ -73,6 +73,7 @@ In the example, several arguments can be passed to satisfy your requirements:
 #### 2.4 Sample Output
 #### [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct)
 ```log
+Inference time: xxxx s
 -------------------- Prompt --------------------
 <|user|>
 What is AI?<|end|>
python/llm/example/GPU/HF-Transformers-AutoModels/Model/minicpm/README.md (new file)

@@ -0,0 +1,123 @@
# MiniCPM
In this directory, you will find examples of how to apply IPEX-LLM INT4 optimizations on MiniCPM models on [Intel GPUs](../../../README.md). For illustration purposes, we use [openbmb/MiniCPM-2B-sft-bf16](https://huggingface.co/openbmb/MiniCPM-2B-sft-bf16) as a reference MiniCPM model.

## 0. Requirements
To run these examples with IPEX-LLM on Intel GPUs, we have some recommended requirements for your machine; please refer to [here](../../../README.md#requirements) for more information.

## Example: Predict Tokens using `generate()` API
In the example [generate.py](./generate.py), we show a basic use case for a MiniCPM model to predict the next N tokens using the `generate()` API, with IPEX-LLM INT4 optimizations on Intel GPUs.
### 1. Install
#### 1.1 Installation on Linux
We suggest using conda to manage the environment:
```bash
conda create -n llm python=3.11
conda activate llm
# the command below installs intel_extension_for_pytorch==2.1.10+xpu by default
pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
```

#### 1.2 Installation on Windows
We suggest using conda to manage the environment:
```bash
conda create -n llm python=3.11 libuv
conda activate llm

# the command below installs intel_extension_for_pytorch==2.1.10+xpu by default
pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
```

### 2. Configure OneAPI environment variables for Linux

> [!NOTE]
> Skip this step if you are running on Windows.

This is a required step on Linux for APT or offline-installed oneAPI. Skip this step for PIP-installed oneAPI.

```bash
source /opt/intel/oneapi/setvars.sh
```

### 3. Runtime Configurations
For optimal performance, it is recommended to set several environment variables. Please check out the suggestions based on your device.
#### 3.1 Configurations for Linux
<details>

<summary>For Intel Arc™ A-Series Graphics and Intel Data Center GPU Flex Series</summary>

```bash
export USE_XETLA=OFF
export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
export SYCL_CACHE_PERSISTENT=1
```

</details>

<details>

<summary>For Intel Data Center GPU Max Series</summary>

```bash
export LD_PRELOAD=${LD_PRELOAD}:${CONDA_PREFIX}/lib/libtcmalloc.so
export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
export SYCL_CACHE_PERSISTENT=1
export ENABLE_SDP_FUSION=1
```
> Note: `libtcmalloc.so` can be installed with `conda install -c conda-forge -y gperftools=2.10`.
</details>

<details>

<summary>For Intel iGPU</summary>

```bash
export SYCL_CACHE_PERSISTENT=1
export BIGDL_LLM_XMX_DISABLED=1
```

</details>

#### 3.2 Configurations for Windows
<details>

<summary>For Intel iGPU</summary>

```cmd
set SYCL_CACHE_PERSISTENT=1
set BIGDL_LLM_XMX_DISABLED=1
```

</details>

<details>

<summary>For Intel Arc™ A-Series Graphics</summary>

```cmd
set SYCL_CACHE_PERSISTENT=1
```

</details>

> [!NOTE]
> The first time each model runs on an Intel iGPU or on an Intel Arc™ A300-Series or Pro A60 GPU, it may take several minutes to compile.
### 4. Running examples

```
python ./generate.py --prompt 'What is AI?'
```

Arguments info:
- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the huggingface repo id for the MiniCPM model (e.g. `openbmb/MiniCPM-2B-sft-bf16`) to be downloaded, or the path to the huggingface checkpoint folder. The default is `'openbmb/MiniCPM-2B-sft-bf16'`.
- `--prompt PROMPT`: argument defining the prompt to be inferred (with integrated prompt format for chat). The default is `'What is AI?'`.
- `--n-predict N_PREDICT`: argument defining the max number of tokens to predict. The default is `32`.
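
As an illustration, on a Linux machine with an Intel Arc A-Series GPU the steps above chain together as follows (a sketch that simply combines the commands already shown; adjust the exports to match your device):

```bash
# oneAPI environment (APT or offline installs only; skip for PIP-installed oneAPI)
source /opt/intel/oneapi/setvars.sh

# runtime configuration for Intel Arc A-Series Graphics
export USE_XETLA=OFF
export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
export SYCL_CACHE_PERSISTENT=1

python ./generate.py --prompt 'What is AI?'
```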

#### Sample Output
#### [openbmb/MiniCPM-2B-sft-bf16](https://huggingface.co/openbmb/MiniCPM-2B-sft-bf16)

```log
Inference time: xxxx s
-------------------- Prompt --------------------
<用户>what is AI?<AI>
-------------------- Output --------------------
<s> <用户>what is AI?<AI> AI, or Artificial Intelligence, refers to the simulation of human intelligence in machines that are programmed to think and learn like humans. It is a field of computer science
```
python/llm/example/GPU/HF-Transformers-AutoModels/Model/minicpm/generate.py (new file)

@@ -0,0 +1,80 @@
#
# Copyright 2016 The BigDL Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

import torch
import time
import argparse

from ipex_llm.transformers import AutoModelForCausalLM
from transformers import AutoTokenizer


if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Predict Tokens using `generate()` API for MiniCPM model')
    parser.add_argument('--repo-id-or-model-path', type=str, default="openbmb/MiniCPM-2B-sft-bf16",
                        help='The huggingface repo id for the MiniCPM model to be downloaded'
                             ', or the path to the huggingface checkpoint folder')
    parser.add_argument('--prompt', type=str, default="What is AI?",
                        help='Prompt to infer')
    parser.add_argument('--n-predict', type=int, default=32,
                        help='Max tokens to predict')

    args = parser.parse_args()
    model_path = args.repo_id_or_model_path

    # Load the model in 4 bit,
    # which converts the relevant layers in the model into INT4 format.
    # For Windows users running LLMs on Intel iGPUs, we recommend setting `cpu_embedding=True` in the from_pretrained function.
    # This will allow the memory-intensive embedding layer to utilize the CPU instead of the iGPU.
    model = AutoModelForCausalLM.from_pretrained(model_path,
                                                 load_in_4bit=True,
                                                 trust_remote_code=True,
                                                 optimize_model=True,
                                                 use_cache=True)

    model = model.to('xpu')

    # Load tokenizer
    tokenizer = AutoTokenizer.from_pretrained(model_path,
                                              trust_remote_code=True)

    # Generate predicted tokens
    with torch.inference_mode():

        # here the prompt formatting refers to: https://huggingface.co/openbmb/MiniCPM-2B-sft-bf16/blob/79fbb1db171e6d8bf77cdb0a94076a43003abd9e/modeling_minicpm.py#L1320
        chat = [
            { "role": "user", "content": args.prompt },
        ]
        # for this model the template renders the chat as '<用户>{prompt}<AI>' (see the sample output in the README)
        prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=False)
        input_ids = tokenizer.encode(prompt, return_tensors="pt").to('xpu')

        # the ipex_llm model needs a warmup run; only then is the measured inference time accurate
        output = model.generate(input_ids,
                                max_new_tokens=args.n_predict)
        # start inference
        st = time.time()

        output = model.generate(input_ids,
                                do_sample=False,
                                max_new_tokens=args.n_predict)
        torch.xpu.synchronize()
        end = time.time()
        output_str = tokenizer.decode(output[0], skip_special_tokens=False)
        print(f'Inference time: {end-st} s')
        print('-'*20, 'Prompt', '-'*20)
        print(prompt)
        print('-'*20, 'Output', '-'*20)
        print(output_str)

python/llm/example/GPU/PyTorch-Models/Model/minicpm/README.md (new file, 123 lines)
@@ -0,0 +1,123 @@
# MiniCPM
In this directory, you will find examples of how to use the IPEX-LLM `optimize_model` API to accelerate MiniCPM models on [Intel GPUs](../../../README.md). For illustration purposes, we use [openbmb/MiniCPM-2B-sft-bf16](https://huggingface.co/openbmb/MiniCPM-2B-sft-bf16) as a reference MiniCPM model.

## 0. Requirements
To run these examples with IPEX-LLM on Intel GPUs, we have some recommended requirements for your machine; please refer to [here](../../../README.md#requirements) for more information.

## Example: Predict Tokens using `generate()` API
In the example [generate.py](./generate.py), we show a basic use case for a MiniCPM model to predict the next N tokens using the `generate()` API, with IPEX-LLM INT4 optimizations on Intel GPUs.
### 1. Install
#### 1.1 Installation on Linux
We suggest using conda to manage the environment:
```bash
conda create -n llm python=3.11
conda activate llm
# the command below installs intel_extension_for_pytorch==2.1.10+xpu by default
pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
```

#### 1.2 Installation on Windows
We suggest using conda to manage the environment:
```bash
conda create -n llm python=3.11 libuv
conda activate llm

# the command below installs intel_extension_for_pytorch==2.1.10+xpu by default
pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
```

### 2. Configure OneAPI environment variables for Linux

> [!NOTE]
> Skip this step if you are running on Windows.

This is a required step on Linux for APT or offline-installed oneAPI. Skip this step for PIP-installed oneAPI.

```bash
source /opt/intel/oneapi/setvars.sh
```

### 3. Runtime Configurations
For optimal performance, it is recommended to set several environment variables. Please check out the suggestions based on your device.
#### 3.1 Configurations for Linux
<details>

<summary>For Intel Arc™ A-Series Graphics and Intel Data Center GPU Flex Series</summary>

```bash
export USE_XETLA=OFF
export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
export SYCL_CACHE_PERSISTENT=1
```

</details>

<details>

<summary>For Intel Data Center GPU Max Series</summary>

```bash
export LD_PRELOAD=${LD_PRELOAD}:${CONDA_PREFIX}/lib/libtcmalloc.so
export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
export SYCL_CACHE_PERSISTENT=1
export ENABLE_SDP_FUSION=1
```
> Note: `libtcmalloc.so` can be installed with `conda install -c conda-forge -y gperftools=2.10`.
</details>

<details>

<summary>For Intel iGPU</summary>

```bash
export SYCL_CACHE_PERSISTENT=1
export BIGDL_LLM_XMX_DISABLED=1
```

</details>

#### 3.2 Configurations for Windows
<details>

<summary>For Intel iGPU</summary>

```cmd
set SYCL_CACHE_PERSISTENT=1
set BIGDL_LLM_XMX_DISABLED=1
```

</details>

<details>

<summary>For Intel Arc™ A-Series Graphics</summary>

```cmd
set SYCL_CACHE_PERSISTENT=1
```

</details>

> [!NOTE]
> The first time each model runs on an Intel iGPU or on an Intel Arc™ A300-Series or Pro A60 GPU, it may take several minutes to compile.
### 4. Running examples

```
python ./generate.py --prompt 'What is AI?'
```

Arguments info:
- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the huggingface repo id for the MiniCPM model (e.g. `openbmb/MiniCPM-2B-sft-bf16`) to be downloaded, or the path to the huggingface checkpoint folder. The default is `'openbmb/MiniCPM-2B-sft-bf16'`.
- `--prompt PROMPT`: argument defining the prompt to be inferred (with integrated prompt format for chat). The default is `'What is AI?'`.
- `--n-predict N_PREDICT`: argument defining the max number of tokens to predict. The default is `32`.
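
As an illustration, on a Linux machine with an Intel iGPU the steps above chain together as follows (a sketch that simply combines the commands already shown; adjust the exports to match your device):

```bash
# oneAPI environment (APT or offline installs only; skip for PIP-installed oneAPI)
source /opt/intel/oneapi/setvars.sh

# runtime configuration for an Intel iGPU
export SYCL_CACHE_PERSISTENT=1
export BIGDL_LLM_XMX_DISABLED=1

python ./generate.py --prompt 'What is AI?'
```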

#### Sample Output
#### [openbmb/MiniCPM-2B-sft-bf16](https://huggingface.co/openbmb/MiniCPM-2B-sft-bf16)

```log
Inference time: xxxx s
-------------------- Prompt --------------------
<用户>what is AI?<AI>
-------------------- Output --------------------
<s> <用户>what is AI?<AI> AI, or Artificial Intelligence, refers to the simulation of human intelligence in machines that are programmed to think and learn like humans. It is a field of computer science
```
python/llm/example/GPU/PyTorch-Models/Model/minicpm/generate.py (new file)

@@ -0,0 +1,81 @@
#
# Copyright 2016 The BigDL Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

import torch
import time
import argparse

from transformers import AutoModelForCausalLM, AutoTokenizer
from ipex_llm import optimize_model


if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Predict Tokens using `generate()` API for MiniCPM model')
    parser.add_argument('--repo-id-or-model-path', type=str, default="openbmb/MiniCPM-2B-sft-bf16",
                        help='The huggingface repo id for the MiniCPM model to be downloaded'
                             ', or the path to the huggingface checkpoint folder')
    parser.add_argument('--prompt', type=str, default="What is AI?",
                        help='Prompt to infer')
    parser.add_argument('--n-predict', type=int, default=32,
                        help='Max tokens to predict')

    args = parser.parse_args()
    model_path = args.repo_id_or_model_path

    # Load model
    model = AutoModelForCausalLM.from_pretrained(model_path,
                                                 trust_remote_code=True,
                                                 torch_dtype='auto',
                                                 low_cpu_mem_usage=True,
                                                 use_cache=True)

    # Enable IPEX-LLM optimization on the model with only one line.
    # For Windows users running LLMs on Intel iGPUs, we recommend setting `cpu_embedding=True` in the optimize_model function.
    # This will allow the memory-intensive embedding layer to utilize the CPU instead of the iGPU.
    model = optimize_model(model)
    model = model.to('xpu')

    # Load tokenizer
    tokenizer = AutoTokenizer.from_pretrained(model_path,
                                              trust_remote_code=True)

    # Generate predicted tokens
    with torch.inference_mode():

        # here the prompt formatting refers to: https://huggingface.co/openbmb/MiniCPM-2B-sft-bf16/blob/79fbb1db171e6d8bf77cdb0a94076a43003abd9e/modeling_minicpm.py#L1320
        chat = [
            { "role": "user", "content": args.prompt },
        ]
        # for this model the template renders the chat as '<用户>{prompt}<AI>' (see the sample output in the README)
        prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=False)
        input_ids = tokenizer.encode(prompt, return_tensors="pt").to('xpu')

        # the ipex_llm model needs a warmup run; only then is the measured inference time accurate
        output = model.generate(input_ids,
                                max_new_tokens=args.n_predict)
        # start inference
        st = time.time()

        output = model.generate(input_ids,
                                do_sample=False,
                                max_new_tokens=args.n_predict)
        torch.xpu.synchronize()
        end = time.time()
        output_str = tokenizer.decode(output[0], skip_special_tokens=False)
        print(f'Inference time: {end-st} s')
        print('-'*20, 'Prompt', '-'*20)
        print(prompt)
        print('-'*20, 'Output', '-'*20)
        print(output_str)