[LLM] add a llama2 gguf example (#9553)

parent 7f6465518a
commit 66f5b45f57

2 changed files with 134 additions and 0 deletions

python/llm/example/CPU/GGUF-Models/llama2/README.md (75 lines, Normal file)

@@ -0,0 +1,75 @@
# Llama2

In this directory, you will find examples of how to load a gguf Llama2 model and convert it to a bigdl-llm model. For illustration purposes, we use [llama-2-7b-chat.Q4_0.gguf](https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGUF/tree/main) and [llama-2-7b-chat.Q4_1.gguf](https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGUF/tree/main) as reference Llama2 gguf models.

## Requirements
To run these examples with BigDL-LLM, your machine should meet some recommended requirements; please refer to [here](../README.md#recommended-requirements) for more information.

## Example: Load gguf model using `from_gguf()` API
In the example [generate.py](./generate.py), we show a basic use case of loading a gguf Llama2 model and converting it to a bigdl-llm model, with BigDL-LLM optimizations, using the `from_gguf()` API.

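At its core, the conversion comes down to a few lines. The following is a minimal sketch of the `from_gguf()` usage, mirroring [generate.py](./generate.py) in this commit; the model path and prompt are placeholders, and the chat prompt format and `torch.inference_mode()` used in the full example are omitted for brevity:
```python
from bigdl.llm.transformers import AutoModelForCausalLM

# Load the gguf model and vocab, then convert them to a bigdl-llm model
# and a huggingface tokenizer (see generate.py for the full example)
model, tokenizer = AutoModelForCausalLM.from_gguf("llama-2-7b-chat.Q4_0.gguf")  # placeholder path

input_ids = tokenizer.encode("What is AI?", return_tensors="pt")
output = model.generate(input_ids, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
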
### 1. Install
We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).

After installing conda, create a Python environment for BigDL-LLM:
```bash
conda create -n llm python=3.9 # recommend to use Python 3.9
conda activate llm

pip install --pre --upgrade bigdl-llm[all] # install the latest bigdl-llm nightly build with 'all' option
```

### 2. Run
After setting up the Python environment, you can run the example with the following steps.

#### 2.1 Client
On client Windows machines, it is recommended to run directly with full utilization of all cores:
```powershell
python ./generate.py --model <path_to_gguf_model> --prompt 'What is AI?'
```
More information about the arguments can be found in the [Arguments Info](#23-arguments-info) section. The expected output can be found in the [Sample Output](#24-sample-output) section.

#### 2.2 Server
For optimal performance on a server, it is recommended to set several environment variables (refer to [here](../README.md#best-known-configuration-on-linux) for more information) and run the example with all the physical cores of a single socket.

E.g. on Linux,
```bash
# set BigDL-LLM env variables
source bigdl-llm-init

# e.g. for a server with 48 cores per socket
export OMP_NUM_THREADS=48
numactl -C 0-47 -m 0 python ./generate.py --model <path_to_gguf_model> --prompt 'What is AI?'
```
More information about the arguments can be found in the [Arguments Info](#23-arguments-info) section. The expected output can be found in the [Sample Output](#24-sample-output) section.

#### 2.3 Arguments Info
In the example, several arguments can be passed to satisfy your requirements:

- `--model`: path to the gguf model; it should be a file with a name like `llama-2-7b-chat.Q4_0.gguf`.
- `--prompt PROMPT`: argument defining the prompt to be inferred, wrapped with the integrated chat prompt format (see the sketch below). It defaults to `'What is AI?'`.
- `--n-predict N_PREDICT`: argument defining the max number of tokens to predict. It defaults to `32`.

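For reference, the integrated chat prompt format mentioned above is the template applied in [generate.py](./generate.py) before tokenization; a minimal sketch of that wrapping step:
```python
# Chat prompt style used by the example (see generate.py); the raw --prompt
# value is substituted into this template before it is tokenized
LLAMA2_PROMPT_FORMAT = """### HUMAN:
{prompt}

### RESPONSE:
"""

prompt = LLAMA2_PROMPT_FORMAT.format(prompt="What is AI?")
```
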
#### 2.4 Sample Output
#### [llama-2-7b-chat.Q4_0.gguf](https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGUF/tree/main)
```log
Inference time: xxxx s
-------------------- Output --------------------
### HUMAN:
What is AI?

### RESPONSE:

AI is a term used to describe a type of computer software that is designed to perform tasks that typically require human intelligence, such as visual perception, speech
```

#### [llama-2-7b-chat.Q4_1.gguf](https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGUF/tree/main)
```log
Inference time: xxxx s
-------------------- Output --------------------
### HUMAN:
What is AI?

### RESPONSE:

Artificial intelligence (AI) is the field of study focused on creating machines that can perform tasks that typically require human intelligence, such as understanding language,
```

python/llm/example/CPU/GGUF-Models/llama2/generate.py (59 lines, Normal file)

@@ -0,0 +1,59 @@
#
# Copyright 2016 The BigDL Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

import torch
import time
import argparse

from transformers import LlamaTokenizer
from bigdl.llm.transformers import AutoModelForCausalLM

# you could tune the prompt based on your own model,
# here the prompt tuning refers to https://huggingface.co/georgesung/llama2_7b_chat_uncensored#prompt-style
LLAMA2_PROMPT_FORMAT = """### HUMAN:
{prompt}

### RESPONSE:
"""

if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Predict Tokens using `generate()` API for Llama2 model')
    parser.add_argument('--model', type=str, required=True,
                        help='Path to a gguf model')
    parser.add_argument('--prompt', type=str, default="What is AI?",
                        help='Prompt to infer')
    parser.add_argument('--n-predict', type=int, default=32,
                        help='Max tokens to predict')

    args = parser.parse_args()

    model_path = args.model

    # Load gguf model and vocab, then convert them to bigdl-llm model and huggingface tokenizer
    model, tokenizer = AutoModelForCausalLM.from_gguf(model_path)

    # Generate predicted tokens
    with torch.inference_mode():
        prompt = LLAMA2_PROMPT_FORMAT.format(prompt=args.prompt)
        input_ids = tokenizer.encode(prompt, return_tensors="pt")
        st = time.time()
        output = model.generate(input_ids,
                                max_new_tokens=args.n_predict)
        end = time.time()
        output_str = tokenizer.decode(output[0], skip_special_tokens=True)
        print(f'Inference time: {end-st} s')
        print('-'*20, 'Output', '-'*20)
        print(output_str)