LLM: add optimize_model examples for llama2 and chatglm (#8894)
* add llama2 and chatglm optimize_model examples
* update default usage
* update command and some descriptions
* move folder and remove general_int4 descriptions
* change folder name
parent f00c442d40 · commit 2d81521019
7 changed files with 279 additions and 0 deletions
python/llm/example/pytorch-models/README.md (new file, 21 lines)
@@ -0,0 +1,21 @@
# BigDL-LLM INT4 Optimization for Large Language Models

You can use the `optimize_model` API to accelerate general PyTorch models on Intel servers and PCs. This directory contains example scripts to help you quickly get started using BigDL-LLM to run some popular open-source models in the community. Each model has its own dedicated folder, where you can find detailed instructions on how to install and run it.
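
All of the examples follow the same pattern: load the model through its usual Hugging Face API, then pass it through `optimize_model`. A condensed sketch of that pattern (taken from the Llama 2 example in this directory; see each model's folder for the complete, tested script):

```python
from transformers import AutoModelForCausalLM
from bigdl.llm import optimize_model

# Load the model as usual via the Hugging Face transformers API
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-chat-hf",
                                             trust_remote_code=True)

# One extra line enables BigDL-LLM INT4 optimization
model = optimize_model(model)
```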

## Verified Models

| Model          | Example                |
|----------------|------------------------|
| LLaMA 2        | [link](llama2)         |
| ChatGLM        | [link](chatglm)        |
| OpenAI Whisper | [link](openai-whisper) |
## Recommended Requirements

To run the examples, we recommend using Intel® Xeon® processors (server) or 12th Gen and later Intel® Core™ processors (client).

For OS, BigDL-LLM supports Ubuntu 20.04 or later, CentOS 7 or later, and Windows 10/11.
## Best Known Configuration on Linux

For better performance, it is recommended to set environment variables on Linux with the help of BigDL-Nano:

```bash
pip install bigdl-nano
source bigdl-nano-init
```
python/llm/example/pytorch-models/chatglm/README.md (new file, 58 lines)
@@ -0,0 +1,58 @@
# ChatGLM

In this directory, you will find examples of how to use the BigDL-LLM `optimize_model` API to accelerate ChatGLM models. For illustration purposes, we use [THUDM/chatglm-6b](https://huggingface.co/THUDM/chatglm-6b) as a reference ChatGLM model.
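
The essential pattern, condensed from [generate.py](./generate.py) (a sketch, not a complete script):

```python
from transformers import AutoModel, AutoTokenizer
from bigdl.llm import optimize_model

model_path = "THUDM/chatglm-6b"

# Load the model and tokenizer as usual, then enable BigDL-LLM INT4
# optimization on the model with a single call
model = AutoModel.from_pretrained(model_path, trust_remote_code=True)
model = optimize_model(model)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
```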

## Requirements

To run these examples with BigDL-LLM, we have some recommended requirements for your machine; please refer to [here](../README.md#recommended-requirements) for more information.

## Example: Predict Tokens using `generate()` API

In the example [generate.py](./generate.py), we show a basic use case for a ChatGLM model to predict the next N tokens using the `generate()` API, with BigDL-LLM INT4 optimizations.

### 1. Install

We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).

After installing conda, create a Python environment for BigDL-LLM:

```bash
conda create -n llm python=3.9 # Python 3.9 is recommended
conda activate llm

pip install --pre --upgrade bigdl-llm[all] # install the latest bigdl-llm nightly build with the 'all' option
```

### 2. Run

After setting up the Python environment, you can run the example with the following steps.

#### 2.1 Client

On client Windows machines, it is recommended to run directly with full utilization of all cores:

```powershell
python ./generate.py --prompt 'AI是什么?'
```

More information about arguments can be found in the [Arguments Info](#23-arguments-info) section. The expected output can be found in the [Sample Output](#24-sample-output) section.

#### 2.2 Server

For optimal performance on servers, it is recommended to set several environment variables (refer to [here](../README.md#best-known-configuration-on-linux) for more information) and run the example with all the physical cores of a single socket.

E.g. on Linux,

```bash
# set BigDL-Nano env variables
source bigdl-nano-init

# e.g. for a server with 48 cores per socket
export OMP_NUM_THREADS=48
numactl -C 0-47 -m 0 python ./generate.py --prompt 'AI是什么?'
```

More information about arguments can be found in the [Arguments Info](#23-arguments-info) section. The expected output can be found in the [Sample Output](#24-sample-output) section.

#### 2.3 Arguments Info

In the example, several arguments can be passed to satisfy your requirements (a combined invocation follows the list):

- `--repo-id-or-model-path`: str, argument defining the Hugging Face repo id for the ChatGLM model to be downloaded, or the path to the Hugging Face checkpoint folder. It defaults to `'THUDM/chatglm-6b'`.
- `--prompt`: str, argument defining the prompt to be inferred (with integrated prompt format for chat). It defaults to `'AI是什么?'` ("What is AI?").
- `--n-predict`: int, argument defining the max number of tokens to predict. It defaults to `32`.
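
For example, a run that sets every argument explicitly (the values shown here are simply the defaults):

```bash
python ./generate.py --repo-id-or-model-path THUDM/chatglm-6b --prompt 'AI是什么?' --n-predict 32
```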

#### 2.4 Sample Output
#### [THUDM/chatglm-6b](https://huggingface.co/THUDM/chatglm-6b)
```log
Inference time: xxxx s
-------------------- Output --------------------
问:AI是什么?
答: AI是人工智能(Artificial Intelligence)的缩写,指的是一种能够模拟人类智能的技术或系统。AI包括机器学习、深度学习、自然语言处理、计算机视觉
```

(In English: "Q: What is AI? A: AI is short for Artificial Intelligence, a technology or system capable of simulating human intelligence. AI includes machine learning, deep learning, natural language processing, computer vision...")
python/llm/example/pytorch-models/chatglm/generate.py (new file, 61 lines)
@@ -0,0 +1,61 @@
#
# Copyright 2016 The BigDL Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

import torch
import time
import argparse

from transformers import AutoModel, AutoTokenizer
from bigdl.llm import optimize_model

# You could tune the prompt based on your own model;
# here the prompt tuning refers to https://huggingface.co/THUDM/chatglm-6b/blob/294cb13118a1e08ad8449ca542624a5c6aecc401/modeling_chatglm.py#L1281
CHATGLM_V1_PROMPT_FORMAT = "问:{prompt}\n答:"

if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Predict Tokens using `generate()` API for ChatGLM model')
    parser.add_argument('--repo-id-or-model-path', type=str, default="THUDM/chatglm-6b",
                        help='The huggingface repo id for the ChatGLM model to be downloaded'
                             ', or the path to the huggingface checkpoint folder')
    parser.add_argument('--prompt', type=str, default="AI是什么?",
                        help='Prompt to infer')
    parser.add_argument('--n-predict', type=int, default=32,
                        help='Max tokens to predict')

    args = parser.parse_args()
    model_path = args.repo_id_or_model_path

    # Load model
    model = AutoModel.from_pretrained(model_path, trust_remote_code=True)

    # With only one line of code, enable BigDL-LLM optimization on the model
    model = optimize_model(model)

    # Load tokenizer
    tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

    # Generate predicted tokens
    with torch.inference_mode():
        prompt = CHATGLM_V1_PROMPT_FORMAT.format(prompt=args.prompt)
        input_ids = tokenizer.encode(prompt, return_tensors="pt")
        st = time.time()
        output = model.generate(input_ids,
                                max_new_tokens=args.n_predict)
        end = time.time()
        output_str = tokenizer.decode(output[0], skip_special_tokens=True)
        print(f'Inference time: {end-st} s')
        print('-'*20, 'Output', '-'*20)
        print(output_str)
python/llm/example/pytorch-models/llama2/README.md (new file, 74 lines)
@@ -0,0 +1,74 @@
# Llama2

In this directory, you will find examples of how to use the BigDL-LLM `optimize_model` API to accelerate Llama2 models. For illustration purposes, we use [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) and [meta-llama/Llama-2-13b-chat-hf](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf) as reference Llama2 models.
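
The essential pattern, condensed from [generate.py](./generate.py) (a sketch, not a complete script):

```python
from transformers import AutoModelForCausalLM, LlamaTokenizer
from bigdl.llm import optimize_model

model_path = "meta-llama/Llama-2-7b-chat-hf"

# Load the model and tokenizer as usual, then enable BigDL-LLM INT4
# optimization on the model with a single call
model = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True)
model = optimize_model(model)
tokenizer = LlamaTokenizer.from_pretrained(model_path, trust_remote_code=True)
```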

## Requirements

To run these examples with BigDL-LLM, we have some recommended requirements for your machine; please refer to [here](../README.md#recommended-requirements) for more information.

## Example: Predict Tokens using `generate()` API

In the example [generate.py](./generate.py), we show a basic use case for a Llama2 model to predict the next N tokens using the `generate()` API, with BigDL-LLM INT4 optimizations.

### 1. Install

We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).

After installing conda, create a Python environment for BigDL-LLM:

```bash
conda create -n llm python=3.9 # Python 3.9 is recommended
conda activate llm

pip install --pre --upgrade bigdl-llm[all] # install the latest bigdl-llm nightly build with the 'all' option
```

### 2. Run

After setting up the Python environment, you can run the example with the following steps.

#### 2.1 Client

On client Windows machines, it is recommended to run directly with full utilization of all cores:

```powershell
python ./generate.py --prompt 'What is AI?'
```

More information about arguments can be found in the [Arguments Info](#23-arguments-info) section. The expected output can be found in the [Sample Output](#24-sample-output) section.

#### 2.2 Server

For optimal performance on servers, it is recommended to set several environment variables (refer to [here](../README.md#best-known-configuration-on-linux) for more information) and run the example with all the physical cores of a single socket.

E.g. on Linux,

```bash
# set BigDL-Nano env variables
source bigdl-nano-init

# e.g. for a server with 48 cores per socket
export OMP_NUM_THREADS=48
numactl -C 0-47 -m 0 python ./generate.py --prompt 'What is AI?'
```

More information about arguments can be found in the [Arguments Info](#23-arguments-info) section. The expected output can be found in the [Sample Output](#24-sample-output) section.

#### 2.3 Arguments Info

In the example, several arguments can be passed to satisfy your requirements (a combined invocation follows the list):

- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the Hugging Face repo id for the Llama2 model (e.g. `meta-llama/Llama-2-7b-chat-hf` or `meta-llama/Llama-2-13b-chat-hf`) to be downloaded, or the path to the Hugging Face checkpoint folder. It defaults to `'meta-llama/Llama-2-7b-chat-hf'`.
- `--prompt PROMPT`: argument defining the prompt to be inferred (with integrated prompt format for chat). It defaults to `'What is AI?'`.
- `--n-predict N_PREDICT`: argument defining the max number of tokens to predict. It defaults to `32`.
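
For example, to run the 13B variant with the prompt and token budget spelled out explicitly:

```bash
python ./generate.py --repo-id-or-model-path meta-llama/Llama-2-13b-chat-hf --prompt 'What is AI?' --n-predict 32
```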

#### 2.4 Sample Output
#### [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf)
```log
Inference time: xxxx s
-------------------- Output --------------------
### HUMAN:
What is AI?

### RESPONSE:

AI is a branch of computer science that focuses on creating intelligent machines that can perform tasks that typically require human intelligence, such as understanding natural language,
```

#### [meta-llama/Llama-2-13b-chat-hf](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf)
```log
Inference time: xxxx s
-------------------- Output --------------------
### HUMAN:
What is AI?

### RESPONSE:

AI, or artificial intelligence, refers to the ability of machines to perform tasks that would typically require human intelligence, such as learning, problem-solving,
```
python/llm/example/pytorch-models/llama2/generate.py (new file, 65 lines)
@@ -0,0 +1,65 @@
#
# Copyright 2016 The BigDL Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

import torch
import time
import argparse

from transformers import AutoModelForCausalLM, LlamaTokenizer
from bigdl.llm import optimize_model

# You could tune the prompt based on your own model;
# here the prompt tuning refers to https://huggingface.co/georgesung/llama2_7b_chat_uncensored#prompt-style
LLAMA2_PROMPT_FORMAT = """### HUMAN:
{prompt}

### RESPONSE:
"""

if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Predict Tokens using `generate()` API for Llama2 model')
    parser.add_argument('--repo-id-or-model-path', type=str, default="meta-llama/Llama-2-7b-chat-hf",
                        help='The huggingface repo id for the Llama2 model (e.g. `meta-llama/Llama-2-7b-chat-hf` and `meta-llama/Llama-2-13b-chat-hf`) to be downloaded'
                             ', or the path to the huggingface checkpoint folder')
    parser.add_argument('--prompt', type=str, default="What is AI?",
                        help='Prompt to infer')
    parser.add_argument('--n-predict', type=int, default=32,
                        help='Max tokens to predict')

    args = parser.parse_args()
    model_path = args.repo_id_or_model_path

    # Load model
    model = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True)

    # With only one line of code, enable BigDL-LLM optimization on the model
    model = optimize_model(model)

    # Load tokenizer
    tokenizer = LlamaTokenizer.from_pretrained(model_path, trust_remote_code=True)

    # Generate predicted tokens
    with torch.inference_mode():
        prompt = LLAMA2_PROMPT_FORMAT.format(prompt=args.prompt)
        input_ids = tokenizer.encode(prompt, return_tensors="pt")
        st = time.time()
        output = model.generate(input_ids,
                                max_new_tokens=args.n_predict)
        end = time.time()
        output_str = tokenizer.decode(output[0], skip_special_tokens=True)
        print(f'Inference time: {end-st} s')
        print('-'*20, 'Output', '-'*20)
        print(output_str)