[LLM] Add more transformers int4 example (vicuna) (#8544)

parent fccae91461 · commit f1fd746722
2 changed files with 147 additions and 0 deletions

@@ -0,0 +1,80 @@
# Vicuna
In this directory, you will find examples of how to apply BigDL-LLM INT4 optimizations on Vicuna models. For illustration purposes, we use [lmsys/vicuna-13b-v1.3](https://huggingface.co/lmsys/vicuna-13b-v1.3) and [eachadea/vicuna-7b-1.1](https://huggingface.co/eachadea/vicuna-7b-1.1) as reference Vicuna models.

## 0. Requirements
To run these examples with BigDL-LLM, we have some recommended requirements for your machine; please refer to [here](../README.md#recommended-requirements) for more information.

## Example: Predict Tokens using `generate()` API
In the example [generate.py](./generate.py), we show a basic use case for a Vicuna model to predict the next N tokens using the `generate()` API, with BigDL-LLM INT4 optimizations.
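At its core, the example uses BigDL-LLM's `transformers`-style API, where `load_in_4bit=True` triggers the INT4 conversion at load time. The condensed sketch below mirrors what [generate.py](./generate.py) does; the model path, prompt format, and generation settings are simply the example's defaults:
```python
import torch
from bigdl.llm.transformers import AutoModelForCausalLM  # BigDL-LLM drop-in for the transformers class
from transformers import LlamaTokenizer

model_path = "lmsys/vicuna-13b-v1.3"  # repo id or local checkpoint folder

# `load_in_4bit=True` converts the relevant linear layers to INT4 on load
model = AutoModelForCausalLM.from_pretrained(model_path, load_in_4bit=True)
tokenizer = LlamaTokenizer.from_pretrained(model_path)

with torch.inference_mode():
    input_ids = tokenizer.encode("### Human:\nWhat is AI? \n ### Assistant:\n",
                                 return_tensors="pt")
    output = model.generate(input_ids, use_cache=True, max_new_tokens=32)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```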
### 1. Install
We suggest using conda to manage the environment:
```bash
conda create -n llm python=3.9
conda activate llm

pip install bigdl-llm[all] # install bigdl-llm with 'all' option
```

### 2. Run
```bash
python ./generate.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --prompt PROMPT --n-predict N_PREDICT
```

Arguments info:
- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the huggingface repo id for the Vicuna model (e.g. `lmsys/vicuna-13b-v1.3` and `eachadea/vicuna-7b-1.1`) to be downloaded, or the path to the huggingface checkpoint folder. The default value is `'lmsys/vicuna-13b-v1.3'`.
- `--prompt PROMPT`: argument defining the prompt to be inferred (with the integrated prompt format for chat). The default value is `'What is AI?'`.
- `--n-predict N_PREDICT`: argument defining the max number of tokens to predict. The default value is `32`.
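
For example, to run the 7B reference model with the default prompt and token budget (argument values shown are illustrative):
```bash
python ./generate.py --repo-id-or-model-path eachadea/vicuna-7b-1.1 --prompt "What is AI?" --n-predict 32
```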

> **Note**: When loading the model in 4-bit, BigDL-LLM converts linear layers in the model into INT4 format. In theory, an *X*B model saved in 16-bit requires approximately 2*X* GB of memory for loading, and ~0.5*X* GB of memory for further inference. For instance, the 13B model takes roughly 26 GB to load in 16-bit, but only around 6.5 GB of memory for INT4 inference.
>
> Please select the appropriate size of the Vicuna model based on the capabilities of your machine.

#### 2.1 Client
On a client Windows machine, it is recommended to run directly with full utilization of all cores:
```powershell
python ./generate.py
```

#### 2.2 Server
For optimal performance on a server, it is recommended to set several environment variables (refer to [here](../README.md#best-known-configuration-on-linux) for more information) and to run the example with all the physical cores of a single socket.

E.g. on Linux,
```bash
# set BigDL-Nano env variables
source bigdl-nano-init

# e.g. for a server with 48 cores per socket
export OMP_NUM_THREADS=48
# bind the process to the CPUs and memory of the first socket
numactl -C 0-47 -m 0 python ./generate.py
```
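
The values above assume 48 physical cores on the first socket; if you are unsure of your machine's topology, a quick check (illustrative, using standard Linux tools) is:
```bash
# show socket, core and NUMA-node counts
lscpu | grep -E "Socket|Core|NUMA"
```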

#### 2.3 Sample Output
#### [lmsys/vicuna-13b-v1.3](https://huggingface.co/lmsys/vicuna-13b-v1.3)
```log
Inference time: xxxx s
-------------------- Prompt --------------------
### Human:
What is AI?
### Assistant:

-------------------- Output --------------------
### Human:
What is AI?
### Assistant:
AI, or Artificial Intelligence, refers to the development of computer systems that can perform tasks that typically require human intelligence, such as visual perception,
```

#### [eachadea/vicuna-7b-1.1](https://huggingface.co/eachadea/vicuna-7b-1.1)
```log
Inference time: xxxx s
-------------------- Prompt --------------------
### Human:
What is AI?
### Assistant:

-------------------- Output --------------------
### Human:
What is AI?
### Assistant:
AI, or artificial intelligence, refers to the ability of a machine or computer program to mimic human intelligence and perform tasks that would normally require human intelligence to
```

@@ -0,0 +1,67 @@
#
# Copyright 2016 The BigDL Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

import time
import argparse

import torch
from bigdl.llm.transformers import AutoModelForCausalLM
from transformers import LlamaTokenizer

# you could tune the prompt based on your own model;
# the prompt format here follows https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md#example-prompt-weights-v0
Vicuna_PROMPT_FORMAT = "### Human:\n{prompt} \n ### Assistant:\n"
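# e.g. with the default arguments, the formatted model input becomes:
# "### Human:\nWhat is AI? \n ### Assistant:\n"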

if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Predict Tokens using `generate()` API for Vicuna model')
    parser.add_argument('--repo-id-or-model-path', type=str, default="lmsys/vicuna-13b-v1.3",
                        help='The huggingface repo id for the Vicuna model (e.g. `lmsys/vicuna-13b-v1.3` and `eachadea/vicuna-7b-1.1`) to be downloaded'
                             ', or the path to the huggingface checkpoint folder')
    parser.add_argument('--prompt', type=str, default="What is AI?",
                        help='Prompt to infer')
    parser.add_argument('--n-predict', type=int, default=32,
                        help='Max tokens to predict')

    args = parser.parse_args()
    model_path = args.repo_id_or_model_path

    # Load model in 4 bit,
    # which converts the relevant layers in the model into INT4 format
    model = AutoModelForCausalLM.from_pretrained(model_path,
                                                 load_in_4bit=True)

    # Load tokenizer
    tokenizer = LlamaTokenizer.from_pretrained(model_path)

    # Generate predicted tokens
    with torch.inference_mode():
        prompt = Vicuna_PROMPT_FORMAT.format(prompt=args.prompt)
        input_ids = tokenizer.encode(prompt, return_tensors="pt")
        st = time.time()
        # enabling `use_cache=True` allows the model to reuse the previously
        # computed key/value attention states to speed up decoding;
        # to obtain optimal performance with BigDL-LLM INT4 optimizations,
        # it is important to set use_cache=True for vicuna-v1.3 models
        output = model.generate(input_ids,
                                use_cache=True,
                                max_new_tokens=args.n_predict)
        end = time.time()
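        # `output` contains the prompt tokens followed by the newly generated
        # tokens, so the decoded string below includes both prompt and completion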
        output_str = tokenizer.decode(output[0], skip_special_tokens=True)
        print(f'Inference time: {end-st} s')
        print('-'*20, 'Prompt', '-'*20)
        print(prompt)
        print('-'*20, 'Output', '-'*20)
        print(output_str)