LLM: add dolly-v1 and dolly-v2 to gpu pytorch model example (#9153)

Jin Qiao 2023-10-13 15:43:35 +08:00 committed by GitHub
parent 259cbb4126
commit 797b156a0d
4 changed files with 302 additions and 0 deletions

dolly-v1/README.md
@@ -0,0 +1,58 @@
# Dolly v1
In this directory, you will find examples of how you can use the BigDL-LLM `optimize_model` API to accelerate Dolly v1 models. For illustration purposes, we use [databricks/dolly-v1-6b](https://huggingface.co/databricks/dolly-v1-6b) as a reference Dolly v1 model.
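At its core, the acceleration is a single API call on a model that has already been loaded through Hugging Face `transformers`. A minimal sketch of the flow used by [generate.py](./generate.py) (model id and device as in that script):
```python
import intel_extension_for_pytorch as ipex  # noqa: F401 (registers the 'xpu' device)
from transformers import AutoModelForCausalLM
from bigdl.llm import optimize_model

# Load the model with Hugging Face transformers as usual ...
model = AutoModelForCausalLM.from_pretrained("databricks/dolly-v1-6b",
                                             torch_dtype='auto',
                                             low_cpu_mem_usage=True,
                                             trust_remote_code=True)

# ... then a single line applies BigDL-LLM INT4 optimization,
# after which the optimized model is moved to the Intel GPU
model = optimize_model(model)
model = model.to('xpu')
```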
## Requirements
To run these examples with BigDL-LLM on Intel GPUs, your machine should meet some recommended requirements; please refer to [here](../README.md#recommended-requirements) for more information.
## Example: Predict Tokens using `generate()` API
In the example [generate.py](./generate.py), we show a basic use case in which a Dolly v1 model predicts the next N tokens using the `generate()` API, with BigDL-LLM INT4 optimizations on Intel GPUs.
### 1. Install
We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
After installing conda, create a Python environment for BigDL-LLM:
```bash
conda create -n llm python=3.9 # Python 3.9 is recommended
conda activate llm
# the command below installs intel_extension_for_pytorch==2.0.110+xpu by default
# you can install a specific ipex/torch version to suit your needs
pip install --pre --upgrade bigdl-llm[xpu] -f https://developer.intel.com/ipex-whl-stable-xpu
```
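Optionally, you can check that the XPU backend is visible to PyTorch before moving on. A quick sanity check (assuming the installation above succeeded):
```python
import torch
import intel_extension_for_pytorch as ipex  # registers the 'xpu' device with PyTorch

# Both lines should succeed on a correctly configured machine
print(torch.xpu.is_available())      # expected: True
print(torch.xpu.get_device_name(0))  # expected: the name of your Intel GPU
```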
### 2. Configure oneAPI environment variables
```bash
source /opt/intel/oneapi/setvars.sh
```
### 3. Run
For optimal performance on Intel Arc GPUs, it is recommended to set several environment variables:
```bash
export USE_XETLA=OFF
export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
```
Then run the example:
```bash
python ./generate.py --prompt 'What is AI?'
```
In the example, several arguments can be passed to suit your requirements:
- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: the Hugging Face repo id of the Dolly v1 model to be downloaded, or the path to a local Hugging Face checkpoint folder. The default is `'databricks/dolly-v1-6b'`.
- `--prompt PROMPT`: the prompt to run inference on; the chat prompt format is applied automatically, as shown in the sketch after this list. The default is `'What is AI?'`.
- `--n-predict N_PREDICT`: the maximum number of tokens to predict. The default is `32`.
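For reference, the snippet below shows how [generate.py](./generate.py) wraps your prompt before tokenization; the template follows the [dolly-v1-6b model card](https://huggingface.co/databricks/dolly-v1-6b#generate-text):
```python
# Prompt template applied by generate.py before tokenization
DOLLY_V1_PROMPT_FORMAT = """Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
"""

prompt = DOLLY_V1_PROMPT_FORMAT.format(prompt="What is AI?")
```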
### 4. Sample Output
#### [databricks/dolly-v1-6b](https://huggingface.co/databricks/dolly-v1-6b)
```log
Inference time: xxxx s
-------------------- Output --------------------
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
What is AI?

### Response:
AI is an umbrella term for a variety of technologies that enable computers to think and act like humans. AI can be used to automate tasks, analyze data, and
```
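The decoded output echoes the full prompt. If you only want the model's answer, you could post-process the printed string; a small hypothetical snippet (not part of [generate.py](./generate.py)) that splits on Dolly's response and end markers:
```python
# output_str is the full decoded text printed by generate.py
response = output_str.split("### Response:")[-1]
response = response.split("### End")[0].strip()  # drop the end marker if it was generated
print(response)
```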

dolly-v1/generate.py
@@ -0,0 +1,87 @@
#
# Copyright 2016 The BigDL Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import torch
import intel_extension_for_pytorch as ipex
import time
import argparse

from transformers import AutoModelForCausalLM, AutoTokenizer
from bigdl.llm import optimize_model

# You could tune the prompt for your own model; the template below follows
# https://huggingface.co/databricks/dolly-v1-6b#generate-text
DOLLY_V1_PROMPT_FORMAT = """Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
"""
if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Predict Tokens using `generate()` API for Dolly v1 model')
    parser.add_argument('--repo-id-or-model-path', type=str, default="databricks/dolly-v1-6b",
                        help='The huggingface repo id for the Dolly v1 model to be downloaded'
                             ', or the path to the huggingface checkpoint folder')
    parser.add_argument('--prompt', type=str, default="What is AI?",
                        help='Prompt to infer')
    parser.add_argument('--n-predict', type=int, default=32,
                        help='Max tokens to predict')

    args = parser.parse_args()
    model_path = args.repo_id_or_model_path

    # Load model
    model = AutoModelForCausalLM.from_pretrained(model_path,
                                                 trust_remote_code=True,
                                                 torch_dtype='auto',
                                                 low_cpu_mem_usage=True)

    # Enable BigDL-LLM optimization on the model with a single line,
    # then move the optimized model to the Intel GPU
    model = optimize_model(model)
    model = model.to('xpu')

    # Load tokenizer
    tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

    # Generate predicted tokens
    with torch.inference_mode():
        prompt = DOLLY_V1_PROMPT_FORMAT.format(prompt=args.prompt)
        input_ids = tokenizer.encode(prompt, return_tensors="pt").to('xpu')
        # Dolly marks the end of a response with "### End";
        # use its first token id as the eos token to stop generation early
        end_key_token_id = tokenizer.encode("### End")[0]

        # ipex model needs a warmup run; only the second run gives an accurate inference time
        output = model.generate(input_ids,
                                use_cache=True,
                                max_new_tokens=args.n_predict,
                                pad_token_id=tokenizer.pad_token_id,
                                eos_token_id=end_key_token_id)

        # start inference
        st = time.time()
        output = model.generate(input_ids,
                                use_cache=True,
                                max_new_tokens=args.n_predict,
                                pad_token_id=tokenizer.pad_token_id,
                                eos_token_id=end_key_token_id)
        torch.xpu.synchronize()
        end = time.time()

        output = output.cpu()
        output_str = tokenizer.decode(output[0], skip_special_tokens=False)
        print(f'Inference time: {end-st} s')
        print('-'*20, 'Output', '-'*20)
        print(output_str)

dolly-v2/README.md
@@ -0,0 +1,72 @@
# Dolly v2
In this directory, you will find examples of how you can use the BigDL-LLM `optimize_model` API to accelerate Dolly v2 models. For illustration purposes, we use [databricks/dolly-v2-12b](https://huggingface.co/databricks/dolly-v2-12b) and [databricks/dolly-v2-7b](https://huggingface.co/databricks/dolly-v2-7b) as reference Dolly v2 models.
## Requirements
To run these examples with BigDL-LLM on Intel GPUs, your machine should meet some recommended requirements; please refer to [here](../README.md#recommended-requirements) for more information.
## Example: Predict Tokens using `generate()` API
In the example [generate.py](./generate.py), we show a basic use case in which a Dolly v2 model predicts the next N tokens using the `generate()` API, with BigDL-LLM INT4 optimizations on Intel GPUs.
### 1. Install
We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
After installing conda, create a Python environment for BigDL-LLM:
```bash
conda create -n llm python=3.9 # Python 3.9 is recommended
conda activate llm
# the command below installs intel_extension_for_pytorch==2.0.110+xpu by default
# you can install a specific ipex/torch version to suit your needs
pip install --pre --upgrade bigdl-llm[xpu] -f https://developer.intel.com/ipex-whl-stable-xpu
```
### 2. Configure oneAPI environment variables
```bash
source /opt/intel/oneapi/setvars.sh
```
### 3. Run
For optimal performance on Intel Arc GPUs, it is recommended to set several environment variables:
```bash
export USE_XETLA=OFF
export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
```
Then run the example:
```bash
python ./generate.py --prompt 'What is AI?'
```
In the example, several arguments can be passed to suit your requirements:
- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: the Hugging Face repo id of the Dolly v2 model (e.g. `databricks/dolly-v2-12b` and `databricks/dolly-v2-7b`) to be downloaded, or the path to a local Hugging Face checkpoint folder. The default is `'databricks/dolly-v2-12b'`.
- `--prompt PROMPT`: the prompt to run inference on; the chat prompt format is applied automatically. The default is `'What is AI?'`.
- `--n-predict N_PREDICT`: the maximum number of tokens to predict; generation may stop earlier at Dolly's end-of-response marker, as shown in the sketch after this list. The default is `32`.
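[generate.py](./generate.py) stops generation early at Dolly's literal `### End` marker by passing that marker's first token id as `eos_token_id`. A minimal sketch of this stopping setup (assuming `model`, `tokenizer` and `input_ids` are prepared as in the script):
```python
# First token id of Dolly's end-of-response marker, as used in generate.py
end_key_token_id = tokenizer.encode("### End")[0]

output = model.generate(input_ids,
                        max_new_tokens=32,
                        pad_token_id=tokenizer.pad_token_id,
                        eos_token_id=end_key_token_id)  # stop once "### End" is emitted
```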
### 4. Sample Output
#### [databricks/dolly-v2-12b](https://huggingface.co/databricks/dolly-v2-12b)
```log
Inference time: xxxx s
-------------------- Output --------------------
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
What is AI?

### Response:
Artificial Intelligence (AI) is a term generally used to describe computer systems that can perform tasks that typically require human intelligence. AI has a broad range of applications
```
#### [databricks/dolly-v2-7b](https://huggingface.co/databricks/dolly-v2-7b)
```log
Inference time: xxxx s
-------------------- Output --------------------
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
What is AI?

### Response:
Artificial Intelligence (AI) is a field of computer science, artificial intelligence, and robotics that focuses on understanding and mastering the principles of intelligence and making
```
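`Inference time` above is the wall-clock latency of the second, post-warmup `generate()` call. If you prefer throughput, a small hypothetical helper (not part of [generate.py](./generate.py)) converts it to tokens per second:
```python
# Hypothetical helper: convert reported latency into throughput
def tokens_per_second(n_new_tokens: int, inference_time_s: float) -> float:
    return n_new_tokens / inference_time_s

# e.g. 32 new tokens generated in 2.5 s
print(f'{tokens_per_second(32, 2.5):.1f} tokens/s')  # prints: 12.8 tokens/s
```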

dolly-v2/generate.py
@@ -0,0 +1,85 @@
#
# Copyright 2016 The BigDL Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import torch
import intel_extension_for_pytorch as ipex
import time
import argparse

from transformers import AutoModelForCausalLM, AutoTokenizer
from bigdl.llm import optimize_model

# You could tune the prompt for your own model; the template below follows
# https://huggingface.co/databricks/dolly-v2-12b/blob/main/instruct_pipeline.py#L15
DOLLY_V2_PROMPT_FORMAT = """Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
"""
if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Predict Tokens using `generate()` API for Dolly v2 model')
    parser.add_argument('--repo-id-or-model-path', type=str, default="databricks/dolly-v2-12b",
                        help='The huggingface repo id for the Dolly v2 model (e.g. `databricks/dolly-v2-7b` and `databricks/dolly-v2-12b`) to be downloaded'
                             ', or the path to the huggingface checkpoint folder')
    parser.add_argument('--prompt', type=str, default="What is AI?",
                        help='Prompt to infer')
    parser.add_argument('--n-predict', type=int, default=32,
                        help='Max tokens to predict')

    args = parser.parse_args()
    model_path = args.repo_id_or_model_path

    # Load model
    model = AutoModelForCausalLM.from_pretrained(model_path,
                                                 trust_remote_code=True,
                                                 torch_dtype='auto',
                                                 low_cpu_mem_usage=True)

    # Enable BigDL-LLM optimization on the model with a single line,
    # then move the optimized model to the Intel GPU
    model = optimize_model(model)
    model = model.to('xpu')

    # Load tokenizer
    tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

    # Generate predicted tokens
    with torch.inference_mode():
        prompt = DOLLY_V2_PROMPT_FORMAT.format(prompt=args.prompt)
        input_ids = tokenizer.encode(prompt, return_tensors="pt").to('xpu')
        # Dolly marks the end of a response with "### End";
        # use its first token id as the eos token to stop generation early
        end_key_token_id = tokenizer.encode("### End")[0]

        # ipex model needs a warmup run; only the second run gives an accurate inference time
        output = model.generate(input_ids,
                                max_new_tokens=args.n_predict,
                                pad_token_id=tokenizer.pad_token_id,
                                eos_token_id=end_key_token_id)

        # start inference
        st = time.time()
        output = model.generate(input_ids,
                                max_new_tokens=args.n_predict,
                                pad_token_id=tokenizer.pad_token_id,
                                eos_token_id=end_key_token_id)
        torch.xpu.synchronize()
        end = time.time()

        output = output.cpu()
        output_str = tokenizer.decode(output[0], skip_special_tokens=False)
        print(f'Inference time: {end-st} s')
        print('-'*20, 'Output', '-'*20)
        print(output_str)