[LLM] Mixtral CPU examples (#9673)
* Mixtral CPU PyTorch and Hugging Face examples, based on #9661 and #9671
parent
5e46e0e5af
commit
223c9622f7
5 changed files with 233 additions and 1 deletion
@@ -143,7 +143,7 @@ Over 20 models have been optimized/verified on `bigdl-llm`, including *LLaMA/LLa
 | ChatGLM2 | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/chatglm2) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/chatglm2) |
 | ChatGLM3 | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/chatglm3) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/chatglm3) |
 | Mistral | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/mistral) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/mistral) |
-| Mixtral | | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/mixtral) |
+| Mixtral | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/mixtral) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/mixtral) |
 | Falcon | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/falcon) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/falcon) |
 | MPT | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/mpt) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/mpt) |
 | Dolly-v1 | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/dolly_v1) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/dolly_v1) |
@@ -0,0 +1,45 @@
# Mixtral

In this directory, you will find examples of how you can apply BigDL-LLM INT4 optimizations to Mixtral models on [Intel CPUs](../README.md). For illustration purposes, we use [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) as the reference Mixtral model.

## Requirements
To run these examples with BigDL-LLM on Intel CPUs, there are some recommended requirements for your machine; please refer to [here](../README.md#recommended-requirements) for more information.

**Important: Please make sure you have installed `transformers==4.36.0` to run the example.**

## Example: Predict Tokens using `generate()` API
In the example [generate.py](./generate.py), we show a basic use case in which a Mixtral model predicts the next N tokens using the `generate()` API, with BigDL-LLM INT4 optimizations on Intel CPUs.
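Conceptually, the script boils down to the minimal sketch below (argument parsing, warmup, and timing are omitted; the hard-coded prompt simply stands in for the `--prompt` argument):

```python
import torch
from bigdl.llm.transformers import AutoModelForCausalLM
from transformers import AutoTokenizer

model_path = "mistralai/Mixtral-8x7B-Instruct-v0.1"

# `load_in_4bit=True` converts the relevant layers of the model to INT4 at load time
model = AutoModelForCausalLM.from_pretrained(model_path,
                                             load_in_4bit=True,
                                             optimize_model=True,
                                             trust_remote_code=True,
                                             use_cache=True)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

with torch.inference_mode():
    # Mixtral instruction format:
    # https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1#instruction-format
    input_ids = tokenizer.encode("<s>[INST] What is AI? [/INST]", return_tensors="pt")
    output = model.generate(input_ids, max_new_tokens=32)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```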
### 1. Install
We suggest using conda to manage the Python environment. For more information about installing conda, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).

After installing conda, create a Python environment for BigDL-LLM:
```bash
conda create -n llm python=3.9 # Python 3.9 is recommended
conda activate llm

# the command below installs the CPU build of PyTorch by default
pip install torch==2.0.1 --index-url https://download.pytorch.org/whl/cpu
pip install --pre --upgrade bigdl-llm[all]

# please make sure you are using a stable version of Transformers, 4.36.0 or newer
pip install transformers==4.36.0
```

### 2. Run

```bash
python ./generate.py --prompt 'What is AI?'
```

In the example, several arguments can be passed to satisfy your requirements:

- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the Hugging Face repo id of the Mixtral model (e.g. `mistralai/Mixtral-8x7B-Instruct-v0.1`) to be downloaded, or the path to a Hugging Face checkpoint folder. The default value is `'mistralai/Mixtral-8x7B-Instruct-v0.1'`.
- `--prompt PROMPT`: argument defining the prompt to be inferred (with the integrated prompt format for chat). The default value is `'What is AI?'`.
- `--n-predict N_PREDICT`: argument defining the max number of tokens to predict. The default value is `32`.

#### Sample Output
#### [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)
```log
Inference time: xxxx s
-------------------- Output --------------------
[INST] What is AI? [/INST] AI, or Artificial Intelligence, refers to the development of computer systems that can perform tasks that would normally require human intelligence to accomplish. These tasks can include things
```
@@ -0,0 +1,72 @@
#
# Copyright 2016 The BigDL Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

import torch
import time
import argparse

from bigdl.llm.transformers import AutoModelForCausalLM
from transformers import AutoTokenizer

# you could tune the prompt based on your own model;
# this prompt format follows https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1#instruction-format
MIXTRAL_PROMPT_FORMAT = """<s>[INST] {prompt} [/INST]"""

if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Predict Tokens using `generate()` API for Mixtral model')
    parser.add_argument('--repo-id-or-model-path', type=str, default="mistralai/Mixtral-8x7B-Instruct-v0.1",
                        help='The Hugging Face repo id for the Mixtral model (e.g. `mistralai/Mixtral-8x7B-Instruct-v0.1`)'
                             ' to be downloaded, or the path to the Hugging Face checkpoint folder.')
    parser.add_argument('--prompt', type=str, default="What is AI?",
                        help='Prompt to infer')
    parser.add_argument('--n-predict', type=int, default=32,
                        help='Max tokens to predict')

    args = parser.parse_args()
    model_path = args.repo_id_or_model_path

    # Load the model in 4-bit mode, which converts the relevant layers
    # in the model into INT4 format
    model = AutoModelForCausalLM.from_pretrained(model_path,
                                                 load_in_4bit=True,
                                                 optimize_model=True,
                                                 trust_remote_code=True,
                                                 use_cache=True)

    # Load tokenizer
    tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

    # Generate predicted tokens
    with torch.inference_mode():
        prompt = MIXTRAL_PROMPT_FORMAT.format(prompt=args.prompt)
        input_ids = tokenizer.encode(prompt, return_tensors="pt").to('cpu')
        # the first generate() call serves as a warmup, so that the timed run below is accurate
        output = model.generate(input_ids,
                                max_new_tokens=args.n_predict)

        # start inference
        st = time.time()
        # if your selected model is capable of utilizing previous key/value attentions
        # to enhance decoding speed, but has `"use_cache": false` in its model config,
        # it is important to set `use_cache=True` explicitly in the `generate` function
        # to obtain optimal performance with BigDL-LLM INT4 optimizations
        output = model.generate(input_ids,
                                max_new_tokens=args.n_predict)
        end = time.time()
        output = output.cpu()
        output_str = tokenizer.decode(output[0], skip_special_tokens=True)
        print(f'Inference time: {end-st} s')
        print('-'*20, 'Output', '-'*20)
        print(output_str)
@@ -0,0 +1,45 @@
# Mixtral

In this directory, you will find examples of how you can use the BigDL-LLM `optimize_model` API to accelerate Mixtral models. For illustration purposes, we use [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) as the reference Mixtral model.

## Requirements
To run these examples with BigDL-LLM on Intel CPUs, there are some recommended requirements for your machine; please refer to [here](../README.md#recommended-requirements) for more information.

**Important: Please make sure you have installed `transformers==4.36.0` to run the example.**

## Example: Predict Tokens using `generate()` API
In the example [generate.py](./generate.py), we show a basic use case in which a Mixtral model predicts the next N tokens using the `generate()` API, with BigDL-LLM INT4 optimizations on Intel CPUs.
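As with the Hugging Face `transformers` example above, the script reduces to a minimal sketch (argument parsing, warmup, and timing are omitted; the hard-coded prompt simply stands in for the `--prompt` argument):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from bigdl.llm import optimize_model

model_path = "mistralai/Mixtral-8x7B-Instruct-v0.1"

# load the stock Hugging Face model first...
model = AutoModelForCausalLM.from_pretrained(model_path,
                                             trust_remote_code=True,
                                             torch_dtype='auto',
                                             low_cpu_mem_usage=True)
# ...then a single `optimize_model` call applies the BigDL-LLM optimization
model = optimize_model(model)

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

with torch.inference_mode():
    # Mixtral instruction format:
    # https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1#instruction-format
    input_ids = tokenizer.encode("<s>[INST] What is AI? [/INST]", return_tensors="pt")
    output = model.generate(input_ids, max_new_tokens=32)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```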
### 1. Install
We suggest using conda to manage the Python environment. For more information about installing conda, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).

After installing conda, create a Python environment for BigDL-LLM:
```bash
conda create -n llm python=3.9 # Python 3.9 is recommended
conda activate llm

# the command below installs the CPU build of PyTorch by default
pip install torch==2.0.1 --index-url https://download.pytorch.org/whl/cpu
pip install --pre --upgrade bigdl-llm[all]

# please make sure you are using a stable version of Transformers, 4.36.0 or newer
pip install transformers==4.36.0
```

### 2. Run

```bash
python ./generate.py --prompt 'What is AI?'
```

In the example, several arguments can be passed to satisfy your requirements:

- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the Hugging Face repo id of the Mixtral model (e.g. `mistralai/Mixtral-8x7B-Instruct-v0.1`) to be downloaded, or the path to a Hugging Face checkpoint folder. The default value is `'mistralai/Mixtral-8x7B-Instruct-v0.1'`.
- `--prompt PROMPT`: argument defining the prompt to be inferred (with the integrated prompt format for chat). The default value is `'What is AI?'`.
- `--n-predict N_PREDICT`: argument defining the max number of tokens to predict. The default value is `32`.

#### Sample Output
#### [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)
```log
Inference time: xxxx s
-------------------- Output --------------------
[INST] What is AI? [/INST] AI, or Artificial Intelligence, refers to the development of computer systems that can perform tasks that would normally require human intelligence to accomplish. These tasks can include things
```
@@ -0,0 +1,70 @@
#
# Copyright 2016 The BigDL Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

import torch
import time
import argparse

from transformers import AutoModelForCausalLM, AutoTokenizer
from bigdl.llm import optimize_model

# you could tune the prompt based on your own model;
# this prompt format follows https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1#instruction-format
MIXTRAL_PROMPT_FORMAT = """<s>[INST] {prompt} [/INST]"""

if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Predict Tokens using `generate()` API for Mixtral model')
    parser.add_argument('--repo-id-or-model-path', type=str, default="mistralai/Mixtral-8x7B-Instruct-v0.1",
                        help='The Hugging Face repo id for the Mixtral model (e.g. `mistralai/Mixtral-8x7B-Instruct-v0.1`)'
                             ' to be downloaded, or the path to the Hugging Face checkpoint folder.')
    parser.add_argument('--prompt', type=str, default="What is AI?",
                        help='Prompt to infer')
    parser.add_argument('--n-predict', type=int, default=32,
                        help='Max tokens to predict')

    args = parser.parse_args()
    model_path = args.repo_id_or_model_path

    # Load model
    model = AutoModelForCausalLM.from_pretrained(model_path,
                                                 trust_remote_code=True,
                                                 torch_dtype='auto',
                                                 low_cpu_mem_usage=True)

    # With only one line of code, enable BigDL-LLM optimization on the model
    model = optimize_model(model)

    # Load tokenizer
    tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

    # Generate predicted tokens
    with torch.inference_mode():
        prompt = MIXTRAL_PROMPT_FORMAT.format(prompt=args.prompt)
        input_ids = tokenizer.encode(prompt, return_tensors="pt").to('cpu')
        # the optimized model needs a warmup, so that the inference time measured below is accurate
        output = model.generate(input_ids,
                                max_new_tokens=args.n_predict)

        # start inference
        st = time.time()
        output = model.generate(input_ids,
                                max_new_tokens=args.n_predict)
        end = time.time()
        output = output.cpu()
        output_str = tokenizer.decode(output[0], skip_special_tokens=True)
        print(f'Inference time: {end-st} s')
        print('-'*20, 'Output', '-'*20)
        print(output_str)