LLM: add chat & stream chat example for ChatGLM2 transformers int4 (#8636)
parent cdfbe652ca
commit 39994738d1
2 changed files with 115 additions and 1 deletion

@@ -5,7 +5,7 @@ In this directory, you will find examples on how you could apply BigDL-LLM INT4
## 0. Requirements
To run these examples with BigDL-LLM, we have some recommended requirements for your machine; please refer to [here](../README.md#recommended-requirements) for more information.

-## Example: Predict Tokens using `generate()` API
+## Example 1: Predict Tokens using `generate()` API
In the example [generate.py](./generate.py), we show a basic use case for a ChatGLM2 model to predict the next N tokens using the `generate()` API, with BigDL-LLM INT4 optimizations.
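
For orientation, the core pattern in that example looks roughly like the following. This is a minimal sketch, not the full [generate.py](./generate.py): the `AutoModel`/`AutoTokenizer` loading mirrors the streamchat.py added in this commit, while the prompt string and the `max_new_tokens` value are illustrative assumptions.

```python
import torch
from bigdl.llm.transformers import AutoModel
from transformers import AutoTokenizer

model_path = "THUDM/chatglm2-6b"
# load_in_4bit=True converts the linear layers to INT4 format while loading
model = AutoModel.from_pretrained(model_path, load_in_4bit=True,
                                  trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

prompt = "问:什么是人工智能?\n\n答:"  # illustrative ChatGLM2-style prompt ("Q: What is AI? A:")
with torch.inference_mode():
    input_ids = tokenizer.encode(prompt, return_tensors="pt")
    output = model.generate(input_ids, max_new_tokens=32)  # predict the next N tokens
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```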
### 1. Install
We suggest using conda to manage the environment:

@@ -74,3 +74,55 @@ Inference time: xxxx s
答: Artificial Intelligence (AI) refers to the ability of a computer or machine to perform tasks that typically require human-like intelligence, such as understanding language, recognizing patterns
```

## Example 2: Stream Chat using `stream_chat()` API
In the example [streamchat.py](./streamchat.py), we show a basic use case for a ChatGLM2 model to stream chat, with BigDL-LLM INT4 optimizations.
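
`stream_chat()` yields progressively longer cumulative responses along with the updated `history`, which is what enables both incremental printing and multi-turn chat. Below is a minimal multi-turn sketch assuming the same `stream_chat()` interface used by streamchat.py; the questions and the slice-based delta printing are illustrative, not part of the example script.

```python
import torch
from bigdl.llm.transformers import AutoModel
from transformers import AutoTokenizer

model_path = "THUDM/chatglm2-6b"
model = AutoModel.from_pretrained(model_path, load_in_4bit=True,
                                  trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

history = []  # carries the conversation across turns
with torch.inference_mode():
    for question in ["What is AI?", "Give one everyday example."]:
        printed = ""
        for response, history in model.stream_chat(tokenizer, question,
                                                   history=history):
            # each response is cumulative; print only the new suffix
            print(response[len(printed):], end="", flush=True)
            printed = response
        print()
```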
### 1. Install
We suggest using conda to manage the environment:
```bash
conda create -n llm python=3.9
conda activate llm

pip install bigdl-llm[all] # install bigdl-llm with 'all' option
```

### 2. Run

**Stream Chat using `stream_chat()` API**:
```bash
python ./streamchat.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --question QUESTION
```

**Chat using `chat()` API**:
```bash
python ./streamchat.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --question QUESTION --disable-stream
```

Arguments info:
- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the huggingface repo id for the ChatGLM2 model to be downloaded, or the path to the huggingface checkpoint folder. It defaults to `'THUDM/chatglm2-6b'`.
- `--question QUESTION`: argument defining the question to ask. It defaults to `"晚上睡不着应该怎么办"` ("What should I do if I can't sleep at night?").
- `--disable-stream`: argument defining whether to stream chat. If `--disable-stream` is included when running the script, stream chat is disabled and the `chat()` API is used.

> **Note**: When loading the model in 4-bit, BigDL-LLM converts linear layers in the model into INT4 format. In theory, a *X*B model saved in 16-bit requires approximately 2*X* GB of memory for loading, and ~0.5*X* GB memory for further inference. For the 6B ChatGLM2 model, that works out to roughly 12 GB to load and about 3 GB for inference.
>
> Please select the appropriate size of the ChatGLM2 model based on the capabilities of your machine.
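
As a sanity check of the figures above, you can measure the loaded model's storage directly with plain PyTorch. This is a sketch assuming `model` was loaded with `load_in_4bit=True` as in streamchat.py; INT4 weights are stored packed, so the number reported is the packed footprint rather than the original 16-bit size.

```python
# Sum the storage of all parameters and buffers of the loaded model
param_bytes = sum(p.numel() * p.element_size() for p in model.parameters())
buffer_bytes = sum(b.numel() * b.element_size() for b in model.buffers())
print(f"model storage: {(param_bytes + buffer_bytes) / 1024**3:.1f} GiB")
```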

#### 2.1 Client
On a client Windows machine, it is recommended to run directly with full utilization of all cores:
```powershell
$env:PYTHONUNBUFFERED=1  # ensure stdout and stderr streams are sent straight to the terminal without being buffered first
python ./streamchat.py
```

#### 2.2 Server
For optimal performance on a server, it is recommended to set several environment variables (refer to [here](../README.md#best-known-configuration-on-linux) for more information), and run the example with all the physical cores of a single socket.

E.g. on Linux,
```bash
# set BigDL-Nano env variables
source bigdl-nano-init

# e.g. for a server with 48 cores per socket
export OMP_NUM_THREADS=48
export PYTHONUNBUFFERED=1  # ensure stdout and stderr streams are sent straight to the terminal without being buffered first
numactl -C 0-47 -m 0 python ./streamchat.py
```

@@ -0,0 +1,62 @@
#
# Copyright 2016 The BigDL Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

import torch
import argparse

from bigdl.llm.transformers import AutoModel
from transformers import AutoTokenizer


if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Stream Chat for ChatGLM2 model')
    parser.add_argument('--repo-id-or-model-path', type=str, default="THUDM/chatglm2-6b",
                        help='The huggingface repo id for the ChatGLM2 model to be downloaded'
                             ', or the path to the huggingface checkpoint folder')
    parser.add_argument('--question', type=str, default="晚上睡不着应该怎么办",
                        help='Question you want to ask')
    parser.add_argument('--disable-stream', action="store_true",
                        help='Disable stream chat')

    args = parser.parse_args()
    model_path = args.repo_id_or_model_path
    disable_stream = args.disable_stream

    # Load the model in 4 bit,
    # which converts the relevant layers in the model into INT4 format
    model = AutoModel.from_pretrained(model_path,
                                      load_in_4bit=True,
                                      trust_remote_code=True)

    # Load tokenizer
    tokenizer = AutoTokenizer.from_pretrained(model_path,
                                              trust_remote_code=True)

    with torch.inference_mode():
        if disable_stream:
            # Chat: generate the full response in one call
            response, history = model.chat(tokenizer, args.question, history=[])
            print('-'*20, 'Chat Output', '-'*20)
            print(response)
        else:
            # Stream chat: each iteration yields the cumulative response so far,
            # so print only the newly generated part to stream the output
            response_ = ""
            print('-'*20, 'Stream Chat Output', '-'*20)
            for response, history in model.stream_chat(tokenizer, args.question, history=[]):
                print(response.replace(response_, ""), end="")
                response_ = response