Add minimum Qwen model version (#9606)

Ziteng Zhang 2023-12-06 11:49:14 +08:00 committed by GitHub
parent c998f5f2ba
commit aeb77b2ab1

@@ -1,13 +1,19 @@
# Qwen
In this directory, you will find examples of how you can apply BigDL-LLM INT4 optimizations to Qwen models. For illustration purposes, we use [Qwen-7B-Chat](https://huggingface.co/Qwen/Qwen-7B-Chat) as a reference Qwen model.
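Concretely, "applying INT4 optimizations" comes down to one flag on BigDL-LLM's transformers-style loader. A minimal sketch, assuming `bigdl-llm[all]` is installed (see the install step below):
```python
# Minimal sketch: load Qwen-7B-Chat with weights quantized to INT4 on the fly.
from bigdl.llm.transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen-7B-Chat",
    load_in_4bit=True,       # BigDL-LLM's INT4 optimization switch
    trust_remote_code=True,  # Qwen ships custom modeling code
)
```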
## 0. Requirements
To run these examples with BigDL-LLM, your machine should meet some recommended requirements; please refer to [here](../README.md#recommended-requirements) for more information.
## Example: Predict Tokens using `generate()` API
In the example [generate.py](./generate.py), we show a basic use case in which a Qwen model predicts the next N tokens using the `generate()` API, with BigDL-LLM INT4 optimizations.
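A condensed sketch of that flow (not the exact script: the real example also wraps the prompt in Qwen's chat format and reads these values from the command line):
```python
# Condensed sketch of the example flow: load with INT4 optimizations,
# tokenize a prompt, and predict the next N tokens.
from bigdl.llm.transformers import AutoModelForCausalLM
from transformers import AutoTokenizer

model_path = "Qwen/Qwen-7B-Chat"  # or a local checkpoint folder
model = AutoModelForCausalLM.from_pretrained(model_path,
                                             load_in_4bit=True,
                                             trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

prompt = "AI是什么"  # the example's default prompt ("What is AI")
input_ids = tokenizer.encode(prompt, return_tensors="pt")
output = model.generate(input_ids, max_new_tokens=32)  # N = 32 here
print(tokenizer.decode(output[0], skip_special_tokens=True))
```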
### 1. Install
We suggest using conda to manage the environment:
```bash
conda create -n llm python=3.9
conda activate llm
@@ -17,11 +23,15 @@ pip install tiktoken einops transformers_stream_generator # additional package
```
### 2. Run
The minimum Qwen model version currently supported by BigDL-LLM is the version released on November 30, 2023; a sketch of pinning a new-enough checkpoint follows the argument list below.
```bash
python ./generate.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --prompt PROMPT --n-predict N_PREDICT
```
Arguments info:
- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the huggingface repo id for the Qwen model to be downloaded, or the path to the huggingface checkpoint folder. It defaults to `'Qwen/Qwen-7B-Chat'`.
- `--prompt PROMPT`: argument defining the prompt to be inferred (with the integrated prompt format for chat). It defaults to `'AI是什么'` ("What is AI" in Chinese).
- `--n-predict N_PREDICT`: argument defining the max number of tokens to predict. It defaults to `32`.
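To be sure the downloaded checkpoint is not older than the November 30, 2023 version mentioned above, you can pin a Hugging Face `revision` when loading. A hedged sketch, assuming BigDL-LLM forwards the standard `revision` keyword to `transformers` (the revision string below is a placeholder, not a real commit):
```python
# Sketch: pin the checkpoint revision so an older, unsupported Qwen version
# is never silently downloaded from the Hugging Face Hub.
from bigdl.llm.transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen-7B-Chat",
    revision="<commit-from-2023-11-30-or-later>",  # placeholder: pick from the model's commit history
    load_in_4bit=True,
    trust_remote_code=True,
)
```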
@@ -31,15 +41,19 @@ Arguments info:
> Please select the appropriate size of the Qwen model based on the capabilities of your machine.
#### 2.1 Client
On a client Windows machine, it is recommended to run directly with full utilization of all cores:
```powershell
python ./generate.py
```
#### 2.2 Server
For optimal performance on a server, it is recommended to set several environment variables (refer to [here](../README.md#best-known-configuration-on-linux) for more information), and to run the example with all the physical cores of a single socket.
E.g., on Linux:
```bash
# set BigDL-LLM env variables
source bigdl-llm-init
@@ -50,7 +64,9 @@ numactl -C 0-47 -m 0 python ./generate.py
```
#### 2.3 Sample Output
#### [Qwen/Qwen-7B-Chat](https://huggingface.co/Qwen/Qwen-7B-Chat)
```log
Inference time: xxxx s
-------------------- Prompt --------------------