Add DeepSeek V3/R1 CPU example (#12836)
Add DeepSeek V3/R1 CPU example for bf16 model
parent 8418450300
commit 09ed96082b
2 changed files with 151 additions and 0 deletions

@@ -0,0 +1,66 @@
# DeepSeek V3/R1

In this directory, you will find examples of how to apply IPEX-LLM INT4 optimizations on DeepSeek-V3/R1 models.

**Currently only the BF16 models (`unsloth/DeepSeek-V3-bf16` and `unsloth/DeepSeek-R1-BF16`) have been validated. Running the official models may require some config modifications and workarounds.**

## 0. Requirements

To run these examples with IPEX-LLM, there are some recommended requirements for your machine; please refer to [here](../README.md#recommended-requirements) for more information.

## Example: Predict Tokens using `generate()` API

In the example [generate.py](./generate.py), we show a basic use case for a DeepSeek model to predict the next N tokens using the `generate()` API, with IPEX-LLM INT4 optimizations.
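
For orientation, the example boils down to the minimal sketch below (distilled from [generate.py](./generate.py); the prompt here is a placeholder and the reasoning prompt template used in the full script is omitted):

```python
import torch
from ipex_llm.transformers import AutoModelForCausalLM
from transformers import AutoTokenizer

model_path = "unsloth/DeepSeek-R1-BF16"  # or the path to a local checkpoint folder

# `load_in_4bit=True` converts the linear layers of the model into INT4 format on load
model = AutoModelForCausalLM.from_pretrained(model_path,
                                             load_in_4bit=True,
                                             optimize_model=True,
                                             trust_remote_code=True,
                                             use_cache=True)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

with torch.inference_mode():
    input_ids = tokenizer.encode("What is AI?", return_tensors="pt")
    output = model.generate(input_ids, max_new_tokens=32)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```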

### 1. Install

We suggest using conda to manage the environment:

On Linux:

```bash
conda create -n llm python=3.11 # recommend to use Python 3.11
conda activate llm

# install the latest ipex-llm nightly build with 'all' option
pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
```

On Windows:

```cmd
conda create -n llm python=3.11
conda activate llm

pip install --pre --upgrade ipex-llm[all]

pip install transformers==4.48.3
pip install trl==0.12.0
```

### 2. Run

```
python ./generate.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --prompt PROMPT --n-predict N_PREDICT
```

Arguments info:
- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the huggingface repo id for the DeepSeek model to be downloaded, or the path to the huggingface checkpoint folder. It defaults to `'unsloth/DeepSeek-R1-BF16'`.
- `--prompt PROMPT`: argument defining the prompt to be inferred (with integrated prompt format for chat). It defaults to `'What is AI?'`.
- `--n-predict N_PREDICT`: argument defining the max number of tokens to predict. It defaults to `32`.

> **Note**: When loading the model in 4-bit, IPEX-LLM converts the linear layers in the model into INT4 format. In theory, an *X*B model saved in 16-bit requires approximately 2*X* GB of memory for loading, and ~0.5*X* GB of memory for further inference. For the 671B model, ~1.3 TB of memory is needed during the load procedure.
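
The figures above follow from simple bytes-per-parameter arithmetic; the short sketch below (illustrative only, not part of the example) shows the estimate for the 671B-parameter model:

```python
# Rough memory estimate, assuming ~2 bytes per parameter for 16-bit weights
# and ~0.5 bytes per parameter for INT4 weights (ignoring activations and KV cache).
def estimate_memory_gb(num_params_billion: float) -> tuple[float, float]:
    load_gb = 2.0 * num_params_billion       # 16-bit checkpoint loaded into memory
    inference_gb = 0.5 * num_params_billion  # weights after INT4 conversion
    return load_gb, inference_gb

load_gb, infer_gb = estimate_memory_gb(671)  # DeepSeek V3/R1 has 671B parameters
print(f"load: ~{load_gb / 1024:.1f} TB, INT4 inference: ~{infer_gb / 1024:.2f} TB")
# load: ~1.3 TB, INT4 inference: ~0.33 TB
```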

#### 2.1 Client
On a client Windows machine, it is recommended to run directly with full utilization of all cores:
```cmd
python ./generate.py
```

#### 2.2 Server
For optimal performance on a server, it is recommended to set several environment variables (refer to [here](../README.md#best-known-configuration-on-linux) for more information) and to run the example with all the physical cores of a single socket.

E.g. on Linux,
```bash
# set IPEX-LLM env variables
source ipex-llm-init

# e.g. for a server with 48 cores per socket
export OMP_NUM_THREADS=48
numactl -C 0-47 -m 0 python ./generate.py
```
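
As an optional sanity check (not part of the example), you can confirm from Python that the pinned process picks up the thread settings:

```python
import os
import torch

# PyTorch's intra-op thread pool normally honors OMP_NUM_THREADS
print("OMP_NUM_THREADS =", os.environ.get("OMP_NUM_THREADS"))
print("torch intra-op threads =", torch.get_num_threads())
```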

@@ -0,0 +1,85 @@
#
# Copyright 2016 The BigDL Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

import torch
import time
import argparse

from ipex_llm.transformers import AutoModelForCausalLM
# from transformers import LlamaTokenizer
from transformers import AutoTokenizer, GenerationConfig


PROMPT_FORMAT = """
A conversation between User and Assistant. The user asks a question, and the Assistant solves it. The assistant first thinks about the reasoning process in the mind and then provides the user with the answer. The reasoning process and answer are enclosed within <think> </think> and <answer> </answer> tags, respectively, i.e., <think> reasoning process here </think> <answer> answer here </answer>.
User: {prompt}.
Assistant: <think>
"""

if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Predict Tokens using `generate()` API for DeepSeek V3/R1 model')
    parser.add_argument('--repo-id-or-model-path', type=str, default="unsloth/DeepSeek-R1-BF16",
                        help='The huggingface repo id for the DeepSeek V3/R1 (e.g. `unsloth/DeepSeek-R1-BF16`) to be downloaded'
                             ', or the path to the huggingface checkpoint folder')
    parser.add_argument('--prompt', type=str, default="What is AI?",
                        help='Prompt to infer')
    parser.add_argument('--n-predict', type=int, default=32,
                        help='Max tokens to predict')

    args = parser.parse_args()
    model_path = args.repo_id_or_model_path

    # Load the model in 4-bit,
    # which converts the relevant layers in the model into INT4 format.
    # When running LLMs on Intel iGPUs for Windows users, we recommend setting `cpu_embedding=True` in the from_pretrained function.
    # This will allow the memory-intensive embedding layer to utilize the CPU instead of iGPU.
    model = AutoModelForCausalLM.from_pretrained(model_path,
                                                 load_in_4bit=True,
                                                 optimize_model=True,
                                                 trust_remote_code=True,
                                                 use_cache=True)

    print(model)

    # Load tokenizer
    tokenizer = AutoTokenizer.from_pretrained(model_path,
                                              trust_remote_code=True)

    # Generate predicted tokens
    with torch.inference_mode():
        prompt = PROMPT_FORMAT.format(prompt=args.prompt)
        input_ids = tokenizer.encode(prompt, return_tensors="pt")
        # ipex_llm model needs a warmup, then inference time can be accurate
        output = model.generate(input_ids,
                                max_new_tokens=args.n_predict)

        # start inference
        st = time.time()
        # if your selected model is capable of utilizing previous key/value attentions
        # to enhance decoding speed, but has `"use_cache": false` in its model config,
        # it is important to set `use_cache=True` explicitly in the `generate` function
        # to obtain optimal performance with IPEX-LLM INT4 optimizations
        output = model.generate(input_ids,
                                max_new_tokens=args.n_predict)

        end = time.time()
        output = output.cpu()
        output_str = tokenizer.decode(output[0], skip_special_tokens=True)
        print(f'Inference time: {end-st} s')
        print('-'*20, 'Prompt', '-'*20)
        print(prompt)
        print('-'*20, 'Output', '-'*20)
        print(output_str)