diff --git a/README.md b/README.md index 58b26bf7..db3bf58c 100644 --- a/README.md +++ b/README.md @@ -135,6 +135,7 @@ Over 20 models have been optimized/verified on `bigdl-llm`, including *LLaMA/LLa | LLaMA 2 | [link1](python/llm/example/CPU/Native-Models), [link2](python/llm/example/CPU/HF-Transformers-AutoModels/Model/llama2) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/llama2) | | ChatGLM | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/chatglm) | | | ChatGLM2 | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/chatglm2) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/chatglm2) | +| ChatGLM3 | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/chatglm3) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/chatglm3) | | Mistral | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/mistral) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/mistral) | | Falcon | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/falcon) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/falcon) | | MPT | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/mpt) | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/mpt) | diff --git a/python/llm/README.md b/python/llm/README.md index 628496e6..0a21375f 100644 --- a/python/llm/README.md +++ b/python/llm/README.md @@ -42,6 +42,7 @@ Over 20 models have been optimized/verified on `bigdl-llm`, including *LLaMA/LLa | LLaMA 2 | [link1](example/CPU/Native-Models), [link2](example/CPU/HF-Transformers-AutoModels/Model/llama2) | [link](example/GPU/HF-Transformers-AutoModels/Model/llama2) | | ChatGLM | [link](example/CPU/HF-Transformers-AutoModels/Model/chatglm) | | | ChatGLM2 | [link](example/CPU/HF-Transformers-AutoModels/Model/chatglm2) | [link](example/GPU/HF-Transformers-AutoModels/Model/chatglm2) | +| ChatGLM3 | [link](example/CPU/HF-Transformers-AutoModels/Model/chatglm3) | [link](example/GPU/HF-Transformers-AutoModels/Model/chatglm3) | | Mistral | [link](example/CPU/HF-Transformers-AutoModels/Model/mistral) | [link](example/GPU/HF-Transformers-AutoModels/Model/mistral) | | Falcon | [link](example/CPU/HF-Transformers-AutoModels/Model/falcon) | [link](example/GPU/HF-Transformers-AutoModels/Model/falcon) | | MPT | [link](example/CPU/HF-Transformers-AutoModels/Model/mpt) | [link](example/CPU/HF-Transformers-AutoModels/Model/mpt) | diff --git a/python/llm/example/CPU/HF-Transformers-AutoModels/Model/README.md b/python/llm/example/CPU/HF-Transformers-AutoModels/Model/README.md index f7bd3b55..b319f89b 100644 --- a/python/llm/example/CPU/HF-Transformers-AutoModels/Model/README.md +++ b/python/llm/example/CPU/HF-Transformers-AutoModels/Model/README.md @@ -1,33 +1,6 @@ # BigDL-LLM Transformers INT4 Optimization for Large Language Model You can use BigDL-LLM to run any Huggingface Transformer models with INT4 optimizations on either servers or laptops. This directory contains example scripts to help you quickly get started using BigDL-LLM to run some popular open-source models in the community. Each model has its own dedicated folder, where you can find detailed instructions on how to install and run it. 
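+
+All of these examples follow the same basic pattern, sketched below. This is a condensed sketch based on the [ChatGLM3 example](chatglm3) added in this directory, not a complete script; other models may need a different Auto class or a model-specific prompt format. Loading with `load_in_4bit=True` converts the model's linear layers into INT4, after which the standard `generate()` API is used.
+
+```python
+from bigdl.llm.transformers import AutoModel
+from transformers import AutoTokenizer
+
+model_path = "THUDM/chatglm3-6b"  # a Hugging Face repo id or a local checkpoint folder
+
+# load_in_4bit=True converts the relevant linear layers in the model into INT4 format
+model = AutoModel.from_pretrained(model_path, load_in_4bit=True, trust_remote_code=True)
+tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
+
+input_ids = tokenizer.encode("What is AI?", return_tensors="pt")
+output = model.generate(input_ids, max_new_tokens=32)
+print(tokenizer.decode(output[0], skip_special_tokens=True))
+```
+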
-# Verified models
-| Model | Example |
-|-----------|----------------------------------------------------------|
-| LLaMA | [link](vicuna) |
-| LLaMA 2 | [link](llama2) |
-| MPT | [link](mpt) |
-| Falcon | [link](falcon) |
-| ChatGLM | [link](chatglm) |
-| ChatGLM2 | [link](chatglm2) |
-| MOSS | [link](moss) |
-| Baichuan | [link](baichuan) |
-| Baichuan2 | [link](baichuan2) |
-| Dolly-v1 | [link](dolly_v1) |
-| Dolly-v2 | [link](dolly_v2) |
-| RedPajama | [link](redpajama) |
-| Phoenix | [link](phoenix) |
-| StarCoder | [link](starcoder) |
-| InternLM | [link](internlm) |
-| Whisper | [link](whisper) |
-| Qwen | [link](qwen) |
-| Aquila | [link](aquila) |
-| Replit | [link](replit) |
-| Mistral | [link](mistral) |
-| Flan-t5 | [link](flan-t5) |
-| Phi-1_5 | [link](phi-1_5) |
-| Qwen-VL | [link](qwen-vl) |
-
## Recommended Requirements
To run the examples, we recommend using Intel® Xeon® processors (server), or >= 12th Gen Intel® Core™ processor (client).
diff --git a/python/llm/example/CPU/HF-Transformers-AutoModels/Model/chatglm3/README.md b/python/llm/example/CPU/HF-Transformers-AutoModels/Model/chatglm3/README.md
new file mode 100644
index 00000000..a1df8bff
--- /dev/null
+++ b/python/llm/example/CPU/HF-Transformers-AutoModels/Model/chatglm3/README.md
@@ -0,0 +1,129 @@
+# ChatGLM3
+
+In this directory, you will find examples on how you could apply BigDL-LLM INT4 optimizations on ChatGLM3 models. For illustration purposes, we utilize the [THUDM/chatglm3-6b](https://huggingface.co/THUDM/chatglm3-6b) as a reference ChatGLM3 model.
+
+## 0. Requirements
+To run these examples with BigDL-LLM, we have some recommended requirements for your machine; please refer to [here](../README.md#recommended-requirements) for more information.
+
+## Example 1: Predict Tokens using `generate()` API
+In the example [generate.py](./generate.py), we show a basic use case for a ChatGLM3 model to predict the next N tokens using `generate()` API, with BigDL-LLM INT4 optimizations.
+### 1. Install
+We suggest using conda to manage the environment:
+```bash
+conda create -n llm python=3.9
+conda activate llm
+
+pip install --pre --upgrade bigdl-llm[all] # install bigdl-llm with 'all' option
+```
+
+### 2. Run
+```
+python ./generate.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --prompt PROMPT --n-predict N_PREDICT
+```
+
+Arguments info:
+- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the huggingface repo id for the ChatGLM3 model to be downloaded, or the path to the huggingface checkpoint folder. It defaults to `'THUDM/chatglm3-6b'`.
+- `--prompt PROMPT`: argument defining the prompt to be inferred (with integrated prompt format for chat). It defaults to `'AI是什么?'`.
+- `--n-predict N_PREDICT`: argument defining the max number of tokens to predict. It defaults to `32`.
+
+> **Note**: When loading the model in 4-bit, BigDL-LLM converts linear layers in the model into INT4 format. In theory, a *X*B model saved in 16-bit will require approximately 2*X* GB of memory for loading, and ~0.5*X* GB memory for further inference.
+>
+> Please select the appropriate size of the ChatGLM3 model based on the capabilities of your machine.
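+>
+> As a rough example: for the 6B `chatglm3-6b` model used here, that is approximately 2 x 6 = 12 GB of memory to load the 16-bit checkpoint, and only about 0.5 x 6 = 3 GB for further INT4 inference.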
+ +#### 2.1 Client +On client Windows machine, it is recommended to run directly with full utilization of all cores: +```powershell +python ./generate.py +``` + +#### 2.2 Server +For optimal performance on server, it is recommended to set several environment variables (refer to [here](../README.md#best-known-configuration-on-linux) for more information), and run the example with all the physical cores of a single socket. + +E.g. on Linux, +```bash +# set BigDL-Nano env variables +source bigdl-nano-init + +# e.g. for a server with 48 cores per socket +export OMP_NUM_THREADS=48 +numactl -C 0-47 -m 0 python ./generate.py +``` + +#### 2.3 Sample Output +#### [THUDM/chatglm3-6b](https://huggingface.co/THUDM/chatglm3-6b) +```log +Inference time: xxxx s +-------------------- Prompt -------------------- +<|user|> +AI是什么? +<|assistant|> +-------------------- Output -------------------- +[gMASK]sop <|user|> +AI是什么? +<|assistant|> AI是人工智能(Artificial Intelligence)的缩写,指的是通过计算机程序和算法模拟人类智能的技术。AI可以帮助我们解决各种问题,例如语音 +``` + +```log +Inference time: xxxx s +-------------------- Prompt -------------------- +<|user|> +What is AI? +<|assistant|> +-------------------- Output -------------------- +[gMASK]sop <|user|> +What is AI? +<|assistant|> +AI stands for Artificial Intelligence. It refers to the development of computer systems that can perform tasks that would normally require human intelligence, such as recognizing speech or making +``` + +## Example 2: Stream Chat using `stream_chat()` API +In the example [streamchat.py](./streamchat.py), we show a basic use case for a ChatGLM3 model to stream chat, with BigDL-LLM INT4 optimizations. +### 1. Install +We suggest using conda to manage environment: +```bash +conda create -n llm python=3.9 +conda activate llm + +pip install --pre --upgrade bigdl-llm[all] # install bigdl-llm with 'all' option +``` + +### 2. Run +**Stream Chat using `stream_chat()` API**: +``` +python ./streamchat.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --question QUESTION +``` + +**Chat using `chat()` API**: +``` +python ./streamchat.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --question QUESTION --disable-stream +``` + +Arguments info: +- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the huggingface repo id for the ChatGLM3 model to be downloaded, or the path to the huggingface checkpoint folder. It is default to be `'THUDM/chatglm3-6b'`. +- `--question QUESTION`: argument defining the question to ask. It is default to be `"晚上睡不着应该怎么办"`. +- `--disable-stream`: argument defining whether to stream chat. If include `--disable-stream` when running the script, the stream chat is disabled and `chat()` API is used. + +> **Note**: When loading the model in 4-bit, BigDL-LLM converts linear layers in the model into INT4 format. In theory, a *X*B model saved in 16-bit will requires approximately 2*X* GB of memory for loading, and ~0.5*X* GB memory for further inference. +> +> Please select the appropriate size of the ChatGLM3 model based on the capabilities of your machine. 
+ +#### 2.1 Client +On client Windows machine, it is recommended to run directly with full utilization of all cores: +```powershell +$env:PYTHONUNBUFFERED=1 # ensure stdout and stderr streams are sent straight to terminal without being first buffered +python ./streamchat.py +``` + +#### 2.2 Server +For optimal performance on server, it is recommended to set several environment variables (refer to [here](../README.md#best-known-configuration-on-linux) for more information), and run the example with all the physical cores of a single socket. + +E.g. on Linux, +```bash +# set BigDL-Nano env variables +source bigdl-nano-init + +# e.g. for a server with 48 cores per socket +export OMP_NUM_THREADS=48 +export PYTHONUNBUFFERED=1 # ensure stdout and stderr streams are sent straight to terminal without being first buffered +numactl -C 0-47 -m 0 python ./streamchat.py +``` diff --git a/python/llm/example/CPU/HF-Transformers-AutoModels/Model/chatglm3/generate.py b/python/llm/example/CPU/HF-Transformers-AutoModels/Model/chatglm3/generate.py new file mode 100644 index 00000000..9372ed8a --- /dev/null +++ b/python/llm/example/CPU/HF-Transformers-AutoModels/Model/chatglm3/generate.py @@ -0,0 +1,69 @@ +# +# Copyright 2016 The BigDL Authors. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# + +import torch +import time +import argparse +import numpy as np + +from bigdl.llm.transformers import AutoModel +from transformers import AutoTokenizer + +# you could tune the prompt based on your own model, +# here the prompt tuning refers to https://github.com/THUDM/ChatGLM3/blob/main/PROMPT.md +CHATGLM_V3_PROMPT_FORMAT = "<|user|>\n{prompt}\n<|assistant|>" + +if __name__ == '__main__': + parser = argparse.ArgumentParser(description='Predict Tokens using `generate()` API for ChatGLM3 model') + parser.add_argument('--repo-id-or-model-path', type=str, default="THUDM/chatglm3-6b", + help='The huggingface repo id for the ChatGLM3 model to be downloaded' + ', or the path to the huggingface checkpoint folder') + parser.add_argument('--prompt', type=str, default="AI是什么?", + help='Prompt to infer') + parser.add_argument('--n-predict', type=int, default=32, + help='Max tokens to predict') + + args = parser.parse_args() + model_path = args.repo_id_or_model_path + + # Load model in 4 bit, + # which convert the relevant layers in the model into INT4 format + model = AutoModel.from_pretrained(model_path, + load_in_4bit=True, + trust_remote_code=True) + + # Load tokenizer + tokenizer = AutoTokenizer.from_pretrained(model_path, + trust_remote_code=True) + + # Generate predicted tokens + with torch.inference_mode(): + prompt = CHATGLM_V3_PROMPT_FORMAT.format(prompt=args.prompt) + input_ids = tokenizer.encode(prompt, return_tensors="pt") + st = time.time() + # if your selected model is capable of utilizing previous key/value attentions + # to enhance decoding speed, but has `"use_cache": false` in its model config, + # it is important to set `use_cache=True` explicitly in the `generate` function + # to obtain optimal performance with BigDL-LLM INT4 optimizations + output = model.generate(input_ids, + max_new_tokens=args.n_predict) + end = time.time() + output_str = tokenizer.decode(output[0], skip_special_tokens=True) + print(f'Inference time: {end-st} s') + print('-'*20, 'Prompt', '-'*20) + print(prompt) + print('-'*20, 'Output', '-'*20) + print(output_str) diff --git a/python/llm/example/CPU/HF-Transformers-AutoModels/Model/chatglm3/streamchat.py b/python/llm/example/CPU/HF-Transformers-AutoModels/Model/chatglm3/streamchat.py new file mode 100644 index 00000000..3006299d --- /dev/null +++ b/python/llm/example/CPU/HF-Transformers-AutoModels/Model/chatglm3/streamchat.py @@ -0,0 +1,62 @@ +# +# Copyright 2016 The BigDL Authors. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# + +import torch +import time +import argparse +import numpy as np + +from bigdl.llm.transformers import AutoModel +from transformers import AutoTokenizer + + +if __name__ == '__main__': + parser = argparse.ArgumentParser(description='Stream Chat for ChatGLM3 model') + parser.add_argument('--repo-id-or-model-path', type=str, default="THUDM/chatglm3-6b", + help='The huggingface repo id for the ChatGLM3 model to be downloaded' + ', or the path to the huggingface checkpoint folder') + parser.add_argument('--question', type=str, default="晚上睡不着应该怎么办", + help='Qustion you want to ask') + parser.add_argument('--disable-stream', action="store_true", + help='Disable stream chat') + + args = parser.parse_args() + model_path = args.repo_id_or_model_path + disable_stream = args.disable_stream + + # Load model in 4 bit, + # which convert the relevant layers in the model into INT4 format + model = AutoModel.from_pretrained(model_path, + load_in_4bit=True, + trust_remote_code=True) + + # Load tokenizer + tokenizer = AutoTokenizer.from_pretrained(model_path, + trust_remote_code=True) + + with torch.inference_mode(): + if disable_stream: + # Chat + response, history = model.chat(tokenizer, args.question, history=[]) + print('-'*20, 'Chat Output', '-'*20) + print(response) + else: + # Stream chat + response_ = "" + print('-'*20, 'Stream Chat Output', '-'*20) + for response, history in model.stream_chat(tokenizer, args.question, history=[]): + print(response.replace(response_, ""), end="") + response_ = response diff --git a/python/llm/example/CPU/PyTorch-Models/Model/README.md b/python/llm/example/CPU/PyTorch-Models/Model/README.md index 090e3dc0..b89ad944 100644 --- a/python/llm/example/CPU/PyTorch-Models/Model/README.md +++ b/python/llm/example/CPU/PyTorch-Models/Model/README.md @@ -1,20 +1,6 @@ # BigDL-LLM INT4 Optimization for Large Language Model You can use `optimize_model` API to accelerate general PyTorch models on Intel servers and PCs. This directory contains example scripts to help you quickly get started using BigDL-LLM to run some popular open-source models in the community. Each model has its own dedicated folder, where you can find detailed instructions on how to install and run it. -# Verified models -| Model | Example | -|----------------|----------------------------------------------------------| -| LLaMA 2 | [link](llama2) | -| ChatGLM | [link](chatglm) | -| Openai Whisper | [link](openai-whisper) | -| BERT | [link](bert) | -| Bark | [link](bark) | -| Mistral | [link](mistral) | -| Flan-t5 | [link](flan-t5) | -| Phi-1_5 | [link](phi-1_5) | -| Qwen-VL | [link](qwen-vl) | -| LLaVA | [link](llava) | - ## Recommended Requirements To run the examples, we recommend using Intel® Xeon® processors (server), or >= 12th Gen Intel® Core™ processor (client). diff --git a/python/llm/example/CPU/PyTorch-Models/Model/chatglm3/README.md b/python/llm/example/CPU/PyTorch-Models/Model/chatglm3/README.md new file mode 100644 index 00000000..b41bbbe1 --- /dev/null +++ b/python/llm/example/CPU/PyTorch-Models/Model/chatglm3/README.md @@ -0,0 +1,59 @@ +# ChatGLM3 +In this directory, you will find examples on how you could use BigDL-LLM `optimize_model` API to accelerate ChatGLM3 models. For illustration purposes, we utilize the [THUDM/chatglm3-6b](https://huggingface.co/THUDM/chatglm3-6b) as a reference ChatGLM3 model. 
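+
+The key pattern is condensed in the sketch below (a shortened version of [generate.py](./generate.py) in this directory, not a complete script): the model is loaded with the stock Hugging Face `transformers` API, and a single `optimize_model` call then applies BigDL-LLM INT4 optimizations.
+
+```python
+import torch
+from transformers import AutoModel, AutoTokenizer
+from bigdl.llm import optimize_model
+
+model_path = "THUDM/chatglm3-6b"
+
+# load the model with the stock Hugging Face transformers API ...
+model = AutoModel.from_pretrained(model_path, trust_remote_code=True)
+# ... then one extra line applies BigDL-LLM INT4 optimizations to the relevant layers
+model = optimize_model(model)
+
+tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
+
+with torch.inference_mode():
+    prompt = "<|user|>\nAI是什么?\n<|assistant|>"  # ChatGLM3 chat prompt format
+    input_ids = tokenizer.encode(prompt, return_tensors="pt")
+    output = model.generate(input_ids, max_new_tokens=32)
+    print(tokenizer.decode(output[0], skip_special_tokens=True))
+```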
+ +## Requirements +To run these examples with BigDL-LLM, we have some recommended requirements for your machine, please refer to [here](../README.md#recommended-requirements) for more information. + +## Example: Predict Tokens using `generate()` API +In the example [generate.py](./generate.py), we show a basic use case for a ChatGLM3 model to predict the next N tokens using `generate()` API, with BigDL-LLM INT4 optimizations. +### 1. Install +We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#). + +After installing conda, create a Python environment for BigDL-LLM: +```bash +conda create -n llm python=3.9 # recommend to use Python 3.9 +conda activate llm + +pip install --pre --upgrade bigdl-llm[all] # install the latest bigdl-llm nightly build with 'all' option +``` + +### 2. Run +After setting up the Python environment, you could run the example by following steps. + +#### 2.1 Client +On client Windows machines, it is recommended to run directly with full utilization of all cores: +```powershell +python ./generate.py --prompt 'AI是什么?' +``` +More information about arguments can be found in [Arguments Info](#23-arguments-info) section. The expected output can be found in [Sample Output](#24-sample-output) section. + +#### 2.2 Server +For optimal performance on server, it is recommended to set several environment variables (refer to [here](../README.md#best-known-configuration-on-linux) for more information), and run the example with all the physical cores of a single socket. + +E.g. on Linux, +```bash +# set BigDL-Nano env variables +source bigdl-nano-init + +# e.g. for a server with 48 cores per socket +export OMP_NUM_THREADS=48 +numactl -C 0-47 -m 0 python ./generate.py --prompt 'AI是什么?' +``` +More information about arguments can be found in [Arguments Info](#23-arguments-info) section. The expected output can be found in [Sample Output](#24-sample-output) section. + +#### 2.3 Arguments Info +In the example, several arguments can be passed to satisfy your requirements: + +- `--repo-id-or-model-path`: str, argument defining the huggingface repo id for the ChatGLM model to be downloaded, or the path to the huggingface checkpoint folder. It is default to be `'THUDM/chatglm3-6b'`. +- `--prompt`: str, argument defining the prompt to be inferred (with integrated prompt format for chat). It is default to be `'AI是什么?'`. +- `--n-predict`: int, argument defining the max number of tokens to predict. It is default to be `32`. + +#### 2.4 Sample Output +#### [THUDM/chatglm3-6b](https://huggingface.co/THUDM/chatglm3-6b) +```log +Inference time: xxxx s +-------------------- Output -------------------- +[gMASK]sop <|user|> +AI是什么? +<|assistant|> AI是人工智能(Artificial Intelligence)的缩写,指的是通过计算机程序和算法模拟人类智能的技术。AI可以帮助我们解决各种问题,例如语音 +``` diff --git a/python/llm/example/CPU/PyTorch-Models/Model/chatglm3/generate.py b/python/llm/example/CPU/PyTorch-Models/Model/chatglm3/generate.py new file mode 100644 index 00000000..72a0ab99 --- /dev/null +++ b/python/llm/example/CPU/PyTorch-Models/Model/chatglm3/generate.py @@ -0,0 +1,61 @@ +# +# Copyright 2016 The BigDL Authors. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# + +import torch +import time +import argparse + +from transformers import AutoModel, AutoTokenizer +from bigdl.llm import optimize_model + +# you could tune the prompt based on your own model, +# here the prompt tuning refers to https://github.com/THUDM/ChatGLM3/blob/main/PROMPT.md +CHATGLM_V3_PROMPT_FORMAT = "<|user|>\n{prompt}\n<|assistant|>" + +if __name__ == '__main__': + parser = argparse.ArgumentParser(description='Predict Tokens using `generate()` API for ChatGLM3 model') + parser.add_argument('--repo-id-or-model-path', type=str, default="THUDM/chatglm3-6b", + help='The huggingface repo id for the ChatGLM model to be downloaded' + ', or the path to the huggingface checkpoint folder') + parser.add_argument('--prompt', type=str, default="AI是什么?", + help='Prompt to infer') + parser.add_argument('--n-predict', type=int, default=32, + help='Max tokens to predict') + + args = parser.parse_args() + model_path = args.repo_id_or_model_path + + # Load model + model = AutoModel.from_pretrained(model_path, trust_remote_code=True) + + # With only one line to enable BigDL-LLM optimization on model + model = optimize_model(model) + + # Load tokenizer + tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True) + + # Generate predicted tokens + with torch.inference_mode(): + prompt = CHATGLM_V3_PROMPT_FORMAT.format(prompt=args.prompt) + input_ids = tokenizer.encode(prompt, return_tensors="pt") + st = time.time() + output = model.generate(input_ids, + max_new_tokens=args.n_predict) + end = time.time() + output_str = tokenizer.decode(output[0], skip_special_tokens=True) + print(f'Inference time: {end-st} s') + print('-'*20, 'Output', '-'*20) + print(output_str) diff --git a/python/llm/example/GPU/HF-Transformers-AutoModels/Model/README.md b/python/llm/example/GPU/HF-Transformers-AutoModels/Model/README.md index ec5db97f..b98d03e3 100644 --- a/python/llm/example/GPU/HF-Transformers-AutoModels/Model/README.md +++ b/python/llm/example/GPU/HF-Transformers-AutoModels/Model/README.md @@ -1,31 +1,6 @@ # BigDL-LLM Transformers INT4 Optimization for Large Language Model on Intel GPUs You can use BigDL-LLM to run almost every Huggingface Transformer models with INT4 optimizations on your laptops with Intel GPUs. This directory contains example scripts to help you quickly get started using BigDL-LLM to run some popular open-source models in the community. Each model has its own dedicated folder, where you can find detailed instructions on how to install and run it. 
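+
+All of the GPU examples share the same skeleton, condensed below from the ChatGLM3 example added in this directory (a sketch, not a complete script): the model and the input tensors are moved to the `'xpu'` device, and a warm-up `generate()` call is issued before the timed one so that the reported inference time is accurate.
+
+```python
+import time
+import torch
+import intel_extension_for_pytorch as ipex  # imported for its XPU backend
+
+from bigdl.llm.transformers import AutoModel
+from transformers import AutoTokenizer
+
+model_path = "THUDM/chatglm3-6b"
+
+# load_in_4bit=True converts the relevant linear layers in the model into INT4 format
+model = AutoModel.from_pretrained(model_path, load_in_4bit=True, trust_remote_code=True)
+model = model.to('xpu')
+tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
+
+with torch.inference_mode():
+    input_ids = tokenizer.encode("What is AI?", return_tensors="pt").to('xpu')
+    # warm-up run: the first generate() on XPU also pays one-off initialization costs
+    model.generate(input_ids, max_new_tokens=32)
+
+    st = time.time()
+    output = model.generate(input_ids, max_new_tokens=32)
+    torch.xpu.synchronize()
+    print(f'Inference time: {time.time() - st} s')
+    print(tokenizer.decode(output[0], skip_special_tokens=True))
+```
+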
-## Verified models - -| Model | Example | -|----------------|----------------------------------------------------------| -| Aquila | [link](aquila) | -| Baichuan | [link](baichuan) | -| Baichuan2 | [link](baichuan2) | -| ChatGLM2 | [link](chatglm2) | -| Chinese Llama2 | [link](chinese-llama2) | -| Dolly v1 | [link](dolly-v1) | -| Dolly v2 | [link](dolly-v2) | -| Falcon | [link](falcon) | -| GPT-J | [link](gpt-j) | -| InternLM | [link](internlm) | -| LLaMA 2 | [link](llama2) | -| Mistral | [link](mistral) | -| MPT | [link](mpt) | -| Qwen | [link](qwen) | -| StarCoder | [link](starcoder) | -| Vicuna | [link](vicuna) | -| Whisper | [link](whisper) | -| Replit | [link](replit) | -| Flan-t5 | [link](flan-t5) | - - ## Verified Hardware Platforms - Intel Arc™ A-Series Graphics diff --git a/python/llm/example/GPU/HF-Transformers-AutoModels/Model/chatglm3/README.md b/python/llm/example/GPU/HF-Transformers-AutoModels/Model/chatglm3/README.md new file mode 100644 index 00000000..f2e34d19 --- /dev/null +++ b/python/llm/example/GPU/HF-Transformers-AutoModels/Model/chatglm3/README.md @@ -0,0 +1,109 @@ +# ChatGLM3 + +In this directory, you will find examples on how you could apply BigDL-LLM INT4 optimizations on ChatGLM3 models on [Intel GPUs](../README.md). For illustration purposes, we utilize the [THUDM/chatglm3-6b](https://huggingface.co/THUDM/chatglm3-6b) as a reference ChatGLM3 model. + +## 0. Requirements +To run these examples with BigDL-LLM on Intel GPUs, we have some recommended requirements for your machine, please refer to [here](../README.md#recommended-requirements) for more information. + +## Example 1: Predict Tokens using `generate()` API +In the example [generate.py](./generate.py), we show a basic use case for a ChatGLM3 model to predict the next N tokens using `generate()` API, with BigDL-LLM INT4 optimizations on Intel GPUs. +### 1. Install +We suggest using conda to manage environment: +```bash +conda create -n llm python=3.9 +conda activate llm +# below command will install intel_extension_for_pytorch==2.0.110+xpu as default +# you can install specific ipex/torch version for your need +pip install --pre --upgrade bigdl-llm[xpu] -f https://developer.intel.com/ipex-whl-stable-xpu +``` + +### 2. Configures OneAPI environment variables +```bash +source /opt/intel/oneapi/setvars.sh +``` + +### 3. Run + +For optimal performance on Arc, it is recommended to set several environment variables. + +```bash +export USE_XETLA=OFF +export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1 +``` + +``` +python ./generate.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --prompt PROMPT --n-predict N_PREDICT +``` + +Arguments info: +- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the huggingface repo id for the ChatGLM3 model to be downloaded, or the path to the huggingface checkpoint folder. It is default to be `'THUDM/chatglm3-6b'`. +- `--prompt PROMPT`: argument defining the prompt to be infered (with integrated prompt format for chat). It is default to be `'AI是什么?'`. +- `--n-predict N_PREDICT`: argument defining the max number of tokens to predict. It is default to be `32`. + +#### Sample Output +#### [THUDM/chatglm3-6b](https://huggingface.co/THUDM/chatglm3-6b) +```log +Inference time: xxxx s +-------------------- Prompt -------------------- +<|user|> +AI是什么? +<|assistant|> +-------------------- Output -------------------- +[gMASK]sop <|user|> +AI是什么? 
+<|assistant|> AI是人工智能(Artificial Intelligence)的缩写,指通过计算机程序或机器学习算法来模拟、延伸或扩展人类智能的技术。AI旨在 +``` + +```log +Inference time: xxxx s +-------------------- Prompt -------------------- +<|user|> +What is AI? +<|assistant|> +-------------------- Output -------------------- +[gMASK]sop <|user|> +What is AI? +<|assistant|> +AI stands for Artificial Intelligence. It refers to the development of computer systems or machines that can perform tasks that would normally require human intelligence, such as recognizing patterns +``` + +## Example 2: Stream Chat using `stream_chat()` API +In the example [streamchat.py](./streamchat.py), we show a basic use case for a ChatGLM3 model to stream chat, with BigDL-LLM INT4 optimizations. +### 1. Install +We suggest using conda to manage environment: +```bash +conda create -n llm python=3.9 +conda activate llm +# below command will install intel_extension_for_pytorch==2.0.110+xpu as default +# you can install specific ipex/torch version for your need +pip install --pre --upgrade bigdl-llm[xpu] -f https://developer.intel.com/ipex-whl-stable-xpu +``` + +### 2. Configures OneAPI environment variables +```bash +source /opt/intel/oneapi/setvars.sh +``` + +### 3. Run + +For optimal performance on Arc, it is recommended to set several environment variables. + +```bash +export USE_XETLA=OFF +export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1 +``` + +**Stream Chat using `stream_chat()` API**: +``` +python ./streamchat.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --question QUESTION +``` + +**Chat using `chat()` API**: +``` +python ./streamchat.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --question QUESTION --disable-stream +``` + +Arguments info: +- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the huggingface repo id for the ChatGLM3 model to be downloaded, or the path to the huggingface checkpoint folder. It is default to be `'THUDM/chatglm3-6b'`. +- `--question QUESTION`: argument defining the question to ask. It is default to be `"晚上睡不着应该怎么办"`. +- `--disable-stream`: argument defining whether to stream chat. If include `--disable-stream` when running the script, the stream chat is disabled and `chat()` API is used. diff --git a/python/llm/example/GPU/HF-Transformers-AutoModels/Model/chatglm3/generate.py b/python/llm/example/GPU/HF-Transformers-AutoModels/Model/chatglm3/generate.py new file mode 100644 index 00000000..55e529f7 --- /dev/null +++ b/python/llm/example/GPU/HF-Transformers-AutoModels/Model/chatglm3/generate.py @@ -0,0 +1,79 @@ +# +# Copyright 2016 The BigDL Authors. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# + +import torch +import intel_extension_for_pytorch as ipex +import time +import argparse +import numpy as np + +from bigdl.llm.transformers import AutoModel +from transformers import AutoTokenizer + +# you could tune the prompt based on your own model, +# here the prompt tuning refers to https://github.com/THUDM/ChatGLM3/blob/main/PROMPT.md +CHATGLM_V3_PROMPT_FORMAT = "<|user|>\n{prompt}\n<|assistant|>" + +if __name__ == '__main__': + parser = argparse.ArgumentParser(description='Predict Tokens using `generate()` API for ChatGLM3 model') + parser.add_argument('--repo-id-or-model-path', type=str, default="THUDM/chatglm3-6b", + help='The huggingface repo id for the ChatGLM3 model to be downloaded' + ', or the path to the huggingface checkpoint folder') + parser.add_argument('--prompt', type=str, default="AI是什么?", + help='Prompt to infer') + parser.add_argument('--n-predict', type=int, default=32, + help='Max tokens to predict') + + args = parser.parse_args() + model_path = args.repo_id_or_model_path + + # Load model in 4 bit, + # which convert the relevant layers in the model into INT4 format + model = AutoModel.from_pretrained(model_path, + load_in_4bit=True, + optimize_model=True, + trust_remote_code=True, + use_cache=True) + model = model.to('xpu') + + # Load tokenizer + tokenizer = AutoTokenizer.from_pretrained(model_path, + trust_remote_code=True) + + # Generate predicted tokens + with torch.inference_mode(): + prompt = CHATGLM_V3_PROMPT_FORMAT.format(prompt=args.prompt) + input_ids = tokenizer.encode(prompt, return_tensors="pt").to('xpu') + # ipex model needs a warmup, then inference time can be accurate + output = model.generate(input_ids, + max_new_tokens=args.n_predict) + + # start inference + st = time.time() + # if your selected model is capable of utilizing previous key/value attentions + # to enhance decoding speed, but has `"use_cache": false` in its model config, + # it is important to set `use_cache=True` explicitly in the `generate` function + # to obtain optimal performance with BigDL-LLM INT4 optimizations + output = model.generate(input_ids, + max_new_tokens=args.n_predict) + torch.xpu.synchronize() + end = time.time() + output_str = tokenizer.decode(output[0], skip_special_tokens=True) + print(f'Inference time: {end-st} s') + print('-'*20, 'Prompt', '-'*20) + print(prompt) + print('-'*20, 'Output', '-'*20) + print(output_str) diff --git a/python/llm/example/GPU/HF-Transformers-AutoModels/Model/chatglm3/streamchat.py b/python/llm/example/GPU/HF-Transformers-AutoModels/Model/chatglm3/streamchat.py new file mode 100644 index 00000000..f294dea9 --- /dev/null +++ b/python/llm/example/GPU/HF-Transformers-AutoModels/Model/chatglm3/streamchat.py @@ -0,0 +1,72 @@ +# +# Copyright 2016 The BigDL Authors. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# + +import torch +import intel_extension_for_pytorch as ipex +import time +import argparse +import numpy as np + +from bigdl.llm.transformers import AutoModel +from transformers import AutoTokenizer + + +if __name__ == '__main__': + parser = argparse.ArgumentParser(description='Stream Chat for ChatGLM3 model') + parser.add_argument('--repo-id-or-model-path', type=str, default="THUDM/chatglm3-6b", + help='The huggingface repo id for the ChatGLM3 model to be downloaded' + ', or the path to the huggingface checkpoint folder') + parser.add_argument('--question', type=str, default="晚上睡不着应该怎么办", + help='Qustion you want to ask') + parser.add_argument('--disable-stream', action="store_true", + help='Disable stream chat') + + args = parser.parse_args() + model_path = args.repo_id_or_model_path + disable_stream = args.disable_stream + + # Load model in 4 bit, + # which convert the relevant layers in the model into INT4 format + model = AutoModel.from_pretrained(model_path, + load_in_4bit=True, + trust_remote_code=True, + optimize_model=True) + model.to('xpu') + + # Load tokenizer + tokenizer = AutoTokenizer.from_pretrained(model_path, + trust_remote_code=True) + + with torch.inference_mode(): + prompt = args.question + input_ids = tokenizer.encode(prompt, return_tensors="pt").to('xpu') + # ipex model needs a warmup, then inference time can be accurate + output = model.generate(input_ids, + max_new_tokens=32) + + # start inference + if disable_stream: + # Chat + response, history = model.chat(tokenizer, args.question, history=[]) + print('-'*20, 'Chat Output', '-'*20) + print(response) + else: + # Stream chat + response_ = "" + print('-'*20, 'Stream Chat Output', '-'*20) + for response, history in model.stream_chat(tokenizer, args.question, history=[]): + print(response.replace(response_, ""), end="") + response_ = response diff --git a/python/llm/example/GPU/PyTorch-Models/Model/README.md b/python/llm/example/GPU/PyTorch-Models/Model/README.md index 41beaaee..75b7f668 100644 --- a/python/llm/example/GPU/PyTorch-Models/Model/README.md +++ b/python/llm/example/GPU/PyTorch-Models/Model/README.md @@ -1,20 +1,6 @@ # BigDL-LLM INT4 Optimization for Large Language Model on Intel GPUs You can use `optimize_model` API to accelerate general PyTorch models on Intel GPUs. This directory contains example scripts to help you quickly get started using BigDL-LLM to run some popular open-source models in the community. Each model has its own dedicated folder, where you can find detailed instructions on how to install and run it. -## Verified models -| Model | Example | -|----------------|----------------------------------------------------------| -| Mistral | [link](mistral) | -| LLaMA 2 | [link](llama2) | -| ChatGLM2 | [link](chatglm2) | -| Baichuan | [link](baichuan) | -| Baichuan2 | [link](baichuan2) | -| Replit | [link](replit) | -| StarCoder | [link](starcoder) | -| Dolly v1 | [link](dolly-v1) | -| Dolly v2 | [link](dolly-v2) | -| Flan-t5 | [link](flan-t5) | - ## Verified Hardware Platforms - Intel Arc™ A-Series Graphics diff --git a/python/llm/example/GPU/PyTorch-Models/Model/chatglm3/README.md b/python/llm/example/GPU/PyTorch-Models/Model/chatglm3/README.md new file mode 100644 index 00000000..825ecd88 --- /dev/null +++ b/python/llm/example/GPU/PyTorch-Models/Model/chatglm3/README.md @@ -0,0 +1,108 @@ +# ChatGLM3 +In this directory, you will find examples on how you could use BigDL-LLM `optimize_model` API to accelerate ChatGLM3 models. 
For illustration purposes, we utilize the [THUDM/chatglm3-6b](https://huggingface.co/THUDM/chatglm3-6b) as a reference ChatGLM3 model.
+
+## Requirements
+To run these examples with BigDL-LLM on Intel GPUs, we have some recommended requirements for your machine; please refer to [here](../README.md#recommended-requirements) for more information.
+
+## Example 1: Predict Tokens using `generate()` API
+In the example [generate.py](./generate.py), we show a basic use case for a ChatGLM3 model to predict the next N tokens using `generate()` API, with BigDL-LLM INT4 optimizations on Intel GPUs.
+### 1. Install
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+
+After installing conda, create a Python environment for BigDL-LLM:
+```bash
+conda create -n llm python=3.9 # recommend to use Python 3.9
+conda activate llm
+
+# below command will install intel_extension_for_pytorch==2.0.110+xpu as default
+# you can install specific ipex/torch version for your need
+pip install --pre --upgrade bigdl-llm[xpu] -f https://developer.intel.com/ipex-whl-stable-xpu
+```
+
+### 2. Configure OneAPI environment variables
+```bash
+source /opt/intel/oneapi/setvars.sh
+```
+
+### 3. Run
+
+For optimal performance on Arc, it is recommended to set several environment variables.
+
+```bash
+export USE_XETLA=OFF
+export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
+```
+
+```bash
+python ./generate.py --prompt 'AI是什么?'
+```
+
+In the example, several arguments can be passed to satisfy your requirements:
+
+- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the huggingface repo id for the ChatGLM3 model to be downloaded, or the path to the huggingface checkpoint folder. It defaults to `'THUDM/chatglm3-6b'`.
+- `--prompt PROMPT`: argument defining the prompt to be inferred (with integrated prompt format for chat). It defaults to `'AI是什么?'`.
+- `--n-predict N_PREDICT`: argument defining the max number of tokens to predict. It defaults to `32`.
+
+#### Sample Output
+#### [THUDM/chatglm3-6b](https://huggingface.co/THUDM/chatglm3-6b)
+```log
+Inference time: xxxx s
+-------------------- Output --------------------
+[gMASK]sop <|user|>
+AI是什么?
+<|assistant|> AI是人工智能(Artificial Intelligence)的缩写,指通过计算机程序或机器学习算法来模拟、延伸或扩展人类智能的技术。AI旨在
+```
+
+```log
+Inference time: xxxx s
+-------------------- Output --------------------
+[gMASK]sop <|user|>
+What is AI?
+<|assistant|>
+AI stands for Artificial Intelligence. It refers to the development of computer systems or machines that can perform tasks that would normally require human intelligence, such as recognizing patterns
+```
+
+## Example 2: Stream Chat using `stream_chat()` API
+In the example [streamchat.py](./streamchat.py), we show a basic use case for a ChatGLM3 model to stream chat, with BigDL-LLM INT4 optimizations.
+### 1. Install
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+ +After installing conda, create a Python environment for BigDL-LLM: +```bash +conda create -n llm python=3.9 # recommend to use Python 3.9 +conda activate llm + +# below command will install intel_extension_for_pytorch==2.0.110+xpu as default +# you can install specific ipex/torch version for your need +pip install --pre --upgrade bigdl-llm[xpu] -f https://developer.intel.com/ipex-whl-stable-xpu +``` + +### 2. Configures OneAPI environment variables +```bash +source /opt/intel/oneapi/setvars.sh +``` + +### 3. Run + +For optimal performance on Arc, it is recommended to set several environment variables. + +```bash +export USE_XETLA=OFF +export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1 +``` + +**Stream Chat using `stream_chat()` API**: +``` +python ./streamchat.py +``` + +**Chat using `chat()` API**: +``` +python ./streamchat.py --disable-stream +``` + +In the example, several arguments can be passed to satisfy your requirements: + +- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the huggingface repo id for the ChatGLM3 model to be downloaded, or the path to the huggingface checkpoint folder. It is default to be `'THUDM/chatglm3-6b'`. +- `--question QUESTION`: argument defining the question to ask. It is default to be `"晚上睡不着应该怎么办"`. +- `--disable-stream`: argument defining whether to stream chat. If include `--disable-stream` when running the script, the stream chat is disabled and `chat()` API is used. diff --git a/python/llm/example/GPU/PyTorch-Models/Model/chatglm3/generate.py b/python/llm/example/GPU/PyTorch-Models/Model/chatglm3/generate.py new file mode 100644 index 00000000..9194310b --- /dev/null +++ b/python/llm/example/GPU/PyTorch-Models/Model/chatglm3/generate.py @@ -0,0 +1,74 @@ +# +# Copyright 2016 The BigDL Authors. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# + +import torch +import intel_extension_for_pytorch as ipex +import time +import argparse + +from transformers import AutoModel, AutoTokenizer +from bigdl.llm import optimize_model + +# you could tune the prompt based on your own model, +# here the prompt tuning refers to https://github.com/THUDM/ChatGLM3/blob/main/PROMPT.md +CHATGLM_V3_PROMPT_FORMAT = "<|user|>\n{prompt}\n<|assistant|>" + +if __name__ == '__main__': + parser = argparse.ArgumentParser(description='Predict Tokens using `generate()` API for ChatGLM3 model') + parser.add_argument('--repo-id-or-model-path', type=str, default="THUDM/chatglm3-6b", + help='The huggingface repo id for the ChatGLM3 model to be downloaded' + ', or the path to the huggingface checkpoint folder') + parser.add_argument('--prompt', type=str, default="AI是什么?", + help='Prompt to infer') + parser.add_argument('--n-predict', type=int, default=32, + help='Max tokens to predict') + + args = parser.parse_args() + model_path = args.repo_id_or_model_path + + # Load model + model = AutoModel.from_pretrained(model_path, + trust_remote_code=True, + torch_dtype='auto', + low_cpu_mem_usage=True) + + # With only one line to enable BigDL-LLM optimization on model + model = optimize_model(model) + + model = model.to('xpu') + + # Load tokenizer + tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True) + + # Generate predicted tokens + with torch.inference_mode(): + prompt = CHATGLM_V3_PROMPT_FORMAT.format(prompt=args.prompt) + input_ids = tokenizer.encode(prompt, return_tensors="pt").to('xpu') + # ipex model needs a warmup, then inference time can be accurate + output = model.generate(input_ids, + max_new_tokens=args.n_predict) + + # start inference + st = time.time() + output = model.generate(input_ids, + max_new_tokens=args.n_predict) + torch.xpu.synchronize() + end = time.time() + output = output.cpu() + output_str = tokenizer.decode(output[0], skip_special_tokens=True) + print(f'Inference time: {end-st} s') + print('-'*20, 'Output', '-'*20) + print(output_str) diff --git a/python/llm/example/GPU/PyTorch-Models/Model/chatglm3/streamchat.py b/python/llm/example/GPU/PyTorch-Models/Model/chatglm3/streamchat.py new file mode 100644 index 00000000..440db7a6 --- /dev/null +++ b/python/llm/example/GPU/PyTorch-Models/Model/chatglm3/streamchat.py @@ -0,0 +1,75 @@ +# +# Copyright 2016 The BigDL Authors. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# + +import torch +import intel_extension_for_pytorch as ipex +import time +import argparse +import numpy as np + +from transformers import AutoModel, AutoTokenizer +from bigdl.llm import optimize_model + + +if __name__ == '__main__': + parser = argparse.ArgumentParser(description='Stream Chat for ChatGLM3 model') + parser.add_argument('--repo-id-or-model-path', type=str, default="THUDM/chatglm3-6b", + help='The huggingface repo id for the ChatGLM3 model to be downloaded' + ', or the path to the huggingface checkpoint folder') + parser.add_argument('--question', type=str, default="晚上睡不着应该怎么办", + help='Qustion you want to ask') + parser.add_argument('--disable-stream', action="store_true", + help='Disable stream chat') + + args = parser.parse_args() + model_path = args.repo_id_or_model_path + disable_stream = args.disable_stream + + # Load model + model = AutoModel.from_pretrained(model_path, + trust_remote_code=True, + torch_dtype='auto', + low_cpu_mem_usage=True) + + # With only one line to enable BigDL-LLM optimization on model + model = optimize_model(model) + + model.to('xpu') + + # Load tokenizer + tokenizer = AutoTokenizer.from_pretrained(model_path, + trust_remote_code=True) + + with torch.inference_mode(): + prompt = args.question + input_ids = tokenizer.encode(prompt, return_tensors="pt").to('xpu') + # ipex model needs a warmup, then inference time can be accurate + output = model.generate(input_ids, + max_new_tokens=32) + + # start inference + if disable_stream: + # Chat + response, history = model.chat(tokenizer, args.question, history=[]) + print('-'*20, 'Chat Output', '-'*20) + print(response) + else: + # Stream chat + response_ = "" + print('-'*20, 'Stream Chat Output', '-'*20) + for response, history in model.stream_chat(tokenizer, args.question, history=[]): + print(response.replace(response_, ""), end="") + response_ = response