Add codegemma example (#10884)
* add codegemma example in GPU/HF-Transformers-AutoModels/
* add README of codegemma example in GPU/HF-Transformers-AutoModels/
* add codegemma example in GPU/PyTorch-Models/
* add readme of codegemma example in GPU/PyTorch-Models/
* add codegemma example in CPU/HF-Transformers-AutoModels/
* add readme of codegemma example in CPU/HF-Transformers-AutoModels/
* add codegemma example in CPU/PyTorch-Models/
* add readme of codegemma example in CPU/PyTorch-Models/
* fix typos
* fix filename typo
* add codegemma in tables
* add comments of lm_head
* remove comments of use_cache
parent 08ad40b251
commit 245c7348bc
10 changed files with 746 additions and 0 deletions
@@ -183,6 +183,7 @@ Over 50 models have been optimized/verified on `ipex-llm`, including *LLaMA/LLaM
| DeciLM-7B | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/deciLM-7b) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/deciLM-7b) |
| Deepseek | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/deepseek) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/deepseek) |
| StableLM | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/stablelm) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/stablelm) |
| CodeGemma | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/codegemma) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/codegemma) |

## Get Support

- Please report a bug or raise a feature request by opening a [Github Issue](https://github.com/intel-analytics/ipex-llm/issues)

@@ -580,6 +580,13 @@ Verified Models
<td>
<a href="https://github.com/intel-analytics/ipex-llm/tree/main/python/llm/example/GPU/HF-Transformers-AutoModels/Model/stablelm">link</a></td>
</tr>
<tr>
<td>CodeGemma</td>
<td>
<a href="https://github.com/intel-analytics/ipex-llm/tree/main/python/llm/example/CPU/HF-Transformers-AutoModels/Model/codegemma">link</a></td>
<td>
<a href="https://github.com/intel-analytics/ipex-llm/tree/main/python/llm/example/GPU/HF-Transformers-AutoModels/Model/codegemma">link</a></td>
</tr>
</tbody>
</table>

@@ -0,0 +1,75 @@
# CodeGemma
In this directory, you will find examples of how you can apply IPEX-LLM INT4 optimizations on CodeGemma models. For illustration purposes, we use [google/codegemma-7b-it](https://huggingface.co/google/codegemma-7b-it) as the reference CodeGemma model.
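
As a quick preview, the core loading call in [generate.py](./generate.py) looks roughly like the sketch below; the full, runnable script with argument parsing and timing is included in this directory:
```python
from ipex_llm.transformers import AutoModelForCausalLM
from transformers import AutoTokenizer

model_path = "google/codegemma-7b-it"
# load with IPEX-LLM INT4 optimization; 'lm_head' is left unconverted to avoid
# abnormal output from codegemma-7b-it
model = AutoModelForCausalLM.from_pretrained(model_path,
                                             load_in_4bit=True,
                                             trust_remote_code=True,
                                             use_cache=True,
                                             modules_to_not_convert=["lm_head"])
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
```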

## 0. Requirements
To run these examples with IPEX-LLM, we have some recommended requirements for your machine; please refer to [here](../README.md#recommended-requirements) for more information.

## Example: Predict Tokens using `generate()` API
In the example [generate.py](./generate.py), we show a basic use case for a CodeGemma model to predict the next N tokens using the `generate()` API, with IPEX-LLM INT4 optimizations.
### 1. Install
We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).

After installing conda, create a Python environment for IPEX-LLM:
```bash
conda create -n llm python=3.11 # recommend to use Python 3.11
conda activate llm

# install ipex-llm with 'all' option
pip install ipex-llm[all]

# According to CodeGemma's requirement, please make sure you are using a stable version of Transformers, 4.38.1 or newer.
pip install transformers==4.38.1
```

### 2. Run
```
python ./generate.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --prompt PROMPT --n-predict N_PREDICT
```

Arguments info:
- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the huggingface repo id for the CodeGemma model to be downloaded, or the path to the huggingface checkpoint folder. It defaults to `'google/codegemma-7b-it'`.
- `--prompt PROMPT`: argument defining the prompt to be inferred (with integrated prompt format for chat). It defaults to `'Write a hello world program'`.
- `--n-predict N_PREDICT`: argument defining the max number of tokens to predict. It defaults to `32`.

> **Note**: When loading the model in 4-bit, IPEX-LLM converts linear layers in the model into INT4 format. In theory, a *X*B model saved in 16-bit requires approximately 2*X* GB of memory for loading, and ~0.5*X* GB memory for further inference.
>
> Please select the appropriate size of the CodeGemma model based on the capabilities of your machine.
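
For example, for the 7B model used here, a rough estimate (illustrative numbers only; actual usage varies with context length and runtime):
```python
# back-of-the-envelope estimate following the rule of thumb above
billions_of_params = 7                     # codegemma-7b-it
fp16_load_gb = 2 * billions_of_params      # ~14 GB to load the 16-bit checkpoint
int4_infer_gb = 0.5 * billions_of_params   # ~3.5 GB of further memory for INT4 inference
print(f"load: ~{fp16_load_gb} GB, inference: ~{int4_infer_gb} GB")
```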

#### 2.1 Client
On a client Windows machine, it is recommended to run directly with full utilization of all cores:
```powershell
python ./generate.py
```

#### 2.2 Server
For optimal performance on a server, it is recommended to set several environment variables (refer to [here](../README.md#best-known-configuration-on-linux) for more information), and run the example with all the physical cores of a single socket.

E.g. on Linux,
```bash
# set IPEX-LLM env variables
source ipex-llm-init

# e.g. for a server with 48 cores per socket
export OMP_NUM_THREADS=48
numactl -C 0-47 -m 0 python ./generate.py
```

#### 2.3 Sample Output
#### [google/codegemma-7b-it](https://huggingface.co/google/codegemma-7b-it)
```log
Inference time: xxxx s
-------------------- Prompt --------------------
<bos><start_of_turn>user
Write a hello world program<end_of_turn>
<start_of_turn>model

-------------------- Output --------------------
<start_of_turn>user
Write a hello world program<end_of_turn>
<start_of_turn>model
```python
print("Hello, world!")
```

This program will print the message "Hello, world!" to the console.
```

@@ -0,0 +1,71 @@
#
# Copyright 2016 The BigDL Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

import torch
import time
import argparse

from ipex_llm.transformers import AutoModelForCausalLM
from transformers import AutoTokenizer

# The instruction-tuned models use a chat template that must be adhered to for conversational use.
# see https://huggingface.co/google/codegemma-7b-it#chat-template.
chat = [
    { "role": "user", "content": "Write a hello world program" },
]

if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Predict Tokens using `generate()` API for CodeGemma model')
    parser.add_argument('--repo-id-or-model-path', type=str, default="google/codegemma-7b-it",
                        help='The huggingface repo id for the CodeGemma model to be downloaded'
                             ', or the path to the huggingface checkpoint folder')
    parser.add_argument('--prompt', type=str, default="Write a hello world program",
                        help='Prompt to infer')
    parser.add_argument('--n-predict', type=int, default=32,
                        help='Max tokens to predict')

    args = parser.parse_args()
    model_path = args.repo_id_or_model_path

    # Load model in 4 bit,
    # which converts the relevant layers in the model into INT4 format.
    # To fix the issue that the output of codegemma-7b-it is abnormal, skip the 'lm_head' module during optimization.
    model = AutoModelForCausalLM.from_pretrained(model_path,
                                                 load_in_4bit=True,
                                                 trust_remote_code=True,
                                                 use_cache=True,
                                                 modules_to_not_convert=["lm_head"])

    # Load tokenizer
    tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

    # Generate predicted tokens
    with torch.inference_mode():
        chat[0]['content'] = args.prompt
        prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
        input_ids = tokenizer.encode(prompt, return_tensors="pt")

        # start inference
        st = time.time()
        output = model.generate(input_ids,
                                max_new_tokens=args.n_predict)
        end = time.time()
        output_str = tokenizer.decode(output[0], skip_special_tokens=True)
        print(f'Inference time: {end-st} s')
        print('-'*20, 'Prompt', '-'*20)
        print(prompt)
        print('-'*20, 'Output', '-'*20)
        print(output_str)

@@ -0,0 +1,73 @@
# CodeGemma
In this directory, you will find examples of how you can use the IPEX-LLM `optimize_model` API to accelerate CodeGemma models. For illustration purposes, we use [google/codegemma-7b-it](https://huggingface.co/google/codegemma-7b-it) as the reference CodeGemma model.
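
As a quick preview, the core flow in [generate.py](./generate.py) looks roughly like the sketch below; the full, runnable script with argument parsing and timing is included in this directory:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from ipex_llm import optimize_model

model_path = "google/codegemma-7b-it"
model = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True)
# a single call enables IPEX-LLM low-bit optimization; 'lm_head' is skipped to avoid
# abnormal output from codegemma-7b-it
model = optimize_model(model, modules_to_not_convert=["lm_head"])
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
```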

## 0. Requirements
To run these examples with IPEX-LLM, we have some recommended requirements for your machine; please refer to [here](../README.md#recommended-requirements) for more information.

## Example: Predict Tokens using `generate()` API
In the example [generate.py](./generate.py), we show a basic use case for a CodeGemma model to predict the next N tokens using the `generate()` API, with IPEX-LLM INT4 optimizations.
### 1. Install
We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).

After installing conda, create a Python environment for IPEX-LLM:
```bash
conda create -n llm python=3.11 # recommend to use Python 3.11
conda activate llm

# install ipex-llm with 'all' option
pip install --pre --upgrade ipex-llm[all]

# According to CodeGemma's requirement, please make sure you are using a stable version of Transformers, 4.38.1 or newer.
pip install transformers==4.38.1
```

### 2. Run
After setting up the Python environment, you can run the example with the following steps.

#### 2.1 Client
On a client Windows machine, it is recommended to run directly with full utilization of all cores:
```powershell
python ./generate.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --prompt PROMPT --n-predict N_PREDICT
```
More information about arguments can be found in the [Arguments Info](#23-arguments-info) section. The expected output can be found in the [Sample Output](#24-sample-output) section.

#### 2.2 Server
For optimal performance on a server, it is recommended to set several environment variables (refer to [here](../README.md#best-known-configuration-on-linux) for more information), and run the example with all the physical cores of a single socket.

E.g. on Linux,
```bash
# set IPEX-LLM env variables
source ipex-llm-init

# e.g. for a server with 48 cores per socket
export OMP_NUM_THREADS=48
numactl -C 0-47 -m 0 python ./generate.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --prompt PROMPT --n-predict N_PREDICT
```
More information about arguments can be found in the [Arguments Info](#23-arguments-info) section. The expected output can be found in the [Sample Output](#24-sample-output) section.

#### 2.3 Arguments Info
In the example, several arguments can be passed to satisfy your requirements:

- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the huggingface repo id for the CodeGemma model to be downloaded, or the path to the huggingface checkpoint folder. It defaults to `'google/codegemma-7b-it'`.
- `--prompt PROMPT`: argument defining the prompt to be inferred (with integrated prompt format for chat). It defaults to `'Write a hello world program'`.
- `--n-predict N_PREDICT`: argument defining the max number of tokens to predict. It defaults to `32`.

#### 2.4 Sample Output
#### [google/codegemma-7b-it](https://huggingface.co/google/codegemma-7b-it)
```log
Inference time: xxxx s
-------------------- Prompt --------------------
<bos><start_of_turn>user
Write a hello world program<end_of_turn>
<start_of_turn>model

-------------------- Output --------------------
<start_of_turn>user
Write a hello world program<end_of_turn>
<start_of_turn>model
```python
print("Hello, world!")
```

This program will print the message "Hello, world!" to the console.
```

@@ -0,0 +1,68 @@
#
# Copyright 2016 The BigDL Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

import torch
import time
import argparse

from transformers import AutoModelForCausalLM, AutoTokenizer
from ipex_llm import optimize_model

# The instruction-tuned models use a chat template that must be adhered to for conversational use.
# see https://huggingface.co/google/codegemma-7b-it#chat-template.
chat = [
    { "role": "user", "content": "Write a hello world program" },
]


if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Predict Tokens using `generate()` API for CodeGemma model')
    parser.add_argument('--repo-id-or-model-path', type=str, default="google/codegemma-7b-it",
                        help='The huggingface repo id for the CodeGemma model to be downloaded'
                             ', or the path to the huggingface checkpoint folder')
    parser.add_argument('--prompt', type=str, default="Write a hello world program",
                        help='Prompt to infer')
    parser.add_argument('--n-predict', type=int, default=32,
                        help='Max tokens to predict')

    args = parser.parse_args()
    model_path = args.repo_id_or_model_path

    # Load model
    model = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True)

    # With only one line to enable IPEX-LLM optimization on the model.
    # To fix the issue that the output of codegemma-7b-it is abnormal, skip the 'lm_head' module during optimization.
    model = optimize_model(model, modules_to_not_convert=["lm_head"])

    # Load tokenizer
    tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

    # Generate predicted tokens
    with torch.inference_mode():
        chat[0]['content'] = args.prompt
        prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
        input_ids = tokenizer.encode(prompt, return_tensors="pt")
        st = time.time()
        output = model.generate(input_ids,
                                max_new_tokens=args.n_predict)
        end = time.time()
        output_str = tokenizer.decode(output[0], skip_special_tokens=True)
        print(f'Inference time: {end-st} s')
        print('-'*20, 'Prompt', '-'*20)
        print(prompt)
        print('-'*20, 'Output', '-'*20)
        print(output_str)

@@ -0,0 +1,144 @@
# CodeGemma
In this directory, you will find examples of how you can apply IPEX-LLM INT4 optimizations on CodeGemma models on [Intel GPUs](../../../README.md). For illustration purposes, we use [google/codegemma-7b-it](https://huggingface.co/google/codegemma-7b-it) as the reference CodeGemma model.
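
As a quick preview, the core loading call in [generate.py](./generate.py) looks roughly like the sketch below; note the extra `.to('xpu')` step that places the optimized model on the Intel GPU (the full, runnable script is included in this directory):
```python
from ipex_llm.transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("google/codegemma-7b-it",
                                             load_in_4bit=True,
                                             optimize_model=True,
                                             trust_remote_code=True,
                                             use_cache=True,
                                             modules_to_not_convert=["lm_head"])
model = model.to('xpu')  # move the INT4-optimized model to the Intel GPU
```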

## 0. Requirements
To run these examples with IPEX-LLM on Intel GPUs, we have some recommended requirements for your machine; please refer to [here](../../../README.md#requirements) for more information.

**Important: According to CodeGemma's requirement, please make sure you have installed `transformers==4.38.1` to run the example.**

## Example: Predict Tokens using `generate()` API
In the example [generate.py](./generate.py), we show a basic use case for a CodeGemma model to predict the next N tokens using the `generate()` API, with IPEX-LLM INT4 optimizations on Intel GPUs.
### 1. Install
#### 1.1 Installation on Linux
We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).

After installing conda, create a Python environment for IPEX-LLM:
```bash
conda create -n llm python=3.11 # recommend to use Python 3.11
conda activate llm
# below command will install intel_extension_for_pytorch==2.1.10+xpu as default
pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/

# According to CodeGemma's requirement, please make sure you are using a stable version of Transformers, 4.38.1 or newer.
pip install transformers==4.38.1
```

#### 1.2 Installation on Windows
We suggest using conda to manage the environment:
```bash
conda create -n llm python=3.11 libuv
conda activate llm
# below command will use pip to install the Intel oneAPI Base Toolkit 2024.0
pip install dpcpp-cpp-rt==2024.0.2 mkl-dpcpp==2024.0.0 onednn==2024.0.0

# below command will install intel_extension_for_pytorch==2.1.10+xpu as default
pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/

# According to CodeGemma's requirement, please make sure you are using a stable version of Transformers, 4.38.1 or newer.
pip install transformers==4.38.1
```

### 2. Configure OneAPI environment variables for Linux

> [!NOTE]
> Skip this step if you are running on Windows.

This is a required step on Linux for APT or offline-installed oneAPI. Skip this step for PIP-installed oneAPI.

```bash
source /opt/intel/oneapi/setvars.sh
```

### 3. Runtime Configurations
For optimal performance, it is recommended to set several environment variables. Please check out the suggestions based on your device.
#### 3.1 Configurations for Linux
<details>

<summary>For Intel Arc™ A-Series Graphics and Intel Data Center GPU Flex Series</summary>

```bash
export USE_XETLA=OFF
export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
export SYCL_CACHE_PERSISTENT=1
```

</details>

<details>

<summary>For Intel Data Center GPU Max Series</summary>

```bash
export LD_PRELOAD=${LD_PRELOAD}:${CONDA_PREFIX}/lib/libtcmalloc.so
export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
export SYCL_CACHE_PERSISTENT=1
export ENABLE_SDP_FUSION=1
```
> Note: Please note that `libtcmalloc.so` can be installed by `conda install -c conda-forge -y gperftools=2.10`.
</details>

<details>

<summary>For Intel iGPU</summary>

```bash
export SYCL_CACHE_PERSISTENT=1
export BIGDL_LLM_XMX_DISABLED=1
```

</details>

#### 3.2 Configurations for Windows
<details>

<summary>For Intel iGPU</summary>

```cmd
set SYCL_CACHE_PERSISTENT=1
set BIGDL_LLM_XMX_DISABLED=1
```

</details>

<details>

<summary>For Intel Arc™ A-Series Graphics</summary>

```cmd
set SYCL_CACHE_PERSISTENT=1
```

</details>

> [!NOTE]
> The first time each model runs on an Intel iGPU, Intel Arc™ A300-Series, or Pro A60, it may take several minutes to compile.
### 4. Running examples

```
python ./generate.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --prompt PROMPT --n-predict N_PREDICT
```

Arguments info:
- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the huggingface repo id for the CodeGemma model to be downloaded, or the path to the huggingface checkpoint folder. It defaults to `'google/codegemma-7b-it'`.
- `--prompt PROMPT`: argument defining the prompt to be inferred (with integrated prompt format for chat). It defaults to `'Write a hello world program'`.
- `--n-predict N_PREDICT`: argument defining the max number of tokens to predict. It defaults to `32`.
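
The prompt in the sample output below is produced by the model's chat template (see the comments in [generate.py](./generate.py)). A minimal sketch of that rendering step:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/codegemma-7b-it", trust_remote_code=True)
chat = [{"role": "user", "content": "Write a hello world program"}]
# renders the conversation into the <start_of_turn> format shown in the sample output below
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
print(prompt)
```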

##### Sample Output
##### [google/codegemma-7b-it](https://huggingface.co/google/codegemma-7b-it)
```log
Inference time: xxxx s
-------------------- Prompt --------------------
<bos><start_of_turn>user
Write a hello world program<end_of_turn>
<start_of_turn>model

-------------------- Output --------------------
<start_of_turn>user
Write a hello world program<end_of_turn>
<start_of_turn>model
```python
print("Hello, world!")
```

This program will print the message "Hello, world!" to the console.
```

@@ -0,0 +1,80 @@
#
# Copyright 2016 The BigDL Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

import torch
import time
import argparse

from ipex_llm.transformers import AutoModelForCausalLM
from transformers import AutoTokenizer

# The instruction-tuned models use a chat template that must be adhered to for conversational use.
# see https://huggingface.co/google/codegemma-7b-it#chat-template.
chat = [
    { "role": "user", "content": "Write a hello world program" },
]

if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Predict Tokens using `generate()` API for CodeGemma model')
    parser.add_argument('--repo-id-or-model-path', type=str, default="google/codegemma-7b-it",
                        help='The huggingface repo id for the CodeGemma model to be downloaded'
                             ', or the path to the huggingface checkpoint folder')
    parser.add_argument('--prompt', type=str, default="Write a hello world program",
                        help='Prompt to infer')
    parser.add_argument('--n-predict', type=int, default=32,
                        help='Max tokens to predict')

    args = parser.parse_args()
    model_path = args.repo_id_or_model_path

    # Load model in 4 bit,
    # which converts the relevant layers in the model into INT4 format.
    # To fix the issue that the output of codegemma-7b-it is abnormal, skip the 'lm_head' module during optimization.
    # When running LLMs on Intel iGPUs for Windows users, we recommend setting `cpu_embedding=True` in the from_pretrained function.
    # This will allow the memory-intensive embedding layer to utilize the CPU instead of iGPU.
    model = AutoModelForCausalLM.from_pretrained(model_path,
                                                 load_in_4bit=True,
                                                 optimize_model=True,
                                                 trust_remote_code=True,
                                                 use_cache=True,
                                                 modules_to_not_convert=["lm_head"])
    model = model.to('xpu')

    # Load tokenizer
    tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

    # Generate predicted tokens
    with torch.inference_mode():
        chat[0]['content'] = args.prompt
        prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
        input_ids = tokenizer.encode(prompt, return_tensors="pt").to('xpu')
        # ipex_llm model needs a warmup, then inference time can be accurate
        output = model.generate(input_ids,
                                max_new_tokens=args.n_predict)

        # start inference
        st = time.time()
        output = model.generate(input_ids,
                                max_new_tokens=args.n_predict)
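        # make sure all queued XPU operations have finished before stopping the timer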
        torch.xpu.synchronize()
        end = time.time()
        output = output.cpu()
        output_str = tokenizer.decode(output[0], skip_special_tokens=True)
        print(f'Inference time: {end-st} s')
        print('-'*20, 'Prompt', '-'*20)
        print(prompt)
        print('-'*20, 'Output', '-'*20)
        print(output_str)

python/llm/example/GPU/PyTorch-Models/Model/codegemma/README.md · 145 lines · Normal file

@@ -0,0 +1,145 @@
# CodeGemma
In this directory, you will find examples of how you can use the IPEX-LLM `optimize_model` API to accelerate CodeGemma models. For illustration purposes, we use [google/codegemma-7b-it](https://huggingface.co/google/codegemma-7b-it) as the reference CodeGemma model.
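
As a quick preview, the core flow in [generate.py](./generate.py) looks roughly like the sketch below; the full, runnable script with argument parsing and timing is included in this directory:
```python
from transformers import AutoModelForCausalLM
from ipex_llm import optimize_model

model = AutoModelForCausalLM.from_pretrained("google/codegemma-7b-it",
                                             trust_remote_code=True,
                                             torch_dtype='auto',
                                             low_cpu_mem_usage=True)
# enable IPEX-LLM low-bit optimization; 'lm_head' is skipped to avoid abnormal output
model = optimize_model(model, modules_to_not_convert=["lm_head"])
model = model.to('xpu')  # move the optimized model to the Intel GPU
```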

## 0. Requirements
To run these examples with IPEX-LLM on Intel GPUs, we have some recommended requirements for your machine; please refer to [here](../../../README.md#requirements) for more information.

**Important: According to CodeGemma's requirement, please make sure you have installed `transformers==4.38.1` to run the example.**

## Example: Predict Tokens using `generate()` API
In the example [generate.py](./generate.py), we show a basic use case for a CodeGemma model to predict the next N tokens using the `generate()` API, with IPEX-LLM INT4 optimizations on Intel GPUs.
### 1. Install
#### 1.1 Installation on Linux
We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).

After installing conda, create a Python environment for IPEX-LLM:
```bash
conda create -n llm python=3.11 # recommend to use Python 3.11
conda activate llm
# below command will install intel_extension_for_pytorch==2.1.10+xpu as default
pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/

# According to CodeGemma's requirement, please make sure you are using a stable version of Transformers, 4.38.1 or newer.
pip install transformers==4.38.1
```

#### 1.2 Installation on Windows
We suggest using conda to manage the environment:
```bash
conda create -n llm python=3.11 libuv
conda activate llm
# below command will use pip to install the Intel oneAPI Base Toolkit 2024.0
pip install dpcpp-cpp-rt==2024.0.2 mkl-dpcpp==2024.0.0 onednn==2024.0.0

# below command will install intel_extension_for_pytorch==2.1.10+xpu as default
pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/

# According to CodeGemma's requirement, please make sure you are using a stable version of Transformers, 4.38.1 or newer.
pip install transformers==4.38.1
```

### 2. Configure OneAPI environment variables for Linux

> [!NOTE]
> Skip this step if you are running on Windows.

This is a required step on Linux for APT or offline-installed oneAPI. Skip this step for PIP-installed oneAPI.

```bash
source /opt/intel/oneapi/setvars.sh
```

### 3. Runtime Configurations
For optimal performance, it is recommended to set several environment variables. Please check out the suggestions based on your device.
#### 3.1 Configurations for Linux
<details>

<summary>For Intel Arc™ A-Series Graphics and Intel Data Center GPU Flex Series</summary>

```bash
export USE_XETLA=OFF
export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
export SYCL_CACHE_PERSISTENT=1
```

</details>

<details>

<summary>For Intel Data Center GPU Max Series</summary>

```bash
export LD_PRELOAD=${LD_PRELOAD}:${CONDA_PREFIX}/lib/libtcmalloc.so
export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
export SYCL_CACHE_PERSISTENT=1
export ENABLE_SDP_FUSION=1
```
> Note: Please note that `libtcmalloc.so` can be installed by `conda install -c conda-forge -y gperftools=2.10`.
</details>

<details>

<summary>For Intel iGPU</summary>

```bash
export SYCL_CACHE_PERSISTENT=1
export BIGDL_LLM_XMX_DISABLED=1
```

</details>

#### 3.2 Configurations for Windows
<details>

<summary>For Intel iGPU</summary>

```cmd
set SYCL_CACHE_PERSISTENT=1
set BIGDL_LLM_XMX_DISABLED=1
```

</details>

<details>

<summary>For Intel Arc™ A-Series Graphics</summary>

```cmd
set SYCL_CACHE_PERSISTENT=1
```

</details>

> [!NOTE]
> The first time each model runs on an Intel iGPU, Intel Arc™ A300-Series, or Pro A60, it may take several minutes to compile.
### 4. Running examples

```bash
python ./generate.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --prompt PROMPT --n-predict N_PREDICT
```

In the example, several arguments can be passed to satisfy your requirements:

- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the huggingface repo id for the CodeGemma model to be downloaded, or the path to the huggingface checkpoint folder. It defaults to `'google/codegemma-7b-it'`.
- `--prompt PROMPT`: argument defining the prompt to be inferred (with integrated prompt format for chat). It defaults to `'Write a hello world program'`.
- `--n-predict N_PREDICT`: argument defining the max number of tokens to predict. It defaults to `32`.

#### 4.1 Sample Output
#### [google/codegemma-7b-it](https://huggingface.co/google/codegemma-7b-it)
```log
Inference time: xxxx s
-------------------- Prompt --------------------
<bos><start_of_turn>user
Write a hello world program<end_of_turn>
<start_of_turn>model

-------------------- Output --------------------
<start_of_turn>user
Write a hello world program<end_of_turn>
<start_of_turn>model
```python
print("Hello, world!")
```

This program will print the message "Hello, world!" to the console.
```

@@ -0,0 +1,82 @@
#
# Copyright 2016 The BigDL Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

import torch
import time
import argparse

from transformers import AutoModelForCausalLM, AutoTokenizer
from ipex_llm import optimize_model

# The instruction-tuned models use a chat template that must be adhered to for conversational use.
# see https://huggingface.co/google/codegemma-7b-it#chat-template.
chat = [
    { "role": "user", "content": "Write a hello world program" },
]


if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Predict Tokens using `generate()` API for CodeGemma model')
    parser.add_argument('--repo-id-or-model-path', type=str, default="google/codegemma-7b-it",
                        help='The huggingface repo id for the CodeGemma model to be downloaded'
                             ', or the path to the huggingface checkpoint folder')
    parser.add_argument('--prompt', type=str, default="Write a hello world program",
                        help='Prompt to infer')
    parser.add_argument('--n-predict', type=int, default=32,
                        help='Max tokens to predict')

    args = parser.parse_args()
    model_path = args.repo_id_or_model_path

    # Load model
    model = AutoModelForCausalLM.from_pretrained(model_path,
                                                 trust_remote_code=True,
                                                 torch_dtype='auto',
                                                 low_cpu_mem_usage=True)

    # With only one line to enable IPEX-LLM optimization on the model.
    # To fix the issue that the output of codegemma-7b-it is abnormal, skip the 'lm_head' module during optimization.
    # When running LLMs on Intel iGPUs for Windows users, we recommend setting `cpu_embedding=True` in the optimize_model function.
    # This will allow the memory-intensive embedding layer to utilize the CPU instead of iGPU.
    model = optimize_model(model, modules_to_not_convert=["lm_head"])

    model = model.to('xpu')

    # Load tokenizer
    tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

    # Generate predicted tokens
    with torch.inference_mode():
        chat[0]['content'] = args.prompt
        prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
        input_ids = tokenizer.encode(prompt, return_tensors="pt").to('xpu')
        # ipex_llm model needs a warmup, then inference time can be accurate
        output = model.generate(input_ids,
                                max_new_tokens=args.n_predict)

        # start inference
        st = time.time()
        output = model.generate(input_ids,
                                max_new_tokens=args.n_predict)
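        # make sure all queued XPU operations have finished before stopping the timer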
        torch.xpu.synchronize()
        end = time.time()
        output = output.cpu()
        output_str = tokenizer.decode(output[0], skip_special_tokens=True)
        print(f'Inference time: {end-st} s')
        print('-'*20, 'Prompt', '-'*20)
        print(prompt)
        print('-'*20, 'Output', '-'*20)
        print(output_str)