LLM: add replit and starcoder to gpu pytorch model example (#9154)
parent 797b156a0d
commit db7f938fdc

5 changed files with 270 additions and 0 deletions

python/llm/example/GPU/PyTorch-Models/Model/README.md (4 additions)
			@ -9,6 +9,10 @@ You can use `optimize_model` API to accelerate general PyTorch models on Intel G
| ChatGLM2       | [link](chatglm2)                                         |
| Baichuan       | [link](baichuan)                                         |
| Baichuan2      | [link](baichuan2)                                        |
| Replit         | [link](replit)                                           |
| StarCoder      | [link](starcoder)                                        |
| Dolly v1       | [link](dolly-v1)                                         |
| Dolly v2       | [link](dolly-v2)                                         |

## Verified Hardware Platforms

python/llm/example/GPU/PyTorch-Models/Model/replit/README.md (60 additions, new file)
			@ -0,0 +1,60 @@
# Replit
In this directory, you will find examples of how you can use the BigDL-LLM `optimize_model` API to accelerate Replit models. For illustration purposes, we use [replit/replit-code-v1-3b](https://huggingface.co/replit/replit-code-v1-3b) as a reference Replit model.
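
At its core, [generate.py](./generate.py) follows a three-step pattern, distilled here as a minimal sketch: load the model with `transformers`, optimize it with one call, and move it to the `xpu` device.

```python
import intel_extension_for_pytorch as ipex  # registers Intel GPU ('xpu') support in PyTorch
from transformers import AutoModelForCausalLM
from bigdl.llm import optimize_model

model = AutoModelForCausalLM.from_pretrained("replit/replit-code-v1-3b",
                                             trust_remote_code=True,
                                             torch_dtype='auto',
                                             low_cpu_mem_usage=True)
model = optimize_model(model)  # a single call applies BigDL-LLM low-bit optimization
model = model.to('xpu')        # move the optimized model to the Intel GPU
```
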
## Requirements
To run these examples with BigDL-LLM on Intel GPUs, we have some recommended requirements for your machine; please refer to [here](../README.md#recommended-requirements) for more information.

## Example: Predict Tokens using `generate()` API
In the example [generate.py](./generate.py), we show a basic use case for a Replit model to predict the next N tokens using the `generate()` API, with BigDL-LLM INT4 optimizations on Intel GPUs.

### 1. Install
We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).

After installing conda, create a Python environment for BigDL-LLM:
```bash
conda create -n llm python=3.9  # Python 3.9 is recommended
conda activate llm

# the command below installs intel_extension_for_pytorch==2.0.110+xpu by default;
# you can install a specific ipex/torch version to fit your needs
pip install --pre --upgrade bigdl-llm[xpu] -f https://developer.intel.com/ipex-whl-stable-xpu
```

### 2. Configure OneAPI environment variables
```bash
source /opt/intel/oneapi/setvars.sh
```

### 3. Run

For optimal performance on Arc, it is recommended to set several environment variables.

```bash
export USE_XETLA=OFF
export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
```

Then run the example:

```bash
python ./generate.py --prompt 'def print_hello_world():'
```

In the example, several arguments can be passed to satisfy your requirements:

- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the huggingface repo id for the Replit model to be downloaded, or the path to the huggingface checkpoint folder. It defaults to `'replit/replit-code-v1-3b'`.
- `--prompt PROMPT`: argument defining the prompt to be inferred. It defaults to `'def print_hello_world():'`.
- `--n-predict N_PREDICT`: argument defining the max number of tokens to predict. It defaults to `32`.

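For instance, a hypothetical invocation combining these arguments (the prompt and token count below are illustrative placeholders):

```bash
python ./generate.py --repo-id-or-model-path 'replit/replit-code-v1-3b' \
                     --prompt 'def fibonacci(n):' \
                     --n-predict 64
```
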
#### Sample Output
#### [replit/replit-code-v1-3b](https://huggingface.co/replit/replit-code-v1-3b)
```log
Inference time: xxxx s
-------------------- Output --------------------
def print_hello_world():
    print("Hello")
    print("World")

print_hello_world()


def print_hello_world():
    print
```

python/llm/example/GPU/PyTorch-Models/Model/replit/generate.py (73 additions, new file)
			@ -0,0 +1,73 @@
#
# Copyright 2016 The BigDL Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

import torch
import intel_extension_for_pytorch as ipex  # registers Intel GPU ('xpu') support in PyTorch
import time
import argparse

from transformers import AutoModelForCausalLM, AutoTokenizer
from bigdl.llm import optimize_model

# you could tune the prompt format based on your own model
REPLIT_PROMPT_FORMAT = "{prompt}"

if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Predict Tokens using `generate()` API for Replit model')
    parser.add_argument('--repo-id-or-model-path', type=str, default="replit/replit-code-v1-3b",
                        help='The huggingface repo id for the Replit model to be downloaded'
                             ', or the path to the huggingface checkpoint folder')
    parser.add_argument('--prompt', type=str, default="def print_hello_world():",
                        help='Prompt to infer')
    parser.add_argument('--n-predict', type=int, default=32,
                        help='Max tokens to predict')

    args = parser.parse_args()
    model_path = args.repo_id_or_model_path

    # Load model
    model = AutoModelForCausalLM.from_pretrained(model_path,
                                                 trust_remote_code=True,
                                                 torch_dtype='auto',
                                                 low_cpu_mem_usage=True)

    # Enable BigDL-LLM optimization on the model with a single line
    model = optimize_model(model)

    model = model.to('xpu')

    # Load tokenizer
    tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

    # Generate predicted tokens
    with torch.inference_mode():
        prompt = REPLIT_PROMPT_FORMAT.format(prompt=args.prompt)
        input_ids = tokenizer.encode(prompt, return_tensors="pt").to('xpu')
        # the ipex-optimized model needs a warmup run so that the measured inference time is accurate
        output = model.generate(input_ids,
                                max_new_tokens=args.n_predict)

        # start inference
        st = time.time()
        output = model.generate(input_ids,
                                max_new_tokens=args.n_predict)
        torch.xpu.synchronize()  # wait for queued XPU work to finish before stopping the timer
        end = time.time()
        output = output.cpu()
        output_str = tokenizer.decode(output[0], skip_special_tokens=True)
        print(f'Inference time: {end-st} s')
        print('-'*20, 'Output', '-'*20)
        print(output_str)

python/llm/example/GPU/PyTorch-Models/Model/starcoder/README.md (60 additions, new file)
			@ -0,0 +1,60 @@
# StarCoder
In this directory, you will find examples of how you can use the BigDL-LLM `optimize_model` API to accelerate StarCoder models. For illustration purposes, we use [bigcode/starcoder](https://huggingface.co/bigcode/starcoder) as a reference StarCoder model.

## Requirements
To run these examples with BigDL-LLM on Intel GPUs, we have some recommended requirements for your machine; please refer to [here](../README.md#recommended-requirements) for more information.

## Example: Predict Tokens using `generate()` API
In the example [generate.py](./generate.py), we show a basic use case for a StarCoder model to predict the next N tokens using the `generate()` API, with BigDL-LLM INT4 optimizations on Intel GPUs.

### 1. Install
We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).

After installing conda, create a Python environment for BigDL-LLM:
```bash
conda create -n llm python=3.9  # Python 3.9 is recommended
conda activate llm

# the command below installs intel_extension_for_pytorch==2.0.110+xpu by default;
# you can install a specific ipex/torch version to fit your needs
pip install --pre --upgrade bigdl-llm[xpu] -f https://developer.intel.com/ipex-whl-stable-xpu
```

### 2. Configure OneAPI environment variables
```bash
source /opt/intel/oneapi/setvars.sh
```

### 3. Run

For optimal performance on Arc, it is recommended to set several environment variables.

```bash
export USE_XETLA=OFF
export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
```

Then run the example:

```bash
python ./generate.py --prompt 'def print_hello_world():'
```

In the example, several arguments can be passed to satisfy your requirements:

- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the huggingface repo id for the StarCoder model to be downloaded, or the path to the huggingface checkpoint folder. It defaults to `'bigcode/starcoder'`.
- `--prompt PROMPT`: argument defining the prompt to be inferred. It defaults to `'def print_hello_world():'`.
- `--n-predict N_PREDICT`: argument defining the max number of tokens to predict. It defaults to `32`.

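For instance, a hypothetical invocation combining these arguments (the prompt and token count below are illustrative placeholders):

```bash
python ./generate.py --repo-id-or-model-path 'bigcode/starcoder' \
                     --prompt 'def quicksort(arr):' \
                     --n-predict 64
```
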
#### Sample Output
#### [bigcode/starcoder](https://huggingface.co/bigcode/starcoder)
```log
Inference time: xxxx s
-------------------- Output --------------------
def print_hello_world():
    print("Hello World!")


def print_hello_name(name):
    print(f"Hello {name}!")


def print_
```

python/llm/example/GPU/PyTorch-Models/Model/starcoder/generate.py (73 additions, new file)
			@ -0,0 +1,73 @@
#
# Copyright 2016 The BigDL Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

import torch
import intel_extension_for_pytorch as ipex  # registers Intel GPU ('xpu') support in PyTorch
import time
import argparse

from transformers import AutoModelForCausalLM, AutoTokenizer
from bigdl.llm import optimize_model

# you could tune the prompt format based on your own model
STARCODER_PROMPT_FORMAT = "{prompt}"

if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Predict Tokens using `generate()` API for StarCoder model')
    parser.add_argument('--repo-id-or-model-path', type=str, default="bigcode/starcoder",
                        help='The huggingface repo id for the StarCoder model to be downloaded'
                             ', or the path to the huggingface checkpoint folder')
    parser.add_argument('--prompt', type=str, default="def print_hello_world():",
                        help='Prompt to infer')
    parser.add_argument('--n-predict', type=int, default=32,
                        help='Max tokens to predict')

    args = parser.parse_args()
    model_path = args.repo_id_or_model_path

    # Load model
    model = AutoModelForCausalLM.from_pretrained(model_path,
                                                 trust_remote_code=True,
                                                 torch_dtype='auto',
                                                 low_cpu_mem_usage=True)

    # Enable BigDL-LLM optimization on the model with a single line
    model = optimize_model(model)

    model = model.to('xpu')

    # Load tokenizer
    tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

    # Generate predicted tokens
    with torch.inference_mode():
        prompt = STARCODER_PROMPT_FORMAT.format(prompt=args.prompt)
        input_ids = tokenizer.encode(prompt, return_tensors="pt").to('xpu')
        # the ipex-optimized model needs a warmup run so that the measured inference time is accurate
        output = model.generate(input_ids,
                                max_new_tokens=args.n_predict)

        # start inference
        st = time.time()
        output = model.generate(input_ids,
                                max_new_tokens=args.n_predict)
        torch.xpu.synchronize()  # wait for queued XPU work to finish before stopping the timer
        end = time.time()
        output = output.cpu()
        output_str = tokenizer.decode(output[0], skip_special_tokens=True)
        print(f'Inference time: {end-st} s')
        print('-'*20, 'Output', '-'*20)
        print(output_str)