
CodeLlama

In this directory, you will find examples of how to apply BigDL-LLM INT4 optimizations to CodeLlama models on Intel GPUs. For illustration purposes, we use codellama/CodeLlama-7b-hf as the reference CodeLlama model.

0. Requirements

To run these examples with BigDL-LLM on Intel GPUs, there are some recommended requirements for your machine; please refer to here for more information.

Example: Predict Tokens using generate() API

In the example generate.py, we show a basic use case for a CodeLlama model to predict the next N tokens using the generate() API, with BigDL-LLM INT4 optimizations on Intel GPUs.
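In essence, the script does something like the sketch below: load the model through BigDL-LLM's AutoModelForCausalLM with load_in_4bit=True, move it to the 'xpu' device, and call generate(). This is a simplified illustration, not the shipped script; see generate.py for the full version.

from bigdl.llm.transformers import AutoModelForCausalLM
from transformers import CodeLlamaTokenizer
import torch

model_path = 'codellama/CodeLlama-7b-hf'  # or a local checkpoint folder

# Load the model with BigDL-LLM INT4 optimizations, then move it to the Intel GPU
model = AutoModelForCausalLM.from_pretrained(model_path,
                                             load_in_4bit=True,
                                             trust_remote_code=True)
model = model.to('xpu')

tokenizer = CodeLlamaTokenizer.from_pretrained(model_path, trust_remote_code=True)

prompt = 'def print_hello_world():'
input_ids = tokenizer.encode(prompt, return_tensors='pt').to('xpu')

with torch.inference_mode():
    output = model.generate(input_ids, max_new_tokens=32)
    torch.xpu.synchronize()  # wait for the GPU to finish before decoding

print(tokenizer.decode(output[0], skip_special_tokens=True))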

1. Install

We suggest using conda to manage the environment:

conda create -n llm python=3.9
conda activate llm
# below command will install intel_extension_for_pytorch==2.0.110+xpu as default
# you can install specific ipex/torch version for your need
pip install --pre --upgrade bigdl-llm[xpu] -f https://developer.intel.com/ipex-whl-stable-xpu
pip install transformers==4.34.1 # CodeLlamaTokenizer is only supported in higher versions of transformers

2. Configure OneAPI environment variables

source /opt/intel/oneapi/setvars.sh

3. Run

For optimal performance on Arc, it is recommended to set several environment variables.

export USE_XETLA=OFF
export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
python ./generate.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --prompt PROMPT --n-predict N_PREDICT

Arguments info:

  • --repo-id-or-model-path REPO_ID_OR_MODEL_PATH: argument defining the Hugging Face repo id for the CodeLlama model to be downloaded, or the path to the Hugging Face checkpoint folder. It defaults to 'codellama/CodeLlama-7b-hf'.
  • --prompt PROMPT: argument defining the prompt to be inferred (with integrated prompt format for chat). It defaults to 'def print_hello_world():'.
  • --n-predict N_PREDICT: argument defining the max number of tokens to predict. It defaults to 32.
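
For instance, the following invocation (the prompt and token count are illustrative values, not defaults) generates 64 new tokens from a custom prompt:

python ./generate.py --repo-id-or-model-path codellama/CodeLlama-7b-hf --prompt 'def fibonacci(n):' --n-predict 64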

Sample Output

codellama/CodeLlama-7b-hf

Inference time: xxxx s
-------------------- Prompt --------------------
def print_hello_world():
<FILL_ME>
-------------------- Output --------------------
def print_hello_world():
    print("Hello World!")


def print_hello_world_with_args(name):
    print(f"Hello {name