
# Torch Graph Mode

This example shows how to run torch graph mode on Intel Arc™ A-Series Graphics with ipex-llm, using gpt2-medium on a classification task as the illustration:
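Graph mode captures the model as a static graph ahead of time instead of dispatching operators one by one in eager mode. As a rough, CPU-only sketch of the idea (the tiny model and shapes here are illustrative, not what the benchmark script actually uses), tracing with `torch.jit.trace` turns an eager module into a graph-executed one:

```python
import torch
import torch.nn as nn

# Tiny stand-in model; the real example runs GPT2-Medium on an XPU device.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2)).eval()
example = torch.randn(1, 16)

with torch.no_grad():
    # Capture a static graph from one example input
    traced = torch.jit.trace(model, example)
    out = traced(example)

print(out.shape)  # torch.Size([1, 2])
```

Because the graph is fixed at trace time, the runtime can fuse and optimize operators across the whole forward pass, which is where the speedups measured in step 3 come from.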

### 1. Install

```bash
conda create -n ipex-llm python=3.11
conda activate ipex-llm
pip install --pre --upgrade ipex-llm[xpu_arc]==2.2.0 --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/cn/
pip install --pre pytorch-triton-xpu==3.0.0+1b2f15840e --index-url https://download.pytorch.org/whl/nightly/xpu
conda install -c conda-forge libstdcxx-ng
unset OCL_ICD_VENDORS
```
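After installation, you can optionally sanity-check that the XPU device is visible to PyTorch. This snippet is a sketch, not part of the official example; the `hasattr` guard keeps it safe on builds without XPU support, and depending on your PyTorch/IPEX versions you may need `import intel_extension_for_pytorch` first to register the XPU backend:

```python
# Optional post-install check (not part of the official example).
# On a correct install with a supported GPU, an Intel Arc device should be reported.
import torch

xpu_ok = hasattr(torch, "xpu") and torch.xpu.is_available()
if xpu_ok:
    # get_device_name is available once the XPU backend is registered
    print(f"XPU ready: {torch.xpu.get_device_name(0)}")
else:
    print("XPU not detected; re-check the installation and oneAPI setup.")
```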

### 2. Configure oneAPI Environment Variables

> [!NOTE]
> Skip this step if you are running on Windows.

This is a required step on Linux for APT or offline-installed oneAPI. Skip this step for pip-installed oneAPI.

```bash
source /opt/intel/oneapi/setvars.sh
```

### 3. Run

Convert the text-generation GPT2-Medium model into a classification model:

```bash
# The convert step needs to access the internet
export http_proxy=http://your_proxy_url
export https_proxy=http://your_proxy_url

# This will yield gpt2-medium-classification under /llm/models in the container
python convert-model-textgen-to-classfication.py --model-path MODEL_PATH
```

This will yield a model directory whose name ends with `-classification` next to your input model path.
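Conceptually, the conversion re-hosts the GPT-2 transformer weights under a sequence-classification head in place of the language-modeling head. A rough, hypothetical sketch of the idea with Hugging Face `transformers` (a tiny random-weight config stands in for gpt2-medium here; this is not the actual script's code):

```python
from transformers import GPT2Config, GPT2ForSequenceClassification

# Tiny config so the sketch runs quickly; the real script loads weights from MODEL_PATH.
config = GPT2Config(n_layer=2, n_head=2, n_embd=64, num_labels=2)
model = GPT2ForSequenceClassification(config)

# GPT-2 has no pad token by default; batched classification needs one.
model.config.pad_token_id = config.eos_token_id

# A linear classification head replaces the LM head.
print(model.score)  # Linear(in_features=64, out_features=2, bias=False)
```

Saving such a model with `save_pretrained` would produce a directory loadable by the benchmark script in the next step.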

Benchmark GPT2-Medium's performance with the IPEX-LLM engine:

```bash
ipexrun xpu gpt2-graph-mode-benchmark.py --device xpu --engine ipex-llm --batch 16 --model-path MODEL_PATH

# You will see key output like:
# Average time taken (excluding the first two loops): xxxx seconds, Classification per seconds is xxxx
```