Remove PyTorch 2.3 support for Intel GPU (#13097)

* Remove PyTorch 2.3 installation option for GPU

* Remove xpu_lnl option in installation guides for docs

* Update BMG quickstart

* Remove PyTorch 2.3 dependencies for GPU examples

* Update the graphmode example to use stable version 2.2.0

* Fix based on comments
Yuwen Hu authored on 2025-04-22 10:26:16 +08:00, committed by GitHub
parent a2a35fdfad
commit 0801d27a6f
9 changed files with 92 additions and 291 deletions


@@ -46,93 +46,47 @@ We recommend using [Miniforge](https://conda-forge.org/download/) to create a py
 The easiest way to install `ipex-llm` is with the following commands.
-- For **Intel Core™ Ultra Processors (Series 2) with processor number 2xxV (code name Lunar Lake)**:
-  Choose either US or CN website for `extra-index-url`:
-  - For **US**:
-    ```cmd
-    conda create -n llm python=3.11 libuv
-    conda activate llm
-    pip install --pre --upgrade ipex-llm[xpu_lnl] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/lnl/us/
-    ```
-  - For **CN**:
-    ```cmd
-    conda create -n llm python=3.11 libuv
-    conda activate llm
-    pip install --pre --upgrade ipex-llm[xpu_lnl] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/lnl/cn/
-    ```
-- For **other Intel iGPU and dGPU**:
-  Choose either US or CN website for `extra-index-url`:
-  - For **US**:
-    ```cmd
-    conda create -n llm python=3.11 libuv
-    conda activate llm
-    pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
-    ```
-  - For **CN**:
-    ```cmd
-    conda create -n llm python=3.11 libuv
-    conda activate llm
-    pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/cn/
-    ```
+Choose either US or CN website for `extra-index-url`:
+- For **US**:
+  ```cmd
+  conda create -n llm python=3.11 libuv
+  conda activate llm
+  pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
+  ```
+- For **CN**:
+  ```cmd
+  conda create -n llm python=3.11 libuv
+  conda activate llm
+  pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/cn/
+  ```
 #### Install IPEX-LLM From Wheel
 If you encounter network issues when installing IPEX, you can also install IPEX-LLM dependencies for Intel XPU from source archives. First you need to download and install torch/torchvision/ipex from the wheels listed below before installing `ipex-llm`.
-- For **Intel Core™ Ultra Processors (Series 2) with processor number 2xxV (code name Lunar Lake)**:
-  Download the wheels on Windows system:
-  ```
-  wget https://intel-extension-for-pytorch.s3.amazonaws.com/ipex_stable/lnl/torch-2.3.1%2Bcxx11.abi-cp311-cp311-win_amd64.whl
-  wget https://intel-extension-for-pytorch.s3.amazonaws.com/ipex_stable/lnl/torchvision-0.18.1%2Bcxx11.abi-cp311-cp311-win_amd64.whl
-  wget https://intel-extension-for-pytorch.s3.amazonaws.com/ipex_stable/lnl/intel_extension_for_pytorch-2.3.110%2Bxpu-cp311-cp311-win_amd64.whl
-  ```
-  You may install dependencies directly from the wheel archives and then install `ipex-llm` using the following commands:
-  ```
-  pip install torch-2.3.1+cxx11.abi-cp311-cp311-win_amd64.whl
-  pip install torchvision-0.18.1+cxx11.abi-cp311-cp311-win_amd64.whl
-  pip install intel_extension_for_pytorch-2.3.110+xpu-cp311-cp311-win_amd64.whl
-  pip install --pre --upgrade ipex-llm[xpu_lnl]
-  ```
-- For **other Intel iGPU and dGPU**:
-  Download the wheels on Windows system:
-  ```
-  wget https://intel-extension-for-pytorch.s3.amazonaws.com/ipex_stable/xpu/torch-2.1.0a0%2Bcxx11.abi-cp311-cp311-win_amd64.whl
-  wget https://intel-extension-for-pytorch.s3.amazonaws.com/ipex_stable/xpu/torchvision-0.16.0a0%2Bcxx11.abi-cp311-cp311-win_amd64.whl
-  wget https://intel-extension-for-pytorch.s3.amazonaws.com/ipex_stable/xpu/intel_extension_for_pytorch-2.1.10%2Bxpu-cp311-cp311-win_amd64.whl
-  ```
-  You may install dependencies directly from the wheel archives and then install `ipex-llm` using the following commands:
-  ```
-  pip install torch-2.1.0a0+cxx11.abi-cp311-cp311-win_amd64.whl
-  pip install torchvision-0.16.0a0+cxx11.abi-cp311-cp311-win_amd64.whl
-  pip install intel_extension_for_pytorch-2.1.10+xpu-cp311-cp311-win_amd64.whl
-  pip install --pre --upgrade ipex-llm[xpu]
-  ```
+Download the wheels on Windows system:
+```
+wget https://intel-extension-for-pytorch.s3.amazonaws.com/ipex_stable/xpu/torch-2.1.0a0%2Bcxx11.abi-cp311-cp311-win_amd64.whl
+wget https://intel-extension-for-pytorch.s3.amazonaws.com/ipex_stable/xpu/torchvision-0.16.0a0%2Bcxx11.abi-cp311-cp311-win_amd64.whl
+wget https://intel-extension-for-pytorch.s3.amazonaws.com/ipex_stable/xpu/intel_extension_for_pytorch-2.1.10%2Bxpu-cp311-cp311-win_amd64.whl
+```
+You may install dependencies directly from the wheel archives and then install `ipex-llm` using the following commands:
+```
+pip install torch-2.1.0a0+cxx11.abi-cp311-cp311-win_amd64.whl
+pip install torchvision-0.16.0a0+cxx11.abi-cp311-cp311-win_amd64.whl
+pip install intel_extension_for_pytorch-2.1.10+xpu-cp311-cp311-win_amd64.whl
+pip install --pre --upgrade ipex-llm[xpu]
+```
 > [!NOTE]
 > All the wheel packages mentioned here are for Python 3.11. If you would like to use Python 3.9 or 3.10, you should modify the wheel names for ``torch``, ``torchvision``, and ``intel_extension_for_pytorch`` by replacing ``cp311`` with ``cp39`` or ``cp310``, respectively.
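The wheel-renaming rule in the note above can be sketched programmatically. Below is a minimal, hypothetical helper (the `retag_wheel` name is ours, not part of ipex-llm) that rewrites the interpreter and ABI tags of a PEP 427-style wheel filename:

```python
# Rewrite the Python interpreter/ABI tags in a wheel filename, e.g. cp311 -> cp310.
# Hypothetical helper; wheel names follow PEP 427: name-version-python-abi-platform.whl
def retag_wheel(filename: str, python_version: str) -> str:
    tag = "cp" + python_version.replace(".", "")  # "3.10" -> "cp310"
    stem, ext = filename.rsplit(".", 1)
    parts = stem.split("-")
    # The interpreter and ABI tags are the 3rd- and 2nd-to-last components.
    parts[-3] = tag
    parts[-2] = tag
    return "-".join(parts) + "." + ext

print(retag_wheel("torch-2.1.0a0+cxx11.abi-cp311-cp311-win_amd64.whl", "3.10"))
# torch-2.1.0a0+cxx11.abi-cp310-cp310-win_amd64.whl
```

This only swaps the tags in the filename; the matching wheel must of course exist on the download server.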
@@ -453,7 +407,7 @@ We recommend using [Miniforge](https://conda-forge.org/download/) to create a py
 > The ``xpu`` option will install IPEX-LLM with PyTorch 2.1 by default, which is equivalent to
 >
 > ```bash
-> pip install --pre --upgrade ipex-llm[xpu_2.1] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/> xpu/us/
+> pip install --pre --upgrade ipex-llm[xpu_2.1] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
 > ```
 - For **CN**:
@@ -470,7 +424,7 @@ We recommend using [Miniforge](https://conda-forge.org/download/) to create a py
 > The ``xpu`` option will install IPEX-LLM with PyTorch 2.1 by default, which is equivalent to
 >
 > ```bash
-> pip install --pre --upgrade ipex-llm[xpu_2.1] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/> xpu/cn/
+> pip install --pre --upgrade ipex-llm[xpu_2.1] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/cn/
 > ```
 - For **PyTorch 2.0** (deprecated for versions ``ipex-llm >= 2.1.0b20240511``):
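The bug fixed in the hunk above (a URL split across blockquote lines, injecting `> ` into the middle of the link) is easy to lint for. A minimal, hypothetical checker, assuming the telltale is a `>` followed by whitespace inside an `http(s)` token:

```python
import re

# Flags lines where a URL is interrupted by blockquote-wrap debris such as
# "stable/> xpu/us/" (a ">" plus whitespace inside the URL itself).
WRAPPED_URL = re.compile(r"https?://\S*>\s")

def find_wrapped_urls(text: str):
    """Return 1-based line numbers containing a wrapped URL."""
    return [i for i, line in enumerate(text.splitlines(), 1)
            if WRAPPED_URL.search(line)]

bad = "> pip install x --extra-index-url https://example.com/stable/> xpu/us/"
good = "> pip install x --extra-index-url https://example.com/stable/xpu/us/"
print(find_wrapped_urls(bad))   # -> [1]
print(find_wrapped_urls(good))  # -> []
```

The leading `> ` of a normal blockquote line is harmless, since the pattern only fires after an `http(s)://` prefix.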


@@ -67,27 +67,18 @@ conda activate llm
 With the `llm` environment active, install the appropriate `ipex-llm` package based on your use case:
 #### For PyTorch and HuggingFace:
-Install the `ipex-llm[xpu-arc]` package. Choose either the US or CN website for `extra-index-url`:
-- For **US**:
-  ```bash
-  pip install --pre --upgrade ipex-llm[xpu-arc] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
-  ```
-- For **CN**:
-  ```bash
-  pip install --pre --upgrade ipex-llm[xpu-arc] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/cn/
-  ```
+Install the `ipex-llm[xpu_2.6]` package:
+```bash
+pip install --pre --upgrade ipex-llm[xpu_2.6] --extra-index-url https://download.pytorch.org/whl/xpu
+```
 #### For llama.cpp and Ollama:
-Install the `ipex-llm[cpp]` package.
-```bash
-pip install --pre --upgrade ipex-llm[cpp]
-```
-> [!NOTE]
-> If you encounter network issues during installation, refer to the [troubleshooting guide](../Overview/install_gpu.md#install-ipex-llm-from-wheel-1) for alternative steps.
+Install the `ipex-llm[cpp]` package:
+```bash
+pip install --pre --upgrade ipex-llm[cpp]
+```
 ---
@@ -106,7 +97,7 @@ If your driver version is lower than `32.0.101.6449/32.0.101.101.6256`, update i
 Download and install Miniforge for Windows from the [official page](https://conda-forge.org/download/). After installation, create and activate a Python environment:
 ```cmd
-conda create -n llm python=3.11 libuv
+conda create -n llm python=3.11
 conda activate llm
 ```
@@ -117,27 +108,18 @@ conda activate llm
 With the `llm` environment active, install the appropriate `ipex-llm` package based on your use case:
 #### For PyTorch and HuggingFace:
-Install the `ipex-llm[xpu-arc]` package. Choose either the US or CN website for `extra-index-url`:
-- For **US**:
-  ```cmd
-  pip install --pre --upgrade ipex-llm[xpu-arc] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
-  ```
-- For **CN**:
-  ```cmd
-  pip install --pre --upgrade ipex-llm[xpu-arc] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/cn/
-  ```
+Install the `ipex-llm[xpu_2.6]` package:
+```cmd
+pip install --pre --upgrade ipex-llm[xpu_2.6] --extra-index-url https://download.pytorch.org/whl/xpu
+```
 #### For llama.cpp and Ollama:
-Install the `ipex-llm[cpp]` package.
-```cmd
-pip install --pre --upgrade ipex-llm[cpp]
-```
-> [!NOTE]
-> If you encounter network issues while installing IPEX, refer to [this guide](../Overview/install_gpu.md#install-ipex-llm-from-wheel) for troubleshooting advice.
+Install the `ipex-llm[cpp]` package:
+```cmd
+pip install --pre --upgrade ipex-llm[cpp]
+```
 ---
@@ -166,21 +148,24 @@ Run a Quick PyTorch Example:
 torch.Size([1, 1, 40, 40])
 ```
-For benchmarks and performance measurement, refer to the [Benchmark Quickstart guide](https://github.com/intel-analytics/ipex-llm/blob/main/docs/mddocs/Quickstart/benchmark_quickstart.md).
+> [!TIP]
+> Please refer here ([Linux](./install_pytorch26_gpu.md#runtime-configurations-1) or [Windows](./install_pytorch26_gpu.md#runtime-configurations)) for runtime configurations for PyTorch with IPEX-LLM on B-Series GPUs.
+For benchmarks and performance measurement, refer to the [Benchmark Quickstart guide](./benchmark_quickstart.md).
 ---
 ### 3.2 Ollama
-To integrate and run with **Ollama**, follow the [Ollama Quickstart guide](https://github.com/intel-analytics/ipex-llm/blob/main/docs/mddocs/Quickstart/ollama_quickstart.md).
+To integrate and run with **Ollama**, follow the [Ollama Quickstart guide](./ollama_quickstart.md).
 ### 3.3 llama.cpp
-For instructions on how to run **llama.cpp** with IPEX-LLM, refer to the [llama.cpp Quickstart guide](https://github.com/intel-analytics/ipex-llm/blob/main/docs/mddocs/Quickstart/llama_cpp_quickstart.md).
+For instructions on how to run **llama.cpp** with IPEX-LLM, refer to the [llama.cpp Quickstart guide](./llama_cpp_quickstart.md).
 ### 3.4 vLLM
-To set up and run **vLLM**, follow the [vLLM Quickstart guide](https://github.com/intel-analytics/ipex-llm/blob/main/docs/mddocs/Quickstart/vLLM_quickstart.md).
+To set up and run **vLLM**, follow the [vLLM Quickstart guide](./vLLM_quickstart.md).
 ## 4. Troubleshooting


@@ -59,49 +59,25 @@ conda activate llm
 With the `llm` environment active, use `pip` to install `ipex-llm` for GPU:
-- For **Intel Core™ Ultra Processors (Series 2) with processor number 2xxV (code name Lunar Lake)**:
-  Choose either US or CN website for `extra-index-url`:
-  - For **US**:
-    ```cmd
-    conda create -n llm python=3.11 libuv
-    conda activate llm
-    pip install --pre --upgrade ipex-llm[xpu_lnl] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/lnl/us/
-    ```
-  - For **CN**:
-    ```cmd
-    conda create -n llm python=3.11 libuv
-    conda activate llm
-    pip install --pre --upgrade ipex-llm[xpu_lnl] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/lnl/cn/
-    ```
-- For **other Intel iGPU and dGPU**:
-  Choose either US or CN website for `extra-index-url`:
-  - For **US**:
-    ```cmd
-    conda create -n llm python=3.11 libuv
-    conda activate llm
-    pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
-    ```
-  - For **CN**:
-    ```cmd
-    conda create -n llm python=3.11 libuv
-    conda activate llm
-    pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/cn/
-    ```
+Choose either US or CN website for `extra-index-url`:
+- For **US**:
+  ```cmd
+  conda create -n llm python=3.11 libuv
+  conda activate llm
+  pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
+  ```
+- For **CN**:
+  ```cmd
+  conda create -n llm python=3.11 libuv
+  conda activate llm
+  pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/cn/
+  ```
 > [!NOTE]
 > If you encounter network issues while installing IPEX, refer to [this guide](../Overview/install_gpu.md#install-ipex-llm-from-wheel) for troubleshooting advice.


@@ -60,47 +60,26 @@ conda activate llm
 ## Install `ipex-llm`
 With the `llm` environment active, use `pip` to install `ipex-llm` for GPU:
-- For **Intel Core™ Ultra Processors (Series 2) with processor number 2xxV (code name Lunar Lake)**:
-  Choose either the US or CN website for `extra-index-url`:
-  - **US**:
-    ```cmd
-    conda create -n llm python=3.11 libuv
-    conda activate llm
-    pip install --pre --upgrade ipex-llm[xpu_lnl] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/lnl/us/
-    ```
-  - **CN**:
-    ```cmd
-    conda create -n llm python=3.11 libuv
-    conda activate llm
-    pip install --pre --upgrade ipex-llm[xpu_lnl] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/lnl/cn/
-    ```
-- For **other Intel iGPU and dGPU**:
-  Choose either the US or CN website for `extra-index-url`:
-  - **US**:
-    ```cmd
-    conda create -n llm python=3.11 libuv
-    conda activate llm
-    pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
-    ```
-  - **CN**:
-    ```cmd
-    conda create -n llm python=3.11 libuv
-    conda activate llm
-    pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/cn/
-    ```
+Choose either the US or CN website for `extra-index-url`:
+- **US**:
+  ```cmd
+  conda create -n llm python=3.11 libuv
+  conda activate llm
+  pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
+  ```
+- **CN**:
+  ```cmd
+  conda create -n llm python=3.11 libuv
+  conda activate llm
+  pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/cn/
+  ```
 > [!NOTE]
 > If you encounter network issues while installing IPEX, refer to [this guide](../Overview/install_gpu.md#install-ipex-llm-from-wheel) for troubleshooting advice.


@@ -6,7 +6,7 @@ Here, we provide how to run [torch graph mode](https://pytorch.org/blog/optimizi
 ```bash
 conda create -n ipex-llm python=3.11
 conda activate ipex-llm
-pip install --pre --upgrade ipex-llm[xpu_arc] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/cn/
+pip install --pre --upgrade ipex-llm[xpu_arc]==2.2.0 --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/cn/
 pip install --pre pytorch-triton-xpu==3.0.0+1b2f15840e --index-url https://download.pytorch.org/whl/nightly/xpu
 conda install -c conda-forge libstdcxx-ng
 unset OCL_ICD_VENDORS


@@ -4,16 +4,16 @@ In this directory, you will find examples on how you could apply IPEX-LLM INT4 o
 ## 0. Requirements & Installation
-To run these examples with IPEX-LLM on Intel GPUs, we have some recommended requirements for your machine, please refer to [here](../../../README.md#requirements) for more information.
+To run these examples with IPEX-LLM on Intel GPUs, we have some recommended requirements for your machine; please refer here ([Windows](../../../../../../../docs/mddocs/Quickstart/install_pytorch26_gpu.md#install-prerequisites) or [Linux](../../../../../../../docs/mddocs/Quickstart/install_pytorch26_gpu.md#install-prerequisites-1)) for more information.
 ### 0.1 Installation
-```bash
-conda create -n llm python=3.11
-conda activate llm
-# install IPEX-LLM with PyTorch 2.6 supports
-pip install --pre --upgrade ipex-llm[xpu_2.6] --extra-index-url https://download.pytorch.org/whl/xpu
+Visit [Install IPEX-LLM on Intel GPU with PyTorch 2.6](../../../../../../../docs/mddocs/Quickstart/install_pytorch26_gpu.md), and follow **Install `ipex-llm`** ([Windows](../../../../../../../docs/mddocs/Quickstart/install_pytorch26_gpu.md#install-ipex-llm) or [Linux](../../../../../../../docs/mddocs/Quickstart/install_pytorch26_gpu.md#install-ipex-llm-1)).
+Then, install other dependencies for the Moonlight model with IPEX-LLM optimizations:
+```bash
+conda activate llm-pt26
 pip install transformers==4.45.0
 pip install accelerate==0.33.0
@@ -24,23 +24,7 @@ pip install tiktoken blobfile
 ### 0.2 Runtime Configuration
-- For Windows users:
-  ```cmd
-  set SYCL_CACHE_PERSISTENT=1
-  :: optional
-  set SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
-  ```
-- For Linux users:
-  ```bash
-  unset OCL_ICD_VENDOR
-  export SYCL_CACHE_PERSISTENT=1
-  # optional
-  export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
-  ```
-> [!NOTE]
-> The environment variable `SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS` determines the usage of immediate command lists for task submission to the GPU. Enabling this mode may improve performance, but sometimes this may also cause performance degradation. Please consider experimenting with and without this environment variable for best performance. For more details, you can refer to [this article](https://www.intel.com/content/www/us/en/developer/articles/guide/level-zero-immediate-command-lists.html).
+Visit [Install IPEX-LLM on Intel GPU with PyTorch 2.6](../../../../../../../docs/mddocs/Quickstart/install_pytorch26_gpu.md), and follow **Runtime Configurations** ([Windows](../../../../../../../docs/mddocs/Quickstart/install_pytorch26_gpu.md#runtime-configurations) or [Linux](../../../../../../../docs/mddocs/Quickstart/install_pytorch26_gpu.md#runtime-configurations-1)).
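For readers who script their runs, the runtime settings documented above can also be applied per-process before GPU libraries are imported. A minimal sketch (setting them from Python rather than the shell is our assumption, not an ipex-llm API):

```python
import os

def apply_runtime_config(immediate_command_lists: bool = False) -> None:
    """Set the SYCL runtime variables from the guide for the current process."""
    # Persist compiled SYCL kernels between runs to cut warm-up time.
    os.environ["SYCL_CACHE_PERSISTENT"] = "1"
    # Optional: immediate command lists may help or hurt; benchmark both.
    if immediate_command_lists:
        os.environ["SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS"] = "1"
    # On Linux the guide unsets OCL_ICD_VENDOR before running.
    os.environ.pop("OCL_ICD_VENDOR", None)

apply_runtime_config()
print(os.environ["SYCL_CACHE_PERSISTENT"])  # -> 1
```

Call this before `import torch`/`import ipex_llm`, since the runtime reads these variables at initialization.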
## 1. Download & Convert Model


@@ -5,31 +5,11 @@ In the following examples, we will guide you to apply IPEX-LLM optimizations on
 ## 0. Requirements & Installation
-To run these examples with IPEX-LLM on Intel GPUs, we have some recommended requirements for your machine, please refer to [here](../../../README.md#requirements) for more information.
+To run these examples with IPEX-LLM on Intel GPUs, we have some recommended requirements for your machine; please refer here ([Windows](../../../../../../../docs/mddocs/Quickstart/install_pytorch26_gpu.md#install-prerequisites) or [Linux](../../../../../../../docs/mddocs/Quickstart/install_pytorch26_gpu.md#install-prerequisites-1)) for more information.
 ### 0.1 Install IPEX-LLM
-- For **Intel Core™ Ultra Processors (Series 2) with processor number 2xxV (code name Lunar Lake)** on Windows:
-  ```cmd
-  conda create -n llm python=3.11 libuv
-  conda activate llm
-  :: or --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/lnl/cn/
-  pip install --pre --upgrade ipex-llm[xpu_lnl] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/lnl/us/
-  pip install torchaudio==2.3.1.post0 --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/lnl/us/
-  ```
-- For **Intel Arc B-Series GPU (code name Battlemage)** on Linux:
-  ```bash
-  conda create -n llm python=3.11
-  conda activate llm
-  # or --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/cn/
-  pip install --pre --upgrade ipex-llm[xpu-arc] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
-  pip install torchaudio==2.3.1+cxx11.abi --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
-  ```
-> [!NOTE]
-> We will update for installation on more Intel GPU platforms.
+Visit [Install IPEX-LLM on Intel GPU with PyTorch 2.6](../../../../../../../docs/mddocs/Quickstart/install_pytorch26_gpu.md), and follow **Install `ipex-llm`** ([Windows](../../../../../../../docs/mddocs/Quickstart/install_pytorch26_gpu.md#install-ipex-llm) or [Linux](../../../../../../../docs/mddocs/Quickstart/install_pytorch26_gpu.md#install-ipex-llm-1)).
### 0.2 Install Required Packages for MiniCPM-o-2_6
@@ -45,18 +25,7 @@ pip install moviepy
 ### 0.3 Runtime Configuration
-- For **Intel Core™ Ultra Processors (Series 2) with processor number 2xxV (code name Lunar Lake)** on Windows:
-  ```cmd
-  set SYCL_CACHE_PERSISTENT=1
-  ```
-- For **Intel Arc B-Series GPU (code name Battlemage)** on Linux:
-  ```bash
-  unset OCL_ICD_VENDOR
-  export SYCL_CACHE_PERSISTENT=1
-  ```
-> [!NOTE]
-> We will update for runtime configuration on more Intel GPU platforms.
+Visit [Install IPEX-LLM on Intel GPU with PyTorch 2.6](../../../../../../../docs/mddocs/Quickstart/install_pytorch26_gpu.md), and follow **Runtime Configurations** ([Windows](../../../../../../../docs/mddocs/Quickstart/install_pytorch26_gpu.md#runtime-configurations) or [Linux](../../../../../../../docs/mddocs/Quickstart/install_pytorch26_gpu.md#runtime-configurations-1)).
## 1. Example: Chat in Omni Mode
In [omni.py](./omni.py), we show a use case for a MiniCPM-V-2_6 model to chat in omni mode with IPEX-LLM INT4 optimizations on Intel GPUs. In this example, the model will take a video as input, and conduct inference based on the images and audio of this video.


@@ -5,29 +5,12 @@ In the following examples, we will guide you to apply IPEX-LLM optimizations on
 ## 0. Requirements & Installation
-To run these examples with IPEX-LLM on Intel GPUs, we have some recommended requirements for your machine, please refer to [here](../../../README.md#requirements) for more information.
+To run these examples with IPEX-LLM on Intel GPUs, we have some recommended requirements for your machine; please refer here ([Windows](../../../../../../../docs/mddocs/Quickstart/install_pytorch26_gpu.md#install-prerequisites) or [Linux](../../../../../../../docs/mddocs/Quickstart/install_pytorch26_gpu.md#install-prerequisites-1)) for more information.
 ### 0.1 Install IPEX-LLM
-- For **Intel Core™ Ultra Processors (Series 2) with processor number 2xxV (code name Lunar Lake)** on Windows:
-  ```cmd
-  conda create -n llm python=3.11 libuv
-  conda activate llm
-  :: or --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/lnl/cn/
-  pip install --pre --upgrade ipex-llm[xpu_lnl] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/lnl/us/
-  ```
-- For **Intel Arc B-Series GPU (code name Battlemage)** on Linux:
-  ```bash
-  conda create -n llm python=3.11
-  conda activate llm
-  # or --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/cn/
-  pip install --pre --upgrade ipex-llm[xpu-arc] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
-  ```
-> [!NOTE]
-> We will update for installation on more Intel GPU platforms.
+Visit [Install IPEX-LLM on Intel GPU with PyTorch 2.6](../../../../../../../docs/mddocs/Quickstart/install_pytorch26_gpu.md), and follow **Install `ipex-llm`** ([Windows](../../../../../../../docs/mddocs/Quickstart/install_pytorch26_gpu.md#install-ipex-llm) or [Linux](../../../../../../../docs/mddocs/Quickstart/install_pytorch26_gpu.md#install-ipex-llm-1)).
### 0.2 Install Required Packages for Janus-Pro
@@ -55,18 +38,7 @@ cd ..
 ### 0.3 Runtime Configuration
-- For **Intel Core™ Ultra Processors (Series 2) with processor number 2xxV (code name Lunar Lake)** on Windows:
-  ```cmd
-  set SYCL_CACHE_PERSISTENT=1
-  ```
-- For **Intel Arc B-Series GPU (code name Battlemage)** on Linux:
-  ```bash
-  unset OCL_ICD_VENDOR
-  export SYCL_CACHE_PERSISTENT=1
-  ```
-> [!NOTE]
-> We will update for runtime configuration on more Intel GPU platforms.
+Visit [Install IPEX-LLM on Intel GPU with PyTorch 2.6](../../../../../../../docs/mddocs/Quickstart/install_pytorch26_gpu.md), and follow **Runtime Configurations** ([Windows](../../../../../../../docs/mddocs/Quickstart/install_pytorch26_gpu.md#runtime-configurations) or [Linux](../../../../../../../docs/mddocs/Quickstart/install_pytorch26_gpu.md#runtime-configurations-1)).
## 1. Example: Predict Tokens using `generate()` API
In [generate.py](./generate.py), we show a use case for a Janus-Pro model to predict the next N tokens using `generate()` API based on text/image inputs, or a combination of two of them, with IPEX-LLM low-bit optimizations on Intel GPUs.


@@ -296,21 +296,6 @@ def setup_package():
     xpu_21_requires += oneapi_2024_0_requires
     # default to ipex 2.1 for linux and windows
     xpu_requires = copy.deepcopy(xpu_21_requires)
-    xpu_lnl_requires = copy.deepcopy(all_requires)
-    for exclude_require in cpu_torch_version:
-        xpu_lnl_requires.remove(exclude_require)
-    xpu_lnl_requires += ["torch==2.3.1.post0+cxx11.abi;platform_system=='Windows'",
-                         "torchvision==0.18.1.post0+cxx11.abi;platform_system=='Windows'",
-                         "intel-extension-for-pytorch==2.3.110.post0+xpu;platform_system=='Windows'",
-                         "torch==2.3.1+cxx11.abi;platform_system=='Linux'",
-                         "torchvision==0.18.1+cxx11.abi;platform_system=='Linux'",
-                         "intel-extension-for-pytorch==2.3.110+xpu;platform_system=='Linux'",
-                         "bigdl-core-xe-23==" + CORE_XE_VERSION,
-                         "bigdl-core-xe-batch-23==" + CORE_XE_VERSION,
-                         "bigdl-core-xe-addons-23==" + CORE_XE_VERSION,
-                         "onednn-devel==2024.1.1;platform_system=='Windows'",
-                         "onednn==2024.1.1;platform_system=='Windows'"]
     xpu_26_requires = copy.deepcopy(all_requires)
     for exclude_require in cpu_torch_version:
@@ -381,9 +366,6 @@ def setup_package():
         "xpu": xpu_requires,  # default to ipex 2.1 for linux and windows
         "npu": npu_requires,
         "xpu-2-1": xpu_21_requires,
-        "xpu-lnl": xpu_lnl_requires,
-        "xpu-arl": xpu_lnl_requires,
-        "xpu-arc": xpu_lnl_requires,
         "xpu-2-6": xpu_26_requires,
         "xpu-2-6-arl": xpu_26_arl_requires,
         "serving": serving_requires,
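The `setup.py` pattern in the hunk above (clone a base requirement list, strip the CPU torch pins, append platform-specific pins, then expose the result as a pip extra) can be illustrated standalone. The variable names below mirror the diff, but the version strings are simplified stand-ins, not the real dependency set:

```python
import copy

CORE_XE_VERSION = "2.6.0"  # stand-in version string

all_requires = ["numpy", "torch==2.1.2", "transformers"]
cpu_torch_version = ["torch==2.1.2"]  # CPU pins to strip for XPU installs

# Build an XPU requirement list the way setup.py builds its extras.
xpu_26_requires = copy.deepcopy(all_requires)
for exclude_require in cpu_torch_version:
    xpu_26_requires.remove(exclude_require)
xpu_26_requires += ["torch==2.6.0+xpu",                         # stand-in GPU pin
                    "bigdl-core-xe-all==" + CORE_XE_VERSION]    # stand-in core pin

# pip normalizes extra names (runs of '-', '_', '.' collapse to '-'),
# so the key "xpu-2-6" is what `ipex-llm[xpu_2.6]` resolves to.
extras_require = {"xpu-2-6": xpu_26_requires}

print(extras_require["xpu-2-6"])
```

Deleting an extra from `extras_require`, as this commit does for `xpu-lnl`/`xpu-arl`/`xpu-arc`, makes `pip install ipex-llm[xpu_lnl]` fail at resolution time, which is why the docs hunks above switch to the remaining options.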