Fix vLLM-v2 install instructions (#10822)

This commit is contained in:
Guancheng Fu 2024-04-22 09:02:48 +08:00 committed by GitHub
parent 3cd21d5105
commit 61c67af386


@ -34,6 +34,8 @@ sycl-ls
To run vLLM continuous batching on Intel GPUs, install the dependencies as follows:
```bash
# This directory may change depending on where you installed the oneAPI base toolkit
source /opt/intel/oneapi/setvars.sh
# First create a conda environment
conda create -n ipex-vllm python=3.11
conda activate ipex-vllm
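# Sketch (not part of the original diff): with the oneAPI variables
# sourced and the conda environment active, the Intel GPU should appear
# in the SYCL runtime's device list. sycl-ls ships with the oneAPI base
# toolkit and is referenced earlier in this document.
sycl-ls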