From 61c67af3866b7ae8af388698faa875974d270689 Mon Sep 17 00:00:00 2001
From: Guancheng Fu <110874468+gc-fu@users.noreply.github.com>
Date: Mon, 22 Apr 2024 09:02:48 +0800
Subject: [PATCH] Fix vLLM-v2 install instructions (#10822)

---
 python/llm/example/GPU/vLLM-Serving/README.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/python/llm/example/GPU/vLLM-Serving/README.md b/python/llm/example/GPU/vLLM-Serving/README.md
index 8d84a130..8afc4b51 100644
--- a/python/llm/example/GPU/vLLM-Serving/README.md
+++ b/python/llm/example/GPU/vLLM-Serving/README.md
@@ -34,6 +34,8 @@ sycl-ls
 To run vLLM continuous batching on Intel GPUs, install the dependencies as follows:
 
 ```bash
+# This directory may differ depending on where you installed the oneAPI Base Toolkit
+source /opt/intel/oneapi/setvars.sh
 # First create a conda environment
 conda create -n ipex-vllm python=3.11
 conda activate ipex-vllm
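For reference, here is a minimal sketch of how the install steps read once this patch is applied, assuming the default oneAPI Base Toolkit prefix of /opt/intel/oneapi (adjust the setvars.sh path for a custom install; the sycl-ls check is taken from the README step just above this hunk):

```bash
# Load the Intel oneAPI environment (compilers, SYCL runtime, library paths).
# Assumes the default install prefix; edit the path if oneAPI lives elsewhere.
source /opt/intel/oneapi/setvars.sh

# Sanity check: the Intel GPU should appear in the list of SYCL devices.
sycl-ls

# Create and activate an isolated conda environment for vLLM serving.
conda create -n ipex-vllm python=3.11 -y
conda activate ipex-vllm
```

Sourcing setvars.sh before creating the environment matters because it exports the paths the Intel GPU toolchain relies on; without it, later build or install steps may fail to locate the oneAPI libraries.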