diff --git a/python/llm/example/GPU/vLLM-Serving/README.md b/python/llm/example/GPU/vLLM-Serving/README.md
index 8d84a130..8afc4b51 100644
--- a/python/llm/example/GPU/vLLM-Serving/README.md
+++ b/python/llm/example/GPU/vLLM-Serving/README.md
@@ -34,6 +34,8 @@ sycl-ls
 To run vLLM continuous batching on Intel GPUs, install the dependencies as follows:
 ```bash
+# This directory may differ depending on where the oneAPI Base Toolkit is installed
+source /opt/intel/oneapi/setvars.sh
 # First create an conda environment
 conda create -n ipex-vllm python=3.11
 conda activate ipex-vllm
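Since the added comment notes that the oneAPI path may vary by installation, a slightly more defensive sketch can check for `setvars.sh` before sourcing it. This is only an illustration: `ONEAPI_ROOT` is an assumed variable name here, not an official oneAPI convention.

```shell
# Sketch: locate setvars.sh under a configurable prefix before sourcing it.
# ONEAPI_ROOT is a hypothetical override; the default matches the README's path.
ONEAPI_ROOT="${ONEAPI_ROOT:-/opt/intel/oneapi}"
if [ -f "$ONEAPI_ROOT/setvars.sh" ]; then
    # Load the oneAPI compiler/runtime environment for this shell session.
    . "$ONEAPI_ROOT/setvars.sh"
else
    # Warn instead of failing silently when the toolkit lives elsewhere.
    echo "setvars.sh not found under $ONEAPI_ROOT; set ONEAPI_ROOT to your install prefix" >&2
fi
```

Running the snippet with `ONEAPI_ROOT=/custom/prefix bash install.sh` would pick up a non-default install location without editing the script.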