fix vllm docs (#12176)

Guancheng Fu 2024-10-10 15:44:36 +08:00 committed by GitHub
parent 890662610b
commit 0ef7e1d101

@@ -42,15 +42,14 @@ Activate the `ipex-vllm` conda environment and install vLLM by executing the commands below.
 ```bash
 conda activate ipex-vllm
 source /opt/intel/oneapi/setvars.sh
-git clone -b sycl_xpu https://github.com/analytics-zoo/vllm.git
+pip install oneccl-bind-pt==2.1.300+xpu --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
+git clone -b 0.5.4 https://github.com/analytics-zoo/vllm.git
 cd vllm
 pip install -r requirements-xpu.txt
-pip install --no-deps xformers
-VLLM_BUILD_XPU_OPS=1 pip install --no-build-isolation -v -e .
-pip install outlines==0.0.34 --no-deps
-pip install interegular cloudpickle diskcache joblib lark nest-asyncio numba scipy
-# For Qwen model support
-pip install transformers_stream_generator einops tiktoken
+VLLM_TARGET_DEVICE=xpu python setup.py install
+pip install mpi4py fastapi uvicorn openai
+pip install gradio==4.43.0
+pip install ray
 ```
 **Now you are all set to use vLLM with IPEX-LLM**
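
As a quick sanity check after following the updated instructions, something like the sketch below can confirm the build. This is a minimal example, not part of the documented steps: `/path/to/your/model` is a placeholder, and the serving flags follow upstream vLLM 0.5.4, so they may differ in the analytics-zoo fork.

```bash
# Verify the XPU build imports cleanly inside the ipex-vllm environment
conda activate ipex-vllm
source /opt/intel/oneapi/setvars.sh
python -c "import vllm; print(vllm.__version__)"

# Optional smoke test: start the OpenAI-compatible server on the XPU device,
# then list the served models. The model path below is a placeholder.
python -m vllm.entrypoints.openai.api_server \
  --model /path/to/your/model --device xpu --port 8000 &
curl http://localhost:8000/v1/models
```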