From ff83fad400de691239feadf6a325c0debe6c6acb Mon Sep 17 00:00:00 2001
From: Xiangyu Tian <109123695+xiangyuT@users.noreply.github.com>
Date: Mon, 3 Jun 2024 15:55:27 +0800
Subject: [PATCH] Fix typo in vLLM CPU docker guide (#11188)

---
 .../source/doc/LLM/DockerGuides/vllm_cpu_docker_quickstart.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/readthedocs/source/doc/LLM/DockerGuides/vllm_cpu_docker_quickstart.md b/docs/readthedocs/source/doc/LLM/DockerGuides/vllm_cpu_docker_quickstart.md
index 3795d13b..36b39ed5 100644
--- a/docs/readthedocs/source/doc/LLM/DockerGuides/vllm_cpu_docker_quickstart.md
+++ b/docs/readthedocs/source/doc/LLM/DockerGuides/vllm_cpu_docker_quickstart.md
@@ -40,7 +40,7 @@ After the container is booted, you could get into the container through `docker
 docker exec -it ipex-llm-serving-cpu-container /bin/bash
 ```
 
-## Running vLLM serving with IPEX-LLM on Intel GPU in Docker
+## Running vLLM serving with IPEX-LLM on Intel CPU in Docker
 
 We have included multiple vLLM-related files in `/llm/`:
 1. `vllm_offline_inference.py`: Used for vLLM offline inference example