From 4fd1df9cf6f4eeb0127172dcdce80dfab303dfb8 Mon Sep 17 00:00:00 2001
From: Guancheng Fu <110874468+gc-fu@users.noreply.github.com>
Date: Wed, 22 May 2024 11:23:22 +0800
Subject: [PATCH] Add toc for docker quickstarts (#11095)

* fix

* fix
---
 docs/readthedocs/source/_toc.yml                       | 2 ++
 docs/readthedocs/source/doc/LLM/DockerGuides/index.rst | 4 ++--
 2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/docs/readthedocs/source/_toc.yml b/docs/readthedocs/source/_toc.yml
index 3d3a5245..9f4b3578 100644
--- a/docs/readthedocs/source/_toc.yml
+++ b/docs/readthedocs/source/_toc.yml
@@ -23,6 +23,8 @@ subtrees:
           - file: doc/LLM/DockerGuides/docker_pytorch_inference_gpu
           - file: doc/LLM/DockerGuides/docker_run_pytorch_inference_in_vscode
           - file: doc/LLM/DockerGuides/docker_cpp_xpu_quickstart
+          - file: doc/LLM/DockerGuides/fastchat_docker_quickstart
+          - file: doc/LLM/DockerGuides/vllm_docker_quickstart
       - file: doc/LLM/Quickstart/index
         title: "Quickstart"
         subtrees:
diff --git a/docs/readthedocs/source/doc/LLM/DockerGuides/index.rst b/docs/readthedocs/source/doc/LLM/DockerGuides/index.rst
index 9e9f02fd..2dccefbb 100644
--- a/docs/readthedocs/source/doc/LLM/DockerGuides/index.rst
+++ b/docs/readthedocs/source/doc/LLM/DockerGuides/index.rst
@@ -8,5 +8,5 @@ In this section, you will find guides related to using IPEX-LLM with Docker, cov
 * `Run PyTorch Inference on an Intel GPU via Docker <./docker_pytorch_inference_gpu.html>`_
 * `Run/Develop PyTorch in VSCode with Docker on Intel GPU <./docker_pytorch_inference_gpu.html>`_
 * `Run llama.cpp/Ollama/open-webui with Docker on Intel GPU <./docker_cpp_xpu_quickstart.html>`_
-* `Run IPEX-LLM integrated FastChat with Docker on Intel GPU <./fastchat_docker_quickstart>`_
-* `Run IPEX-LLM integrated vLLM with Docker on Intel GPU <./vllm_docker_quickstart>`_
+* `Run IPEX-LLM integrated FastChat with Docker on Intel GPU <./fastchat_docker_quickstart.html>`_
+* `Run IPEX-LLM integrated vLLM with Docker on Intel GPU <./vllm_docker_quickstart.html>`_
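
For reviewers of the second hunk: the fix appends the .html suffix that the other DockerGuides links already carry, since the rendered Read the Docs pages resolve ./<name>.html rather than the bare document name. Consistency of the suffixes could be spot-checked with a small script along these lines; this is a hedged sketch and not part of the patch, the file path is taken from the diff, and the regex is an assumption about the RST hyperlink style used in this index:

    # sanity_check_links.py : hypothetical helper, not part of the patch.
    # Verifies that every relative link target in the DockerGuides index
    # ends with ".html", the inconsistency this patch fixes for the
    # FastChat and vLLM entries.
    import re
    from pathlib import Path

    index = Path("docs/readthedocs/source/doc/LLM/DockerGuides/index.rst")
    # Matches RST hyperlinks of the form: `Link text <./target>`_
    pattern = re.compile(r"`[^`<]+<(\./[^>]+)>`_")

    for target in pattern.findall(index.read_text()):
        assert target.endswith(".html"), f"missing .html suffix: {target}"
    print("all DockerGuides links carry the .html suffix")

Run from the repository root after applying the patch; the assertion names any offending target.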