diff --git a/docs/readthedocs/source/_toc.yml b/docs/readthedocs/source/_toc.yml
index 3d3a5245..9f4b3578 100644
--- a/docs/readthedocs/source/_toc.yml
+++ b/docs/readthedocs/source/_toc.yml
@@ -23,6 +23,8 @@ subtrees:
             - file: doc/LLM/DockerGuides/docker_pytorch_inference_gpu
             - file: doc/LLM/DockerGuides/docker_run_pytorch_inference_in_vscode
             - file: doc/LLM/DockerGuides/docker_cpp_xpu_quickstart
+            - file: doc/LLM/DockerGuides/fastchat_docker_quickstart
+            - file: doc/LLM/DockerGuides/vllm_docker_quickstart
       - file: doc/LLM/Quickstart/index
         title: "Quickstart"
         subtrees:
diff --git a/docs/readthedocs/source/doc/LLM/DockerGuides/index.rst b/docs/readthedocs/source/doc/LLM/DockerGuides/index.rst
index 9e9f02fd..2dccefbb 100644
--- a/docs/readthedocs/source/doc/LLM/DockerGuides/index.rst
+++ b/docs/readthedocs/source/doc/LLM/DockerGuides/index.rst
@@ -8,5 +8,5 @@ In this section, you will find guides related to using IPEX-LLM with Docker, cov
 * `Run PyTorch Inference on an Intel GPU via Docker <./docker_pytorch_inference_gpu.html>`_
 * `Run/Develop PyTorch in VSCode with Docker on Intel GPU <./docker_pytorch_inference_gpu.html>`_
 * `Run llama.cpp/Ollama/open-webui with Docker on Intel GPU <./docker_cpp_xpu_quickstart.html>`_
-* `Run IPEX-LLM integrated FastChat with Docker on Intel GPU <./fastchat_docker_quickstart>`_
-* `Run IPEX-LLM integrated vLLM with Docker on Intel GPU <./vllm_docker_quickstart>`_
+* `Run IPEX-LLM integrated FastChat with Docker on Intel GPU <./fastchat_docker_quickstart.html>`_
+* `Run IPEX-LLM integrated vLLM with Docker on Intel GPU <./vllm_docker_quickstart.html>`_