Add toc for docker quickstarts (#11095)

* fix

* fix
Guancheng Fu 2024-05-22 11:23:22 +08:00 committed by GitHub
parent 584439e498
commit 4fd1df9cf6
2 changed files with 4 additions and 2 deletions


@@ -23,6 +23,8 @@ subtrees:
 - file: doc/LLM/DockerGuides/docker_pytorch_inference_gpu
 - file: doc/LLM/DockerGuides/docker_run_pytorch_inference_in_vscode
 - file: doc/LLM/DockerGuides/docker_cpp_xpu_quickstart
+- file: doc/LLM/DockerGuides/fastchat_docker_quickstart
+- file: doc/LLM/DockerGuides/vllm_docker_quickstart
 - file: doc/LLM/Quickstart/index
   title: "Quickstart"
   subtrees:
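The two added `- file:` entries appear to extend the Sphinx external-toc navigation (a `_toc.yml`-style file), registering the new FastChat and vLLM Docker quickstart pages in the rendered docs sidebar alongside the existing Docker guides.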


@@ -8,5 +8,5 @@ In this section, you will find guides related to using IPEX-LLM with Docker, cov
 * `Run PyTorch Inference on an Intel GPU via Docker <./docker_pytorch_inference_gpu.html>`_
 * `Run/Develop PyTorch in VSCode with Docker on Intel GPU <./docker_pytorch_inference_gpu.html>`_
 * `Run llama.cpp/Ollama/open-webui with Docker on Intel GPU <./docker_cpp_xpu_quickstart.html>`_
-* `Run IPEX-LLM integrated FastChat with Docker on Intel GPU <./fastchat_docker_quickstart>`_
-* `Run IPEX-LLM integrated vLLM with Docker on Intel GPU <./vllm_docker_quickstart>`_
+* `Run IPEX-LLM integrated FastChat with Docker on Intel GPU <./fastchat_docker_quickstart.html>`_
+* `Run IPEX-LLM integrated vLLM with Docker on Intel GPU <./vllm_docker_quickstart.html>`_
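The second change likely fixes the link targets rather than the link text: a reStructuredText external hyperlink of the form `text <target>`_ uses the target verbatim, so without the .html suffix the rendered entry would point at a path that does not exist in the built HTML output. A minimal sketch of the corrected form (the page name below is illustrative, not from this diff):

    * `Some Docker quickstart <./some_docker_quickstart.html>`_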