parent
584439e498
commit
4fd1df9cf6
2 changed files with 4 additions and 2 deletions
@@ -23,6 +23,8 @@ subtrees:
 - file: doc/LLM/DockerGuides/docker_pytorch_inference_gpu
 - file: doc/LLM/DockerGuides/docker_run_pytorch_inference_in_vscode
 - file: doc/LLM/DockerGuides/docker_cpp_xpu_quickstart
+- file: doc/LLM/DockerGuides/fastchat_docker_quickstart
+- file: doc/LLM/DockerGuides/vllm_docker_quickstart
 - file: doc/LLM/Quickstart/index
   title: "Quickstart"
   subtrees:
@@ -8,5 +8,5 @@ In this section, you will find guides related to using IPEX-LLM with Docker, cov
 * `Run PyTorch Inference on an Intel GPU via Docker <./docker_pytorch_inference_gpu.html>`_
 * `Run/Develop PyTorch in VSCode with Docker on Intel GPU <./docker_pytorch_inference_gpu.html>`_
 * `Run llama.cpp/Ollama/open-webui with Docker on Intel GPU <./docker_cpp_xpu_quickstart.html>`_
-* `Run IPEX-LLM integrated FastChat with Docker on Intel GPU <./fastchat_docker_quickstart>`_
-* `Run IPEX-LLM integrated vLLM with Docker on Intel GPU <./vllm_docker_quickstart>`_
+* `Run IPEX-LLM integrated FastChat with Docker on Intel GPU <./fastchat_docker_quickstart.html>`_
+* `Run IPEX-LLM integrated vLLM with Docker on Intel GPU <./vllm_docker_quickstart.html>`_