IPEX-LLM Docker Container User Guides
=====================================
In this section, you will find guides related to using IPEX-LLM with Docker:

* `Overview of IPEX-LLM Containers for Intel GPU <./docker_windows_gpu.html>`_
* `Run PyTorch Inference on an Intel GPU via Docker <./docker_pytorch_inference_gpu.html>`_
* `Run llama.cpp/Ollama/open-webui with Docker on Intel GPU <./docker_cpp_xpu_quickstart.html>`_
* `Run IPEX-LLM integrated FastChat with Docker on Intel GPU <./fastchat_docker_quickstart.html>`_
* `Run IPEX-LLM integrated vLLM with Docker on Intel GPU <./vllm_docker_quickstart.html>`_