IPEX-LLM Docker Container User Guides
=====================================

In this section, you will find guides related to using IPEX-LLM with Docker, covering how to:

* `Overview of IPEX-LLM Containers <./docker_windows_gpu.html>`_

* Inference in Python/C++

  * `GPU Inference in Python with IPEX-LLM <./docker_pytorch_inference_gpu.html>`_
  * `VSCode LLM Development with IPEX-LLM on Intel GPU <./docker_pytorch_inference_gpu.html>`_
  * `llama.cpp/Ollama/Open-WebUI with IPEX-LLM on Intel GPU <./docker_cpp_xpu_quickstart.html>`_

* Serving

  * `FastChat with IPEX-LLM on Intel GPU <./fastchat_docker_quickstart.html>`_
  * `vLLM with IPEX-LLM on Intel GPU <./vllm_docker_quickstart.html>`_
  * `vLLM with IPEX-LLM on Intel CPU <./vllm_cpu_docker_quickstart.html>`_