IPEX-LLM Docker Container User Guides
=====================================

In this section, you will find guides related to using IPEX-LLM with Docker, covering how to:

* `Overview of IPEX-LLM Containers for Intel GPU <./docker_windows_gpu.html>`_
* `Run PyTorch Inference on an Intel GPU via Docker <./docker_pytorch_inference_gpu.html>`_
* `Run/Develop PyTorch in VSCode with Docker on Intel GPU <./docker_run_pytorch_inference_in_vscode.html>`_
* `Run llama.cpp/Ollama/open-webui with Docker on Intel GPU <./docker_cpp_xpu_quickstart.html>`_
* `Run IPEX-LLM integrated FastChat with Docker on Intel GPU <./fastchat_docker_quickstart.html>`_
* `Run IPEX-LLM integrated vLLM with Docker on Intel GPU <./vllm_docker_quickstart.html>`_