# IPEX-LLM Docker Container User Guides

In this section, you will find guides related to using IPEX-LLM with Docker, covering how to:

- [docker_windows_gpu.md](./docker_windows_gpu.md)
- Inference in Python/C++
  - [docker_cpp_xpu_quickstart.md](./docker_cpp_xpu_quickstart.md)
  - [docker_pytorch_inference_gpu.md](./docker_pytorch_inference_gpu.md)
  - [docker_run_pytorch_inference_in_vscode.md](./docker_run_pytorch_inference_in_vscode.md)
- Serving
  - [fastchat_docker_quickstart.md](./fastchat_docker_quickstart.md)
  - [vllm_docker_quickstart.md](./vllm_docker_quickstart.md)
  - [vllm_cpu_docker_quickstart.md](./vllm_cpu_docker_quickstart.md)
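Each quickstart gives its own exact command, but the GPU guides in this section share a common Docker invocation pattern on Linux: the host's `/dev/dri` device node is passed into the container so IPEX-LLM can access the Intel GPU. A minimal sketch of that pattern (the image name, tag, and mount paths below are illustrative assumptions, not taken from this page; see the individual guides for the exact image to pull):

```shell
# Sketch only: the image name/tag and paths are illustrative assumptions;
# each quickstart specifies the exact image for its use case.
IMAGE="intelanalytics/ipex-llm-inference-cpp-xpu:latest"  # illustrative tag

# --device=/dev/dri passes the Intel GPU render nodes into the container;
# -v mounts a host directory containing model weights.
DOCKER_CMD="docker run -itd \
  --net=host \
  --device=/dev/dri \
  -v $HOME/models:/llm/models \
  --name=ipex-llm \
  $IMAGE"

# Printed rather than executed here; on a host with Docker and an Intel GPU
# you would run the command directly.
echo "$DOCKER_CMD"
```

The key design point is `--device=/dev/dri`: without it, the container cannot see the GPU and inference silently falls back to CPU or fails at device discovery.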