# IPEX-LLM Docker Container User Guides
In this section, you will find guides for using IPEX-LLM with Docker, covering the following topics:
- [Overview of IPEX-LLM Containers](./docker_windows_gpu.md)
- Inference in Python/C++  
  - [GPU Inference in Python with IPEX-LLM](./docker_pytorch_inference_gpu.md)
  - [VSCode LLM Development with IPEX-LLM on Intel GPU](./docker_run_pytorch_inference_in_vscode.md)
  - [llama.cpp/Ollama/Open-WebUI with IPEX-LLM on Intel GPU](./docker_cpp_xpu_quickstart.md)
- Serving
  - [FastChat with IPEX-LLM on Intel GPU](./fastchat_docker_quickstart.md)
  - [vLLM with IPEX-LLM on Intel GPU](./vllm_docker_quickstart.md)
  - [vLLM with IPEX-LLM on Intel CPU](./vllm_cpu_docker_quickstart.md)
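
The GPU guides above share a common starting point: pull a prebuilt IPEX-LLM image and start a container with the Intel GPU device (`/dev/dri`) passed through. The sketch below illustrates that pattern only; the image name is assumed from the llama.cpp/Ollama quickstart as an example, and each linked guide gives the exact image, tag, and options for its scenario.

```bash
# A minimal sketch of the common workflow (image name assumed from the
# llama.cpp/Ollama quickstart; check the linked guide for current tags).
docker pull intelanalytics/ipex-llm-inference-cpp-xpu:latest

# Start the container with the Intel GPU exposed via /dev/dri and a
# host directory of model files mounted; adjust the memory limits,
# mount path, and container name to your environment.
docker run -itd \
    --net=host \
    --device=/dev/dri \
    -v /path/to/models:/models \
    --memory="32G" \
    --shm-size="16g" \
    --name=ipex-llm-container \
    intelanalytics/ipex-llm-inference-cpp-xpu:latest
```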