IPEX-LLM Quickstart
Note: We are adding more Quickstart guides.

This section includes efficient guides to show you how to:
Install
- bigdl-llm Migration Guide
- Install IPEX-LLM on Linux with Intel GPU
- Install IPEX-LLM on Windows with Intel GPU
 
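After following one of the install guides above, it can help to sanity-check the environment before moving on. The snippet below is a minimal sketch, assuming `ipex-llm[xpu]` was installed per the linked guides; package layout and the recommended check may differ across releases.

```python
# Minimal install sanity check -- a sketch, assuming ipex-llm[xpu] was
# installed per the install guides above; details vary across releases.
import torch
import intel_extension_for_pytorch as ipex  # registers the 'xpu' device in torch
import ipex_llm  # noqa: F401  # this import alone fails if IPEX-LLM is missing

print(f"torch version: {torch.__version__}")
print(f"ipex version:  {ipex.__version__}")
print(f"XPU available: {torch.xpu.is_available()}")  # True when the Intel GPU is usable
```
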
Inference
- Run IPEX-LLM on Intel NPU
- Run Performance Benchmarking with IPEX-LLM
- Run Local RAG using Langchain-Chatchat on Intel GPU
- Run Text Generation WebUI on Intel GPU
- Run Open WebUI on Intel GPU
- Run PrivateGPT with IPEX-LLM on Intel GPU
- Run Coding Copilot (Continue) in VSCode with Intel GPU
- Run Dify on Intel GPU
- Run llama.cpp with IPEX-LLM on Intel GPU
- Run Ollama with IPEX-LLM on Intel GPU
- Run Ollama Portable Zip on Intel GPU with IPEX-LLM
- Run Llama 3 on Intel GPU using llama.cpp and ollama with IPEX-LLM
- Run RAGFlow with IPEX-LLM on Intel GPU
- Run GraphRAG with IPEX-LLM on Intel GPU
 
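To give a flavor of what the inference guides cover, here is a minimal sketch of low-bit inference on an Intel GPU using the `ipex_llm.transformers` API. The model id is a placeholder; each linked guide has the authoritative, tool-specific steps.

```python
# A minimal sketch of INT4 inference on an Intel GPU with IPEX-LLM;
# the model id below is a placeholder -- see the linked guides for details.
import torch
from transformers import AutoTokenizer
from ipex_llm.transformers import AutoModelForCausalLM

model_path = "meta-llama/Llama-2-7b-chat-hf"  # placeholder model id

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    load_in_4bit=True,       # weight-only INT4 quantization applied on load
    trust_remote_code=True,
)
model = model.to("xpu")      # move the quantized model to the Intel GPU

with torch.inference_mode():
    inputs = tokenizer("What is IPEX-LLM?", return_tensors="pt").to("xpu")
    output = model.generate(inputs.input_ids, max_new_tokens=32)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```
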
Serving
- Run IPEX-LLM Serving with FastChat
- Run IPEX-LLM Serving with vLLM on Intel GPU
- Run IPEX-LLM serving on Multiple Intel GPUs using DeepSpeed AutoTP and FastApi
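The serving guides above expose the model behind an OpenAI-compatible HTTP endpoint (via FastChat or vLLM). As a hedged illustration of what a client call might look like, the sketch below uses only the Python standard library; the host, port, route, and served model name are placeholders that depend entirely on how you launch the server per the guides.

```python
# A sketch of querying an OpenAI-compatible serving endpoint; the address
# and model name are placeholders -- consult the serving guides for the
# actual launch commands and served model names.
import json
import urllib.request

req = urllib.request.Request(
    "http://localhost:8000/v1/completions",  # placeholder endpoint
    data=json.dumps({
        "model": "llama-2-7b-chat-hf",        # placeholder served-model name
        "prompt": "What is IPEX-LLM?",
        "max_tokens": 32,
    }).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["choices"][0]["text"])
```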