Add initial QuickStart for Ollama portable zip (#12817)
* Add initial quickstart for Ollama portable zip
* Small fix
* Fixed based on comments
* Small fix
* Add demo image for run ollama
* Update download link
parent 1083fe5508
commit 68414afcb9

2 changed files with 47 additions and 0 deletions
@@ -23,6 +23,7 @@ This section includes efficient guides to show you how to:

- [Run Dify on Intel GPU](./dify_quickstart.md)
- [Run llama.cpp with IPEX-LLM on Intel GPU](./llama_cpp_quickstart.md)
- [Run Ollama with IPEX-LLM on Intel GPU](./ollama_quickstart.md)
- [Run Ollama Portable Zip on Intel GPU with IPEX-LLM](./ollama_portablze_zip_quickstart.md)
- [Run Llama 3 on Intel GPU using llama.cpp and ollama with IPEX-LLM](./llama3_llamacpp_ollama_quickstart.md)
- [Run RAGFlow with IPEX-LLM on Intel GPU](./ragflow_quickstart.md)
- [Run GraphRAG with IPEX-LLM on Intel GPU](./graphrag_quickstart.md)

docs/mddocs/Quickstart/ollama_portablze_zip_quickstart.md (new file, 46 lines)
@@ -0,0 +1,46 @@
# Run Ollama Portable Zip on Intel GPU with IPEX-LLM

This guide demonstrates how to use **Ollama portable zip** to directly run Ollama on Intel GPU with `ipex-llm` (without the need for manual installation).

> [!NOTE]
> Currently, IPEX-LLM only provides the Ollama portable zip on Windows.

## Table of Contents
- [Prerequisites](#prerequisites)
- [Step 1: Download and Unzip](#step-1-download-and-unzip)
- [Step 2: Start Ollama Serve](#step-2-start-ollama-serve)
- [Step 3: Run Ollama](#step-3-run-ollama)

## Prerequisites

Check your GPU driver version, and update it if needed:

- For Intel Core Ultra processors (Series 2) or Intel Arc B-Series GPUs, we recommend updating your GPU driver to the [latest version](https://www.intel.com/content/www/us/en/download/785597/intel-arc-iris-xe-graphics-windows.html)
- For other Intel iGPUs/dGPUs, we recommend using GPU driver version [32.0.101.6078](https://www.intel.com/content/www/us/en/download/785597/834050/intel-arc-iris-xe-graphics-windows.html)
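
If you are not sure which driver version is currently installed, one quick way to check it from the command line is via PowerShell's standard CIM cmdlets (a minimal sketch, invoked here from cmd; you can also look under Device Manager > Display adapters):

```cmd
:: List each display adapter with its installed driver version
powershell -Command "Get-CimInstance Win32_VideoController | Select-Object Name, DriverVersion"
```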

## Step 1: Download and Unzip

Download the IPEX-LLM Ollama portable zip from this [link](https://github.com/intel/ipex-llm/releases/download/v2.2.0-nightly/ollama-0.5.4-ipex-llm-2.2.0b20250211.zip).

Then, extract the zip file to a folder.
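
If you prefer the command line, the download and extraction can also be scripted with `curl` and `tar`, both of which ship with Windows 10 and later (a minimal sketch; the folder name `ollama-ipex-llm` is an arbitrary choice, not mandated by the zip):

```cmd
:: Download the portable zip (same URL as the link above)
curl -L -o ollama-ipex-llm.zip https://github.com/intel/ipex-llm/releases/download/v2.2.0-nightly/ollama-0.5.4-ipex-llm-2.2.0b20250211.zip

:: Windows' built-in tar can unpack zip archives; extract into a fresh folder
mkdir ollama-ipex-llm
tar -xf ollama-ipex-llm.zip -C ollama-ipex-llm
```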

## Step 2: Start Ollama Serve

Double-click `start-ollama.bat` in the extracted folder to start the Ollama service. A window will then pop up as shown below:

<div align="center">
  <img src="https://llm-assets.readthedocs.io/en/latest/_images/ollama_portable_start_ollama.png" width=80%/>
</div>
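
If you would rather launch the service from a terminal than by double-clicking, the following is equivalent; note the sanity check assumes the server listens on Ollama's default port `11434`, which is an assumption rather than something this zip documents:

```cmd
:: Start the Ollama service from the extracted folder
cd /d PATH\TO\EXTRACTED\FOLDER
start-ollama.bat

:: Optional sanity check (run in another terminal): a running Ollama server
:: replies with "Ollama is running" (assumes the default port 11434)
curl http://localhost:11434
```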

## Step 3: Run Ollama

You can then use Ollama to run LLMs on Intel GPUs as follows:

- Open "Command Prompt" (cmd), and change to the extracted folder with `cd /d PATH\TO\EXTRACTED\FOLDER`
- Run `ollama run deepseek-r1:7b` in the "Command Prompt" (you may use any other model)

<div align="center">
  <img src="https://llm-assets.readthedocs.io/en/latest/_images/ollama_portable_run_ollama.png" width=80%/>
</div>
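
With the service from Step 2 still running, you can also query the model programmatically through Ollama's standard REST API instead of the interactive prompt (a sketch assuming the default endpoint `http://localhost:11434`):

```cmd
:: One-shot, non-streaming generation request via the standard Ollama REST API
curl http://localhost:11434/api/generate -d "{\"model\": \"deepseek-r1:7b\", \"prompt\": \"Why is the sky blue?\", \"stream\": false}"
```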