Update wrong file name for portable zip quickstart (#12883)
parent a9c8e73a77
commit 671ddfd847
					 7 changed files with 9 additions and 9 deletions
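Every change in this commit is the same one-character rename, `ollama_portablze_zip_quickstart` → `ollama_portable_zip_quickstart`, repeated across seven files. A hypothetical sketch of how such a repo-wide rename could be produced in one pass (assumes GNU `sed` and a checkout of the docs tree as the working directory; neither is stated in the commit itself):

```shell
# Hypothetical sketch, not the actual commands used for this commit.
old='ollama_portablze_zip_quickstart'
new='ollama_portable_zip_quickstart'

# Find every Markdown file still referencing the misspelled name,
# then rewrite those references in place (GNU sed; on BSD/macOS use `sed -i ''`).
grep -rl --include='*.md' "$old" . | xargs -r sed -i "s/$old/$new/g"
```

Note that this only fixes references; the quickstart file itself would still need a `git mv` to the corrected name.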
				
			
@@ -23,7 +23,7 @@ This section includes efficient guide to show you how to:
 - [Run Dify on Intel GPU](./dify_quickstart.md)
 - [Run llama.cpp with IPEX-LLM on Intel GPU](./llama_cpp_quickstart.md)
 - [Run Ollama with IPEX-LLM on Intel GPU](./ollama_quickstart.md)
-- [Run Ollama Portable Zip on Intel GPU with IPEX-LLM](./ollama_portablze_zip_quickstart.md)
+- [Run Ollama Portable Zip on Intel GPU with IPEX-LLM](./ollama_portable_zip_quickstart.md)
 - [Run Llama 3 on Intel GPU using llama.cpp and ollama with IPEX-LLM](./llama3_llamacpp_ollama_quickstart.md)
 - [Run RAGFlow with IPEX-LLM on Intel GPU](./ragflow_quickstart.md)
 - [Run GraphRAG with IPEX-LLM on Intel GPU](./graphrag_quickstart.md)

@@ -1,6 +1,6 @@
 # Run Ollama Portable Zip on Intel GPU with IPEX-LLM
 <p>
-  <b>< English</b> | <a href='./ollama_portablze_zip_quickstart.zh-CN.md'>中文</a> >
+  <b>< English</b> | <a href='./ollama_portable_zip_quickstart.zh-CN.md'>中文</a> >
 </p>
 
 This guide demonstrates how to use [Ollama portable zip](https://github.com/intel/ipex-llm/releases/tag/v2.2.0-nightly) to directly run Ollama on Intel GPU with `ipex-llm` (without the need of manual installations).

@@ -1,6 +1,6 @@
 # 使用 IPEX-LLM 在 Intel GPU 上运行 Ollama Portable Zip
 <p>
-   < <a href='./ollama_portablze_zip_quickstart.md'>English</a> | <b>中文</b> >
+   < <a href='./ollama_portable_zip_quickstart.md'>English</a> | <b>中文</b> >
 </p>
 
 本指南演示如何使用 [Ollama portable zip](https://github.com/intel/ipex-llm/releases/tag/v2.2.0-nightly) 通过 `ipex-llm` 在 Intel GPU 上直接免安装运行 Ollama。

@@ -6,7 +6,7 @@
 [ollama/ollama](https://github.com/ollama/ollama) is popular framework designed to build and run language models on a local machine; you can now use the C++ interface of [`ipex-llm`](https://github.com/intel-analytics/ipex-llm) as an accelerated backend for `ollama` running on Intel **GPU** *(e.g., local PC with iGPU, discrete GPU such as Arc, Flex and Max)*.
 
 > [!Important]
-> You may use [Ollama portable zip](./ollama_portablze_zip_quickstart.md) to directly run Ollama on Intel GPU with ipex-llm (***without the need of manual installations***).
+> You may use [Ollama portable zip](./ollama_portable_zip_quickstart.md) to directly run Ollama on Intel GPU with ipex-llm (***without the need of manual installations***).
 
 > [!NOTE]
 > For installation on Intel Arc B-Series GPU (such as **B580**), please refer to this [guide](./bmg_quickstart.md).

@@ -6,7 +6,7 @@
 [ollama/ollama](https://github.com/ollama/ollama) 是一个轻量级、可扩展的框架,用于在本地机器上构建和运行大型语言模型。现在,借助 [`ipex-llm`](https://github.com/intel-analytics/ipex-llm) 的 C++ 接口作为其加速后端,你可以在 Intel **GPU** *(如配有集成显卡,以及 Arc,Flex 和 Max 等独立显卡的本地 PC)* 上,轻松部署并运行 `ollama`。
 
 > [!Important]
-> 现在可使用 [Ollama Portable Zip](./ollama_portablze_zip_quickstart.zh-CN.md) 在 Intel GPU 上直接***免安装运行 Ollama***.
+> 现在可使用 [Ollama Portable Zip](./ollama_portable_zip_quickstart.zh-CN.md) 在 Intel GPU 上直接***免安装运行 Ollama***.
 
 > [!NOTE]
 > 如果是在 Intel Arc B 系列 GPU 上安装(例如 **B580**),请参阅本[指南](./bmg_quickstart.md)。

@@ -6,7 +6,7 @@
 **`IPEX-LLM`** is an LLM acceleration library for Intel [GPU](Quickstart/install_windows_gpu.md) *(e.g., local PC with iGPU, discrete GPU such as Arc, Flex and Max)*, [NPU](Quickstart/npu_quickstart.md) and CPU [^1].
 
 ## Latest Update 🔥 
-- [2025/02] We added support of [Ollama Portable Zip](https://github.com/intel/ipex-llm/releases/tag/v2.2.0-nightly) to directly run Ollama on Intel GPU for both [Windows](Quickstart/ollama_portablze_zip_quickstart.md#windows-quickstart) and [Linux](docs/mddocs/Quickstart/ollama_portablze_zip_quickstart.md#linux-quickstart) (***without the need of manual installations***).
+- [2025/02] We added support of [Ollama Portable Zip](https://github.com/intel/ipex-llm/releases/tag/v2.2.0-nightly) to directly run Ollama on Intel GPU for both [Windows](Quickstart/ollama_portable_zip_quickstart.md#windows-quickstart) and [Linux](docs/mddocs/Quickstart/ollama_portable_zip_quickstart.md#linux-quickstart) (***without the need of manual installations***).
 - [2025/02] We added support for running [vLLM 0.6.6](DockerGuides/vllm_docker_quickstart.md) on Intel Arc GPUs.
 - [2025/01] We added the guide for running `ipex-llm` on Intel Arc [B580](Quickstart/bmg_quickstart.md) GPU
 - [2025/01] We added support for running [Ollama 0.5.4](Quickstart/ollama_quickstart.md) on Intel GPU.

@@ -52,7 +52,7 @@
 ## `ipex-llm` Quickstart
 
 ### Use
-- [Ollama Portable Zip](Quickstart/ollama_portablze_zip_quickstart.md): running **Ollama** on Intel GPU ***without the need of manual installations***
+- [Ollama Portable Zip](Quickstart/ollama_portable_zip_quickstart.md): running **Ollama** on Intel GPU ***without the need of manual installations***
 - [Arc B580](Quickstart/bmg_quickstart.md): running `ipex-llm` on Intel Arc **B580** GPU for Ollama, llama.cpp, PyTorch, HuggingFace, etc.
 - [NPU](Quickstart/npu_quickstart.md): running `ipex-llm` on Intel **NPU** in both Python and C++
 - [llama.cpp](Quickstart/llama_cpp_quickstart.md): running **llama.cpp** (*using C++ interface of `ipex-llm`*) on Intel GPU

@@ -4,7 +4,7 @@
 </p>
 
 ## 最新更新 🔥 
-- [2025/02] 新增 [Ollama Portable Zip](https://github.com/intel/ipex-llm/releases/tag/v2.2.0-nightly) 在 Intel GPU 上直接**免安装运行 Ollama** (包括 [Windows](Quickstart/ollama_portablze_zip_quickstart.zh-CN.md#windows用户指南) 和 [Linux](Quickstart/ollama_portablze_zip_quickstart.zh-CN.md#linux用户指南))。
+- [2025/02] 新增 [Ollama Portable Zip](https://github.com/intel/ipex-llm/releases/tag/v2.2.0-nightly) 在 Intel GPU 上直接**免安装运行 Ollama** (包括 [Windows](Quickstart/ollama_portable_zip_quickstart.zh-CN.md#windows用户指南) 和 [Linux](Quickstart/ollama_portable_zip_quickstart.zh-CN.md#linux用户指南))。
 - [2025/02] 新增在 Intel Arc GPUs 上运行 [vLLM 0.6.6](DockerGuides/vllm_docker_quickstart.md) 的支持。
 - [2025/01] 新增在 Intel Arc [B580](Quickstart/bmg_quickstart.md) GPU 上运行 `ipex-llm` 的指南。
 - [2025/01] 新增在 Intel GPU 上运行 [Ollama 0.5.4](Quickstart/ollama_quickstart.zh-CN.md) 的支持。

@@ -50,7 +50,7 @@
 ## `ipex-llm` 快速入门
 
 ### 使用
-- [Ollama Portable Zip](Quickstart/ollama_portablze_zip_quickstart.zh-CN.md): 在 Intel GPU 上直接**免安装运行 Ollama**。
+- [Ollama Portable Zip](Quickstart/ollama_portable_zip_quickstart.zh-CN.md): 在 Intel GPU 上直接**免安装运行 Ollama**。
 - [Arc B580](Quickstart/bmg_quickstart.md): 在 Intel Arc **B580** GPU 上运行 `ipex-llm`(包括 Ollama, llama.cpp, PyTorch, HuggingFace 等)
 - [NPU](Quickstart/npu_quickstart.md): 在 Intel **NPU** 上运行 `ipex-llm`(支持 Python 和 C++)
 - [llama.cpp](Quickstart/llama_cpp_quickstart.zh-CN.md): 在 Intel GPU 上运行 **llama.cpp** (*使用 `ipex-llm` 的 C++ 接口*) 