diff --git a/README.md b/README.md
index ceb89afb..6648c980 100644
--- a/README.md
+++ b/README.md
@@ -9,8 +9,9 @@
 > - ***70+ models** have been optimized/verified on `ipex-llm` (e.g., Llama, Phi, Mistral, Mixtral, Whisper, DeepSeek, Qwen, ChatGLM, MiniCPM, Qwen-VL, MiniCPM-V and more), with state-of-art **LLM optimizations**, **XPU acceleration** and **low-bit (FP8/FP6/FP4/INT4) support**; see the complete list [here](#verified-models).*
 
 ## Latest Update 🔥
-- [2025/01] We added the guide for running `ipex-llm` on Intel Arc [B580](docs/mddocs/Quickstart/bmg_quickstart.md) GPU
-- [2025/01] We added support for running [Ollama 0.5.1](docs/mddocs/Quickstart/ollama_quickstart.md) on Intel GPU.
+- [2025/02] We added initial support of [Ollama Portable Zip](docs/mddocs/Quickstart/ollama_portablze_zip_quickstart.md) to directly run Ollama on Intel GPU (***without the need of manual installations***).
+- [2025/01] We added the guide for running `ipex-llm` on Intel Arc [B580](docs/mddocs/Quickstart/bmg_quickstart.md) GPU.
+- [2025/01] We added support for running [Ollama 0.5.4](docs/mddocs/Quickstart/ollama_quickstart.md) on Intel GPU.
 - [2024/12] We added both ***Python*** and ***C++*** support for Intel Core Ultra [NPU](docs/mddocs/Quickstart/npu_quickstart.md) (including 100H, 200V, 200K and 200H series).
 - [2024/11] We added support for running [vLLM 0.6.2](docs/mddocs/DockerGuides/vllm_docker_quickstart.md) on Intel Arc GPUs.
diff --git a/README.zh-CN.md b/README.zh-CN.md
index 44e09ae5..9faa777a 100644
--- a/README.zh-CN.md
+++ b/README.zh-CN.md
@@ -9,8 +9,9 @@
 > - ***70+** 模型已经在 `ipex-llm` 上得到优化和验证(如 Llama, Phi, Mistral, Mixtral, Whisper, DeepSeek, Qwen, ChatGLM, MiniCPM, Qwen-VL, MiniCPM-V 等), 以获得先进的 **大模型算法优化**, **XPU 加速** 以及 **低比特(FP8/FP6/FP4/INT4) 支持**;更多模型信息请参阅[这里](#模型验证)。*
 
 ## 最新更新 🔥
+- [2025/02] 新增 [Ollama Portable Zip](docs/mddocs/Quickstart/ollama_portablze_zip_quickstart.md),可在 Intel GPU 上直接**免安装运行 Ollama**。
 - [2025/01] 新增在 Intel Arc [B580](docs/mddocs/Quickstart/bmg_quickstart.md) GPU 上运行 `ipex-llm` 的指南。
-- [2025/01] 新增在 Intel GPU 上运行 [Ollama 0.5.1](docs/mddocs/Quickstart/ollama_quickstart.zh-CN.md) 的支持。
+- [2025/01] 新增在 Intel GPU 上运行 [Ollama 0.5.4](docs/mddocs/Quickstart/ollama_quickstart.zh-CN.md) 的支持。
 - [2024/12] 增加了对 Intel Core Ultra [NPU](docs/mddocs/Quickstart/npu_quickstart.md)(包括 100H,200V,200K 和 200H 系列)的 **Python** 和 **C++** 支持。
 - [2024/11] 新增在 Intel Arc GPUs 上运行 [vLLM 0.6.2](docs/mddocs/DockerGuides/vllm_docker_quickstart.md) 的支持。
diff --git a/docs/mddocs/Quickstart/ollama_portablze_zip_quickstart.md b/docs/mddocs/Quickstart/ollama_portablze_zip_quickstart.md
index fcf24dd3..1c694fd6 100644
--- a/docs/mddocs/Quickstart/ollama_portablze_zip_quickstart.md
+++ b/docs/mddocs/Quickstart/ollama_portablze_zip_quickstart.md
@@ -1,6 +1,6 @@
 # Run Ollama Portable Zip on Intel GPU with IPEX-LLM
 
-This guide demonstrates how to use **Ollama portable zip** to directly run Ollama on Intel GPU with `ipex-llm` (without the need of manual installations).
+This guide demonstrates how to use [Ollama portable zip](https://github.com/intel/ipex-llm/releases/download/v2.2.0-nightly/ollama-0.5.4-ipex-llm-2.2.0b20250211.zip) to directly run Ollama on Intel GPU with `ipex-llm` (without the need of manual installations).
 
 > [!NOTE]
 > Currently, IPEX-LLM only provides Ollama portable zip on Windows.
@@ -43,4 +43,4 @@ You could then use Ollama to run LLMs on Intel GPUs as follows:
-
\ No newline at end of file
+
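
Reviewer note: for context, the flow that the linked portable-zip quickstart describes can be sketched as below. This is a hedged outline only — the batch-file name (`start-ollama.bat`) and the model name are assumptions for illustration and are not taken from this diff; only `ollama run` itself is the standard Ollama CLI entry point.

```shell
REM Hypothetical usage sketch of the Ollama portable zip on Windows
REM (the diff's note says the portable zip is currently Windows-only).
REM File and script names below are assumptions, not confirmed by this diff.

REM 1) Extract the downloaded ollama-0.5.4-ipex-llm-2.2.0b20250211.zip.
REM 2) From the extracted folder, start the Ollama service, e.g.:
REM      start-ollama.bat
REM 3) In a second terminal, run a model with the standard Ollama CLI:
REM      ollama run <model_name>
```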