Update ollama quickstart (#12823)

Jason Dai 2025-02-14 09:55:48 +08:00 committed by GitHub
parent f67986021c
commit a09552e59a
2 changed files with 12 additions and 6 deletions


@@ -5,13 +5,16 @@
 [ollama/ollama](https://github.com/ollama/ollama) is a popular framework designed to build and run language models on a local machine; you can now use the C++ interface of [`ipex-llm`](https://github.com/intel-analytics/ipex-llm) as an accelerated backend for `ollama` running on Intel **GPU** *(e.g., local PC with iGPU, discrete GPU such as Arc, Flex and Max)*.
 
+> [!Important]
+> You may use [Ollama portable zip](./ollama_portablze_zip_quickstart.md) to directly run Ollama on Intel GPU with ipex-llm (***without the need for manual installation***).
+
 > [!NOTE]
 > For installation on Intel Arc B-Series GPU (such as **B580**), please refer to this [guide](./bmg_quickstart.md).
 
 > [!NOTE]
-> Our current version is consistent with [v0.5.1](https://github.com/ollama/ollama/releases/tag/v0.5.1) of ollama.
+> Our current version is consistent with [v0.5.4](https://github.com/ollama/ollama/releases/tag/v0.5.4) of ollama.
 >
-> `ipex-llm[cpp]==2.2.0b20250105` is consistent with [v0.4.6](https://github.com/ollama/ollama/releases/tag/v0.4.6) of ollama.
+> `ipex-llm[cpp]==2.2.0b20250123` is consistent with [v0.5.1](https://github.com/ollama/ollama/releases/tag/v0.5.1) of ollama.
 
 See the demo of running LLaMA2-7B on Intel Arc GPU below.


@@ -5,13 +5,16 @@
 [ollama/ollama](https://github.com/ollama/ollama) is a lightweight, extensible framework for building and running large language models on a local machine. With the C++ interface of [`ipex-llm`](https://github.com/intel-analytics/ipex-llm) as its accelerated backend, you can now easily deploy and run `ollama` on Intel **GPU** *(e.g., a local PC with an iGPU, or a discrete GPU such as Arc, Flex and Max)*.
 
+> [!Important]
+> You can now use the [Ollama Portable Zip](./ollama_portablze_zip_quickstart.md) to run Ollama directly on Intel GPU ***without any installation***.
+
 > [!NOTE]
 > For installation on Intel Arc B-Series GPU (e.g., **B580**), please refer to this [guide](./bmg_quickstart.md).
 
 > [!NOTE]
-> The latest version of `ipex-llm[cpp]` is consistent with [v0.5.1](https://github.com/ollama/ollama/releases/tag/v0.5.1) of the official ollama.
+> The latest version of `ipex-llm[cpp]` is consistent with [v0.5.4](https://github.com/ollama/ollama/releases/tag/v0.5.4) of the official ollama.
 >
-> `ipex-llm[cpp]==2.2.0b20250105` is consistent with [v0.4.6](https://github.com/ollama/ollama/releases/tag/v0.4.6) of the official ollama.
+> `ipex-llm[cpp]==2.2.0b20250123` is consistent with [v0.5.1](https://github.com/ollama/ollama/releases/tag/v0.5.1) of the official ollama.
 
 Below is a demo of running LLaMA2-7B on an Intel Arc GPU.
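
For reference, the manual path this quickstart documents (as opposed to the portable zip) amounts to installing `ipex-llm[cpp]` and initializing the Ollama binary it ships. The following is only a minimal sketch for Linux, assuming a conda environment and working Intel GPU drivers with the oneAPI runtime; the directory name `ollama-ipex` is arbitrary, and the command names and environment variables follow the ipex-llm C++ quickstart as of this commit, so they may differ in other releases.

```bash
# Minimal sketch (assumptions: Linux, conda available, Intel GPU driver + oneAPI runtime installed).
conda create -n llm-cpp python=3.11 -y
conda activate llm-cpp

# Install the C++ backend; pinning ipex-llm[cpp]==2.2.0b20250123 would correspond to ollama v0.5.1, as noted above.
pip install --pre --upgrade "ipex-llm[cpp]"

# Initialize the ollama binary (symlinks) in a working directory of your choice.
mkdir -p ~/ollama-ipex && cd ~/ollama-ipex
init-ollama

# Start the server with all layers offloaded to the Intel GPU.
export OLLAMA_NUM_GPU=999
export ZES_ENABLE_SYSMAN=1
./ollama serve

# In a second terminal, pull and chat with a model, e.g. LLaMA2-7B as in the demo:
#   ./ollama run llama2
```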