From f0b600da77c2830d3cabd9991aae74743471512a Mon Sep 17 00:00:00 2001
From: Yina Chen <33650826+cyita@users.noreply.github.com>
Date: Wed, 9 Jul 2025 17:30:27 +0800
Subject: [PATCH] update llama.cpp version (#13251)

* update llama.cpp version
---
 docs/mddocs/Quickstart/llama_cpp_quickstart.md       | 4 ++--
 docs/mddocs/Quickstart/llama_cpp_quickstart.zh-CN.md | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/docs/mddocs/Quickstart/llama_cpp_quickstart.md b/docs/mddocs/Quickstart/llama_cpp_quickstart.md
index 02b9efad..aa772ecb 100644
--- a/docs/mddocs/Quickstart/llama_cpp_quickstart.md
+++ b/docs/mddocs/Quickstart/llama_cpp_quickstart.md
@@ -12,9 +12,9 @@
 > For installation on Intel Arc B-Series GPU (such as **B580**), please refer to this [guide](./bmg_quickstart.md).
 
 > [!NOTE]
-> Our latest version is consistent with [d7cfe1f](https://github.com/ggml-org/llama.cpp/commit/d7cfe1ffe0f435d0048a6058d529daf76e072d9c) of llama.cpp.
+> Our latest version is consistent with [4ad2436](https://github.com/ggml-org/llama.cpp/commit/4ad2436) of llama.cpp.
 >
-> `ipex-llm[cpp]==2.2.0b20250320` is consistent with [ba1cb19](https://github.com/ggml-org/llama.cpp/commit/ba1cb19cdd0d92e012e0f6e009e0620f854b6afd) of llama.cpp.
+> `ipex-llm[cpp]==2.2.0b20250629` is consistent with [d7cfe1f](https://github.com/ggml-org/llama.cpp/commit/d7cfe1ffe0f435d0048a6058d529daf76e072d9c) of llama.cpp.
 
 See the demo of running LLaMA2-7B on Intel Arc GPU below.
 
diff --git a/docs/mddocs/Quickstart/llama_cpp_quickstart.zh-CN.md b/docs/mddocs/Quickstart/llama_cpp_quickstart.zh-CN.md
index 67f00f9c..9942ace7 100644
--- a/docs/mddocs/Quickstart/llama_cpp_quickstart.zh-CN.md
+++ b/docs/mddocs/Quickstart/llama_cpp_quickstart.zh-CN.md
@@ -12,9 +12,9 @@
 > 如果是在 Intel Arc B 系列 GPU 上安装(例,**B580**),请参阅本[指南](./bmg_quickstart.md)。
 
 > [!NOTE]
-> `ipex-llm[cpp]` 的最新版本与官方 llama.cpp 的 [d7cfe1f](https://github.com/ggml-org/llama.cpp/commit/d7cfe1ffe0f435d0048a6058d529daf76e072d9c) 版本保持一致。
+> `ipex-llm[cpp]` 的最新版本与官方 llama.cpp 的 [4ad2436](https://github.com/ggml-org/llama.cpp/commit/4ad2436) 版本保持一致。
 >
-> `ipex-llm[cpp]==2.2.0b20250320` 与官方 llama.cpp 的 [ba1cb19](https://github.com/ggml-org/llama.cpp/commit/ba1cb19cdd0d92e012e0f6e009e0620f854b6afd) 版本保持一致。
+> `ipex-llm[cpp]==2.2.0b20250629` 与官方 llama.cpp 的 [d7cfe1f](https://github.com/ggml-org/llama.cpp/commit/d7cfe1ffe0f435d0048a6058d529daf76e072d9c) 版本保持一致。
 
 以下是在 Intel Arc GPU 上运行 LLaMA2-7B 的 DEMO 演示。