diff --git a/docs/mddocs/Quickstart/llama_cpp_quickstart.md b/docs/mddocs/Quickstart/llama_cpp_quickstart.md
index 02b9efad..aa772ecb 100644
--- a/docs/mddocs/Quickstart/llama_cpp_quickstart.md
+++ b/docs/mddocs/Quickstart/llama_cpp_quickstart.md
@@ -12,9 +12,9 @@
 > For installation on Intel Arc B-Series GPU (such as **B580**), please refer to this [guide](./bmg_quickstart.md).
 
 > [!NOTE]
-> Our latest version is consistent with [d7cfe1f](https://github.com/ggml-org/llama.cpp/commit/d7cfe1ffe0f435d0048a6058d529daf76e072d9c) of llama.cpp.
+> Our latest version is consistent with [4ad2436](https://github.com/ggml-org/llama.cpp/commit/4ad2436) of llama.cpp.
 >
-> `ipex-llm[cpp]==2.2.0b20250320` is consistent with [ba1cb19](https://github.com/ggml-org/llama.cpp/commit/ba1cb19cdd0d92e012e0f6e009e0620f854b6afd) of llama.cpp.
+> `ipex-llm[cpp]==2.2.0b20250629` is consistent with [d7cfe1f](https://github.com/ggml-org/llama.cpp/commit/d7cfe1ffe0f435d0048a6058d529daf76e072d9c) of llama.cpp.
 
 See the demo of running LLaMA2-7B on Intel Arc GPU below.
 
diff --git a/docs/mddocs/Quickstart/llama_cpp_quickstart.zh-CN.md b/docs/mddocs/Quickstart/llama_cpp_quickstart.zh-CN.md
index 67f00f9c..9942ace7 100644
--- a/docs/mddocs/Quickstart/llama_cpp_quickstart.zh-CN.md
+++ b/docs/mddocs/Quickstart/llama_cpp_quickstart.zh-CN.md
@@ -12,9 +12,9 @@
 > 如果是在 Intel Arc B 系列 GPU 上安装(例,**B580**),请参阅本[指南](./bmg_quickstart.md)。
 
 > [!NOTE]
-> `ipex-llm[cpp]` 的最新版本与官方 llama.cpp 的 [d7cfe1f](https://github.com/ggml-org/llama.cpp/commit/d7cfe1ffe0f435d0048a6058d529daf76e072d9c) 版本保持一致。
+> `ipex-llm[cpp]` 的最新版本与官方 llama.cpp 的 [4ad2436](https://github.com/ggml-org/llama.cpp/commit/4ad2436) 版本保持一致。
 >
-> `ipex-llm[cpp]==2.2.0b20250320` 与官方 llama.cpp 的 [ba1cb19](https://github.com/ggml-org/llama.cpp/commit/ba1cb19cdd0d92e012e0f6e009e0620f854b6afd) 版本保持一致。
+> `ipex-llm[cpp]==2.2.0b20250629` 与官方 llama.cpp 的 [d7cfe1f](https://github.com/ggml-org/llama.cpp/commit/d7cfe1ffe0f435d0048a6058d529daf76e072d9c) 版本保持一致。
 
 以下是在 Intel Arc GPU 上运行 LLaMA2-7B 的 DEMO 演示。