Update llamacpp_portable_zip_gpu_quickstart.md (#12945)

Jason Dai 2025-03-06 11:58:11 +08:00 committed by GitHub
parent 1432c5d9a0
commit cb3c4b26ad
2 changed files with 2 additions and 2 deletions

llamacpp_portable_zip_gpu_quickstart.md

@@ -4,7 +4,7 @@
</p>
>[!Important]
-> You can now run **DeepSeek-R1-671B-Q4_K_M** with 1 or 2 Arc A770 on Xeon using the latest [llama.cpp Portable Zip](#flashmoe-for-deepseek-v3r1).
+> You can now run **DeepSeek-R1-671B-Q4_K_M** with 1 or 2 Arc A770 on Xeon using the latest *llama.cpp Portable Zip*; see the [guide](#flashmoe-for-deepseek-v3r1) below.
This guide demonstrates how to use [llama.cpp portable zip](https://github.com/intel/ipex-llm/releases/tag/v2.2.0-nightly) to directly run llama.cpp on Intel GPU with `ipex-llm` (without the need of manual installations).
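
For context on the workflow the edited quickstart describes, here is a minimal sketch of running a model from the extracted portable zip on an Intel GPU. The extraction directory and model file below are placeholder assumptions (not paths from this commit), and the flags are standard llama.cpp CLI options.

```bash
# Assumptions: the portable zip from the release page has been extracted, and a
# GGUF model has been downloaded into that directory; both names below are
# illustrative placeholders.
cd llama-cpp-ipex-llm-portable    # hypothetical extraction directory

# Standard llama.cpp CLI flags: -m model file, -p prompt, -ngl layers offloaded
# to the GPU, -c context length, -n tokens to generate.
./llama-cli -m DeepSeek-R1-Distill-Qwen-7B-Q4_K_M.gguf \
            -p "What is AI?" \
            -ngl 99 -c 4096 -n 128
```

The DeepSeek-R1-671B case called out in the note above additionally depends on the FlashMoE setup covered in the linked section of the quickstart.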

Chinese (zh-CN) version of the quickstart

@@ -6,7 +6,7 @@
This guide demonstrates how to use [llama.cpp portable zip](https://github.com/intel/ipex-llm/releases/tag/v2.2.0-nightly) to run llama.cpp directly on Intel GPU with `ipex-llm`, with no installation required.
> [!Important]
-> With the latest [llama.cpp Portable Zip](#flashmoe-运行-deepseek-v3r1), you can run **DeepSeek-R1-671B-Q4_K_M** on Xeon with 1 or 2 Arc A770 GPUs
+> With the latest *llama.cpp Portable Zip*, you can run **DeepSeek-R1-671B-Q4_K_M** on Xeon with 1 or 2 Arc A770 GPUs; see the [guide](#flashmoe-运行-deepseek-v3r1) below.
> [!NOTE]
> llama.cpp portable zip has been verified on the following devices: