Update llamacpp_portable_zip_gpu_quickstart (#12940)

This commit is contained in:
Jason Dai 2025-03-06 08:42:18 +08:00 committed by GitHub
parent 975cf5f21f
commit 32480cc8ed
2 changed files with 7 additions and 1 deletions


@@ -3,6 +3,9 @@
 < <b>English</b> | <a href='./llamacpp_portable_zip_gpu_quickstart.zh-CN.md'>中文</a> >
 </p>
+>[!Important]
+> We can now run **DeepSeek-R1-671B-Q4_K_M** with 1 or 2 Arc A770 on Xeon using the latest [llama.cpp Portable Zip](#flashmoe-for-deepseek-v3r1).
 This guide demonstrates how to use [llama.cpp portable zip](https://github.com/intel/ipex-llm/releases/tag/v2.2.0-nightly) to directly run llama.cpp on Intel GPU with `ipex-llm` (without the need of manual installations).
 > [!NOTE]
@@ -23,7 +26,7 @@ This guide demonstrates how to use [llama.cpp portable zip](https://github.com/i
 - [Step 1: Download and Extract](#step-1-download-and-extract)
 - [Step 2: Runtime Configuration](#step-2-runtime-configuration-1)
 - [Step 3: Run GGUF models](#step-3-run-gguf-models-1)
-- [(New) FlashMoE for MoE Models (e.g., DeepSeek V3/R1) using llama.cpp](#flashmoe-for-deepseek-v3r1)
+- [(New) FlashMoE for DeepSeek V3/R1 using llama.cpp](#flashmoe-for-deepseek-v3r1)
 - [Tips & Troubleshooting](#tips--troubleshooting)
 - [Error: Detected different sycl devices](#error-detected-different-sycl-devices)
 - [Multi-GPUs usage](#multi-gpus-usage)


@@ -5,6 +5,9 @@
 本指南演示如何使用 [llama.cpp portable zip](https://github.com/intel/ipex-llm/releases/tag/v2.2.0-nightly) 通过 `ipex-llm` 在 Intel GPU 上直接免安装运行。
+> [!Important]
+> 使用最新 [llama.cpp Portable Zip](#flashmoe-运行-deepseek-v3r1), 可以在 Xeon 上通过1到2张 Arc A770 GPU 运行 **DeepSeek-R1-671B-Q4_K_M**
 > [!NOTE]
 > llama.cpp portable zip 在如下设备上进行了验证:
 > - Intel Core Ultra processors