[NPU] Add troubleshooting in portable zip doc (#12924)

binbin Deng, 2025-03-04 10:41:39 +08:00, committed by GitHub
parent b2d676f1c6
commit 091ab2bd59
2 changed files with 17 additions and 2 deletions


@@ -17,6 +17,7 @@ IPEX-LLM provides llama.cpp support for running GGUF models on Intel NPU. This g
- [Step 2: Setup](#step-2-setup)
- [Step 3: Run GGUF Model](#step-3-run-gguf-model)
- [More details](npu_quickstart.md)
- [Troubleshooting](#troubleshooting)
## Prerequisites
@@ -34,7 +35,7 @@ Then, extract the zip file to a folder.
## Step 2: Setup
- Open **"Command Prompt" (cmd)**, and enter the extracted folder through `cd /d PATH\TO\EXTRACTED\FOLDER`
- Runtime configuration based on your device:
  - For **Intel Core™ Ultra Processors (Series 2) with processor number 2xxV (code name Lunar Lake)**:
@@ -63,3 +64,9 @@ You could then use the CLI tool to run GGUF models on Intel NPU through running `llama-cli-npu.exe`
```cmd
llama-cli-npu.exe -m DeepSeek-R1-Distill-Qwen-7B-Q6_K.gguf -n 32 --prompt "What is AI?"
```
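Here `-m` selects the GGUF model file, `-n 32` caps generation at 32 tokens, and `--prompt` supplies the input text; these follow standard llama.cpp CLI semantics, which `llama-cli-npu.exe` appears to share.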
## Troubleshooting
### `L0 pfnCreate2 result: ZE_RESULT_ERROR_INVALID_ARGUMENT, code 0x78000004` error
First, verify that your NPU driver version meets the requirement. Then, check the runtime configuration based on your device, and pay attention to the difference between **Command Prompt** and **Windows PowerShell**: taking Arrow Lake as an example, you need to use `set IPEX_LLM_NPU_ARL=1` in **Command Prompt**, but `$env:IPEX_LLM_NPU_ARL = "1"` in **Windows PowerShell**.
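As a concrete illustration, setting and verifying the flag looks like this in each shell (a minimal sketch using the Arrow Lake variable from above; it applies only to Arrow Lake devices):

```cmd
:: Command Prompt (cmd): set the flag for the current session
set IPEX_LLM_NPU_ARL=1
:: verify it took effect; should print 1
echo %IPEX_LLM_NPU_ARL%
```

```powershell
# Windows PowerShell: the cmd syntax above will not work here
$env:IPEX_LLM_NPU_ARL = "1"
# verify it took effect; should print 1
$env:IPEX_LLM_NPU_ARL
```

Note that both forms set the variable for the current session only; if you open a new terminal window, set it again before running `llama-cli-npu.exe`.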

@@ -17,6 +17,7 @@ IPEX-LLM provides llama.cpp support for running GGUF models on Intel NPU
- [Step 2: Setup](#步骤-2启动)
- [Step 3: Run GGUF Model](#步骤-3运行-gguf-模型)
- [More details](npu_quickstart.md)
- [Troubleshooting](#故障排除)
## Prerequisites
@@ -34,7 +35,7 @@ IPEX-LLM provides llama.cpp support for running GGUF models on Intel NPU
## Step 2: Setup
- Open **Command Prompt (cmd)**, and enter the extracted folder by typing "cd /d PATH\TO\EXTRACTED\FOLDER" at the command line
- Complete the runtime configuration based on your device:
  - For **Intel Core™ Ultra Processors (Series 2) with processor number 2xxV (code name Lunar Lake)**:
@@ -63,3 +64,10 @@ IPEX-LLM provides llama.cpp support for running GGUF models on Intel NPU
```cmd
llama-cli-npu.exe -m DeepSeek-R1-Distill-Qwen-7B-Q6_K.gguf -n 32 --prompt "What is AI?"
```
## Troubleshooting
### `L0 pfnCreate2 result: ZE_RESULT_ERROR_INVALID_ARGUMENT, code 0x78000004` error
First, confirm that your NPU driver version meets the requirement, then check the runtime configuration based on your device, and note the difference between **Command Prompt** and **Windows PowerShell**.
Taking Arrow Lake as an example, you need to use `set IPEX_LLM_NPU_ARL=1` in **Command Prompt**, while in **Windows PowerShell** it is `$env:IPEX_LLM_NPU_ARL = "1"`.
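On the driver-version point, one way to read the installed NPU driver version without opening Device Manager is sketched below. It assumes the NPU is listed under a name containing "AI Boost" (how Intel NPUs commonly appear on Windows); adjust the filter if your device shows up differently:

```powershell
# List NPU-related devices and their driver versions.
# Assumption: the Intel NPU appears as "Intel(R) AI Boost"; widen the
# filter (e.g. "*NPU*") if nothing matches on your machine.
Get-CimInstance Win32_PnPSignedDriver |
    Where-Object { $_.DeviceName -like "*AI Boost*" } |
    Select-Object DeviceName, DriverVersion
```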