fix readme for npu cpp examples and llama.cpp (#12505)
* fix cpp readme
parent 727f29968c
commit 5e1416c9aa

3 changed files with 6 additions and 6 deletions

@@ -342,9 +342,9 @@ If your machine has multi GPUs, `llama.cpp` will default use all GPUs which may
 Also, you can use `ONEAPI_DEVICE_SELECTOR=level_zero:[gpu_id]` to select the device before executing your command; more details can be found [here](../Overview/KeyFeatures/multi_gpus_selection.md#2-oneapi-device-selector).

 #### 9. Program crash with Chinese prompt
-If you run the llama.cpp program on Windows and find that your program crashes or outputs abnormally when accepting Chinese prompts, you can open `Region->Administrative->Change System locale..`, check the `Beta: Use Unicode UTF-8 for worldwide language support` option and then restart your computer.
+If you run the llama.cpp program on Windows and find that your program crashes or outputs abnormally when accepting Chinese prompts, you can search for `region` in the Windows search bar, go to `Region->Administrative->Change System locale..`, tick the `Beta: Use Unicode UTF-8 for worldwide language support` option, and then restart your computer.

-For detailed instructions on how to do this, see [this issue](https://github.com/intel-analytics/ipex-llm/issues/10989#issuecomment-2105600469).
+For detailed instructions on how to do this, see [this issue](https://github.com/intel-analytics/ipex-llm/issues/10989#issuecomment-2105598660).

 #### 10. sycl7.dll not found error
 If you meet `System Error: sycl7.dll not found` on Windows, or a similar error on Linux, please check:
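
To make the `ONEAPI_DEVICE_SELECTOR` hint above concrete, here is a minimal sketch; the binary name, model file, and prompt are placeholders of my own rather than commands taken from the quickstart:

```cmd
:: Hypothetical example: pin llama.cpp to the first Level Zero GPU.
:: main.exe and model.gguf are placeholders, not names from the diff.
set ONEAPI_DEVICE_SELECTOR=level_zero:0
main.exe -m model.gguf -p "Once upon a time" -n 32
```

Clearing the variable again (`set ONEAPI_DEVICE_SELECTOR=`) restores the default behavior of using all GPUs.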
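The Chinese-prompt fix above is GUI-only; as a sanity check of my own (not a step from the README), a fresh Command Prompt can confirm the system code page after the reboot:

```cmd
:: Assumption: once "Beta: Use Unicode UTF-8 for worldwide language support"
:: is ticked and the machine restarted, chcp should report code page 65001.
chcp
```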
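The concrete checklist for the `sycl7.dll` error is elided from this hunk; a common first check, assuming the default oneAPI install path, is that the oneAPI runtime libraries are on `PATH`:

```cmd
:: Assumption: oneAPI installed at its default location. setvars.bat puts the
:: runtime on PATH; "where" then confirms sycl7.dll is discoverable.
call "C:\Program Files (x86)\Intel\oneAPI\setvars.bat"
where sycl7.dll
```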

@@ -343,9 +343,9 @@ Log end
 In addition, you can also use `ONEAPI_DEVICE_SELECTOR=level_zero:[gpu_id]` before executing your command to specify which GPU device to use; for more details, see [this guide](../Overview/KeyFeatures/multi_gpus_selection.md#2-oneapi-device-selector).

 #### 9. Program crashes with Chinese prompts
-If you run the llama.cpp program on Windows and find that it crashes or produces abnormal output when given Chinese prompts, you can go to "Region -> Administrative -> Change System locale", tick the "Beta: Use Unicode UTF-8 for worldwide language support" option, and then restart your computer.
+If you run the llama.cpp program on Windows and find that it crashes or produces abnormal output when given Chinese prompts, you can search for "Region settings" in the Windows search bar, go to "Region -> Administrative -> Change System locale", tick the "Beta: Use Unicode UTF-8 for worldwide language support" option, and then restart your computer.

-For detailed instructions on how to do this, see [this issue](https://github.com/intel-analytics/ipex-llm/issues/10989#issuecomment-2105600469).
+For detailed instructions on how to do this, see [this issue](https://github.com/intel-analytics/ipex-llm/issues/10989#issuecomment-2105598660).

 #### 10. sycl7.dll not found error
 If you encounter an error like `System Error: sycl7.dll not found` on Linux or Windows, please perform the following checks for your operating system:

@@ -1,4 +1,4 @@
-# C++ Example of running LLM on Intel NPU using IPEX-LLM (Experimental)
+# C++ Example of running LLM on Intel NPU using IPEX-LLM
 In this directory, you will find a C++ example of how to run LLM models on Intel NPUs using IPEX-LLM (leveraging the *Intel NPU Acceleration Library*). See the table below for verified models.

 ## Verified Models

@@ -121,7 +121,7 @@ Decode 46 tokens cost xxxx ms (avg xx.xx ms each token).
 ### Troubleshooting

 #### Program crash with Chinese prompt
-If you run the CPP examples on Windows and find that your program raises the error below when accepting Chinese prompts, you can open `Region->Administrative->Change System locale..`, check the `Beta: Use Unicode UTF-8 for worldwide language support` option and then restart your computer.
+If you run the CPP examples on Windows and find that your program raises the error below when accepting Chinese prompts, you can search for `region` in the Windows search bar, go to `Region->Administrative->Change System locale..`, tick the `Beta: Use Unicode UTF-8 for worldwide language support` option, and then restart your computer.
 ```log
 thread '<unnamed>' panicked at src\lib.rs:151:91:
 called `Result::unwrap()` on an `Err` value: Utf8Error { valid_up_to: 77, error_len: Some(1) }
 ```