parent 28f72123bd
commit f0b600da77
2 changed files with 4 additions and 4 deletions
@@ -12,9 +12,9 @@
 > For installation on Intel Arc B-Series GPU (such as **B580**), please refer to this [guide](./bmg_quickstart.md).
 
 > [!NOTE]
-> Our latest version is consistent with [d7cfe1f](https://github.com/ggml-org/llama.cpp/commit/d7cfe1ffe0f435d0048a6058d529daf76e072d9c) of llama.cpp.
+> Our latest version is consistent with [4ad2436](https://github.com/ggml-org/llama.cpp/commit/4ad2436) of llama.cpp.
 >
-> `ipex-llm[cpp]==2.2.0b20250320` is consistent with [ba1cb19](https://github.com/ggml-org/llama.cpp/commit/ba1cb19cdd0d92e012e0f6e009e0620f854b6afd) of llama.cpp.
+> `ipex-llm[cpp]==2.2.0b20250629` is consistent with [d7cfe1f](https://github.com/ggml-org/llama.cpp/commit/d7cfe1ffe0f435d0048a6058d529daf76e072d9c) of llama.cpp.
 
 See the demo of running LLaMA2-7B on Intel Arc GPU below.
 
@@ -12,9 +12,9 @@
 > For installation on an Intel Arc B-Series GPU (e.g. **B580**), please refer to this [guide](./bmg_quickstart.md).
 
 > [!NOTE]
-> The latest version of `ipex-llm[cpp]` is consistent with commit [d7cfe1f](https://github.com/ggml-org/llama.cpp/commit/d7cfe1ffe0f435d0048a6058d529daf76e072d9c) of the official llama.cpp.
+> The latest version of `ipex-llm[cpp]` is consistent with commit [4ad2436](https://github.com/ggml-org/llama.cpp/commit/4ad2436) of the official llama.cpp.
 >
-> `ipex-llm[cpp]==2.2.0b20250320` is consistent with commit [ba1cb19](https://github.com/ggml-org/llama.cpp/commit/ba1cb19cdd0d92e012e0f6e009e0620f854b6afd) of the official llama.cpp.
+> `ipex-llm[cpp]==2.2.0b20250629` is consistent with commit [d7cfe1f](https://github.com/ggml-org/llama.cpp/commit/d7cfe1ffe0f435d0048a6058d529daf76e072d9c) of the official llama.cpp.
 
 Below is a demo of running LLaMA2-7B on an Intel Arc GPU.
 
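The notes updated above pin a dated `ipex-llm[cpp]` nightly build to a specific llama.cpp commit. A minimal sketch of installing the pinned build referenced in this change, assuming the pip-based install flow from the ipex-llm quickstart (the conda environment name and Python version here are illustrative, not part of this change):

```bash
# Illustrative only: set up an environment and install the ipex-llm[cpp]
# nightly build cited in the updated note. Adjust the env name / Python
# version to match your own setup and the quickstart you are following.
conda create -n llm-cpp python=3.11 -y
conda activate llm-cpp
pip install --pre --upgrade "ipex-llm[cpp]==2.2.0b20250629"
```

Pinning the exact dated build is what keeps the installed llama.cpp binaries consistent with the upstream commit cited in the note; installing without a version pin pulls the latest nightly, which tracks the newer commit referenced above.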