Update llama_cpp_quickstart.md (#13145)
Signed-off-by: Pranav Singh <pranav.singh@intel.com>
commit bd45bf7584
parent bd71739e64

1 changed file with 1 addition and 1 deletion
@@ -3,7 +3,7 @@
<b>< English</b> | <a href='./llama_cpp_quickstart.zh-CN.md'>中文</a> >
</p>

-[ggerganov/llama.cpp](https://github.com/ggerganov/llama.cpp) prvoides fast LLM inference in pure C++ across a variety of hardware; you can now use the C++ interface of [`ipex-llm`](https://github.com/intel-analytics/ipex-llm) as an accelerated backend for `llama.cpp` running on Intel **GPU** *(e.g., local PC with iGPU, discrete GPU such as Arc, Flex and Max)*.
+[ggerganov/llama.cpp](https://github.com/ggerganov/llama.cpp) provides fast LLM inference in pure C++ across a variety of hardware; you can now use the C++ interface of [`ipex-llm`](https://github.com/intel-analytics/ipex-llm) as an accelerated backend for `llama.cpp` running on Intel **GPU** *(e.g., local PC with iGPU, discrete GPU such as Arc, Flex and Max)*.

> [!Important]
> You may use [llama.cpp Portable Zip](./llamacpp_portable_zip_gpu_quickstart.md) to directly run llama.cpp on Intel GPU with ipex-llm (***without the need of manual installations***).
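For reference, the workflow the edited paragraph describes looks roughly like the run below. This is a minimal sketch, not part of the commit: it assumes the ipex-llm build of `llama.cpp` is already set up per the quickstart, and the model path is a hypothetical placeholder.

```bash
# Minimal sketch: a short completion with llama.cpp on an Intel GPU.
# The GGUF model path is a placeholder; -ngl 99 offloads all layers
# to the GPU, and -n caps the number of generated tokens.
./llama-cli -m ./models/llama-2-7b-chat.Q4_0.gguf \
  -p "Once upon a time" -n 32 -ngl 99
```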