Update ollama and llama.cpp readme (#12574)

SONG Ge 2024-12-18 17:33:20 +08:00 committed by GitHub
parent e2ae42929a
commit f7a2bd21cf
5 changed files with 5 additions and 9 deletions


@@ -15,7 +15,7 @@
> - ***70+ models** have been optimized/verified on `ipex-llm` (e.g., Llama, Phi, Mistral, Mixtral, Whisper, Qwen, MiniCPM, Qwen-VL, MiniCPM-V and more), with state-of-art **LLM optimizations**, **XPU acceleration** and **low-bit (FP8/FP6/FP4/INT4) support**; see the complete list [here](#verified-models).*
## Latest Update 🔥
- - [2024/12] We added support for running [Ollama 0.40.6](docs/mddocs/Quickstart/ollama_quickstart.md) on Intel GPU.
+ - [2024/12] We added support for running [Ollama 0.4.6](docs/mddocs/Quickstart/ollama_quickstart.md) on Intel GPU.
- [2024/12] We added both ***Python*** and ***C++*** support for Intel Core Ultra [NPU](docs/mddocs/Quickstart/npu_quickstart.md) (including 100H, 200V and 200K series).
- [2024/11] We added support for running [vLLM 0.6.2](docs/mddocs/DockerGuides/vllm_docker_quickstart.md) on Intel Arc GPUs.


@@ -17,9 +17,9 @@ See the demo of running LLaMA2-7B on Intel Arc GPU below.
</table>
> [!NOTE]
- > `ipex-llm[cpp]==2.2.0b20240826` is consistent with [62bfef5](https://github.com/ggerganov/llama.cpp/commit/62bfef5194d5582486d62da3db59bf44981b7912) of llama.cpp.
+ > `ipex-llm[cpp]==2.2.0b20241204` is consistent with [a1631e5](https://github.com/ggerganov/llama.cpp/commit/a1631e53f6763e17da522ba219b030d8932900bd) of llama.cpp.
>
- > Our latest version is consistent with [a1631e5](https://github.com/ggerganov/llama.cpp/commit/a1631e53f6763e17da522ba219b030d8932900bd) of llama.cpp.
+ > Our latest version is consistent with [3f1ae2e](https://github.com/ggerganov/llama.cpp/commit/3f1ae2e32cde00c39b96be6d01c2997c29bae555) of llama.cpp.
> [!NOTE]
> Starting from `ipex-llm[cpp]==2.2.0b20240912`, the oneAPI dependency of `ipex-llm[cpp]` on Windows switches from `2024.0.0` to `2024.2.1`.
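As a concrete companion to these version notes, a minimal install sketch, assuming the nightly `ipex-llm[cpp]` wheels are installed with pip as the project's quickstarts describe, and assuming the Linux oneAPI path used later in this diff; the pinned build number is the one quoted above:

```bash
# Track the latest nightly build of the llama.cpp/Ollama backend
pip install --pre --upgrade "ipex-llm[cpp]"

# Or pin the build quoted above, which follows llama.cpp commit a1631e5
pip install "ipex-llm[cpp]==2.2.0b20241204"

# Load the oneAPI environment and confirm the Intel GPU shows up as a SYCL device
source /opt/intel/oneapi/setvars.sh
sycl-ls
```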


@@ -17,9 +17,9 @@
</table>
> [!NOTE]
- > `ipex-llm[cpp]==2.2.0b20240826` is consistent with the official llama.cpp commit [62bfef5](https://github.com/ggerganov/llama.cpp/commit/62bfef5194d5582486d62da3db59bf44981b7912).
+ > `ipex-llm[cpp]==2.2.0b20241204` is consistent with the official llama.cpp commit [a1631e5](https://github.com/ggerganov/llama.cpp/commit/a1631e53f6763e17da522ba219b030d8932900bd).
>
- > The latest version of `ipex-llm[cpp]` is consistent with the official llama.cpp commit [a1631e5](https://github.com/ggerganov/llama.cpp/commit/a1631e53f6763e17da522ba219b030d8932900bd).
+ > The latest version of `ipex-llm[cpp]` is consistent with the official llama.cpp commit [3f1ae2e](https://github.com/ggerganov/llama.cpp/commit/3f1ae2e32cde00c39b96be6d01c2997c29bae555).
> [!NOTE]
> Starting from `ipex-llm[cpp]==2.2.0b20240912`, the oneAPI version that `ipex-llm[cpp]` depends on for Windows has been updated from `2024.0.0` to `2024.2.1`.


@@ -80,7 +80,6 @@ You may launch the Ollama service as below:
export ZES_ENABLE_SYSMAN=1
source /opt/intel/oneapi/setvars.sh
export SYCL_CACHE_PERSISTENT=1
- export LD_LIBRARY_PATH=.:$LD_LIBRARY_PATH
# [optional] under most circumstances, the following environment variable may improve performance, but sometimes this may also cause performance degradation
export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
# [optional] if you want to run on a single GPU, the command below limits execution to one GPU, which may improve performance
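To round out the truncated hunk above, a minimal sketch of actually starting the service once those variables are exported; the device-selector line is an illustrative assumption (the quickstart's exact single-GPU command is not shown in this hunk), and `./ollama` is the binary the surrounding commands already assume:

```bash
# [assumption] limit execution to one GPU by exposing only the first Level Zero device
export ONEAPI_DEVICE_SELECTOR=level_zero:0

# start the Ollama service (listens on 127.0.0.1:11434 by default)
./ollama serve
```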
@@ -179,7 +178,6 @@ Then you can create the model in Ollama by `ollama create example -f Modelfile`
```bash
source /opt/intel/oneapi/setvars.sh
- export LD_LIBRARY_PATH=.:$LD_LIBRARY_PATH
export no_proxy=localhost,127.0.0.1
./ollama create example -f Modelfile
./ollama run example
```
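For reference, a minimal sketch of the `Modelfile` that `ollama create example -f Modelfile` expects; the `FROM` path is a hypothetical placeholder, while `PARAMETER num_predict 64` mirrors the parameter visible in the hunk further down:

```bash
# Hypothetical Modelfile: point FROM at your own GGUF file, then create the model from it
cat > Modelfile <<'EOF'
FROM ./my-model.Q4_K_M.gguf
PARAMETER num_predict 64
EOF
```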


@@ -80,7 +80,6 @@ IPEX-LLM now supports running `Ollama` on both Linux and Windows systems.
export ZES_ENABLE_SYSMAN=1
source /opt/intel/oneapi/setvars.sh
export SYCL_CACHE_PERSISTENT=1
- export LD_LIBRARY_PATH=.:$LD_LIBRARY_PATH
# [optional] under most circumstances, the following environment variable may improve performance, but sometimes this may also cause performance degradation
export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
# [optional] if you want to run on a single GPU, the command below limits execution to one GPU, which may improve performance
@@ -176,7 +175,6 @@ PARAMETER num_predict 64
```bash
export no_proxy=localhost,127.0.0.1
source /opt/intel/oneapi/setvars.sh
- export LD_LIBRARY_PATH=.:$LD_LIBRARY_PATH
./ollama create example -f Modelfile
./ollama run example
```