From f7a2bd21cf447e08f7317c29cc2191be06ac7754 Mon Sep 17 00:00:00 2001
From: SONG Ge <38711238+sgwhat@users.noreply.github.com>
Date: Wed, 18 Dec 2024 17:33:20 +0800
Subject: [PATCH] Update ollama and llama.cpp readme (#12574)

---
 README.md                                            | 2 +-
 docs/mddocs/Quickstart/llama_cpp_quickstart.md       | 4 ++--
 docs/mddocs/Quickstart/llama_cpp_quickstart.zh-CN.md | 4 ++--
 docs/mddocs/Quickstart/ollama_quickstart.md          | 2 --
 docs/mddocs/Quickstart/ollama_quickstart.zh-CN.md    | 2 --
 5 files changed, 5 insertions(+), 9 deletions(-)

diff --git a/README.md b/README.md
index fb03a2d2..4fc6c6b6 100644
--- a/README.md
+++ b/README.md
@@ -15,7 +15,7 @@
 > - ***70+ models** have been optimized/verified on `ipex-llm` (e.g., Llama, Phi, Mistral, Mixtral, Whisper, Qwen, MiniCPM, Qwen-VL, MiniCPM-V and more), with state-of-art **LLM optimizations**, **XPU acceleration** and **low-bit (FP8/FP6/FP4/INT4) support**; see the complete list [here](#verified-models).*

 ## Latest Update 🔥
-- [2024/12] We added support for running [Ollama 0.40.6](docs/mddocs/Quickstart/ollama_quickstart.md) on Intel GPU.
+- [2024/12] We added support for running [Ollama 0.4.6](docs/mddocs/Quickstart/ollama_quickstart.md) on Intel GPU.
 - [2024/12] We added both ***Python*** and ***C++*** support for Intel Core Ultra [NPU](docs/mddocs/Quickstart/npu_quickstart.md) (including 100H, 200V and 200K series).
 - [2024/11] We added support for running [vLLM 0.6.2](docs/mddocs/DockerGuides/vllm_docker_quickstart.md) on Intel Arc GPUs.

diff --git a/docs/mddocs/Quickstart/llama_cpp_quickstart.md b/docs/mddocs/Quickstart/llama_cpp_quickstart.md
index a54bb1c7..52c9a702 100644
--- a/docs/mddocs/Quickstart/llama_cpp_quickstart.md
+++ b/docs/mddocs/Quickstart/llama_cpp_quickstart.md
@@ -17,9 +17,9 @@ See the demo of running LLaMA2-7B on Intel Arc GPU below.

 > [!NOTE]
-> `ipex-llm[cpp]==2.2.0b20240826` is consistent with [62bfef5](https://github.com/ggerganov/llama.cpp/commit/62bfef5194d5582486d62da3db59bf44981b7912) of llama.cpp.
+> `ipex-llm[cpp]==2.2.0b20241204` is consistent with [a1631e5](https://github.com/ggerganov/llama.cpp/commit/a1631e53f6763e17da522ba219b030d8932900bd) of llama.cpp.
 >
-> Our latest version is consistent with [a1631e5](https://github.com/ggerganov/llama.cpp/commit/a1631e53f6763e17da522ba219b030d8932900bd) of llama.cpp.
+> Our latest version is consistent with [3f1ae2e](https://github.com/ggerganov/llama.cpp/commit/3f1ae2e32cde00c39b96be6d01c2997c29bae555) of llama.cpp.

 > [!NOTE]
 > Starting from `ipex-llm[cpp]==2.2.0b20240912`, oneAPI dependency of `ipex-llm[cpp]` on Windows will switch from `2024.0.0` to `2024.2.1`.

diff --git a/docs/mddocs/Quickstart/llama_cpp_quickstart.zh-CN.md b/docs/mddocs/Quickstart/llama_cpp_quickstart.zh-CN.md
index 4584355d..9eacdd84 100644
--- a/docs/mddocs/Quickstart/llama_cpp_quickstart.zh-CN.md
+++ b/docs/mddocs/Quickstart/llama_cpp_quickstart.zh-CN.md
@@ -17,9 +17,9 @@

 > [!NOTE]
-> `ipex-llm[cpp]==2.2.0b20240826` 版本与官方 llama.cpp 版本 [62bfef5](https://github.com/ggerganov/llama.cpp/commit/62bfef5194d5582486d62da3db59bf44981b7912) 一致。
+> `ipex-llm[cpp]==2.2.0b20241204` 版本与官方 llama.cpp 版本 [a1631e5](https://github.com/ggerganov/llama.cpp/commit/a1631e53f6763e17da522ba219b030d8932900bd) 一致。
 >
-> `ipex-llm[cpp]` 的最新版本与官方 llama.cpp 版本 [a1631e5](https://github.com/ggerganov/llama.cpp/commit/a1631e53f6763e17da522ba219b030d8932900bd) 一致。
+> `ipex-llm[cpp]` 的最新版本与官方 llama.cpp 版本 [3f1ae2e](https://github.com/ggerganov/llama.cpp/commit/3f1ae2e32cde00c39b96be6d01c2997c29bae555) 一致。

 > [!NOTE]
 > 从 `ipex-llm[cpp]==2.2.0b20240912` 版本开始,Windows 上 `ipex-llm[cpp]` 依赖的 oneAPI 版本已从 `2024.0.0` 更新到 `2024.2.1`。

diff --git a/docs/mddocs/Quickstart/ollama_quickstart.md b/docs/mddocs/Quickstart/ollama_quickstart.md
index 9cc09da9..39ccf84b 100644
--- a/docs/mddocs/Quickstart/ollama_quickstart.md
+++ b/docs/mddocs/Quickstart/ollama_quickstart.md
@@ -80,7 +80,6 @@ You may launch the Ollama service as below:
   export ZES_ENABLE_SYSMAN=1
   source /opt/intel/oneapi/setvars.sh
   export SYCL_CACHE_PERSISTENT=1
-  export LD_LIBRARY_PATH=.:$LD_LIBRARY_PATH
   # [optional] under most circumstances, the following environment variable may improve performance, but sometimes this may also cause performance degradation
   export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
   # [optional] if you want to run on single GPU, use below command to limit GPU may improve performance
@@ -179,7 +178,6 @@ Then you can create the model in Ollama by `ollama create example -f Modelfile`

   ```bash
   source /opt/intel/oneapi/setvars.sh
-  export LD_LIBRARY_PATH=.:$LD_LIBRARY_PATH
   export no_proxy=localhost,127.0.0.1
   ./ollama create example -f Modelfile
   ./ollama run example
diff --git a/docs/mddocs/Quickstart/ollama_quickstart.zh-CN.md b/docs/mddocs/Quickstart/ollama_quickstart.zh-CN.md
index bc84cf0f..d27a8de9 100644
--- a/docs/mddocs/Quickstart/ollama_quickstart.zh-CN.md
+++ b/docs/mddocs/Quickstart/ollama_quickstart.zh-CN.md
@@ -80,7 +80,6 @@ IPEX-LLM 现在已支持在 Linux 和 Windows 系统上运行 `Ollama`。
   export ZES_ENABLE_SYSMAN=1
   source /opt/intel/oneapi/setvars.sh
   export SYCL_CACHE_PERSISTENT=1
-  export LD_LIBRARY_PATH=.:$LD_LIBRARY_PATH
   # [optional] under most circumstances, the following environment variable may improve performance, but sometimes this may also cause performance degradation
   export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
   # [optional] if you want to run on single GPU, use below command to limit GPU may improve performance
@@ -176,7 +175,6 @@ PARAMETER num_predict 64
   ```bash
   export no_proxy=localhost,127.0.0.1
   source /opt/intel/oneapi/setvars.sh
-  export LD_LIBRARY_PATH=.:$LD_LIBRARY_PATH
   ./ollama create example -f Modelfile
   ./ollama run example
   ```
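For context, the two `ollama_quickstart` hunks only drop the `export LD_LIBRARY_PATH=.:$LD_LIBRARY_PATH` line; nothing replaces it. A minimal sketch of how the Linux launch sequence reads after this patch is applied is shown below. The `ONEAPI_DEVICE_SELECTOR` value and the final `./ollama serve` call are assumptions drawn from the surrounding quickstart text, not part of this diff.

```bash
# Environment setup per the updated ollama_quickstart.md; note there is
# no longer an LD_LIBRARY_PATH export in this block.
export ZES_ENABLE_SYSMAN=1
source /opt/intel/oneapi/setvars.sh
export SYCL_CACHE_PERSISTENT=1
# [optional] may improve performance, but can also cause degradation
export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
# [optional] limit execution to a single GPU (device index is an assumption)
export ONEAPI_DEVICE_SELECTOR=level_zero:0
# Launch the service (command assumed from the quickstart, not shown in this diff)
./ollama serve
```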