From 460bc96d3263f043f0df35c87e2731995db42f3f Mon Sep 17 00:00:00 2001
From: Ruonan Wang
Date: Tue, 27 Aug 2024 06:21:44 -0700
Subject: [PATCH] update version of llama.cpp / ollama (#11930)

* update version

* fix version
---
 docs/mddocs/Quickstart/llama_cpp_quickstart.md | 6 ++----
 docs/mddocs/Quickstart/ollama_quickstart.md    | 4 ++--
 2 files changed, 4 insertions(+), 6 deletions(-)

diff --git a/docs/mddocs/Quickstart/llama_cpp_quickstart.md b/docs/mddocs/Quickstart/llama_cpp_quickstart.md
index a2b0c96b..a08dbe0a 100644
--- a/docs/mddocs/Quickstart/llama_cpp_quickstart.md
+++ b/docs/mddocs/Quickstart/llama_cpp_quickstart.md
@@ -14,9 +14,9 @@ See the demo of running LLaMA2-7B on Intel Arc GPU below.
 
 > [!NOTE]
-> `ipex-llm[cpp]==2.5.0b20240527` is consistent with [c780e75](https://github.com/ggerganov/llama.cpp/commit/c780e75305dba1f67691a8dc0e8bc8425838a452) of llama.cpp.
+> `ipex-llm[cpp]==2.2.0b20240826` is consistent with [62bfef5](https://github.com/ggerganov/llama.cpp/commit/62bfef5194d5582486d62da3db59bf44981b7912) of llama.cpp.
 >
-> Our latest version is consistent with [62bfef5](https://github.com/ggerganov/llama.cpp/commit/62bfef5194d5582486d62da3db59bf44981b7912) of llama.cpp.
+> Our latest version is consistent with [a1631e5](https://github.com/ggerganov/llama.cpp/commit/a1631e53f6763e17da522ba219b030d8932900bd) of llama.cpp.
 
 ## Table of Contents
 - [Prerequisites](./llama_cpp_quickstart.md#0-prerequisites)
@@ -25,8 +25,6 @@ See the demo of running LLaMA2-7B on Intel Arc GPU below.
 - [Example: Running community GGUF models with IPEX-LLM](./llama_cpp_quickstart.md#3-example-running-community-gguf-models-with-ipex-llm)
 - [Troubleshooting](./llama_cpp_quickstart.md#troubleshooting)
 
-
-
 ## Quick Start
 This quickstart guide walks you through installing and running `llama.cpp` with `ipex-llm`.
 
diff --git a/docs/mddocs/Quickstart/ollama_quickstart.md b/docs/mddocs/Quickstart/ollama_quickstart.md
index e32d04bb..221873a9 100644
--- a/docs/mddocs/Quickstart/ollama_quickstart.md
+++ b/docs/mddocs/Quickstart/ollama_quickstart.md
@@ -14,9 +14,9 @@ See the demo of running LLaMA2-7B on Intel Arc GPU below.
 
 > [!NOTE]
-> `ipex-llm[cpp]==2.5.0b20240527` is consistent with [v0.1.34](https://github.com/ollama/ollama/releases/tag/v0.1.34) of ollama.
+> `ipex-llm[cpp]==2.2.0b20240826` is consistent with [v0.1.39](https://github.com/ollama/ollama/releases/tag/v0.1.39) of ollama.
 >
-> Our current version is consistent with [v0.1.39](https://github.com/ollama/ollama/releases/tag/v0.1.39) of ollama.
+> Our current version is consistent with [v0.3.6](https://github.com/ollama/ollama/releases/tag/v0.3.6) of ollama.
 
 ## Table of Contents
 - [Install IPEX-LLM for Ollama](./ollama_quickstart.md#1-install-ipex-llm-for-ollama)