diff --git a/docs/readthedocs/source/doc/LLM/Quickstart/llama_cpp_quickstart.md b/docs/readthedocs/source/doc/LLM/Quickstart/llama_cpp_quickstart.md
index 5b681bab..95035f58 100644
--- a/docs/readthedocs/source/doc/LLM/Quickstart/llama_cpp_quickstart.md
+++ b/docs/readthedocs/source/doc/LLM/Quickstart/llama_cpp_quickstart.md
@@ -6,6 +6,12 @@ See the demo of running LLaMA2-7B on Intel Arc GPU below.
+```eval_rst
+.. note::
+
+ Our current version is consistent with `c780e75 <https://github.com/ggerganov/llama.cpp/commit/c780e75>`_ of llama.cpp.
+```
+
## Quick Start
This quickstart guide walks you through installing and running `llama.cpp` with `ipex-llm`.
diff --git a/docs/readthedocs/source/doc/LLM/Quickstart/ollama_quickstart.md b/docs/readthedocs/source/doc/LLM/Quickstart/ollama_quickstart.md
index 96d63ed8..fdda3950 100644
--- a/docs/readthedocs/source/doc/LLM/Quickstart/ollama_quickstart.md
+++ b/docs/readthedocs/source/doc/LLM/Quickstart/ollama_quickstart.md
@@ -6,6 +6,12 @@ See the demo of running LLaMA2-7B on Intel Arc GPU below.
+```eval_rst
+.. note::
+
+ Our current version is consistent with `v0.1.34 <https://github.com/ollama/ollama/releases/tag/v0.1.34>`_ of Ollama.
+```
+
## Quickstart
### 1 Install IPEX-LLM for Ollama