diff --git a/README.md b/README.md
index a1ed4078..ceb89afb 100644
--- a/README.md
+++ b/README.md
@@ -94,7 +94,7 @@ See demos of running local LLMs *on Intel Core Ultra iGPU, Intel Core Ultra NPU,
 TextGeneration-WebUI (Llama3-8B, FP8)
- llama.cpp (DeepSeek-R1-Distill-Qwen-32B, Q4_K)
+ llama.cpp (DeepSeek-R1-Distill-Qwen-32B, Q4_K)
diff --git a/README.zh-CN.md b/README.zh-CN.md
index 0b35e8d6..44e09ae5 100644
--- a/README.zh-CN.md
+++ b/README.zh-CN.md
@@ -85,7 +85,7 @@
- Ollama (Mistral-7B, Q4_K)
+ Ollama (Mistral-7B, Q4_K)
HuggingFace (Llama3.2-3B, SYM_INT4)
@@ -94,7 +94,7 @@
TextGeneration-WebUI (Llama3-8B, FP8)
- llama.cpp (DeepSeek-R1-Distill-Qwen-32B, Q4_K)
+ llama.cpp (DeepSeek-R1-Distill-Qwen-32B, Q4_K)