From 71ea539351a5e2c38d82b21c48486cd0ecc9e5bf Mon Sep 17 00:00:00 2001
From: Jinhe
Date: Thu, 7 Nov 2024 15:49:20 +0800
Subject: [PATCH] Add troubleshooting for ollama and llama.cpp (#12358)

* add ollama troubleshoot en
* zh ollama troubleshoot
* llamacpp trouble shoot
* llamacpp trouble shoot
* fix
* save gpu memory
---
 docs/mddocs/Quickstart/llama_cpp_quickstart.md       | 3 +++
 docs/mddocs/Quickstart/llama_cpp_quickstart.zh-CN.md | 3 +++
 docs/mddocs/Quickstart/ollama_quickstart.md          | 3 +++
 docs/mddocs/Quickstart/ollama_quickstart.zh-CN.md    | 3 +++
 4 files changed, 12 insertions(+)

diff --git a/docs/mddocs/Quickstart/llama_cpp_quickstart.md b/docs/mddocs/Quickstart/llama_cpp_quickstart.md
index b3295d65..9a85162e 100644
--- a/docs/mddocs/Quickstart/llama_cpp_quickstart.md
+++ b/docs/mddocs/Quickstart/llama_cpp_quickstart.md
@@ -366,3 +366,6 @@ On latest version of `ipex-llm`, you might come across `native API failed` error
 
 #### 15. `signal: bus error (core dumped)` error
 If you meet this error, please check your Linux kernel version first. You may encounter this issue on higher kernel versions (like kernel 6.15). You can also refer to [this issue](https://github.com/intel-analytics/ipex-llm/issues/10955) to see if it helps.
+
+#### 16. `backend buffer base cannot be NULL` error
+If you meet `ggml-backend.c:96: GGML_ASSERT(base != NULL && "backend buffer base cannot be NULL") failed`, adding the `-c xx` parameter during inference, for example `-c 1024`, resolves this problem.
\ No newline at end of file
diff --git a/docs/mddocs/Quickstart/llama_cpp_quickstart.zh-CN.md b/docs/mddocs/Quickstart/llama_cpp_quickstart.zh-CN.md
index 482b97b9..1cbc06a0 100644
--- a/docs/mddocs/Quickstart/llama_cpp_quickstart.zh-CN.md
+++ b/docs/mddocs/Quickstart/llama_cpp_quickstart.zh-CN.md
@@ -367,3 +367,6 @@ Log end
 
 #### 15. `signal: bus error (core dumped)` 错误
 如果你遇到此错误,请先检查你的 Linux 内核版本。较高版本的内核(例如 6.15)可能会导致此问题。你也可以参考[此问题](https://github.com/intel-analytics/ipex-llm/issues/10955)来查看是否有帮助。
+
+#### 16. `backend buffer base cannot be NULL` 错误
+如果你遇到 `ggml-backend.c:96: GGML_ASSERT(base != NULL && "backend buffer base cannot be NULL") failed` 错误,在推理时传入参数 `-c xx`(如 `-c 1024`)即可解决。
\ No newline at end of file
diff --git a/docs/mddocs/Quickstart/ollama_quickstart.md b/docs/mddocs/Quickstart/ollama_quickstart.md
index 62ffeb0d..f3f44b23 100644
--- a/docs/mddocs/Quickstart/ollama_quickstart.md
+++ b/docs/mddocs/Quickstart/ollama_quickstart.md
@@ -223,3 +223,6 @@ If you find ollama hang when multiple different questions is asked or context is
 
 #### 7. `signal: bus error (core dumped)` error
 If you meet this error, please check your Linux kernel version first. You may encounter this issue on higher kernel versions (like kernel 6.15). You can also refer to [this issue](https://github.com/intel-analytics/ipex-llm/issues/10955) to see if it helps.
+
+#### 8. Save GPU memory by specifying `OLLAMA_NUM_PARALLEL=1`
+If you have limited GPU memory, use `set OLLAMA_NUM_PARALLEL=1` on Windows or `export OLLAMA_NUM_PARALLEL=1` on Linux before `ollama serve` to reduce GPU memory usage. The default `OLLAMA_NUM_PARALLEL` in upstream ollama is 4.
diff --git a/docs/mddocs/Quickstart/ollama_quickstart.zh-CN.md b/docs/mddocs/Quickstart/ollama_quickstart.zh-CN.md
index aaede386..62bc0746 100644
--- a/docs/mddocs/Quickstart/ollama_quickstart.zh-CN.md
+++ b/docs/mddocs/Quickstart/ollama_quickstart.zh-CN.md
@@ -218,3 +218,6 @@ Ollama 默认每 5 分钟从 GPU 内存卸载一次模型。针对 ollama 的最
 
 #### 7. `signal: bus error (core dumped)` 错误
 如果你遇到此错误,请先检查你的 Linux 内核版本。较高版本的内核(例如 6.15)可能会导致此问题。你也可以参考[此问题](https://github.com/intel-analytics/ipex-llm/issues/10955)来查看是否有帮助。
+
+#### 8. 通过设置 `OLLAMA_NUM_PARALLEL=1` 节省 GPU 内存
+如果你的 GPU 内存较小,可以在运行 `ollama serve` 前运行 `set OLLAMA_NUM_PARALLEL=1`(Windows)或 `export OLLAMA_NUM_PARALLEL=1`(Linux)来减少 GPU 内存使用。Ollama 默认的 `OLLAMA_NUM_PARALLEL` 为 4。
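The two workarounds this patch documents can be sketched as a short shell session. This is a hedged illustration, not part of the patch: the `llama-cli` binary name and `model.gguf` path are placeholders, so the llama.cpp invocation is shown commented out.

```shell
# ollama workaround: cap parallel request slots before starting the server.
# Upstream ollama defaults OLLAMA_NUM_PARALLEL to 4; 1 reduces GPU memory use.
export OLLAMA_NUM_PARALLEL=1
echo "OLLAMA_NUM_PARALLEL=$OLLAMA_NUM_PARALLEL"
# ollama serve

# llama.cpp workaround: pass an explicit context size with -c (e.g. 1024) to
# avoid the "backend buffer base cannot be NULL" GGML assertion.
# (binary and model names below are hypothetical)
# ./llama-cli -m model.gguf -c 1024 -p "Hello"
```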