From df8df751c4eadf1886f33408d505307484aaf331 Mon Sep 17 00:00:00 2001
From: Guancheng Fu <110874468+gc-fu@users.noreply.github.com>
Date: Mon, 9 Oct 2023 09:56:09 +0800
Subject: [PATCH] Modify readme for bigdl-llm-serving-cpu (#9105)

---
 docker/llm/serving/cpu/kubernetes/README.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/docker/llm/serving/cpu/kubernetes/README.md b/docker/llm/serving/cpu/kubernetes/README.md
index b0027f12..d5394d29 100644
--- a/docker/llm/serving/cpu/kubernetes/README.md
+++ b/docker/llm/serving/cpu/kubernetes/README.md
@@ -15,6 +15,8 @@ After downloading the model, please change name from `vicuna-7b-v1.5` to `vicuna
 
 You can download the model from [here](https://huggingface.co/lmsys/vicuna-7b-v1.5).
 
+For ChatGLM models, users do not need to add `bigdl` to the model path; the `BigDL-LLM` backend is already used for these models.
+
 ### Kubernetes config
 
 We recommend to setup your kubernetes cluster before deployment. Mostly importantly, please set `cpu-management-policy` to `static` by using this [tutorial](https://kubernetes.io/docs/tasks/administer-cluster/cpu-management-policies/). Also, it would be great to also set the `topology management policy` to `single-numa-node`.