From ef585d33600e4bf0b3dee351a7fee3a948632db6 Mon Sep 17 00:00:00 2001
From: "Xu, Shuo" <100334393+ATMxsp01@users.noreply.github.com>
Date: Thu, 26 Dec 2024 10:52:47 +0800
Subject: [PATCH] Polish Readme for ModelScope-related examples (#12603)

---
 python/llm/example/GPU/HuggingFace/LLM/chatglm3/README.md   | 4 ++--
 python/llm/example/GPU/HuggingFace/LLM/codegeex2/README.md  | 2 +-
 python/llm/example/GPU/HuggingFace/LLM/glm4/README.md       | 2 +-
 python/llm/example/GPU/HuggingFace/LLM/minicpm/README.md    | 2 +-
 python/llm/example/GPU/HuggingFace/LLM/minicpm3/README.md   | 2 +-
 .../GPU/HuggingFace/Multimodal/MiniCPM-V-2_6/README.md      | 2 +-
 .../llm/example/GPU/HuggingFace/Multimodal/glm-4v/README.md | 2 +-
 7 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/python/llm/example/GPU/HuggingFace/LLM/chatglm3/README.md b/python/llm/example/GPU/HuggingFace/LLM/chatglm3/README.md
index c7261a15..b28f60dd 100644
--- a/python/llm/example/GPU/HuggingFace/LLM/chatglm3/README.md
+++ b/python/llm/example/GPU/HuggingFace/LLM/chatglm3/README.md
@@ -108,7 +108,7 @@ python ./generate.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --prompt PROM
 ```
 
 Arguments info:
-- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the **Hugging Face** or **ModelScope** repo id for the ChatGLM3 model to be downloaded, or the path to the checkpoint folder. It is default to be `'THUDM/chatglm3-6b'` for **Hugging Face** or `'ZhipuAI/chatglm3-6b'` for **ModelScope**.
+- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the **Hugging Face** (e.g. `THUDM/chatglm3-6b`) or **ModelScope** (e.g. `ZhipuAI/chatglm3-6b`) repo id for the ChatGLM3 model to be downloaded, or the path to the checkpoint folder. It is default to be `'THUDM/chatglm3-6b'` for **Hugging Face** or `'ZhipuAI/chatglm3-6b'` for **ModelScope**.
 - `--prompt PROMPT`: argument defining the prompt to be infered (with integrated prompt format for chat). It is default to be `'AI是什么?'`.
 - `--n-predict N_PREDICT`: argument defining the max number of tokens to predict. It is default to be `32`.
 - `--modelscope`: using **ModelScope** as model hub instead of **Hugging Face**.
@@ -162,7 +162,7 @@ python ./streamchat.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --question
 ```
 
 Arguments info:
-- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the **Hugging Face** or **ModelScope** repo id for the ChatGLM3 model to be downloaded, or the path to the checkpoint folder. It is default to be `'THUDM/chatglm3-6b'` for **Hugging Face** or `'ZhipuAI/chatglm3-6b'` for **ModelScope**.
+- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the **Hugging Face** (e.g. `THUDM/chatglm3-6b`) or **ModelScope** (e.g. `ZhipuAI/chatglm3-6b`) repo id for the ChatGLM3 model to be downloaded, or the path to the checkpoint folder. It is default to be `'THUDM/chatglm3-6b'` for **Hugging Face** or `'ZhipuAI/chatglm3-6b'` for **ModelScope**.
 - `--question QUESTION`: argument defining the question to ask. It is default to be `"晚上睡不着应该怎么办"`.
 - `--disable-stream`: argument defining whether to stream chat. If include `--disable-stream` when running the script, the stream chat is disabled and `chat()` API is used.
 - `--modelscope`: using **ModelScope** as model hub instead of **Hugging Face**.
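As an illustration of the arguments documented in the hunks above, a ModelScope run of the ChatGLM3 example might look like the following sketch; every value is a default taken from the arguments info, and the script path assumes the example's directory:

```bash
# Sketch: ChatGLM3 generation with ModelScope as the model hub instead of
# Hugging Face. All values are the documented defaults from the README above.
python ./generate.py --repo-id-or-model-path ZhipuAI/chatglm3-6b \
                     --prompt 'AI是什么?' \
                     --n-predict 32 \
                     --modelscope
```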
diff --git a/python/llm/example/GPU/HuggingFace/LLM/codegeex2/README.md b/python/llm/example/GPU/HuggingFace/LLM/codegeex2/README.md
index 6514e1bf..e8beeb21 100644
--- a/python/llm/example/GPU/HuggingFace/LLM/codegeex2/README.md
+++ b/python/llm/example/GPU/HuggingFace/LLM/codegeex2/README.md
@@ -119,7 +119,7 @@ python ./generate.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --prompt PROM
 ```
 
 Arguments info:
-- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the **Hugging Face** or **ModelScope** repo id for the CodeGeeX2 model to be downloaded, or the path to the checkpoint folder. It is default to be `'THUDM/codegeex2-6b'` for **Hugging Face** or `'ZhipuAI/codegeex-6b'` for **ModelScope**.
+- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the **Hugging Face** (e.g. `THUDM/codegeex2-6b`) or **ModelScope** (e.g. `ZhipuAI/codegeex-6b`) repo id for the CodeGeeX2 model to be downloaded, or the path to the checkpoint folder. It is default to be `'THUDM/codegeex2-6b'` for **Hugging Face** or `'ZhipuAI/codegeex-6b'` for **ModelScope**.
 - `--prompt PROMPT`: argument defining the prompt to be infered (with integrated prompt format for chat). It is default to be `'# language: Python\n# write a bubble sort function\n'`.
 - `--n-predict N_PREDICT`: argument defining the max number of tokens to predict. It is default to be `128`.
 - `--modelscope`: using **ModelScope** as model hub instead of **Hugging Face**.
diff --git a/python/llm/example/GPU/HuggingFace/LLM/glm4/README.md b/python/llm/example/GPU/HuggingFace/LLM/glm4/README.md
index 2bb58d7a..c8c39295 100644
--- a/python/llm/example/GPU/HuggingFace/LLM/glm4/README.md
+++ b/python/llm/example/GPU/HuggingFace/LLM/glm4/README.md
@@ -113,7 +113,7 @@ python ./generate.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --prompt PROM
 ```
 
 Arguments info:
-- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the **Hugging Face** or **ModelScope** repo id for the GLM-4 model (e.g. `THUDM/glm-4-9b-chat`) to be downloaded, or the path to the checkpoint folder. It is default to be `'THUDM/glm-4-9b-chat'` for **Hugging Face** or `'ZhipuAI/glm-4-9b-chat'` for **ModelScope**.
+- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the **Hugging Face** (e.g. `THUDM/glm-4-9b-chat`) or **ModelScope** (e.g. `ZhipuAI/glm-4-9b-chat`) repo id for the GLM-4 model to be downloaded, or the path to the checkpoint folder. It is default to be `'THUDM/glm-4-9b-chat'` for **Hugging Face** or `'ZhipuAI/glm-4-9b-chat'` for **ModelScope**.
 - `--prompt PROMPT`: argument defining the prompt to be infered (with integrated prompt format for chat). It is default to be `'AI是什么?'`.
 - `--n-predict N_PREDICT`: argument defining the max number of tokens to predict. It is default to be `32`.
 - `--modelscope`: using **ModelScope** as model hub instead of **Hugging Face**.
diff --git a/python/llm/example/GPU/HuggingFace/LLM/minicpm/README.md b/python/llm/example/GPU/HuggingFace/LLM/minicpm/README.md
index 8553d4e9..4bdfc086 100644
--- a/python/llm/example/GPU/HuggingFace/LLM/minicpm/README.md
+++ b/python/llm/example/GPU/HuggingFace/LLM/minicpm/README.md
@@ -108,7 +108,7 @@ python ./generate.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --prompt PROM
 ```
 
 Arguments info:
-- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the **Hugging Face** or **ModelScope** repo id for the MiniCPM model (e.g. `openbmb/MiniCPM-2B-sft-bf16` or `openbmb/MiniCPM-1B-sft-bf16`) to be downloaded, or the path to the checkpoint folder. It is default to be `'openbmb/MiniCPM-2B-sft-bf16'` for **Hugging Face** and `'OpenBMB/MiniCPM-2B-sft-bf16'` for **ModelScope**.
+- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the **Hugging Face** (e.g. `openbmb/MiniCPM-2B-sft-bf16` or `openbmb/MiniCPM-1B-sft-bf16`) or **ModelScope** (e.g. `OpenBMB/MiniCPM-2B-sft-bf16` or `OpenBMB/MiniCPM-1B-sft-bf16`) repo id for the MiniCPM model to be downloaded, or the path to the checkpoint folder. It is default to be `'openbmb/MiniCPM-2B-sft-bf16'` for **Hugging Face** and `'OpenBMB/MiniCPM-2B-sft-bf16'` for **ModelScope**.
 - `--prompt PROMPT`: argument defining the prompt to be infered (with integrated prompt format for chat). It is default to be `'What is AI?'`.
 - `--n-predict N_PREDICT`: argument defining the max number of tokens to predict. It is default to be `32`.
 - `--modelscope`: using **ModelScope** as model hub instead of **Hugging Face**.
diff --git a/python/llm/example/GPU/HuggingFace/LLM/minicpm3/README.md b/python/llm/example/GPU/HuggingFace/LLM/minicpm3/README.md
index 6419d238..64254df8 100644
--- a/python/llm/example/GPU/HuggingFace/LLM/minicpm3/README.md
+++ b/python/llm/example/GPU/HuggingFace/LLM/minicpm3/README.md
@@ -110,7 +110,7 @@ python ./generate.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --prompt PROM
 ```
 
 Arguments info:
-- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the **Hugging Face** or **ModelScope** repo id for the MiniCPM3 model (e.g. `openbmb/MiniCPM3-4B`) to be downloaded, or the path to the checkpoint folder. It is default to be `'openbmb/MiniCPM3-4B'` for **Hugging Face** or `'OpenBMB/MiniCPM3-4B'` for **ModelScope**.
+- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the **Hugging Face** (e.g. `openbmb/MiniCPM3-4B`) or **ModelScope** (e.g. `OpenBMB/MiniCPM3-4B`) repo id for the MiniCPM3 model to be downloaded, or the path to the checkpoint folder. It is default to be `'openbmb/MiniCPM3-4B'` for **Hugging Face** or `'OpenBMB/MiniCPM3-4B'` for **ModelScope**.
 - `--prompt PROMPT`: argument defining the prompt to be infered (with integrated prompt format for chat). It is default to be `'What is AI?'`.
 - `--n-predict N_PREDICT`: argument defining the max number of tokens to predict. It is default to be `32`.
 - `--modelscope`: using **ModelScope** as model hub instead of **Hugging Face**.
diff --git a/python/llm/example/GPU/HuggingFace/Multimodal/MiniCPM-V-2_6/README.md b/python/llm/example/GPU/HuggingFace/Multimodal/MiniCPM-V-2_6/README.md
index bbcc0ee7..5cf87c9c 100644
--- a/python/llm/example/GPU/HuggingFace/Multimodal/MiniCPM-V-2_6/README.md
+++ b/python/llm/example/GPU/HuggingFace/Multimodal/MiniCPM-V-2_6/README.md
@@ -138,7 +138,7 @@ set SYCL_CACHE_PERSISTENT=1
 > For chatting in streaming mode, it is recommended to set the environment variable `PYTHONUNBUFFERED=1`.
 
 Arguments info:
-- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the **Hugging Face** or **ModelScope** repo id for the MiniCPM-V-2_6 (e.g. `openbmb/MiniCPM-V-2_6`) to be downloaded, or the path to the checkpoint folder. It is default to be `'openbmb/MiniCPM-V-2_6'` for **Hugging Face** or `'OpenBMB/MiniCPM-V-2_6'` for **ModelScope**.
+- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the **Hugging Face** (e.g. `openbmb/MiniCPM-V-2_6`) or **ModelScope** (e.g. `OpenBMB/MiniCPM-V-2_6`) repo id for the MiniCPM-V-2_6 model to be downloaded, or the path to the checkpoint folder. It is default to be `'openbmb/MiniCPM-V-2_6'` for **Hugging Face** or `'OpenBMB/MiniCPM-V-2_6'` for **ModelScope**.
 - `--lowbit-path LOWBIT_MODEL_PATH`: argument defining the path to save/load the model with IPEX-LLM low-bit optimization. If it is an empty string, the original pretrained model specified by `REPO_ID_OR_MODEL_PATH` will be loaded. If it is an existing path, the saved model with low-bit optimization in `LOWBIT_MODEL_PATH` will be loaded. If it is a non-existing path, the original pretrained model specified by `REPO_ID_OR_MODEL_PATH` will be loaded, and the optimized low-bit model will be saved into `LOWBIT_MODEL_PATH`. It is default to be `''`, i.e. an empty string.
 - `--image-url-or-path IMAGE_URL_OR_PATH`: argument defining the image to be infered. It is default to be `'http://farm6.staticflickr.com/5268/5602445367_3504763978_z.jpg'`.
 - `--prompt PROMPT`: argument defining the prompt to be infered (with integrated prompt format for chat). It is default to be `'What is in the image?'`.
diff --git a/python/llm/example/GPU/HuggingFace/Multimodal/glm-4v/README.md b/python/llm/example/GPU/HuggingFace/Multimodal/glm-4v/README.md
index 3720b059..1e4e3599 100644
--- a/python/llm/example/GPU/HuggingFace/Multimodal/glm-4v/README.md
+++ b/python/llm/example/GPU/HuggingFace/Multimodal/glm-4v/README.md
@@ -110,7 +110,7 @@ python ./generate.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --prompt PROM
 ```
 
 Arguments info:
-- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the **Hugging Face** or **ModelScope** repo id for the GLM-4V model (e.g. `THUDM/glm-4v-9b`) to be downloaded, or the path to the checkpoint folder. It is default to be `'THUDM/glm-4v-9b'` for **Hugging Face** or `'ZhipuAI/glm-4v-9b'` for **ModelScope**.
+- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the **Hugging Face** (e.g. `THUDM/glm-4v-9b`) or **ModelScope** (e.g. `ZhipuAI/glm-4v-9b`) repo id for the GLM-4V model to be downloaded, or the path to the checkpoint folder. It is default to be `'THUDM/glm-4v-9b'` for **Hugging Face** or `'ZhipuAI/glm-4v-9b'` for **ModelScope**.
 - `--image-url-or-path IMAGE_URL_OR_PATH`: argument defining the image to be infered. It is default to be `'http://farm6.staticflickr.com/5268/5602445367_3504763978_z.jpg'`.
 - `--prompt PROMPT`: argument defining the prompt to be infered (with integrated prompt format for chat). It is default to be `'What is in the image?'`.
 - `--n-predict N_PREDICT`: argument defining the max number of tokens to predict. It is default to be `32`.
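Likewise, a ModelScope invocation of the GLM-4V example documented above might look like the following sketch, again using only the defaults from the arguments info (the script path assumes the example's directory):

```bash
# Sketch: GLM-4V image description with ModelScope as the model hub instead of
# Hugging Face. All values are the documented defaults from the README above.
python ./generate.py --repo-id-or-model-path ZhipuAI/glm-4v-9b \
                     --image-url-or-path 'http://farm6.staticflickr.com/5268/5602445367_3504763978_z.jpg' \
                     --prompt 'What is in the image?' \
                     --n-predict 32 \
                     --modelscope
```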