From ad81b5d838cf593a741230bb9ed8e93637c32702 Mon Sep 17 00:00:00 2001
From: Ziteng Zhang <87107332+Jasonzzt@users.noreply.github.com>
Date: Fri, 10 Nov 2023 15:19:25 +0800
Subject: [PATCH] Update qlora README.md (#9422)

---
 python/llm/example/CPU/QLoRA-FineTuning/README.md | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/python/llm/example/CPU/QLoRA-FineTuning/README.md b/python/llm/example/CPU/QLoRA-FineTuning/README.md
index 5acd255b..caa81a70 100644
--- a/python/llm/example/CPU/QLoRA-FineTuning/README.md
+++ b/python/llm/example/CPU/QLoRA-FineTuning/README.md
@@ -20,7 +20,12 @@ pip install datasets
 
 ### 2. Finetune model
 
+If machine memory is insufficient, you can try setting `use_gradient_checkpointing=True` [here](https://github.com/intel-analytics/BigDL/blob/1747ffe60019567482b6976a24b05079274e7fc8/python/llm/example/CPU/QLoRA-FineTuning/qlora_finetuning_cpu.py#L53C6-L53C6).
+
+Also remember to source `bigdl-llm-init` before you start fine-tuning, as it can accelerate the job.
+
 ```
+source bigdl-llm-init -t
 python ./qlora_finetuning_cpu.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --dataset DATASET
 ```
 
@@ -73,4 +78,4 @@ Inference time: 2.864234209060669 s
 “QLoRA fine-tuning using BigDL-LLM 4bit optimizations on Intel CPU is Efficient and convenient” ->: 
 -------------------- Output --------------------
 “QLoRA fine-tuning using BigDL-LLM 4bit optimizations on Intel CPU is Efficient and convenient” ->: ['bigdl'] ['deep-learning'] ['distributed-computing'] ['intel'] ['optimization'] ['training'] ['training-speed']
-```
\ No newline at end of file
+```