From aeef73a182510b8145cb5356732c452a6145899a Mon Sep 17 00:00:00 2001
From: Heyang Sun <60865256+Uxito-Ada@users.noreply.github.com>
Date: Fri, 15 Sep 2023 13:45:40 +0800
Subject: [PATCH] Tell User How to Find Fine-tuned Model in README (#8985)

* Tell User How to Find Fine-tuned Model in README

* Update README.md
---
 docker/llm/finetune/lora/README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docker/llm/finetune/lora/README.md b/docker/llm/finetune/lora/README.md
index 8d928933..90528c16 100644
--- a/docker/llm/finetune/lora/README.md
+++ b/docker/llm/finetune/lora/README.md
@@ -52,7 +52,7 @@ kubectl exec -it bash -n bigdl-ppml-finetuning # enter launc
 cat launcher.log # display logs collected from other workers
 ```

-From the log, you can see whether finetuning process has been invoked successfully in all MPI worker pods, and a progress bar with finetuning speed and estimated time will be showed after some data preprocessing steps (this may take quiet a while).
+From the log, you can see whether the fine-tuning process has been invoked successfully in all MPI worker pods; a progress bar with fine-tuning speed and estimated time will be shown after some data preprocessing steps (this may take quite a while). The fine-tuned model is written by worker 0 (which holds rank 0), so you can find the model output inside that pod, or in the `output` folder under the NFS path (which is mounted to worker 0 as the output path).

 ## To run in TDX-CoCo and enable Remote Attestation API
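To illustrate the instruction this patch adds, locating the model output might look roughly like the following. This is a sketch, not part of the patch: the pod name, container path, and NFS export path are illustrative assumptions, and the commands require access to the cluster described in the README.

```shell
# List the worker pods in the fine-tuning namespace to identify worker 0
# (the namespace matches the one used elsewhere in this README).
kubectl get pods -n bigdl-ppml-finetuning

# Inspect the output directory inside the rank-0 worker pod.
# <worker-0-pod> and /ppml/output are placeholders; substitute the real
# pod name and the output path configured in your deployment.
kubectl exec -it <worker-0-pod> -n bigdl-ppml-finetuning -- ls <output-path>

# Alternatively, on the machine that exports the NFS share, check the
# `output` folder directly (replace the export path with your own).
ls <nfs-export-path>/output
```

Either view shows the same files, since the `output` folder on the NFS share is the volume mounted into worker 0 as its output path.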