diff --git a/docker/llm/finetune/qlora/xpu/docker/README.md b/docker/llm/finetune/qlora/xpu/docker/README.md
index 6575fc27..649b06ee 100644
--- a/docker/llm/finetune/qlora/xpu/docker/README.md
+++ b/docker/llm/finetune/qlora/xpu/docker/README.md
@@ -37,13 +37,13 @@ docker run -itd \
  --net=host \
  --device=/dev/dri \
  --memory="32G" \
- --name=ipex-llm-fintune-qlora-xpu \
+ --name=ipex-llm-finetune-qlora-xpu \
  -e http_proxy=${HTTP_PROXY} \
  -e https_proxy=${HTTPS_PROXY} \
  -v $BASE_MODE_PATH:/model \
  -v $DATA_PATH:/data/alpaca-cleaned \
  --shm-size="16g" \
- intelanalytics/ipex-llm-fintune-qlora-xpu:2.1.0-SNAPSHOT
+ intelanalytics/ipex-llm-finetune-qlora-xpu:2.1.0-SNAPSHOT
 ```
 
 The download and mount of base model and data to a docker container demonstrates a standard fine-tuning process. You can skip this step for a quick start, and in this way, the fine-tuning codes will automatically download the needed files:
@@ -56,11 +56,11 @@ docker run -itd \
  --net=host \
  --device=/dev/dri \
  --memory="32G" \
- --name=ipex-llm-fintune-qlora-xpu \
+ --name=ipex-llm-finetune-qlora-xpu \
  -e http_proxy=${HTTP_PROXY} \
  -e https_proxy=${HTTPS_PROXY} \
  --shm-size="16g" \
- intelanalytics/ipex-llm-fintune-qlora-xpu:2.1.0-SNAPSHOT
+ intelanalytics/ipex-llm-finetune-qlora-xpu:2.1.0-SNAPSHOT
 ```
 
 However, we do recommend you to handle them manually, because the automatical download can be blocked by Internet access and Huggingface authentication etc. according to different environment, and the manual method allows you to fine-tune in a custom way (with different base model and dataset).
@@ -70,7 +70,7 @@ However, we do recommend you to handle them manually, because the automatical do
 Enter the running container:
 
 ```bash
-docker exec -it ipex-llm-fintune-qlora-xpu bash
+docker exec -it ipex-llm-finetune-qlora-xpu bash
 ```
 
 Then, start QLoRA fine-tuning:
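For reference, below is a minimal sketch of how the first `docker run` block reads once this rename is applied, assembled from the hunk context above. The `export` lines are assumptions added only to make the example self-contained: the README is expected to define `HTTP_PROXY`, `HTTPS_PROXY`, `BASE_MODE_PATH`, and `DATA_PATH` before this command, but those lines are not part of this diff.

```bash
# Assumed setup (not part of this patch): set proxy and host paths before running.
export HTTP_PROXY=<your_http_proxy>            # assumption; drop the -e flags if no proxy is needed
export HTTPS_PROXY=<your_https_proxy>          # assumption
export BASE_MODE_PATH=<path_to_base_model>     # assumption: host path of the downloaded base model
export DATA_PATH=<path_to_alpaca_cleaned>      # assumption: host path of the alpaca-cleaned dataset

# The command as it appears in the patched README, with the corrected
# container name and image name (finetune, not fintune).
docker run -itd \
 --net=host \
 --device=/dev/dri \
 --memory="32G" \
 --name=ipex-llm-finetune-qlora-xpu \
 -e http_proxy=${HTTP_PROXY} \
 -e https_proxy=${HTTPS_PROXY} \
 -v $BASE_MODE_PATH:/model \
 -v $DATA_PATH:/data/alpaca-cleaned \
 --shm-size="16g" \
 intelanalytics/ipex-llm-finetune-qlora-xpu:2.1.0-SNAPSHOT
```

Entering the running container then uses the same corrected name, as in the last hunk: `docker exec -it ipex-llm-finetune-qlora-xpu bash`.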