
Prepare BigDL Image for LoRA Fine-Tuning

You can pull the prebuilt image directly from Docker Hub:

docker pull intelanalytics/bigdl-lora-finetuning:2.4.0-SNAPSHOT

Or build the image from source. Set the proxy variables only if your network requires them:

export HTTP_PROXY=your_http_proxy
export HTTPS_PROXY=your_https_proxy

docker build \
  --build-arg HTTP_PROXY=${HTTP_PROXY} \
  --build-arg HTTPS_PROXY=${HTTPS_PROXY} \
  -t intelanalytics/bigdl-lora-finetuning:2.4.0-SNAPSHOT \
  -f ./Dockerfile .
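Once the pull or build finishes, it can be worth confirming the image is actually present in the local store before deploying it. The sketch below composes the image reference used above into a variable so the tag stays consistent across commands; the `command -v docker` guard is an assumption added so the script degrades gracefully on machines without the Docker CLI.

```shell
#!/bin/sh
# Compose the image reference once so pull/build/verify steps agree on the tag.
IMAGE_REPO=intelanalytics/bigdl-lora-finetuning
IMAGE_TAG=2.4.0-SNAPSHOT
IMAGE="${IMAGE_REPO}:${IMAGE_TAG}"
echo "Using image: ${IMAGE}"

# Only query the local image store when the docker CLI is available.
if command -v docker >/dev/null 2>&1; then
    # Lists matching local images as repository:tag, one per line.
    docker images "${IMAGE_REPO}" --format '{{.Repository}}:{{.Tag}}'
fi
```

If the image reference is printed by `docker images`, the image is ready to be referenced from your Kubernetes manifests.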