Shaojun Liu
0e5ab5ebfc
update docker tag to 2.5.0-SNAPSHOT ( #9443 )
2023-11-13 16:53:40 +08:00
Ziteng Zhang
4df66f5cbc
Update llm-finetune-lora-cpu dockerfile and readme
* Update README.md
* Update Dockerfile
2023-11-02 16:26:24 +08:00
Ziteng Zhang
ca2965fb9f
hosted k8s.png on readthedocs ( #9258 )
2023-10-24 15:07:16 +08:00
Shaojun Liu
9dc76f19c0
fix hadolint error ( #9223 )
2023-10-19 16:22:32 +08:00
Ziteng Zhang
2f14f53b1c
changed '/ppml' to '/bigdl' and modified llama-7b ( #9209 )
2023-10-18 10:25:12 +08:00
Ziteng Zhang
4a0a3c376a
Add stand-alone mode on cpu for finetuning ( #9127 )
* Added steps for finetuning on CPU in stand-alone mode
* Add stand-alone mode to bigdl-lora-finetuing-entrypoint.sh
* delete redundant docker commands
* Update README.md
Switch to intelanalytics/bigdl-llm-finetune-cpu:2.4.0-SNAPSHOT and append example outputs so users can verify the run
* Update bigdl-lora-finetuing-entrypoint.sh
Add some tunable parameters
* Add parameters --cpus and -e WORKER_COUNT_DOCKER
* Modified the cpu number range parameters
* Set -ppn to CCL_WORKER_COUNT
* Add related configuration suggestions in README.md
2023-10-11 15:01:21 +08:00
Lilac09
1e78b0ac40
Optimize LoRA Docker by Shrinking Image Size ( #9110 )
* modify dockerfile
* modify dockerfile
2023-10-10 15:53:17 +08:00
Heyang Sun
2c0c9fecd0
refine LLM containers ( #9109 )
2023-10-09 15:45:30 +08:00
Heyang Sun
0b40ef8261
separate trusted and native llm cpu finetune from lora ( #9050 )
* separate trusted-llm and bigdl from lora finetuning
* add k8s for trusted llm finetune
* refine
* refine
* rename cpu to tdx in trusted llm
* solve conflict
* fix typo
* resolving conflict
* Delete docker/llm/finetune/lora/README.md
* fix
---------
Co-authored-by: Uxito-Ada <seusunheyang@foxmail.com>
Co-authored-by: leonardozcm <leonardo1997zcm@gmail.com>
2023-10-07 15:26:59 +08:00
Ziteng Zhang
a717352c59
Replace Llama 7b with Llama2-7b in README.md ( #9055 )
* Replace Llama 7b with Llama2-7b in README.md
Need to replace the base model with Llama2-7b, as we are operating on Llama2 here.
* Replace Llama 7b with Llama2-7b in README.md
a 'Llama 7b' in the first line was missed
* Update architecture graph
---------
Co-authored-by: Heyang Sun <60865256+Uxito-Ada@users.noreply.github.com>
2023-09-26 09:56:46 +08:00
Heyang Sun
4b843d1dbf
change lora-model output behavior on k8s ( #9038 )
Co-authored-by: leonardozcm <leonardo1997zcm@gmail.com>
2023-09-25 09:28:44 +08:00
Xiangyu Tian
52878d3e5f
[PPML] Enable TLS in Attestation API Serving for LLM finetuning ( #8945 )
Add enableTLS flag to enable TLS in Attestation API Serving for LLM finetuning.
2023-09-18 09:32:25 +08:00
Heyang Sun
aeef73a182
Tell User How to Find Fine-tuned Model in README ( #8985 )
* Tell User How to Find Fine-tuned Model in README
* Update README.md
2023-09-15 13:45:40 +08:00
Xiangyu Tian
4dce238867
Fix incorrect usage in docs of Finetuning to enable TDX ( #8932 )
2023-09-08 16:03:14 +08:00
Xiangyu Tian
ea6d4148e9
[PPML] Add attestation for LLM Finetuning ( #8908 )
Add TDX attestation for LLM Finetuning in TDX CoCo
---------
Co-authored-by: Heyang Sun <60865256+Uxito-Ada@users.noreply.github.com>
2023-09-08 10:24:04 +08:00
Heyang Sun
2d97827ec5
fix typo in lora entrypoint ( #8862 )
2023-09-06 13:52:25 +08:00
Heyang Sun
b1ac8dc1bc
BF16 Lora Finetuning on K8S with OneCCL and Intel MPI ( #8775 )
* BF16 Lora Finetuning on K8S with OneCCL and Intel MPI
* Update README.md
* format
* refine
* Update README.md
* refine
* Update README.md
* increase nfs volume size to improve IO performance
* fix bugs
* Update README.md
* Update README.md
* fix permission
* move output destination
* Update README.md
* fix wrong base model name in doc
* fix output path in entrypoint
* add a permission-precreated output dir
* format
* move output logs to a persistent storage
2023-08-31 14:56:23 +08:00