* separate trusted-llm and bigdl from lora finetuning
* add k8s for trusted llm finetune
* refine
* refine
* rename cpu to tdx in trusted llm
* solve conflict
* fix typo
* resolving conflict
* Delete docker/llm/finetune/lora/README.md
* fix

---------

Co-authored-by: Uxito-Ada <seusunheyang@foxmail.com>
Co-authored-by: leonardozcm <leonardo1997zcm@gmail.com>
imageName: intelanalytics/bigdl-llm-finetune-cpu:2.4.0-SNAPSHOT
trainerNum: 8
microBatchSize: 8
nfsServerIp: your_nfs_server_ip
nfsPath: a_nfs_shared_folder_path_on_the_server
dataSubPath: alpaca_data_cleaned_archive.json # a subpath of the data file under nfs directory
modelSubPath: llama-7b-hf # a subpath of the model file (dir) under nfs directory
ompNumThreads: 14
cpuPerPod: 42
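A minimal sketch of how the numeric values above might be sanity-checked before deploying. The relationships asserted here (global batch size as trainerNum × microBatchSize, and ompNumThreads not exceeding cpuPerPod) are assumptions for illustration, not documented behavior of the chart:

```python
# Hypothetical pre-deploy sanity check for the values.yaml settings above.
# The config dict mirrors the YAML; in practice it could be parsed from the
# file with a YAML library.
config = {
    "trainerNum": 8,       # number of distributed trainer pods
    "microBatchSize": 8,   # per-trainer micro batch size
    "ompNumThreads": 14,   # OpenMP threads per worker
    "cpuPerPod": 42,       # CPU request per pod
}

# Assumed relationship: effective global batch = trainers * micro batch.
global_batch = config["trainerNum"] * config["microBatchSize"]

# Assumed constraint: thread count per worker must fit in the pod's CPU quota.
assert config["ompNumThreads"] <= config["cpuPerPod"]

print(global_batch)  # 64
```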