* Create Dockerfile.k8s
* Update Dockerfile for a slimmer standalone image
* Update Dockerfile.k8s
* Update bigdl-qlora-finetuing-entrypoint.sh
* Update qlora_finetuning_cpu.py
* Update alpaca_qlora_finetuning_cpu.py: per this [pr](https://github.com/intel-analytics/BigDL/pull/9551/files#diff-2025188afa54672d21236e6955c7c7f7686bec9239532e41c7983858cc9aaa89), update the LoraConfig
* Update the transformers version
* Update the Dockerfile
* Update the Docker image name
* Fix an error
```yaml
imageName: intelanalytics/bigdl-llm-finetune-qlora-cpu-k8s:2.5.0-SNAPSHOT
trainerNum: 2
microBatchSize: 8
enableGradientCheckpoint: false # true saves more memory but increases latency
nfsServerIp: your_nfs_server_ip
nfsPath: a_nfs_shared_folder_path_on_the_server
dataSubPath: alpaca_data_cleaned_archive.json # a subpath of the data file under the NFS directory
modelSubPath: Llama-2-7b-chat-hf # a subpath of the model directory under the NFS directory
httpProxy: "your_http_proxy_like_http://xxx:xxxx_if_needed_else_empty"
httpsProxy: "your_https_proxy_like_http://xxx:xxxx_if_needed_else_empty"
```
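The values above form a flat `key: value` file where several entries are placeholders that must be filled in before deploying. As a quick sanity check, a small script can parse the file and report which required keys are missing — a minimal sketch using only the standard library; the key list comes from the listing above, while the helper functions themselves are hypothetical (a real deployment would be validated by Helm/kubectl):

```python
# Sanity-check the flat key/value config shown above.
# Assumption: each setting is on its own "key: value" line, with optional
# "#" comments, as in the listing. The helpers here are illustrative only.

REQUIRED_KEYS = {
    "imageName", "trainerNum", "microBatchSize", "enableGradientCheckpoint",
    "nfsServerIp", "nfsPath", "dataSubPath", "modelSubPath",
}

def parse_flat_yaml(text: str) -> dict:
    """Parse simple 'key: value' lines, dropping comments and blank lines."""
    values = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # strip inline comments
        if not line:
            continue
        # partition on the first ":" so values like image tags or
        # proxy URLs (which contain ":") stay intact
        key, _, value = line.partition(":")
        values[key.strip()] = value.strip().strip('"')
    return values

def missing_keys(values: dict) -> set:
    """Return the required keys not present in the parsed config."""
    return REQUIRED_KEYS - values.keys()
```

For example, `missing_keys(parse_flat_yaml(open("values.yaml").read()))` returns an empty set when every required setting is present.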