986f65cea9  Ziteng Zhang  2023-12-25 11:31:14 +08:00
[LLM] Add trust_remote_code for local renamed model in bigdl_llm_model.py (#9762)
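
A model whose remote code has been copied or renamed locally still needs trust_remote_code=True when loaded through the transformers-style API. A minimal usage sketch (the local path is a hypothetical placeholder):

    from bigdl.llm.transformers import AutoModelForCausalLM

    # trust_remote_code=True lets transformers execute the model's custom code
    model = AutoModelForCausalLM.from_pretrained(
        "/path/to/local-renamed-model",  # hypothetical local checkpoint
        load_in_4bit=True,
        trust_remote_code=True,
    )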

daf536fb2d  Guancheng Fu  2023-12-25 10:29:31 +08:00
vLLM: Apply attention optimizations for selective batching (#9758)
* fuse_rope for prefill
* apply kv_cache optimizations
* apply fast_decoding_path
* Re-enable kv_cache optimizations for prefill
* reduce KV_CACHE_ALLOC_BLOCK for selective_batching

4c487313f2  Qiyuan Gong  2023-12-22 16:38:24 +08:00
Revert "[LLM] IPEX auto importer turn on by default for XPU (#9730)" (#9759)
This reverts commit 0284801fbd.

0284801fbd  Qiyuan Gong  2023-12-22 16:20:32 +08:00
[LLM] IPEX auto importer turn on by default for XPU (#9730)
* Set BIGDL_IMPORT_IPEX default to true, i.e., auto import IPEX for XPU.
* Remove `import intel_extension_for_pytorch as ipex` from GPU example.
* Add support for bigdl-core-xe-21.
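
The env-gated auto-import named in this pair of commits can be sketched roughly as below; this is a minimal illustration of the pattern, not the actual bigdl-llm implementation:

    import os

    def auto_import_ipex():
        # BIGDL_IMPORT_IPEX is the flag named in #9706/#9730; #9730 made its
        # default "true" on XPU, which #9759 above reverts.
        if os.environ.get("BIGDL_IMPORT_IPEX", "true").lower() == "true":
            try:
                import intel_extension_for_pytorch as ipex  # noqa: F401
            except ImportError:
                pass  # IPEX not installed; XPU acceleration stays unavailable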

fdf93c9267  Guancheng Fu  2023-12-22 13:45:46 +08:00
Implement selective batching for vLLM (#9659)
* add control to load hf model
* finish initial version of selective_batching
* temp
* finish
* Remove print statement
* fix error
* Apply yang's optimization
* a version that works
* We need to check kv_cache passed in, this could be an error. TODO: add fast decoding path
* format
* temp solution: not batching prefill requests
* a version that works for prefill batching
* format
* a solid version: works normally
* a temp version
* Solid version: remove redundant functions
* fix format
* format
* solid: add option to enable selective_batching
* remove logic for using transformer models
* format
* format
* solid: enable argument VLLM_ENABLE_SELECTIVE_BATCHING
* format
* finish
* format
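
Per the history above, the feature is switched on through the VLLM_ENABLE_SELECTIVE_BATCHING environment variable. A sketch of the likely read pattern (the parsing details are an assumption, not the engine's code):

    import os

    def selective_batching_enabled() -> bool:
        # VLLM_ENABLE_SELECTIVE_BATCHING is the switch introduced in #9659
        value = os.environ.get("VLLM_ENABLE_SELECTIVE_BATCHING", "false")
        return value.lower() in ("1", "true")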

2f36769208  Ruonan Wang  2023-12-22 11:05:39 +08:00
LLM: bigdl-llm lora support & lora example (#9740)
* lora support and single card example
* support multi-card, refactor code
* fix model id and style
* remove torch patch, add two new classes for bf16, update example
* fix style
* change to training_mode
* small fix
* add more info in help
* fix style, update readme
* fix ut
* fix ut
* Handling compatibility issues with default LoraConfig

ba0b939579  SONG Ge  2023-12-22 09:59:27 +08:00
[LLM] Support transformers-v4.36.0 on mistral model (#9744)
* add support for transformers-v4.36.0 on mistral model
* python/llm/src/bigdl/llm/transformers/models/mistral.py
* make the redundant implementation as utils
* fix code style
* fix
* fix style
* update with utils enough_kv_room

e36111e713  Xin Qiu  2023-12-22 09:26:35 +08:00
mixtral fused qkv and rope (#9724)
* mixtral fused qkv and rope
* fix and clean
* fix style
* update
* update
* fix
* update
* fix

e4f6e43675  Jiao Wang  2023-12-21 14:41:51 -08:00
safetensor to false (#9728)

426660b88e  Yishuo Wang  2023-12-21 17:53:29 +08:00
simplify qwen attention (#9747)

984697afe2  Wang, Jian4  2023-12-21 14:06:25 +08:00
LLM: Add bloom gguf support (#9734)
* init
* update bloom add merges
* update
* update readme
* update for llama error
* update

df775cf316  Heyang Sun  2023-12-21 11:25:05 +08:00
fix python style (#9742)
* fix python style
* fix
* fix

6c3e698bf1  Xin Qiu  2023-12-21 10:11:37 +08:00
mistral decoding_fast_path and fused mlp (#9714)
* mistral decoding_fast_path and fused mlp
* meet code review

d157f623b6  Heyang Sun  2023-12-21 10:03:23 +08:00
Load Mixtral gguf in a block-wise way (#9725)
* Load Mixtral gguf in a block-wise way
* refine

4bda975a3e  Zhao Changmin  2023-12-21 09:48:58 +08:00
LLM: Align lowbit model config (#9735)
* align lowbit model config

e1e921f425  Wang, Jian4  2023-12-21 09:33:40 +08:00
LLM: gguf other model using dtype (#9729)

13ea6330bd  Yishuo Wang  2023-12-20 17:34:34 +08:00
optimize qwen rope (#9737)

4c032a433e  Ziteng Zhang  2023-12-20 16:52:43 +08:00
[LLM] Add glibc checker (#9624)
* Add glibc checker
* Add env BIGDL_GLIBC_CHECK to control glibc checker. The default is false, i.e., don't check.
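
Such an opt-in glibc check can be done with only the standard library; a sketch under the BIGDL_GLIBC_CHECK flag named above (the minimum version is an assumed placeholder, and the real checker's logic may differ):

    import os
    import platform

    def check_glibc(min_version: str = "2.17") -> None:
        # BIGDL_GLIBC_CHECK defaults to "false" (no check), per #9624
        if os.environ.get("BIGDL_GLIBC_CHECK", "false").lower() != "true":
            return
        libc, version = platform.libc_ver()
        if libc == "glibc":
            found = tuple(int(p) for p in version.split("."))
            wanted = tuple(int(p) for p in min_version.split("."))
            if found < wanted:
                raise RuntimeError(
                    f"glibc >= {min_version} required, found {version}")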

cd652a1710  Yina Chen  2023-12-20 16:26:17 +08:00
Support fp8 e5m2 on arc (#9711)
* init
* fix style
* update
* fix style
* update

e54c428d30  Yishuo Wang  2023-12-20 10:40:45 +08:00
add bf16/fp16 fuse mlp support (#9726)

612651cb5d  Heyang Sun  2023-12-20 09:41:59 +08:00
fix typo (#9723)

522cf5ed82  Yishuo Wang  2023-12-19 17:29:38 +08:00
[LLM] Improve chatglm2/3 rest token performance with long context (#9716)

f2e6abb563  Yishuo Wang  2023-12-19 14:22:22 +08:00
fix mlp batch size check (#9718)

1fa7793fc0  Heyang Sun  2023-12-19 13:54:38 +08:00
Load Mixtral GGUF Model (#9690)
* Load Mixtral GGUF Model
* refactor
* fix empty tensor when to cpu
* update gpu and cpu readmes
* add dtype when set tensor into module

d0a3095b97  Qiyuan Gong  2023-12-19 13:39:38 +08:00
[LLM] IPEX auto importer (#9706)
* IPEX auto importer and get_ipex_version.
* Add BIGDL_IMPORT_IPEX to control auto import, default is false.

f4fb58d99c  Yang Wang  2023-12-18 16:45:00 -08:00
fusing qkv project and rope (#9612)
* Try fusing qkv project and rope
* add fused mlp
* fuse append cache
* fix style and clean up code
* clean up
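
Several commits in this range fuse the separate q/k/v projections into a single matmul. An illustrative, generic version of the idea (not bigdl-llm's actual module):

    import torch
    from torch import nn

    class FusedQKV(nn.Module):
        """One fused GEMM producing q, k and v instead of three GEMMs."""

        def __init__(self, hidden_size: int, num_heads: int):
            super().__init__()
            self.num_heads = num_heads
            self.head_dim = hidden_size // num_heads
            self.qkv_proj = nn.Linear(hidden_size, 3 * hidden_size, bias=False)

        def forward(self, x: torch.Tensor):
            bsz, seq_len, _ = x.shape
            qkv = self.qkv_proj(x).view(
                bsz, seq_len, 3, self.num_heads, self.head_dim)
            q, k, v = qkv.unbind(dim=2)  # split after the single matmul
            return q, k, v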

4d22add4af  Cengguang Zhang  2023-12-18 18:32:54 +08:00
LLM: fix qwen efficiency issue in perf-test.

8ed89557e5  Ruonan Wang  2023-12-18 16:59:52 +08:00
LLM: add mlp optimization of mixtral (#9709)

320110d158  Xin Qiu  2023-12-18 09:56:11 +08:00
handle empty fused norm result (#9688)
* handle empty fused norm result
* remove fast_rms_norm
* fix style

d5b81af7bd  SONG Ge  2023-12-15 14:30:23 +08:00
Support mixtral attention optimization on transformers-v4.36.0 (#9674)
* add example code to support mistral/mixtral attention on transformers v4.36.0
* update
* style fix
* add update for seen-tokens
* support mixtral
* rm mistral change
* small fix
* add more comments and remove use_cache part
---------
Co-authored-by: plusbang <binbin1.deng@intel.com>

adbef56001  Cengguang Zhang  2023-12-15 14:06:15 +08:00
LLM: update qwen attention forward. (#9695)
* feat: update qwen attention forward.
* fix: style.

b8437a1c1e  Wang, Jian4  2023-12-15 13:37:39 +08:00
LLM: Add gguf mistral model support (#9691)
* add mistral support
* need to upgrade transformers version
* update

496bb2e845  Wang, Jian4  2023-12-15 13:34:33 +08:00
LLM: Support load BaiChuan model family gguf model (#9685)
* support baichuan model family gguf model
* update gguf generate.py
* add verify models
* add support model_family
* update
* update style
* update type
* update readme
* update
* remove support model_family

9a330bfc2b  Yishuo Wang  2023-12-14 16:16:05 +08:00
fix fuse mlp when using q5_0 or fp8 (#9689)

5e46e0e5af  Xin Qiu  2023-12-14 09:58:32 +08:00
fix baichuan2-7b 1st token performance regression on xpu (#9683)
* fix baichuan2-7b 1st token performance regression
* add comments
* fix style

09ca540f9b  Yishuo Wang  2023-12-13 17:20:08 +08:00
use fuse mlp in qwen (#9672)

c7741c4e84  Ruonan Wang  2023-12-13 16:17:06 +08:00
LLM: update moe block convert to optimize rest token latency of Mixtral (#9669)
* update moe block convert
* further accelerate final_hidden_states
* fix style
* fix style

1c6499e880  Xiangyu Tian  2023-12-13 14:44:47 +08:00
[LLM] vLLM: Support Mixtral Model (#9670)
Add Mixtral support for BigDL vLLM.

dc5b1d7e9d  Ruonan Wang  2023-12-13 11:29:57 +08:00
LLM: integrate sdp kernel for FP16 rest token inference on GPU [DG2/ATSM] (#9633)
* integrate sdp
* update api
* fix style
* meet code review
* fix
* distinguish mtl from arc
* small fix
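
"Rest token" here means the single-token decode steps after the first token, where q has sequence length 1 against the cached context. PyTorch ships a comparable fused SDP entry point; the sketch below uses it for illustration and is not the xe-specific kernel the commit integrates:

    import torch
    import torch.nn.functional as F

    bsz, heads, ctx_len, head_dim = 1, 32, 128, 128
    q = torch.randn(bsz, heads, 1, head_dim)        # the one new token
    k = torch.randn(bsz, heads, ctx_len, head_dim)  # cached keys
    v = torch.randn(bsz, heads, ctx_len, head_dim)  # cached values
    # The GPU path runs in fp16; fp32 here keeps the sketch portable.
    out = F.scaled_dot_product_attention(q, k, v)   # shape (1, 32, 1, 128)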

5b0e7e308c  Qiyuan Gong  2023-12-13 11:07:45 +08:00
[LLM] Add support for empty activation (#9664)
* Add support for empty activation, e.g., [0, 4096]. Empty activation is allowed by PyTorch.
* Add comments.
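
PyTorch itself accepts zero-size batch dimensions, which is what this commit makes the low-bit layers tolerate as well. A small demonstration of the PyTorch behavior (the layer sizes are arbitrary):

    import torch
    from torch import nn

    linear = nn.Linear(4096, 8, bias=False)
    empty = torch.empty(0, 4096)   # an "empty activation" of shape [0, 4096]
    out = linear(empty)
    print(out.shape)               # torch.Size([0, 8]), no error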

284e7697b1  SONG Ge  2023-12-13 11:02:14 +08:00
[LLM] Optimize ChatGLM2 kv_cache to support beam_search on ARC (#9579)
* optimize kv_cache to support beam_search on Arc
* correctness test update
* fix query_length issue
* simplify implementation
* only enable the optimization on gpu device
* limit the beam_search support only enabled with gpu device and batch_size > 1
* add comments for beam_search case and revert ut change
* meet comments
* add more comments to describe the difference between the multiple cases

8931f2eb62  Ziteng Zhang  2023-12-12 20:57:40 +08:00
[LLM] Fix transformer qwen size mismatch and rename causal_mask (#9655)
* Fix size mismatching caused by context_layer
* Change registered_causal_mask to causal_mask

59ce86d292  binbin Deng  2023-12-12 16:41:26 +08:00
LLM: support optimize_model=True for Mixtral model (#9657)

9f02f96160  Heyang Sun  2023-12-11 14:07:34 +08:00
[LLM] support for Yi AWQ model (#9648)

82255f9726  Xin Qiu  2023-12-11 09:26:13 +08:00
Enable fused layernorm (#9614)
* bloom layernorm
* fix
* layernorm
* fix
* fix
* fix
* style fix
* fix
* replace nn.LayerNorm

70f5e7bf0d  Yina Chen  2023-12-08 16:13:03 +08:00
Support peft LoraConfig (#9636)
* support peft loraconfig
* use testcase to test
* fix style
* meet comments
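
With this change a standard peft LoraConfig can be passed through. A typical config (real peft API; the target module names assume a llama-style model):

    from peft import LoraConfig

    lora_config = LoraConfig(
        r=8,                    # LoRA rank
        lora_alpha=32,          # scaling factor
        target_modules=["q_proj", "k_proj", "v_proj"],
        lora_dropout=0.05,
        bias="none",
        task_type="CAUSAL_LM",
    )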

0b6f29a7fc  Xin Qiu  2023-12-08 16:04:38 +08:00
add fused rms norm for Yi and Qwen (#9640)

5636b0ba80  Xin Qiu  2023-12-08 11:02:49 +08:00
set new linear status (#9639)

6f34978b94  Yuwen Hu  2023-12-07 18:55:16 +08:00
[LLM] Add more performance tests for win iGPU (more in-out pairs, RWKV model) (#9626)
* Add supports for loading rwkv models using from_pretrained api
* Temporarily enable pr tests
* Add RWKV in tests and more in-out pairs
* Add rwkv for 512 tests
* Make iterations smaller
* Change back to nightly trigger

d9b0c01de3  Ruonan Wang  2023-12-07 16:32:02 +08:00
LLM: fix unlora module in qlora finetune (#9621)
* fix unlora module
* split train and inference

7319f2c227  Yishuo Wang  2023-12-07 15:50:57 +08:00
use fused mlp in baichuan2 (#9620)

deee65785c  Xiangyu Tian  2023-12-07 11:32:33 +08:00
[LLM] vLLM: Delete last_kv_cache before prefilling (#9619)
Remove last_kv_cache before prefilling to reduce peak memory usage.

0327169b50  Xiangyu Tian  2023-12-07 10:08:18 +08:00
[LLM] vLLM: fix memory leak in prepare_kv_cache (#9616)
Revert modification in prepare_kv_cache to fix memory leak.

13d47955a8  Xin Qiu  2023-12-07 09:21:41 +08:00
use fused rms norm in chatglm2 and baichuan (#9613)
* use fused rms norm in chatglm2 and baichuan
* style fix

404e101ded  Yina Chen  2023-12-06 15:36:21 +08:00
QALora example (#9551)
* Support qa-lora
* init
* update
* update
* update
* update
* update
* update merge
* update
* fix style & update scripts
* update
* address comments
* fix typo
* fix typo
---------
Co-authored-by: Yang Wang <yang3.wang@intel.com>

6978b2c316  Guancheng Fu  2023-12-06 15:27:26 +08:00
[VLLM] Change padding patterns for vLLM & clean code (#9609)
* optimize
* fix minor error
* optimizations
* fix style

d154b38bf9  Zheng, Yi  2023-12-05 17:29:48 +08:00
Add llama2 gpu low memory example (#9514)
* Add low memory example
* Minor fixes
* Update readme.md

65934c9f4f  Ziteng Zhang  2023-12-05 15:15:54 +08:00
[LLM] Fix Qwen causal_mask and attention_mask size mismatching (#9600)
* Fix #9582, caused by Qwen's modified modeling_qwen.py 7f62181c94 (d2h-049182)

f211f136b6  Qiyuan Gong  2023-12-05 13:19:47 +08:00
Configurable TORCH_LINEAR_THRESHOLD from env (#9588)
* Add TORCH_LINEAR_THRESHOLD from env (BIGDL_LLM_LINEAR_THRESHOLD)
* Change default to 512
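
The flag name and the 512 default come from the commit; how the threshold steers dispatch below is a rough sketch of the idea, not the library's exact logic:

    import os

    # Roughly: input sizes above the threshold go through torch's native
    # linear, smaller (decode-step) shapes stay on the low-bit matmul kernels.
    TORCH_LINEAR_THRESHOLD = int(
        os.environ.get("BIGDL_LLM_LINEAR_THRESHOLD", "512"))

    def use_torch_linear(input_seq_len: int) -> bool:
        return input_seq_len > TORCH_LINEAR_THRESHOLD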

5c03651309  Xiangyu Tian  2023-12-03 20:16:25 +08:00
[LLM] vLLM: Add Preempt for scheduler (#9568)
Implement Preempt_by_recompute method for vllm.

69c49d21f5  Xin Qiu  2023-11-30 21:47:41 +08:00
use fused rms norm (#9572)
* use fused rms norm
* meet code review
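
For reference, the unfused RMSNorm that these commits replace with a single fused kernel looks roughly like the standard formulation below (generic PyTorch, not bigdl-llm's code):

    import torch
    from torch import nn

    class RMSNorm(nn.Module):
        """Unfused RMSNorm: several small elementwise ops and a reduction,
        which a fused kernel collapses into one launch."""

        def __init__(self, hidden_size: int, eps: float = 1e-6):
            super().__init__()
            self.weight = nn.Parameter(torch.ones(hidden_size))
            self.eps = eps

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            variance = x.pow(2).mean(-1, keepdim=True)
            return self.weight * x * torch.rsqrt(variance + self.eps)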

7f6465518a  Yishuo Wang  2023-11-30 14:56:12 +08:00
support loading llama tokenizer from gguf model (#9565)

34503efa6a  Yuwen Hu  2023-11-29 18:27:56 +08:00
Fix cpu pinned embedding (#9556)

4ff2ca9d0d  binbin Deng  2023-11-29 15:16:18 +08:00
LLM: fix loss error on Arc (#9550)

65121c7997  Yishuo Wang  2023-11-29 14:40:37 +08:00
support loading q4_1/q5_0/q5_1/q8_0 gguf model (#9546)

5f5ca38b74  Yuwen Hu  2023-11-29 09:17:09 +08:00
[LLM Doc] Fix api doc rendering error (#9542)
* Fix api rendering error
* Fix python style

a86c6e0b56  Yishuo Wang  2023-11-28 15:51:15 +08:00
[LLM] support loading gguf model (#9544)

916c338772  Xiangyu Tian  2023-11-28 11:09:54 +08:00
fix bugs in vllm length check (#9543)

e7e0cd3b5e  Zhao Changmin  2023-11-28 09:46:31 +08:00
CPU Pinned embedding Layer (#9538)
* CPU Pinned embedding

963a5c8d79  Guancheng Fu  2023-11-28 09:44:03 +08:00
Add vLLM-XPU version's README/examples (#9536)
* test
* test
* fix last kv cache
* add xpu readme
* remove numactl for xpu example
* fix link error
* update max_num_batched_tokens logic
* add explanation
* add xpu environment version requirement
* refine gpu memory
* fix
* fix style

b6c3520748  Guancheng Fu  2023-11-27 11:21:25 +08:00
Remove xformers from vLLM-CPU (#9535)

6bec0faea5  binbin Deng  2023-11-24 16:20:22 +08:00
LLM: support Mistral AWQ models (#9520)

914a5a5a27  Ruonan Wang  2023-11-24 15:37:50 +08:00
LLM: fix abnormal Mistral GPU accuracy by updating rms_norm (#9529)

3d24823cda  SONG Ge  2023-11-24 14:33:04 +08:00
hot-fix mistral kv_cache (#9528)

42b7a16bc5  Zhao Changmin  2023-11-24 12:16:48 +08:00
Replace torch.bmm with safe_bmm (#9519)
* replace bmm with safe one
* rename args and deprecated warning
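
A drop-in wrapper of this shape is easy to picture; the body below is a hypothetical sketch (the real safe_bmm's workaround is not shown in the log):

    import torch

    def safe_bmm(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        # Hypothetical: same call shape as torch.bmm, but routed through
        # matmul on contiguous operands to sidestep a problematic kernel.
        return torch.matmul(a.contiguous(), b.contiguous())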

b63aae8a8e  Ruonan Wang  2023-11-23 18:40:18 +08:00
LLM: add flash attention support for llama (#9518)
* add initial flash attention for llama
* accelerate fp32 first token by changing to fp16 in advance
* support fp32

bf579507c2  Guancheng Fu  2023-11-23 16:46:45 +08:00
Integrate vllm (#9310)
* done
* Rename structure
* add models
* Add structure/sampling_params,sequence
* add input_metadata
* add outputs
* Add policy,logger
* add and update
* add parallelconfig back
* core/scheduler.py
* Add llm_engine.py
* Add async_llm_engine.py
* Add tested entrypoint
* fix minor error
* Fix everything
* fix kv cache view
* fix
* fix
* fix
* format&refine
* remove logger from repo
* try to add token latency
* remove logger
* Refine config.py
* finish worker.py
* delete utils.py
* add license
* refine
* refine sequence.py
* remove sampling_params.py
* finish
* add license
* format
* add license
* refine
* refine
* Refine line too long
* remove exception
* so dumb style-check
* refine
* refine
* refine
* refine
* refine
* refine
* add README
* refine README
* add warning instead error
* fix padding
* add license
* format
* format
* format fix
* Refine vllm dependency (#1)
vllm dependency clear
* fix licence
* fix format
* fix format
* fix
* adapt LLM engine
* fix
* add license
* fix format
* fix
* Moving README.md to the correct position
* Fix readme.md
* done
* guide for adding models
* fix
* Fix README.md
* Add new model readme
* remove ray-logic
* refactor arg_utils.py
* remove distributed_init_method logic
* refactor entrypoints
* refactor input_metadata
* refactor model_loader
* refactor utils.py
* refactor models
* fix api server
* remove vllm.stucture
* revert by txy 1120
* remove utils
* format
* fix license
* add bigdl model
* Refer to a specific commit
* Change code base
* add comments
* add async_llm_engine comment
* refine
* formatted
* add worker comments
* add comments
* add comments
* fix style
* add changes
---------
Co-authored-by: xiangyuT <xiangyu.tian@intel.com>
Co-authored-by: Xiangyu Tian <109123695+xiangyuT@users.noreply.github.com>
Co-authored-by: leonardozcm <leonardo1997zcm@gmail.com>

0f0c6bb631  Qiyuan Gong  2023-11-23 09:28:04 +08:00
[LLM] Fix Qwen registered_causal_mask is None (#9513)
* Add registered_causal_mask init based on 2abd8e5777.

076d106ef5  Ruonan Wang  2023-11-21 17:08:36 +08:00
LLM: GPU QLoRA update to bf16 to accelerate gradient checkpointing (#9499)
* update to bf16 to accelerate gradient checkpoint
* add utils and fix ut

50b01058f1  Xin Qiu  2023-11-17 14:58:57 +08:00
enable new q4_1 (#9479)

30abd304a7  Zhao Changmin  2023-11-16 21:57:28 +08:00
LLM: Fix baichuan pre-normalize model tensor assigning issue when loading (#9481)
* No need to normalize when loading

c0ef70df02  Ruonan Wang  2023-11-16 14:42:16 +08:00
llm: quick fix of fast_rms_norm (#9480)

d5263e6681  Yina Chen  2023-11-16 14:06:25 +08:00
Add awq load support (#9453)
* Support directly loading GPTQ models from huggingface
* fix style
* fix tests
* change example structure
* address comments
* fix style
* init
* address comments
* add examples
* fix style
* fix style
* fix style
* fix style
* update
* remove
* meet comments
* fix style
---------
Co-authored-by: Yang Wang <yang3.wang@intel.com>

d2c064124a  Ruonan Wang  2023-11-16 11:21:50 +08:00
LLM: update rms related usage to support ipex 2.1 new api (#9466)
* update rms related usage
* fix style

731b0aaade  Yuwen Hu  2023-11-16 10:52:30 +08:00
Empty cache after embedding to cpu (#9477)

51d07a9fd8  Yang Wang  2023-11-13 20:48:12 -08:00
Support directly loading gptq models from huggingface (#9391)
* Support directly loading GPTQ models from huggingface
* fix style
* fix tests
* change example structure
* address comments
* fix style
* address comments
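
A plausible usage sketch for this feature; the exact keyword arguments are an assumption based on bigdl-llm's transformers-style API, and the repo id is a stand-in for any GPTQ checkpoint on the hub:

    from bigdl.llm.transformers import AutoModelForCausalLM

    model = AutoModelForCausalLM.from_pretrained(
        "TheBloke/Llama-2-7B-GPTQ",  # example GPTQ checkpoint id
        load_in_4bit=True,           # converted to bigdl-llm's 4-bit format
    )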

2888818b3a  SONG Ge  2023-11-13 09:26:30 +08:00
[LLM] Support mixed_fp8 on Arc (#9415)
* ut gpu allocation memory fix
* support mix_8bit on arc
* rename mixed_4bit to mixed_fp4 and mixed_8bit to mixed_fp8
* revert unexpected changes
* revert unexpected changes
* unify common logits
* rename in llm xmx_checker
* fix typo error and re-unify

df8e4d7889  Heyang Sun  2023-11-09 14:35:54 +08:00
[LLM] apply allreduce and bias to training in LowBitLinear (#9395)

40cead6b5b  Wang, Jian4  2023-11-09 14:34:01 +08:00
LLM: Fix CPU qlora dtype convert issue (#9394)

bfca76dfa7  Ruonan Wang  2023-11-08 17:46:49 +08:00
LLM: optimize QLoRA by updating lora convert logic (#9372)
* update convert logic of qlora
* update
* refactor and further improve performance
* fix style
* meet code review

7e8fb29b7c  Ruonan Wang  2023-11-08 13:14:34 +08:00
LLM: optimize QLoRA by reducing convert time (#9370)

bfd9f88f0d  Yishuo Wang  2023-11-08 09:54:53 +08:00
[LLM] Use fp32 as dtype when batch_size <= 8 and qtype is q4_0/q8_0/fp8 (#9365)

fae6db3ddc  Heyang Sun  2023-11-07 15:09:16 +08:00
[LLM] refactor cpu low-bit forward logic (#9366)
* [LLM] refactor cpu low-bit forward logic
* fix style
* Update low_bit_linear.py
* Update low_bit_linear.py
* refine

af94058203  Heyang Sun  2023-11-06 17:56:42 +08:00
[LLM] Support CPU deepspeed distributed inference (#9259)
* [LLM] Support CPU Deepspeed distributed inference
* Update run_deepspeed.py
* Rename
* fix style
* add new codes
* refine
* remove annotated codes
* refine
* Update README.md
* refine doc and example code

1420e45cc0  Xin Qiu  2023-11-06 13:56:34 +08:00
Chatglm2 rope optimization on xpu (#9350)

a0150bb205  Yuwen Hu  2023-11-03 11:13:45 +08:00
[LLM] Move embedding layer to CPU for iGPU inference (#9343)
* Move embedding layer to CPU for iGPU llm inference
* Empty cache after to cpu
* Remove empty cache as it seems to have some negative effect to first token
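
The idea: on an iGPU, keep the large embedding table in host memory and copy only the much smaller embedded activations to the device. A generic sketch (vocabulary and hidden sizes are illustrative; the .to("xpu") step is commented out so the snippet runs anywhere):

    import torch
    from torch import nn

    embedding = nn.Embedding(32000, 4096)   # stays on the CPU
    input_ids = torch.tensor([[1, 2, 3]])
    hidden = embedding(input_ids)           # lookup runs on the CPU
    # hidden = hidden.to("xpu")             # only activations go to the iGPU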

726203d778  Yishuo Wang  2023-11-01 13:58:10 +08:00
[LLM] Replace Embedding layer to fix it on CPU (#9254)

e1bc18f8eb  Yang Wang  2023-10-31 20:31:34 -07:00
fix import ipex problem (#9323)
* fix import ipex problem
* fix style

2262ae4d13  Yina Chen  2023-11-01 10:59:46 +08:00
Support MoFQ4 on arc (#9301)
* init
* update
* fix style
* fix style
* fix style
* meet comments

163d033616  Yang Wang  2023-10-27 14:01:15 -07:00
Support qlora in CPU (#9233)
* support qlora in CPU
* revert example
* fix style