2bb96c775c | binbin Deng | 2024-02-20 09:52:59 +08:00
    LLM: fix device setting during saving optimized model (#10154)

1f6d5b9f30 | Xin Qiu | 2024-02-20 08:33:09 +08:00
    enable fused rmsnorm and rope qwen2 (#10163)
    * qwen2
    * change convert
    * cleanup

e31210ba00 | yb-peng | 2024-02-19 18:13:40 +08:00
    Modify html table style and add fp16.csv in harness (#10169)
    * Specify the version of pandas in harness evaluation workflow
    * Specify the version of pandas in harness evaluation workflow
    * Modify html table style and add fp16.csv in harness
    * Modify comments

6c09aed90d | WeiguangHan | 2024-02-19 17:21:00 +08:00
    LLM: add qwen_1.5_7b model for arc perf test (#10166)
    * LLM: add qwen_1.5_7b model for arc perf test
    * small fix
    * revert some codes

209122559a | Yuxuan Xia | 2024-02-19 17:06:53 +08:00
    Add Ceval workflow and modify the result printing (#10140)
    * Add c-eval workflow and modify running files
    * Modify the chatglm evaluator file
    * Modify the ceval workflow for triggering test
    * Modify the ceval workflow file
    * Modify the ceval workflow file
    * Modify ceval workflow
    * Adjust the ceval dataset download
    * Add ceval workflow dependencies
    * Modify ceval workflow dataset download
    * Add ceval test dependencies
    * Add ceval test dependencies
    * Correct the result print

f8730e8dc1 | Zhao Changmin | 2024-02-19 15:56:42 +08:00
    Skip rescale rwkv linear when load_low_bit (#10164)
    * rwkv_ld

3e2af5ec0a | Heyang Sun | 2024-02-19 15:27:34 +08:00
    Fix IPEX Baichuan Speculative (#10162)
    * Fix IPEX Baichuan Speculative
    * compatible with 13B
    * Update speculative.py

23c91cdce6 | Yina Chen | 2024-02-19 14:31:41 +08:00
    [LLM] Add min_step_draft in speculative decoding (#10142)
    * Fix gptj kvcache & position id
    * Add min_draft_tokens in speculative decoding
    * fix style
    * update

14ba2c5135 | Chen, Zhentao | 2024-02-19 14:27:49 +08:00
    Harness: remove deprecated files (#10165)

d3591383d5 | Wang, Jian4 | 2024-02-19 13:38:52 +08:00
    LLM : Add CPU chatglm3 speculative example (#10004)
    * init chatglm
    * update
    * update

f2417e083c | Wang, Jian4 | 2024-02-19 13:38:32 +08:00
    LLM: enable chatglm3-6b target_model ipex (#10085)
    * init
    * always make casual_mask
    * not return last tensor
    * update
    * optimize_model = False
    * enable optimized=False
    * enable optimized_model=true
    * speed_up ipex target_model
    * remove if True
    * use group_size
    * update python style
    * update
    * update

177273c1a4 | Heyang Sun | 2024-02-19 09:12:57 +08:00
    IPEX Speculative Support for Baichuan2 7B (#10112)
    * IPEX Speculative Support for Baichuan2 7B
    * fix license problems
    * refine

1508d6b089 | Yina Chen | 2024-02-18 10:02:49 +08:00
    Fix gptj kvcache & position id (#10141)

b4dc33def6 | yb-peng | 2024-02-08 19:01:05 +08:00
    In harness-evaluation workflow, add statistical tables (#10118)
    * chnage storage
    * fix typo
    * change label
    * change label to arc03
    * change needs in the last step
    * add generate csv in harness/make_table_results.py
    * modify needs in the last job
    * add csv to html
    * mfix path issue in llm-harness-summary-nightly
    * modify output_path
    * modify args in make_table_results.py
    * modify make table command in summary
    * change pr env label
    * remove irrelevant code in summary; add set output path step; add limit in harness run
    * re-organize code structure
    * modify limit in run harness
    * modify csv_to_html input path
    * modify needs in summary-nightly

4d33aac7f9 | Yishuo Wang | 2024-02-08 17:04:59 +08:00
    quick fix qwen2 fp8 kv cache (#10135)

39d90839aa | Cengguang Zhang | 2024-02-08 16:49:22 +08:00
    LLM: add quantize kv cache for llama. (#10086)
    * feat: add quantize kv cache for llama.
    * fix style.
    * add quantized attention forward function.
    * revert style.
    * fix style.
    * fix style.
    * update quantized kv cache and add quantize_qkv
    * fix style.
    * fix style.
    * optimize quantize kv cache.
    * fix style.

d848efe17c | Yishuo Wang | 2024-02-08 16:17:21 +08:00
    add quantize kv cache support for qwen2 (#10134)

3f79128ed7 | SONG Ge | 2024-02-08 14:20:26 +08:00
    [LLM] Enable kv_cache optimization for Qwen2 on transformers-v4.37.0 (#10131)
    * add support for kv_cache optimization on transformers-v4.37.0
    * enable attention forward
    * style fix
    * disable rotary for now

063dc145ac | Ruonan Wang | 2024-02-08 13:52:01 +08:00
    LLM: basic support for q2k (#10132)
    * basic support for q2k
    * fix style

11fe5a87ec | binbin Deng | 2024-02-08 11:18:07 +08:00
    LLM: add Modelscope model example (#10126)

0cf6a12691 | Cengguang Zhang | 2024-02-08 10:24:16 +08:00
    LLM: add default torch_dtype for fp16. (#10124)
    * set default torch_dtype for fp16.
    * fix style.
    * bug fix.
    * update bug fix.

1aa0c623ce | Yishuo Wang | 2024-02-08 10:20:01 +08:00
    disable fused layer norm on UHD (#10130)

a8450fc300 | Yuwen Hu | 2024-02-08 09:15:34 +08:00
    [LLM] Support MLP optimization for Qwen1.5 (#10123)

81ed65fbe7 | Yuwen Hu | 2024-02-07 22:31:20 +08:00
    [LLM] Add qwen1.5-7B in iGPU perf (#10127)
    * Add qwen1.5 test config yaml with transformers 4.37.0
    * Update for yaml file

0fcfbfaf6f | Jin Qiao | 2024-02-07 16:58:29 +08:00
    LLM: add rwkv5 eagle GPU HF example (#10122)
    * LLM: add rwkv5 eagle example
    * fix
    * fix link

925f82107e | binbin Deng | 2024-02-07 16:46:36 +08:00
    LLM: support models hosted by modelscope (#10106)

c1ec3d8921 | binbin Deng | 2024-02-07 15:02:24 +08:00
    LLM: update FAQ about too many open files (#10119)

2e80701f58 | Keyan (Kyrie) Zhang | 2024-02-07 14:25:36 +08:00
    Unit test on final logits and the logits of the last attention layer (#10093)
    * Add unit test on final logits and attention
    * Add unit test on final logits and attention
    * Modify unit test on final logits and attention

3832eb0ce0 | Yuxuan Xia | 2024-02-07 11:27:06 +08:00
    Add ChatGLM C-Eval Evaluator (#10095)
    * Add ChatGLM ceval evaluator
    * Modify ChatGLM Evaluator Reference

63050c954d | Jin Qiao | 2024-02-07 11:05:11 +08:00
    fix (#10117)

d3d2ee1b63 | Jin Qiao | 2024-02-07 10:50:02 +08:00
    LLM: add speech T5 GPU example (#10090)
    * add speech t5 example
    * fix
    * fix

2f4c754759 | Jin Qiao | 2024-02-07 10:47:11 +08:00
    LLM: add bark gpu example (#10091)
    * add bark gpu example
    * fix
    * fix license
    * add bark
    * add example
    * fix
    * another way

8953acd7d6 | Xiangyu Tian | 2024-02-07 10:27:10 +08:00
    [LLM] Fix log condition for BIGDL_OPT_IPEX (#10115)
    Fix log condition for BIGDL_OPT_IPEX

0eccb94d75 | SONG Ge | 2024-02-06 17:46:52 +08:00
    remove text-generation-webui from bigdl repo (#10107)

2aaa21c41d | Ovo233 | 2024-02-06 17:31:48 +08:00
    LLM: Update ppl tests (#10092)
    * update ppl tests
    * use load_dataset api
    * add exception handling
    * add language argument
    * address comments

3a46b57253 | Yuwen Hu | 2024-02-06 16:30:24 +08:00
    [LLM] Add RWKV4 HF GPU Example (#10105)
    * Add GPU HF example for RWKV 4
    * Add link to rwkv4
    * fix

518ef95abc | Yuwen Hu | 2024-02-06 14:58:52 +08:00
    Small fix for Nonetype error (#10104)

d61f4905ac | Ruonan Wang | 2024-02-06 14:58:32 +08:00
    LLM: 2bit quantization initial support (#10042)
    * basis quantize support
    * fix new module name
    * small update
    * and mixed int4 with iq2_xxs
    * remove print
    * code refactor
    * fix style
    * meet code review

36c9442c6d | dingbaorong | 2024-02-06 10:23:50 +08:00
    Arc Stable version test (#10087)
    * add batch_size in stable version test
    * add batch_size in excludes
    * add excludes for batch_size
    * fix ci
    * triger regression test
    * fix xpu version
    * disable ci
    * address kai's comment
    Co-authored-by: Ariadne <wyn2000330@126.com>

33b9e7744d | Jiao Wang | 2024-02-05 15:07:38 -08:00
    fix dimension (#10097)

4b02ff188b | SONG Ge | 2024-02-05 18:23:13 +08:00
    [WebUI] Add prompt format and stopping words for Qwen (#10066)
    * add prompt format and stopping_words for qwen mdoel
    * performance optimization
    * optimize
    * update
    * meet comments

0aecd8637b | WeiguangHan | 2024-02-05 17:27:34 +08:00
    LLM: small fix for the html script (#10094)

7d2be7994f | Zhicun | 2024-02-05 11:12:47 +08:00
    add phixtral and optimize phi-moe (#10052)

676d6923f2 | Zhicun | 2024-02-05 10:42:10 +08:00
    LLM: modify transformersembeddings.embed() in langchain (#10051)

ad050107b3 | Jin Qiao | 2024-02-05 10:17:07 +08:00
    LLM: fix mpt load_low_bit issue (#10075)
    * fix
    * retry
    * retry

9050991e4e | SONG Ge | 2024-02-04 16:46:29 +08:00
    fix gradio check issue temply (#10082)

c2e562d037 | WeiguangHan | 2024-02-04 16:35:44 +08:00
    LLM: add batch_size to the csv and html (#10080)
    * LLM: add batch_size to the csv and html
    * small fix

7e49fbc5dd | binbin Deng | 2024-02-04 16:03:52 +08:00
    LLM: make finetuning examples more common for other models (#10078)

90f004b80b | Heyang Sun | 2024-02-04 15:42:15 +08:00
    remove benchmarkwrapper form deepspeed example (#10079)

8e33cb0f38 | Ruonan Wang | 2024-02-04 13:26:42 +08:00
    LLM: support speecht5_tts (#10077)
    * support speecht5_tts
    * fix