a88c132e54 | Qiyuan Gong | 2024-08-13 14:50:54 +08:00 | Reduce Mistral softmax memory only in low memory mode (#11775)
aa861df066 | Yishuo Wang | 2024-08-13 14:48:11 +08:00 | use new fp32 softmax kernel (#11776)
23d3acdc77 | binbin Deng | 2024-08-13 14:41:36 +08:00 | Add experimental support of fused decoder layer for llama2 (#11768)
c28b3389e6 | Jin, Qiao | 2024-08-13 14:14:59 +08:00 | Update npu multimodal example (#11773)
81824ff8c9 | Yuwen Hu | 2024-08-13 10:51:08 +08:00 | Fix stdout in all-in-one benchmark to utf-8 (#11772)
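The entry above only records that the all-in-one benchmark's stdout was forced to utf-8. As a hedged illustration (the exact code used in the benchmark script is an assumption), one common way to do this in Python is:

    import sys

    # Force UTF-8 output so prompts and model names with non-ASCII characters
    # do not crash the benchmark on consoles that default to a narrower codec.
    if hasattr(sys.stdout, "reconfigure"):  # io.TextIOWrapper, Python 3.7+
        sys.stdout.reconfigure(encoding="utf-8", errors="replace")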
a1eb793f70 | Yishuo Wang | 2024-08-13 09:51:18 +08:00 | optimize minicpm v 2_6 first token perf (#11770)
841dbcdf3a | Yina Chen | 2024-08-12 18:53:55 +08:00 | Fix compresskv with lookahead issue (#11767)
    * fix compresskv + lookahead attn_mask qwen2
    * support llama chatglm
    * support mistral & chatglm
    * address comments
    * revert run.py
f97a77ea4e | Yuwen Hu | 2024-08-12 17:49:45 +08:00 | Update all-in-one benchmark for continuation task input preparation (#11760)
    * All use 8192.txt for prompt preparation for now
    * Small fix
    * Fix text encoding mode to utf-8
    * Small update
1b05caba2b | Xu, Shuo | 2024-08-12 17:25:07 +08:00 | Set mistral fuse rope to false except fp6 & fp16 (#11765)
    * set mistral fuse rope to false except fp6 & fp16
    * lint
    Co-authored-by: ATMxsp01 <shou.xu@intel.com>
8db34057b4 | Ruonan Wang | 2024-08-12 17:19:12 +08:00 | optimize lookahead init time (#11769)
05989ad0f9 | Jin, Qiao | 2024-08-12 16:46:46 +08:00 | Update npu example and all-in-one benchmark (#11766)
57d177738d | Yishuo Wang | 2024-08-12 14:10:10 +08:00 | optimize minicpm-v-2_6 repetition penalty (#11763)
245dba0abc | Wang, Jian4 | 2024-08-12 10:35:37 +08:00 | Fix lightweight-serving codegeex error (#11759)
66fe2ee464 | Ruonan Wang | 2024-08-09 19:04:09 +08:00 | initial support of IPEX_LLM_PERFORMANCE_MODE (#11754)
    * add perf mode
    * update
    * fix style
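The entry above introduces the IPEX_LLM_PERFORMANCE_MODE environment variable but does not document its values; the sketch below merely shows the flag being exported before use, with "1" as an assumed enabling value.

    import os

    # Hypothetical opt-in: export the flag before ipex-llm initialises.
    # Treating "1" as the enabling value is an assumption for illustration only.
    os.environ.setdefault("IPEX_LLM_PERFORMANCE_MODE", "1")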
4b9c57cc60 | Yina Chen | 2024-08-09 17:39:57 +08:00 | Support compress kv with lookahead (#11752)
    * support compress kv with lookahead
    * enough kv miss param
93455aac09 | Yishuo Wang | 2024-08-09 17:39:24 +08:00 | fix minicpm V 2.6 repeat output (#11753)
7e917d6cfb | Ruonan Wang | 2024-08-09 16:39:25 +08:00 | fix gptq of llama (#11749)
    * fix gptq of llama
    * small fix
dd46c141bd | Yina Chen | 2024-08-09 15:43:43 +08:00 | Phi3 support compresskv (#11733)
    * phi3 support compresskv
    * fix phi3 mtl error
    * fix conflict with quant kv
    * fix abnormal on mtl
    * use sliding window size to compress kv
    * support sliding window
    * temp: partial support quant kv
    * support quant kv with compress kv, todo: model check
    * remove prepare
    * address comment
    * default -> 1.8k
    * fix style
d8808cc2e3 | Qiyuan Gong | 2024-08-09 10:35:51 +08:00 | Mistral apply_rotary_pos_emb_no_cache_xpu use rope_theta from config (#11747)
    mistral-7B-instruct-v0.2 and mistral-7B-instruct-v0.1 use different rope_theta values (1e6 for v0.2, 1e4 for v0.1). Pass self.config.rope_theta to apply_rotary_pos_emb_no_cache_xpu to avoid output differences.
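A minimal sketch of the idea behind that change: take rope_theta from the loaded checkpoint's config rather than from a hard-coded default, so the rotary embedding matches the model variant. The helper below is illustrative only and does not reproduce the actual apply_rotary_pos_emb_no_cache_xpu signature.

    import torch

    def rope_inv_freq(head_dim: int, rope_theta: float) -> torch.Tensor:
        # Standard RoPE inverse frequencies. rope_theta should come from the model
        # config (config.rope_theta), because Mistral v0.1 and v0.2 ship different
        # values and a hard-coded default silently skews the rotary angles.
        return 1.0 / (rope_theta ** (torch.arange(0, head_dim, 2, dtype=torch.float32) / head_dim))

    # Illustrative call site, where config is the checkpoint's Hugging Face config:
    # inv_freq = rope_inv_freq(config.hidden_size // config.num_attention_heads,
    #                          config.rope_theta)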
044e486480 | Xiangyu Tian | 2024-08-09 10:33:52 +08:00 | Fix vLLM CPU /chat endpoint (#11748)
27b4b104ed | Jinhe | 2024-08-08 16:42:18 +08:00 | Add qwen2-1.5b-instruct into igpu performance (#11735)
    * updated qwen1.5B to all transformer==4.37 yaml
107f7aafd0 | Shaojun Liu | 2024-08-08 14:38:30 +08:00 | enable inference mode for deepspeed tp serving (#11742)
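For context, "inference mode" here refers to PyTorch's torch.inference_mode(); a generic sketch of wrapping a generation call in it (not the serving code from this commit) looks like:

    import torch

    @torch.inference_mode()  # disables autograd tracking for the whole call
    def serve_one_request(model, input_ids):
        # Illustrative generation step; the actual DeepSpeed tensor-parallel
        # serving loop in this commit is more involved.
        return model.generate(input_ids, max_new_tokens=32)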
9e65cf00b3 | Zijie Li | 2024-08-08 12:32:59 +08:00 | Add openai-whisper pytorch gpu (#11736)
    * Add openai-whisper pytorch gpu
    * fix typo
    * fix names and update readme
    * Update README.md
d0c89fb715 | Jinhe | 2024-08-08 11:04:01 +08:00 | updated llama.cpp and ollama quickstart (#11732)
    * updated llama.cpp and ollama quickstart.md
    * added qwen2-1.5B sample output
    * revision on quickstart updates
    * revision on qwen2 readme
    * added 2 troubleshoots
    * troubleshoot revision
54cc9353db | Yishuo Wang | 2024-08-07 18:21:16 +08:00 | support and optimize minicpm-v-2_6 (#11738)
e956e71fc1 | Yina Chen | 2024-08-07 18:10:30 +08:00 | fix conflict with quant kv (#11737)
00a5574c8a | Ruonan Wang | 2024-08-07 18:04:01 +08:00 | Use merge_qkv to replace fused_qkv for llama2 (#11727)
    * update 4.38
    * support new versions
    * update rope
    * temp test sdpa
    * fix cpu ut
    * fix style
d2abc9711b | Yina Chen | 2024-08-07 16:21:57 +08:00 | Fix MTL 4k input qwen2 compresskv error (#11734)
    * fix
    * fix style
a71ae7c22b | Yina Chen | 2024-08-07 11:35:39 +08:00 | Support minicpm compresskv & modify default compresskv config & default enable compresskv on mtl 2.5k~4.5k (#11726)
    * support minicpm & modify default & default enable on mtl 2.5k~4.5k
    * fix style
c093f7d980 | Yishuo Wang | 2024-08-07 09:39:46 +08:00 | fix phi3 (#11729)
e7f7141781 | Zijie Li | 2024-08-07 08:48:07 +08:00 | Add benchmark util for transformers 4.42 (#11725)
    * add new benchmark_util.py for transformers>=4.43.1; the old one is renamed to benchmark_util_prev.py
    * Small fix to import code
    * Update __init__.py and fix file names
    * Update lint-python to exclude benchmark_util_4_29.py and benchmark_util_4_43.py
    * Update benchmark_util_4_43.py
    * add benchmark_util for transformers 4.42
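The entries above keep several benchmark_util variants side by side and pick one by installed transformers version. A sketch of that dispatch pattern follows; only the 4_29 and 4_43 file names come from the commit messages, the rest is assumed:

    import transformers
    from packaging.version import Version

    # Hypothetical selection logic; the real code in ipex-llm may differ.
    v = Version(transformers.__version__)
    if v >= Version("4.43.1"):
        module_name = "benchmark_util_4_43"
    elif v >= Version("4.42.0"):
        module_name = "benchmark_util_4_42"  # assumed name for the 4.42 variant
    else:
        module_name = "benchmark_util_4_29"

    print(f"benchmarking with {module_name} for transformers {v}")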
4676af2054 | Ch1y0q | 2024-08-06 21:17:50 +08:00 | add gemma2 example (#11724)
    * add `gemma2`
    * update `transformers` version
    * update `README.md`
985213614b | SichengStevenLi | 2024-08-06 16:12:00 +08:00 | Removed no longer needed models for Arc nightly perf (#11722)
    * removed LLMs that are no longer needed: mistralai/Mistral-7B-v0.1, deepseek-ai/deepseek-coder-6.7b-instruct
    * Update arc-perf-test-batch4.yaml: removed deepseek-ai/deepseek-coder-6.7b-instruct, mistralai/Mistral-7B-v0.1
    * Update arc-perf-test.yaml: removed deepseek-ai/deepseek-coder-6.7b-instruct, mistralai/Mistral-7B-v0.1
    * Create arc-perf-transformers-438.yaml
    * Moved arc-perf-transformers-438.yaml location
    * Create arc-perf-transformers-438-batch2.yaml
    * Create arc-perf-transformers-438-batch4.yaml
    * Delete python/llm/test/benchmark/arc-perf-transformers-438-batch2.yaml
    * Delete python/llm/test/benchmark/arc-perf-transformers-438-batch4.yaml
    * Delete python/llm/test/benchmark/arc-perf-transformers-438.yaml
929675aa6b | Yishuo Wang | 2024-08-06 15:52:55 +08:00 | support latest phi3 (#11721)
11650b6f81 | Jin, Qiao | 2024-08-06 14:55:09 +08:00 | upgrade glm-4v example transformers version (#11719)
bbdff6edeb | Yishuo Wang | 2024-08-06 14:25:08 +08:00 | optimize internvl2 4b performance (#11720)
f44b732aa8 | Yishuo Wang | 2024-08-06 13:36:32 +08:00 | support internvl2-4b (#11718)
7f241133da | Jin, Qiao | 2024-08-06 10:22:41 +08:00 | Add MiniCPM-Llama3-V-2_5 GPU example (#11693)
808d9a7bae | Jin, Qiao | 2024-08-06 10:22:33 +08:00 | Add MiniCPM-V-2 GPU example (#11699)
    * Add MiniCPM-V-2 GPU example
    * add example in README.md
8fb36b9f4a | Zijie Li | 2024-08-05 16:18:48 +08:00 | add new benchmark_util.py (#11713)
493cbd9a36 | Wang, Jian4 | 2024-08-05 09:36:04 +08:00 | Support lightweight-serving with internlm-xcomposer2-vl-7b multimodal input (#11703)
    * init image_list
    * enable internlm-xcomposer2 image input
    * update style
    * add readme
    * update model
aa98ef96fe | Ruonan Wang | 2024-08-02 15:55:16 +08:00 | change mixed_precision to q6_k (#11706)
1baa3efe0e | Xiangyu Tian | 2024-08-02 12:06:59 +08:00 | Optimizations for Pipeline Parallel Serving (#11702)
8d1e0bd2f4 | Yina Chen | 2024-08-02 10:27:40 +08:00 | add sdp causal support in llama (#11705)
736a7ef72e | Ruonan Wang | 2024-08-01 18:57:31 +08:00 | add sdp_causal for mistral 4.36 (#11686)
    * add sdp_causal for mistral
    * fix and update
45c730ff39 | Yina Chen | 2024-08-01 18:20:20 +08:00 | Chatglm support compresskv (#11690)
    * chatglm4 support compresskv
    * support chatglm2
    * fix quantkv conflict
    * fix style
762ad49362 | Qiyuan Gong | 2024-08-01 18:16:21 +08:00 | Add RANK_WAIT_TIME into DeepSpeed-AutoTP to avoid CPU memory OOM (#11704)
    DeepSpeed-AutoTP starts multiple processes to load the model and convert it in CPU memory. If model/rank_num is large, this will lead to OOM. Add RANK_WAIT_TIME to reduce memory usage by controlling model-reading parallelism.
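A minimal sketch of the staggering idea described above; the RANK_WAIT_TIME name comes from the commit, while how the delay is actually applied inside the AutoTP loader is an assumption:

    import os
    import time

    def stagger_model_load(local_rank: int) -> None:
        # Each rank waits local_rank * RANK_WAIT_TIME seconds before reading the
        # checkpoint, so not every rank holds a full copy in CPU memory at once.
        wait_seconds = float(os.environ.get("RANK_WAIT_TIME", "0"))
        time.sleep(local_rank * wait_seconds)

    # e.g. with RANK_WAIT_TIME=60, rank 2 starts loading 120 seconds after rank 0
    stagger_model_load(int(os.environ.get("LOCAL_RANK", "0")))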
8ef4caaf5d | hxsz1997 | 2024-08-01 14:17:46 +08:00 | add 3k and 4k input of nightly perf test on iGPU (#11701)
    * Add 3k & 4k input in workflow for iGPU (#11685)
    * comment for test
    * comment models for accelerate test
    * remove OOM models
    * modify typo
    * change test model (#11696)
    * reverse test models (#11700)
afeca38a47 | Guancheng Fu | 2024-07-31 13:50:01 +08:00 | Fix import vllm condition (#11682)
54bf3a23a6 | Ruonan Wang | 2024-07-31 11:39:58 +08:00 | add fallback for unsupported k-quants (#11691)
    * add fallback
    * fix style
5079ed9e06 | Zijie Li | 2024-07-31 10:53:30 +08:00 | Add Llama3.1 example (#11689)
    * Add Llama3.1 example for Linux Arc and Windows MTL
    * adjust compatibilities: transformers changed to 4.43.1
    * Update index.rst
    * Update README.md
6e3ce28173 | Jin, Qiao | 2024-07-31 10:24:50 +08:00 | Upgrade glm-4 example transformers version (#11659)
    * upgrade glm-4 example transformers version
    * move pip install in one line
a44ab32153 | Jin, Qiao | 2024-07-30 17:08:06 +08:00 | Switch to conhost when running on NPU (#11687)
b119825152 | Wang, Jian4 | 2024-07-30 16:37:44 +08:00 | Remove tgi parameter validation (#11688)
    * remove validation
    * add min warm up
    * remove no need source
670ad887fc | Yina Chen | 2024-07-30 11:16:42 +08:00 | Qwen support compress kv (#11680)
    * Qwen support compress kv
    * fix style
9b36877897 | hxsz1997 | 2024-07-30 09:38:46 +08:00 | disable default quantize_kv of GQA on MTL (#11679)
    * disable default quantize_kv of GQA on MTL
    * fix style
c02003925b | Yishuo Wang | 2024-07-29 16:10:23 +08:00 | add mlp for gemma2 (#11678)
1da1f1dd0e | RyuKosei | 2024-07-29 15:56:16 +08:00 | Combine two versions of run_wikitext.py (#11597)
    * Combine two versions of run_wikitext.py
    * aligned the format
    * update error display
    * simplified argument parser
    Co-authored-by: jenniew <jenniewang123@gmail.com>
6f999e6e90 | Yishuo Wang | 2024-07-29 15:15:47 +08:00 | add sdp for gemma2 (#11677)
c11d5301d7 | Ruonan Wang | 2024-07-29 13:46:22 +08:00 | add sdp fp8 for llama (#11671)
    * add sdp fp8 for llama
    * refactor and fix style
7f88ce23cd | Yishuo Wang | 2024-07-29 11:13:00 +08:00 | add more gemma2 optimization (#11673)
3e8819734b | Yishuo Wang | 2024-07-29 10:46:51 +08:00 | add basic gemma2 optimization (#11672)
336dfc04b1 | Guoqiong Song | 2024-07-26 12:39:09 -07:00 | fix 1482 (#11661)
    Co-authored-by: rnwang04 <ruonan1.wang@intel.com>
ba01b85c13 | Heyang Sun | 2024-07-26 16:46:21 +08:00 | empty cache only for 1st token but not rest tokens to speed up (#11665)
fc7f8feb83 | Yina Chen | 2024-07-26 16:02:00 +08:00 | Support compress kv (#11642)
    * mistral snapkv
    * mtl update
    * add comments
    * support llama; llama use compress kv
    * support mistral 4.40
    * support diff transformers versions
    * move snapkv util to kv
    * meet comments & small fix
    * revert all in one
    * fix indent and style
    Co-authored-by: leonardozcm <leonardo1997zcm@gmail.com>
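For background, "compress kv" in these commits refers to SnapKV-style KV-cache compression: keep a recent observation window plus the past entries whose keys attract the most attention from that window. The sketch below is a generic illustration of that selection step, not ipex-llm's implementation:

    import torch

    def compress_kv_indices(attn: torch.Tensor, window: int, keep: int) -> torch.Tensor:
        # attn: [heads, window, kv_len] attention paid by the last `window` queries.
        # Score each pre-window KV slot by the attention it received, then keep the
        # top `keep` of them plus the recent window itself.
        past_scores = attn[:, :, :-window].sum(dim=(0, 1))          # [kv_len - window]
        top_past = torch.topk(past_scores, k=min(keep, past_scores.numel())).indices
        recent = torch.arange(past_scores.numel(), past_scores.numel() + window)
        return torch.cat([top_past.sort().values, recent])

    # 8 heads, 32-token observation window over a 512-token cache, keep 64 past slots
    kept = compress_kv_indices(torch.rand(8, 32, 512), window=32, keep=64)
    print(kept.numel())  # 96 entries survive out of 512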
6bcdc6cc8f | Yishuo Wang | 2024-07-26 13:41:51 +08:00 | fix qwen2 cpu (#11663)
23681fbf5c | Wang, Jian4 | 2024-07-26 09:41:03 +08:00 | Support codegeex4-9b for lightweight-serving (#11648)
    * add options, support prompt and not return end_token
    * enable openai parameter
    * set do_sample None and update style
a4d30a8211 | Guancheng Fu | 2024-07-25 15:24:19 +08:00 | Change logic for detecting if vllm is available (#11657)
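The entry above only says the vLLM availability check was changed; a common way to detect an optional dependency without importing it eagerly (a generic pattern, not the code from this commit) is:

    import importlib.util

    def is_vllm_available() -> bool:
        # Look the package up without importing it, so a missing or broken
        # install is detected cleanly instead of raising at import time.
        return importlib.util.find_spec("vllm") is not None

    print(is_vllm_available())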
0c6e0b86c0 | Qiyuan Gong | 2024-07-25 14:41:19 +08:00 | Refine continuation get input_str (#11652)
    * Remove duplicate code in continuation get input_str.
    * Avoid infinite loop in all-in-one due to test_length not in the list.
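The second bullet describes guarding a length lookup so the benchmark cannot spin forever on a test_length that has no matching prompt; a hypothetical sketch of such a guard (the prompt sizes and helper name are assumptions) is:

    # Hypothetical guard: bound the search instead of looping until a matching
    # prompt length is found.
    PROMPT_LENGTHS = [32, 256, 1024, 2048, 8192]  # assumed available prompt sizes

    def pick_prompt_length(test_length: int) -> int:
        for length in sorted(PROMPT_LENGTHS):
            if length >= test_length:
                return length
        return max(PROMPT_LENGTHS)  # fall back instead of iterating forever

    print(pick_prompt_length(3000))  # -> 8192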
									RyuKosei 
								
							 
						 
						
							
							
								
								
							
							
							
								
							
							
								2fbd375a94 
								
							 
						 
						
							
							
								
								update several models for nightly perf test ( #11643 )  
							
							 
							
							... 
							
							
							
							Co-authored-by: Yishuo Wang <yishuo.wang@intel.com> 
							
						 
						
							2024-07-25 14:06:08 +08:00  
						
						
							 
							
							
								 
							 
							
						 
					 
				
					
						
							
								
								
									 
									Xiangyu Tian 
								
							 
						 
						
							
							
								
								
							
							
							
								
							
							
								4499d25c26 
								
							 
						 
						
							
							
								
								LLM: Fix ParallelLMHead convert in vLLM cpu ( #11654 )  
							
							 
							
							
							
						 
						
							2024-07-25 13:07:19 +08:00  
						
						
							 
							
							
								 
							 
							
						 
					 
				
					
						
							
								
								
									 
									binbin Deng 
								
							 
						 
						
							
							
								
								
							
							
							
								
							
							
								777e61d8c8 
								
							 
						 
						
							
							
								
								Fix qwen2 & int4 on NPU ( #11646 )  
							
							 
							
							
							
						 
						
							2024-07-24 13:14:39 +08:00  
						
						
							 
							
							
								 
							 
							
						 
					 
				
					
						
							
								
								
									 
									Yishuo Wang 
								
							 
						 
						
							
							
								
								
							
							
							
								
							
							
								1b3b46e54d 
								
							 
						 
						
							
							
								
								fix chatglm new model ( #11639 )  
							
							 
							
							
							
						 
						
							2024-07-23 13:44:56 +08:00  
						
						
							 
							
							
								 
							 
							
						 
					 
				
					
						
							
								
								
									 
									Xu, Shuo 
								
							 
						 
						
							
							
								
								
							
							
							
								
							
							
								7f80db95eb 
								
							 
						 
						
							
							
								
								Change run.py in benchmark to support phi-3-vision in arc-perf ( #11638 )  
							
							 
							
							... 
							
							
							
							Co-authored-by: ATMxsp01 <shou.xu@intel.com> 
							
						 
						
							2024-07-23 09:51:36 +08:00  
						
						
							 
							
							
								 
							 
							
						 
					 
				
					
						
							
								
								
									 
									Xiangyu Tian 
								
							 
						 
						
							
							
								
								
							
							
							
								
							
							
								060792a648 
								
							 
						 
						
							
							
								
								LLM: Refine Pipeline Parallel FastAPI ( #11587 )  
							
							 
							
							... 
							
							
							
							Refine Pipeline Parallel FastAPI 
							
						 
						
							2024-07-22 15:52:05 +08:00  
						
						
							 
							
							
								 
							 
							
						 
					 
				
					
						
							
								
								
									 
									Wang, Jian4 
								
							 
						 
						
							
							
								
								
							
							
							
								
							
							
								1eed0635f2 
								
							 
						 
						
							
							
								
								Add lightweight serving and support tgi parameter ( #11600 )  
							
							 
							
							... 
							
							
							
							* init tgi request
* update openai api
* update for pp
* update and add readme
* add to docker
* add start bash
* update
* update
* update 
							
						 
						
							2024-07-19 13:15:56 +08:00  
						
						
							 
							
							
								 
							 
							
						 
					 
				
					
						
							
								
								
									 
									Xiangyu Tian 
								
							 
						 
						
							
							
								
								
							
							
							
								
							
							
								d27a8cd08c 
								
							 
						 
						
							
							
								
								Fix Pipeline Parallel dtype ( #11623 )  
							
							 
							
							
							
						 
						
							2024-07-19 13:07:40 +08:00  
						
						
							 
							
							
								 
							 
							
						 
					 
				
					
						
							
								
								
									 
									Yishuo Wang 
								
							 
						 
						
							
							
								
								
							
							
							
								
							
							
								d020ad6397 
								
							 
						 
						
							
							
								
								add save_low_bit support for DiskEmbedding ( #11621 )  
							
							 
							
							
							
						 
						
							2024-07-19 10:34:53 +08:00  
						
						
							 
							
							
								 
							 
							
						 
					 
				
					
						
							
								
								
									 
									Guoqiong Song 
								
							 
						 
						
							
							
								
								
							
							
							
								
							
							
								380717f50d 
								
							 
						 
						
							
							
								
								fix gemma for 4.41 ( #11531 )  
							
							 
							
							... 
							
							
							
							* fix gemma for 4.41 
							
						 
						
							2024-07-18 15:02:50 -07:00  
						
						
							 
							
							
								 
							 
							
						 
					 
				
					
						
							
								
								
									 
									Guoqiong Song 
								
							 
						 
						
							
							
								
								
							
							
							
								
							
							
								5a6211fd56 
								
							 
						 
						
							
							
								
								fix minicpm for transformers>=4.39 ( #11533 )  
							
							 
							
							... 
							
							
							
							* fix minicpm for transformers>=4.39 
							
						 
						
							2024-07-18 15:01:57 -07:00  
						
						
							 
							
							
								 
							 
							
						 
					 
				
					
						
							
								
								
									 
									Yishuo Wang 
								
							 
						 
						
							
							
								
								
							
							
							
								
							
							
								0209427cf4 
								
							 
						 
						
							
							
								
								Add disk_embedding parameter to support put Embedding layer on CPU ( #11617 )  
							
							 
							
							
							
						 
						
							2024-07-18 17:06:06 +08:00  
						
						
							 
							
							
								 
							 
							
						 
					 
				
					
						
							
								
								
									 
									Yuwen Hu 
								
							 
						 
						
							
							
								
								
							
							
							
								
							
							
								2478e2c14b 
								
							 
						 
						
							
							
								
								Add check in iGPU perf workflow for results integrity ( #11616 )  
							
							 
							
							... 
							
							
							
							* Add csv check for igpu benchmark workflow (#11610 )
* add csv check for igpu benchmark workflow
* ready to test
---------
Co-authored-by: ATMxsp01 <shou.xu@intel.com>
* Restore the temporarily removed models in iGPU-perf (#11615 )
Co-authored-by: ATMxsp01 <shou.xu@intel.com>
---------
Co-authored-by: Xu, Shuo <100334393+ATMxsp01@users.noreply.github.com>
Co-authored-by: ATMxsp01 <shou.xu@intel.com> 
							
						 
						
							2024-07-18 14:13:16 +08:00  
						
						
							 
							
							
								 
							 
							
						 
					 
				
					
						
							
								
								
									 
									Xiangyu Tian 
								
							 
						 
						
							
							
								
								
							
							
							
								
							
							
								4594a3dd6c 
								
							 
						 
						
							
							
								
								LLM: Fix DummyLayer.weight device in Pipeline Parallel ( #11612 )  
							
							 
							
							
							
						 
						
							2024-07-18 13:39:34 +08:00  
						
						
							 
							
							
								 
							 
							
						 
					 
				
					
						
							
								
								
									 
									Ruonan Wang 
								
							 
						 
						
							
							
								
								
							
							
							
								
							
							
								4da93709b1 
								
							 
						 
						
							
							
								
								update doc/setup to use onednn gemm for cpp ( #11598 )  
							
							 
							
							... 
							
							
							
							* update doc/setup to use onednn gemm
* small fix
* Change TOC of graphrag quickstart back 
							
						 
						
							2024-07-18 13:04:38 +08:00  
						
						
							 
							
							
								 
							 
							
						 
					 
				
					
						
							
								
								
									 
									Yishuo Wang 
								
							 
						 
						
							
							
								
								
							
							
							
								
							
							
								f4077fa905 
								
							 
						 
						
							
							
								
								fix llama3-8b npu long input stuck ( #11613 )  
							
							 
							
							
							
						 
						
							2024-07-18 11:08:17 +08:00  
						
						
							 
							
							
								 
							 
							
						 
					 
				
					
						
							
								
								
									 
									Zhao Changmin 
								
							 
						 
						
							
							
								
								
							
							
							
								
							
							
								e5c0058c0e 
								
							 
						 
						
							
							
								
								fix baichuan ( #11606 )  
							
							 
							
							
							
						 
						
							2024-07-18 09:43:36 +08:00  
						
						
							 
							
							
								 
							 
							
						 
					 
				
					
						
							
								
								
									 
									Guoqiong Song 
								
							 
						 
						
							
							
								
								
							
							
							
								
							
							
								bfcdc35b04 
								
							 
						 
						
							
							
								
								phi-3 on "transformers>=4.37.0,<=4.42.3" ( #11534 )  
							
							 
							
							
							
						 
						
							2024-07-17 17:19:57 -07:00  
						
						
							 
							
							
								 
							 
							
						 
					 
				
					
						
							
								
								
									 
									Guoqiong Song 
								
							 
						 
						
							
							
								
								
							
							
							
								
							
							
								d64711900a 
								
							 
						 
						
							
							
								
								Fix cohere model on transformers>=4.41 ( #11575 )  
							
							 
							
							... 
							
							
							
							* fix cohere model for 4-41 
							
						 
						
							2024-07-17 17:18:59 -07:00  
						
						
							 
							
							
								 
							 
							
						 
					 
				
					
						
							
								
								
									 
									Guoqiong Song 
								
							 
						 
						
							
							
								
								
							
							
							
								
							
							
								5b6eb85b85 
								
							 
						 
						
							
							
								
								phi model readme ( #11595 )  
							
							 
							
							... 
							
							
							
							Co-authored-by: rnwang04 <ruonan1.wang@intel.com> 
							
						 
						
							2024-07-17 17:18:34 -07:00  
						
						
							 
							
							
								 
							 
							
						 
					 
				
					
						
							
								
								
									 
									Wang, Jian4 
								
							 
						 
						
							
							
								
								
							
							
							
								
							
							
								9c15abf825 
								
							 
						 
						
							
							
								
								Refactor fastapi-serving and add one card serving( #11581 )  
							
							 
							
							... 
							
							
							
							* init fastapi-serving one card
* mv api code to source
* update worker
* update for style-check
* add worker
* update bash
* update
* update worker name and add readme
* rename update
* rename to fastapi 
							
						 
						
							2024-07-17 11:12:43 +08:00  
						
						
							 
							
							
								 
							 
							
						 
					 
				
					
						
							
								
								
									 
									Yishuo Wang 
								
							 
						 
						
							
							
								
								
							
							
							
								
							
							
								5837bc0014 
								
							 
						 
						
							
							
								
								fix chatglm3 npu output ( #11590 )  
							
							 
							
							
							
						 
						
							2024-07-16 18:16:30 +08:00  
						
						
							 
							
							
								 
							 
							
						 
					 
				
					
						
							
								
								
									 
									Guancheng Fu 
								
							 
						 
						
							
							
								
								
							
							
							
								
							
							
								06930ab258 
								
							 
						 
						
							
							
								
								Enable ipex-llm optimization for lm head ( #11589 )  
							
							 
							
							... 
							
							
							
							* basic
* Modify convert.py
* fix 
							
						 
						
							2024-07-16 16:48:44 +08:00  
						
						
							 
							
							
								 
							 
							
						 
					 
				
					
						
							
								
								
									 
									Heyang Sun 
								
							 
						 
						
							
							
								
								
							
							
							
								
							
							
								365adad59f 
								
							 
						 
						
							
							
								
								Support LoRA ChatGLM with Alpaca Dataset ( #11580 )  
							
							 
							
							... 
							
							
							
							* Support LoRA ChatGLM with Alpaca Dataset
* refine
* fix
* add 2-card alpaca 
							
						 
						
							2024-07-16 15:40:02 +08:00  
						
						
							 
							
							
								 
							 
							
						 
					 
				
					
						
							
								
								
									 
									Yina Chen 
								
							 
						 
						
							
							
								
								
							
							
							
								
							
							
								99c22745b2 
								
							 
						 
						
							
							
								
								fix qwen 14b fp6 abnormal output ( #11583 )  
							
							 
							
							
							
						 
						
							2024-07-16 10:59:00 +08:00  
						
						
							 
							
							
								 
							 
							
						 
					 
				
					
						
							
								
								
									 
									Yishuo Wang 
								
							 
						 
						
							
							
								
								
							
							
							
								
							
							
								c279849d27 
								
							 
						 
						
							
							
								
								add disk embedding api ( #11585 )  
							
							 
							
							
							
						 
						
							2024-07-16 10:43:39 +08:00  
						
						
							 
							
							
								 
							 
							
						 
					 
				
					
						
							
								
								
									 
									Xiangyu Tian 
								
							 
						 
						
							
							
								
								
							
							
							
								
							
							
								79c742dfd5 
								
							 
						 
						
							
							
								
								LLM: Add XPU Memory Optimizations for Pipeline Parallel ( #11567 )  
							
							 
							
							... 
							
							
							
							Add XPU Memory Optimizations for Pipeline Parallel 
							
						 
						
							2024-07-16 09:44:50 +08:00  
						
						
							 
							
							
								 
							 
							
						 
					 
				
					
						
							
								
								
									 
Ch1y0q
50cf563a71
Add example: MiniCPM-V ( #11570 )
2024-07-15 10:55:48 +08:00

Zhao Changmin
06745e5742
Add npu benchmark all-in-one script ( #11571 )
* npu benchmark
2024-07-15 10:42:37 +08:00

Yishuo Wang
019da6c0ab
use mlp silu_mul fusion in qwen2 to optimize memory usage ( #11574 )
2024-07-13 16:32:54 +08:00

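For reference, a Qwen2-style MLP computes down_proj(silu(gate_proj(x)) * up_proj(x)); fusing the SiLU activation with the element-wise multiply avoids materializing one full-size intermediate tensor. A plain-PyTorch sketch of the two steps being fused (fused_silu_mul is a placeholder for a fused kernel, not the actual ipex-llm op):

    import torch.nn.functional as F

    def mlp_unfused(x, gate_proj, up_proj, down_proj):
        a = F.silu(gate_proj(x))        # intermediate 1
        b = up_proj(x)                  # intermediate 2
        return down_proj(a * b)         # plus a third temporary for the product

    def mlp_fused_silu_mul(x, gate_proj, up_proj, down_proj, fused_silu_mul):
        # fused_silu_mul(a, b) computes silu(a) * b in one pass, writing a single buffer
        return down_proj(fused_silu_mul(gate_proj(x), up_proj(x)))
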
Xu, Shuo
13a72dc51d
Test MiniCPM performance on iGPU in a more stable way ( #11573 )
* Test MiniCPM performance on iGPU in a more stable way
* small fix
---------
Co-authored-by: ATMxsp01 <shou.xu@intel.com>
2024-07-12 17:07:41 +08:00

Xiangyu Tian
0981b72275
Fix /generate_stream api in Pipeline Parallel FastAPI ( #11569 )
2024-07-12 13:19:42 +08:00

Yishuo Wang
a945500a98
fix internlm xcomposer stream chat ( #11564 )
2024-07-11 18:21:17 +08:00

Zhao Changmin
b9c66994a5
add npu sdp ( #11562 )
2024-07-11 16:57:35 +08:00

binbin Deng
2b8ad8731e
Support pipeline parallel for glm-4v ( #11545 )
2024-07-11 16:06:06 +08:00

Xiangyu Tian
7f5111a998
LLM: Refine start script for Pipeline Parallel Serving ( #11557 )
Refine start script and readme for Pipeline Parallel Serving
2024-07-11 15:45:27 +08:00

Xu, Shuo
1355b2ce06
Add model Qwen-VL-Chat to iGPU-perf ( #11558 )
* Add model Qwen-VL-Chat to iGPU-perf
* small fix
---------
Co-authored-by: ATMxsp01 <shou.xu@intel.com>
2024-07-11 15:39:02 +08:00

Zhao Changmin
105e124752
optimize phi3-v encoder npu performance and add multimodal example ( #11553 )
* phi3-v
* readme
2024-07-11 13:59:14 +08:00

Cengguang Zhang
70ab1a6f1a
LLM: unify memory optimization env variables. ( #11549 )
* LLM: unify memory optimization env variables.
* fix comments.
2024-07-11 11:01:28 +08:00

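This change consolidates the low-memory knobs behind environment variables that are read before the model is converted. Assuming the unified flag is named IPEX_LLM_LOW_MEM (an assumption, not confirmed by this log; check the ipex-llm source for the exact name), usage would look roughly like:

    import os
    # Assumed variable name; verify against the ipex-llm documentation or source.
    os.environ["IPEX_LLM_LOW_MEM"] = "1"   # opt into low-memory code paths before loading

    from ipex_llm.transformers import AutoModelForCausalLM
    model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-chat-hf",
                                                 load_in_4bit=True,
                                                 trust_remote_code=True)
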
Xu, Shuo
028ad4f63c
Add model phi-3-vision-128k-instruct to iGPU-perf benchmark ( #11554 )
* try to improve MiniCPM performance
* Add model phi-3-vision-128k-instruct to iGPU-perf benchmark
---------
Co-authored-by: ATMxsp01 <shou.xu@intel.com>
2024-07-10 17:26:30 +08:00

Yishuo Wang
994e49a510
optimize internlm xcomposer performance again ( #11551 )
2024-07-10 17:08:56 +08:00

Xu, Shuo
61613b210c
try to improve MiniCPM performance ( #11552 )
Co-authored-by: ATMxsp01 <shou.xu@intel.com>
2024-07-10 16:58:23 +08:00

Yishuo Wang
82f9514303
optimize internlm xcomposer2 performance ( #11550 )
2024-07-10 15:57:04 +08:00

Zhao Changmin
3c16c9f725
Optimize baichuan on NPU ( #11548 )
* baichuan_npu
2024-07-10 13:18:48 +08:00

Yuwen Hu
8982ab73d5
Add Yi-6B and StableLM to iGPU perf test ( #11546 )
* Add transformer4.38.2 test to igpu benchmark (#11529)
* add transformer4.38.1 test to igpu benchmark
* use transformers4.38.2 & fix csv name error in 4.38 workflow
* add model Yi-6B-Chat & remove temporarily most models
---------
Co-authored-by: ATMxsp01 <shou.xu@intel.com>
* filter some errorlevel (#11541)
Co-authored-by: ATMxsp01 <shou.xu@intel.com>
* Restore the temporarily removed models in iGPU-perf (#11544)
* filter some errorlevel
* restore the temporarily removed models in iGPU-perf
---------
Co-authored-by: ATMxsp01 <shou.xu@intel.com>
---------
Co-authored-by: Xu, Shuo <100334393+ATMxsp01@users.noreply.github.com>
Co-authored-by: ATMxsp01 <shou.xu@intel.com>
2024-07-09 18:51:23 +08:00

Yishuo Wang
7dc6756d86
add disk embedding ( #11543 )
2024-07-09 17:38:40 +08:00

Zhao Changmin
76a5802acf
update NPU examples ( #11540 )
* update NPU examples
2024-07-09 17:19:42 +08:00

Yishuo Wang
99b2802d3b
optimize qwen2 memory ( #11535 )
2024-07-09 17:14:01 +08:00

Yishuo Wang
2929eb262e
support npu glm4 ( #11539 )
2024-07-09 15:46:49 +08:00

Xiangyu Tian
a1cede926d
Fix update_kv_cache in Pipeline-Parallel-Serving for glm4-9b model ( #11537 )
2024-07-09 14:08:04 +08:00

Cengguang Zhang
fa81dbefd3
LLM: update multi gpu write csv in all-in-one benchmark. ( #11538 )
2024-07-09 11:14:17 +08:00

Xin Qiu
69701b3ec8
fix typo in python/llm/scripts/README.md ( #11536 )
2024-07-09 09:53:14 +08:00

Jason Dai
099486afb7
Update README.md ( #11530 )
2024-07-08 20:18:41 +08:00

binbin Deng
66f6ffe4b2
Update GPU HF-Transformers example structure ( #11526 )
2024-07-08 17:58:06 +08:00

Xu, Shuo
f9a199900d
add model RWKV/v5-Eagle-7B-HF to igpu benchmark ( #11528 )
Co-authored-by: ATMxsp01 <shou.xu@intel.com>
2024-07-08 15:50:16 +08:00

Shaojun Liu
9b37ca6027
remove ( #11527 )
2024-07-08 15:49:52 +08:00

Yishuo Wang
c26651f91f
add mistral npu support ( #11523 )
2024-07-08 13:17:15 +08:00

Jun Wang
5a57e54400
[ADD] add 5 new models for igpu-perf ( #11524 )
2024-07-08 11:12:15 +08:00

Xu, Shuo
64cfed602d
Add new models to benchmark ( #11505 )
* Add new models to benchmark
* remove Qwen/Qwen-VL-Chat to pass the validation
---------
Co-authored-by: ATMxsp01 <shou.xu@intel.com>
2024-07-08 10:35:55 +08:00

binbin Deng
252426793b
Fix setting of use_quantize_kv_cache on different GPU in pipeline parallel ( #11516 )
2024-07-08 09:27:01 +08:00

Yishuo Wang
7cb09a8eac
optimize qwen2 memory usage again ( #11520 )
2024-07-05 17:32:34 +08:00

Yuwen Hu
8f376e5192
Change igpu perf to mainly test int4+fp16 ( #11513 )
2024-07-05 17:12:33 +08:00

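Here "int4+fp16" means weights quantized to 4-bit with activations and compute kept in fp16 on the iGPU. A rough sketch of the usual ipex-llm loading pattern for that combination (the model id is only an example, and exact options may vary by version):

    import torch
    from ipex_llm.transformers import AutoModelForCausalLM
    from transformers import AutoTokenizer

    model_id = "meta-llama/Llama-2-7b-chat-hf"                   # example model
    model = AutoModelForCausalLM.from_pretrained(model_id,
                                                 load_in_4bit=True,   # int4 weights
                                                 trust_remote_code=True)
    model = model.half().to("xpu")                               # fp16 compute on the iGPU
    tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)

    with torch.inference_mode():
        ids = tokenizer("What is AI?", return_tensors="pt").input_ids.to("xpu")
        out = model.generate(ids, max_new_tokens=32)
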
Jun Wang
1efb6ebe93
[ADD] add transformer_int4_fp16_loadlowbit_gpu_win api ( #11511 )
* [ADD] add transformer_int4_fp16_loadlowbit_gpu_win api
* [UPDATE] add int4_fp16_lowbit config and description
* [FIX] fix run.py mistake
* [FIX] fix run.py mistake
* [FIX] fix indent; change dtype=float16 to model.half()
2024-07-05 16:38:41 +08:00

Zhao Changmin
f7e957aaf9
Clean npu dtype branch ( #11515 )
* clean branch
* create_npu_kernels
2024-07-05 15:45:26 +08:00

								
							 
						 
						
							
							
								
								
							
							
							
								
							
							
								14ce058004 
								
							 
						 
						
							
							
								
								add chatglm3 npu support ( #11518 )  
							
							 
							
							
							
						 
						
							2024-07-05 15:31:27 +08:00  
						
						
							 
							
							
								 
							 
							
						 
					 
				
					
						
							
								
								
									 
Xin Qiu
a31f2cbe13
update minicpm.py ( #11517 )
* update minicpm
* meet code review
2024-07-05 15:25:44 +08:00

Zhao Changmin
24de13fc45
Optimize stablelm on NPU ( #11512 )
* stablelm_optimize
2024-07-05 14:21:57 +08:00

Xiangyu Tian
7d8bc83415
LLM: Partial Prefilling for Pipeline Parallel Serving ( #11457 )
LLM: Partial Prefilling for Pipeline Parallel Serving
2024-07-05 13:10:35 +08:00

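Partial (chunked) prefilling feeds a long prompt through the model in slices while reusing the growing KV cache, so peak activation memory during prefill is bounded by the chunk size rather than the full prompt length. A model-agnostic sketch of the idea, not the serving implementation itself (function name and chunk size are illustrative):

    import torch

    def chunked_prefill(model, input_ids: torch.Tensor, chunk: int = 512):
        """Run prefill in fixed-size slices, carrying past_key_values between slices."""
        past = None
        for start in range(0, input_ids.shape[1], chunk):
            piece = input_ids[:, start:start + chunk]
            with torch.inference_mode():
                out = model(input_ids=piece, past_key_values=past, use_cache=True)
            past = out.past_key_values
        return out.logits[:, -1:], past   # last-token logits plus the full KV cache

Real serving code also has to thread attention masks and position ids through each slice; the sketch only shows the KV-cache reuse.
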
binbin Deng
60de428b37
Support pipeline parallel for qwen-vl ( #11503 )
2024-07-04 18:03:57 +08:00

Zhao Changmin
57b8adb189
[WIP] Support npu load_low_bit method ( #11502 )
* npu_load_low_bit
2024-07-04 17:15:34 +08:00

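load_low_bit pairs with save_low_bit: the model is converted to its low-bit form once, saved, and later reloaded without repeating the conversion. The commit is marked WIP; assuming the NPU front end mirrors the existing GPU-side API (an assumption, as is the model id and the "sym_int4" choice), the flow would be roughly:

    from ipex_llm.transformers.npu_model import AutoModelForCausalLM  # NPU front end

    # First run: convert to low-bit and save the converted weights.
    model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-chat-hf",
                                                 load_in_low_bit="sym_int4",
                                                 trust_remote_code=True)
    model.save_low_bit("./llama2-7b-npu-int4")

    # Later runs: load the already-converted weights directly.
    model = AutoModelForCausalLM.load_low_bit("./llama2-7b-npu-int4",
                                              trust_remote_code=True)
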
Jun Wang
f07937945f
[REMOVE] remove all useless repo-id in benchmark/igpu-perf ( #11508 )
2024-07-04 16:38:34 +08:00

Yishuo Wang
1a8bab172e
add minicpm 1B/2B npu support ( #11507 )
2024-07-04 16:31:04 +08:00

Yishuo Wang
bb0a84044b
add qwen2 npu support ( #11504 )
2024-07-04 11:01:25 +08:00

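NPU support for these models is exposed through ipex-llm's separate npu_model front end, which converts the model for the Intel NPU instead of moving it to "xpu". A typical loading sketch; the model id, the "sym_int4" precision, and the prompt are illustrative, and exact options may differ by version:

    from ipex_llm.transformers.npu_model import AutoModelForCausalLM
    from transformers import AutoTokenizer

    model_id = "Qwen/Qwen2-7B-Instruct"                      # example Qwen2 checkpoint
    model = AutoModelForCausalLM.from_pretrained(model_id,
                                                 load_in_low_bit="sym_int4",
                                                 trust_remote_code=True)
    tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)

    ids = tokenizer("What is AI?", return_tensors="pt").input_ids
    output = model.generate(ids, max_new_tokens=32)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
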
Xin Qiu
f84ca99b9f
optimize gemma2 rmsnorm ( #11500 )
2024-07-03 15:21:03 +08:00

Wang, Jian4
61c36ba085
Add pp_serving verified models ( #11498 )
* add verified models
* update
* verify large model
* update commend
2024-07-03 14:57:09 +08:00

binbin Deng
9274282ef7
Support pipeline parallel for glm-4-9b-chat ( #11463 )
2024-07-03 14:25:28 +08:00

Yishuo Wang
d97c2664ce
use new fuse rope in stablelm family ( #11497 )
2024-07-03 11:08:26 +08:00

Xu, Shuo
52519e07df
remove models we no longer need in benchmark. ( #11492 )
Co-authored-by: ATMxsp01 <shou.xu@intel.com>
2024-07-02 17:20:48 +08:00

Zhao Changmin
6a0134a9b2
support q4_0_rtn ( #11477 )
* q4_0_rtn
2024-07-02 16:57:02 +08:00

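q4_0-style RTN (round-to-nearest) quantization stores weights in small blocks, each with one floating-point scale, and rounds every element to a 4-bit integer without any calibration data. A small NumPy illustration of the per-block math; the symmetric layout and 32-element block size are illustrative and not the exact ggml/ipex-llm storage format:

    import numpy as np

    def q4_0_rtn(block: np.ndarray):
        """Quantize one block of 32 weights to 4-bit ints with a single scale (RTN)."""
        scale = np.max(np.abs(block)) / 7.0           # map the largest magnitude to about +/-7
        if scale == 0.0:
            return np.zeros_like(block, dtype=np.int8), 0.0
        q = np.clip(np.round(block / scale), -8, 7).astype(np.int8)
        return q, scale

    def dequant(q: np.ndarray, scale: float) -> np.ndarray:
        return q.astype(np.float32) * scale

    w = np.random.randn(32).astype(np.float32)
    q, s = q4_0_rtn(w)
    print(np.max(np.abs(w - dequant(q, s))))          # error stays within about half a scale step
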
Yishuo Wang
5e967205ac
remove the code that converts input to fp16 before calling batch forward kernel ( #11489 )
2024-07-02 16:23:53 +08:00

Wang, Jian4
4390e7dc49
Fix codegeex2 transformers version ( #11487 )
2024-07-02 15:09:28 +08:00