d9f71f1f53 | Wang, Jian4 | 2024-05-15 14:16:35 +08:00
Update benchmark util for example usage (#11027)
* mv benchmark_util.py to utils/
* remove
* update

4053a6ef94 | binbin Deng | 2024-05-15 10:23:58 +08:00
Update environment variable setting in AutoTP with arc (#11018)

fad1dbaf60 | Yishuo Wang | 2024-05-15 10:22:35 +08:00
use sdp fp8 causal kernel (#11023)

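For readers unfamiliar with the term, "causal SDP" is fused scaled-dot-product attention with an implicit lower-triangular mask. The sketch below only shows the generic PyTorch call that this kind of kernel specializes; the fp8 causal kernel referenced by #11023 lives in the project's native extension and is not reproduced here, and the shapes are arbitrary.

```python
# Illustrative only: generic causal scaled-dot-product attention in PyTorch.
import torch
import torch.nn.functional as F

batch, heads, seq_len, head_dim = 1, 32, 128, 96
q = torch.randn(batch, heads, seq_len, head_dim)
k = torch.randn(batch, heads, seq_len, head_dim)
v = torch.randn(batch, heads, seq_len, head_dim)

# is_causal=True applies the lower-triangular (decoder) mask inside the fused kernel,
# so no explicit attention-mask tensor is needed.
out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
print(out.shape)  # torch.Size([1, 32, 128, 96])
```
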
ee325e9cc9 | Yishuo Wang | 2024-05-15 09:32:12 +08:00
fix phi3 (#11022)

7d3791c819 | Ziteng Zhang | 2024-05-15 09:17:32 +08:00
[LLM] Add llama3 alpaca qlora example (#11011)
* Add llama3 finetune example based on alpaca qlora example

0a732bebe7 | Zhao Changmin | 2024-05-15 08:16:43 +08:00
Add phi3 cached RotaryEmbedding (#11013)
* phi3cachedrotaryembed
* pep8

893197434d | Yina Chen | 2024-05-14 16:31:44 +08:00
Add fp6 support on gpu (#11008)
* add fp6 support
* fix style

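A minimal sketch of how the new precision would presumably be picked up, assuming fp6 is exposed through ipex-llm's existing `load_in_low_bit` option like the other precisions; the model id is just a placeholder.

```python
# Hedged sketch: load a model with fp6 precision on an Intel GPU (xpu).
# Assumes fp6 is selected via the existing load_in_low_bit option (introduced by #11008).
import torch
from transformers import AutoTokenizer
from ipex_llm.transformers import AutoModelForCausalLM

model_path = "meta-llama/Meta-Llama-3-8B-Instruct"  # placeholder model id

model = AutoModelForCausalLM.from_pretrained(
    model_path,
    load_in_low_bit="fp6",   # assumed option value for the new precision
    optimize_model=True,
    trust_remote_code=True,
).to("xpu")

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
inputs = tokenizer("What does FP6 quantization trade off?", return_tensors="pt").to("xpu")

with torch.inference_mode():
    output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
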
b03c859278 | Zhao Changmin | 2024-05-14 15:16:27 +08:00
Add phi3RMS (#10988)
* phi3RMS

170e3d65e0 | Yishuo Wang | 2024-05-14 14:29:18 +08:00
use new sdp and fp32 sdp (#11007)

c957ea3831 | Qiyuan Gong | 2024-05-14 13:43:59 +08:00
Add axolotl main support and axolotl Llama-3-8B QLoRA example (#10984)
* Support axolotl main (796a085).
* Add axolotl Llama-3-8B QLoRA example.
* Change `sequence_len` to 256 for alpaca, and revert `lora_r` value.
* Add example to quick_start.

fb656fbf74 | Yuwen Hu | 2024-05-14 13:40:54 +08:00
Add requirements for oneAPI PyPI packages for Windows Intel GPU users (#11009)

7f8c5b410b | Shaojun Liu | 2024-05-14 12:58:31 +08:00
Quickstart: Run PyTorch Inference on Intel GPU using Docker (on Linux or WSL) (#10970)
* add entrypoint.sh
* add quickstart
* remove entrypoint
* update
* Install related library of benchmarking
* update
* print out results
* update docs
* minor update
* update
* update quickstart
* update
* update
* update
* update
* update
* update
* add chat & example section
* add more details
* minor update
* rename quickstart
* update
* minor update
* update
* update config.yaml
* update readme
* use --gpu
* add tips
* minor update
* update

a465111cf4 | Guancheng Fu | 2024-05-13 16:44:48 +08:00
Update README.md (#11003)

74997a3ed1 | Guancheng Fu | 2024-05-13 15:30:19 +08:00
Adding load_low_bit interface for ipex_llm_worker (#11000)
* initial implementation, need tests
* fix
* fix baichuan issue
* fix typo

1b3c7a6928 | Yishuo Wang | 2024-05-13 14:09:55 +08:00
remove phi3 empty cache (#10997)

99255fe36e | ZehuaCao | 2024-05-13 13:57:19 +08:00
fix ppl (#10996)

f8dd2e52ad | Kai Huang | 2024-05-11 14:40:37 +08:00
Fix Langchain upstream ut (#10985)
* Fix Langchain upstream ut
* Small fix
* Install bigdl-llm
* Update run-langchain-upstream-tests.sh
* Update run-langchain-upstream-tests.sh
* Update llm_unit_tests.yml
* Update run-langchain-upstream-tests.sh
* Update llm_unit_tests.yml
* Update run-langchain-upstream-tests.sh
* fix git checkout
* fix
Co-authored-by: Zhangky11 <2321096202@qq.com>
Co-authored-by: Keyan (Kyrie) Zhang <79576162+Zhangky11@users.noreply.github.com>

9f6358e4c2 | Yuwen Hu | 2024-05-11 12:33:35 +08:00
Deprecate support for pytorch 2.0 on Linux for ipex-llm >= 2.1.0b20240511 (#10986)
* Remove xpu_2.0 option in setup.py
* Disable xpu_2.0 test in UT and nightly
* Update docs for deprecated pytorch 2.0
* Small doc update

ad96f32ce0 | Yishuo Wang | 2024-05-10 17:33:46 +08:00
optimize phi3 1st token performance (#10981)

cfed76b2ed | Cengguang Zhang | 2024-05-10 16:40:15 +08:00
LLM: add long-context support for Qwen1.5-7B/Baichuan2-7B/Mistral-7B. (#10937)
* LLM: add split tensor support for baichuan2-7b and qwen1.5-7b.
* fix style.
* fix style.
* fix style.
* add support for mistral and fix condition threshold.
* fix style.
* fix comments.

f9615f12d1 | binbin Deng | 2024-05-10 15:02:58 +08:00
Add driver related packages version check in env script (#10977)

a6342cc068 | Kai Huang | 2024-05-09 19:50:04 +08:00
Empty cache after phi first attention to support 4k input (#10972)
* empty cache
* fix style

e753125880 | Yishuo Wang | 2024-05-09 17:02:59 +08:00
use fp16_sdp when head_dim=96 (#10976)

697ca79eca | Yishuo Wang | 2024-05-09 15:16:18 +08:00
use quantize kv and sdp in phi3-mini (#10973)

f4c615b1ee | Wang, Jian4 | 2024-05-08 17:19:59 +08:00
Add cohere example (#10954)
* add link first
* add_cpu_example
* add GPU example

3209d6b057 | Wang, Jian4 | 2024-05-08 17:09:47 +08:00
Fix speculative llama3 no-stop error (#10963)
* fix normal
* add eos_tokens_id on sp and add list if
* update
* no none

02870dc385 | Xiangyu Tian | 2024-05-08 16:55:23 +08:00
LLM: Refine README of AutoTP-FastAPI example (#10960)

2ebec0395c | Yishuo Wang | 2024-05-08 16:33:17 +08:00
optimize phi-3-mini-128 (#10959)

dfa3147278 | Xin Qiu | 2024-05-08 14:28:05 +08:00
update (#10944)

5973d6c753 | Xin Qiu | 2024-05-08 14:27:51 +08:00
make gemma's output better (#10943)

15ee3fd542 | Jin Qiao | 2024-05-08 14:16:43 +08:00
Update igpu perf internlm (#10958)

0d6e12036f | Zhao Changmin | 2024-05-08 10:46:19 +08:00
Disable fast_init_ in load_low_bit (#10945)
* fast_init_ disable

164e6957af | Qiyuan Gong | 2024-05-08 09:34:02 +08:00
Refine axolotl quickstart (#10957)
* Add default accelerate config for axolotl quickstart.
* Fix requirement link.
* Upgrade peft to 0.10.0 in requirement.

c801c37bc6 | Yishuo Wang | 2024-05-07 17:26:19 +08:00
optimize phi3 again: use quantize kv if possible (#10953)

aa2fa9fde1 | Yishuo Wang | 2024-05-07 15:53:08 +08:00
optimize phi3 again: use sdp if possible (#10951)

c11170b96f | Qiyuan Gong | 2024-05-07 15:12:26 +08:00
Upgrade Peft to 0.10.0 in finetune examples and docker (#10930)
* Upgrade Peft to 0.10.0 in finetune examples.
* Upgrade Peft to 0.10.0 in docker.

d7ca5d935b | Qiyuan Gong | 2024-05-07 15:09:14 +08:00
Upgrade Peft version to 0.10.0 for LLM finetune (#10886)
* Upgrade Peft version to 0.10.0
* Upgrade Peft version in ARC unit test and HF-Peft example.

0efe26c3b6 | Yuwen Hu | 2024-05-07 13:48:39 +08:00
Change order of chatglm2-6b and chatglm3-6b in iGPU perf test for more stable performance (#10948)

245c7348bc | hxsz1997 | 2024-05-07 13:35:42 +08:00
Add codegemma example (#10884)
* add codegemma example in GPU/HF-Transformers-AutoModels/
* add README of codegemma example in GPU/HF-Transformers-AutoModels/
* add codegemma example in GPU/PyTorch-Models/
* add readme of codegemma example in GPU/PyTorch-Models/
* add codegemma example in CPU/HF-Transformers-AutoModels/
* add readme of codegemma example in CPU/HF-Transformers-AutoModels/
* add codegemma example in CPU/PyTorch-Models/
* add readme of codegemma example in CPU/PyTorch-Models/
* fix typos
* fix filename typo
* add codegemma in tables
* add comments of lm_head
* remove comments of use_cache

08ad40b251 | Shaojun Liu | 2024-05-07 12:55:14 +08:00
improve ipex-llm-init for Linux (#10928)
* refine ipex-llm-init
* install libtcmalloc.so for Max
* update based on comment
* remove unneeded code

191b184341 | Wang, Jian4 | 2024-05-07 10:19:50 +08:00
LLM: Optimize cohere model (#10878)
* use mlp and rms
* optimize kv_cache
* add fuse qkv
* add flash attention and fp16 sdp
* error fp8 sdp
* fix optimized
* fix style
* update
* add for pp

13a44cdacb | Xiangyu Tian | 2024-05-07 09:37:31 +08:00
LLM: Refine Deepspeed-AutoTP-FastAPI example (#10916)

1de878bee1 | Wang, Jian4 | 2024-05-07 09:25:20 +08:00
LLM: Fix speculative llama3 long input error (#10934)

49ab5a2b0e | Guancheng Fu | 2024-05-07 09:07:02 +08:00
Add embeddings (#10931)

0e0bd309e2 | Wang, Jian4 | 2024-05-06 10:06:20 +08:00
LLM: Enable Speculative on Fastchat (#10909)
* init
* enable streamer
* update
* update
* remove deprecated
* update
* update
* add gpu example

0edef1f94c | Cengguang Zhang | 2024-05-06 09:32:59 +08:00
LLM: add min_new_tokens to all in one benchmark. (#10911)

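For context, `min_new_tokens` in Hugging Face `generate()` suppresses the end-of-sequence token until at least that many tokens have been produced, which keeps output lengths (and therefore timings) comparable across benchmark runs. A small standalone illustration follows; the all-in-one benchmark presumably wires a similar value through its own config, and gpt2 is used here only because it is small.

```python
# Illustrative only: how min_new_tokens constrains generation length in transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The benchmark measures", return_tensors="pt")
output = model.generate(
    **inputs,
    min_new_tokens=32,   # do not emit EOS before 32 new tokens
    max_new_tokens=32,   # ...and stop right after, so every run generates exactly 32
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
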
75dbf240ec | Cengguang Zhang | 2024-04-30 17:07:21 +08:00
LLM: update split tensor conditions. (#10872)
* LLM: update split tensor condition.
* add cond for split tensor.
* update priority of env.
* fix style.
* update env name.

2c64754eb0 | Guancheng Fu | 2024-04-29 17:25:42 +08:00
Add vLLM to ipex-llm serving image (#10807)
* add vllm
* done
* doc work
* fix done
* temp
* add docs
* format
* add start-fastchat-service.sh
* fix

1f876fd837 | Jin Qiao | 2024-04-29 16:43:55 +08:00
Add example for phi-3 (#10881)
* Add example for phi-3
* add in readme and index
* fix
* fix
* fix
* fix indent
* fix

d884c62dc4 | Yishuo Wang | 2024-04-29 10:31:50 +08:00
remove new_layout parameter (#10906)