cd109bb061 | Heyang Sun | 2025-03-14 14:27:51 +08:00
Gemma QLoRA example (#12969)
* Gemma QLoRA example
* Update README.md
* Update README.md
Co-authored-by: sgwhat <ge.song@intel.com>

8aea5319bb | Yishuo Wang | 2025-02-08 09:46:48 +08:00
update more lora example (#12785)

d0d9c9d636 | Yishuo Wang | 2025-02-07 11:21:29 +08:00
remove load_in_8bit usage as it has not been supported for a long time (#12779)

b4c9e23f73 | Yishuo Wang | 2025-02-06 16:36:13 +08:00
fix galore and peft finetune example (#12776)

c0d6b282b8 | Yishuo Wang | 2025-02-06 16:35:43 +08:00
fix lisa finetune example (#12775)

2e5f2e5dda | Yishuo Wang | 2025-02-06 16:35:21 +08:00
fix dpo finetune (#12774)

9697197f3e | Yishuo Wang | 2025-02-06 11:18:28 +08:00
fix qlora finetune example (#12769)

c72a5db757 | Yishuo Wang | 2024-12-27 14:17:11 +08:00
remove unused code again (#12624)

7e50ff113c | Qiyuan Gong | 2024-11-14 10:51:30 +08:00
Add padding_token=eos_token for GPU trl QLora example (#12398)
* Avoid the "tokenizer doesn't have a padding token" error.

2dfcc36825 | Qiyuan Gong | 2024-11-08 16:05:17 +08:00
Fix trl version and padding in trl qlora example (#12368)
* Change trl to 0.9.6
* Enable padding to avoid padding-related errors.

82a61b5cf3 | Jin, Qiao | 2024-11-05 14:50:10 +08:00
Limit trl version in example (#12332)
* Limit trl version in example
* Limit trl version in example

126f95be80 | Jin, Qiao | 2024-11-01 13:29:44 +08:00
Fix DPO finetuning example (#12313)

3df6195cb0 | Jin, Qiao | 2024-10-31 16:57:35 +08:00
Fix application quickstart (#12305)
* fix graphrag quickstart
* fix axolotl quickstart
* fix ragflow quickstart
* fix ragflow quickstart
* fix graphrag toc
* fix comments
* fix comment
* fix comments

30f668c206 | Jinhe | 2024-10-31 15:59:40 +08:00
updated transformers & accelerate requirements (#12301)

4cf1ccc43a | Rahul Nair | 2024-10-31 10:56:46 +08:00
Update DPO README.md (#12162)
* bitsandbytes multi-backend is now available and is required; otherwise it would error out saying that no CUDA is available.

46d8300f6b | Jinhe | 2024-10-30 16:54:10 +08:00
bugfix for qlora finetuning on GPU (#12298)
* bugfix for qlora 100 step error
* indent fix
* annotation fix

ee6852c915 | Heyang Sun | 2024-08-20 16:38:11 +08:00
Fix typo (#11862)

70c828b87c | Heyang Sun | 2024-08-13 16:15:29 +08:00
deepspeed zero3 QLoRA finetuning (#11625)
* deepspeed zero3 QLoRA finetuning
* Update convert.py
* Update low_bit_linear.py
* Update utils.py
* Update qlora_finetune_llama2_13b_arch_2_card.sh
* Update low_bit_linear.py
* Update alpaca_qlora_finetuning.py
* Update low_bit_linear.py
* Update utils.py
* Update convert.py
* Update alpaca_qlora_finetuning.py
* Update alpaca_qlora_finetuning.py
* Update low_bit_linear.py
* Update deepspeed_zero3.json
* Update qlora_finetune_llama2_13b_arch_2_card.sh
* Update low_bit_linear.py
* Update low_bit_linear.py
* Update utils.py
* fix style
* fix style
* Update alpaca_qlora_finetuning.py
* Update qlora_finetune_llama2_13b_arch_2_card.sh
* Update convert.py
* Update low_bit_linear.py
* Update model.py
* Update alpaca_qlora_finetuning.py
* Update low_bit_linear.py
* Update low_bit_linear.py
* Update low_bit_linear.py

365adad59f | Heyang Sun | 2024-07-16 15:40:02 +08:00
Support LoRA ChatGLM with Alpaca Dataset (#11580)
* Support LoRA ChatGLM with Alpaca Dataset
* refine
* fix
* add 2-card alpaca

913e750b01 | Heyang Sun | 2024-07-01 15:53:50 +08:00
fix non-string deepspeed config path bug (#11476)
* fix non-string deepspeed config path bug
* Update lora_finetune_chatglm.py

07362ffffc | Heyang Sun | 2024-07-01 09:18:39 +08:00
ChatGLM3-6B LoRA Fine-tuning Demo (#11450)
* ChatGLM3-6B LoRA Fine-tuning Demo
* refine
* refine
* add 2-card deepspeed
* refine format
* add mpi4py and deepspeed install

c985912ee3 | Heyang Sun | 2024-06-24 15:29:59 +08:00
Add Deepspeed LoRA dependencies in document (#11410)

67a1e05876 | Heyang Sun | 2024-06-18 17:24:43 +08:00
Remove zero3 context manager from LoRA (#11346)

694912698e | Shaojun Liu | 2024-06-18 15:47:25 +08:00
Upgrade scikit-learn to 1.5.0 to fix dependabot issue (#11349)

00f322d8ee | Heyang Sun | 2024-06-18 12:31:26 +08:00
Finetune ChatGLM with Deepspeed Zero3 LoRA (#11314)
* Finetune ChatGLM with Deepspeed Zero3 LoRA
* add deepspeed zero3 config
* rename config
* remove offload_param
* add save_checkpoint parameter
* Update lora_deepspeed_zero3_finetune_chatglm3_6b_arc_2_card.sh
* refine

de4bb97b4f | Qiyuan Gong | 2024-06-17 17:52:12 +08:00
Remove accelerate 0.23.0 install command in readme and docker (#11333)
* ipex-llm's accelerate has been upgraded to 0.23.0. Remove the accelerate 0.23.0 install command from README and docker.

15a6205790 | Qiyuan Gong | 2024-06-03 15:35:38 +08:00
Fix LoRA tokenizer for Llama and chatglm (#11186)
* Set pad_token to eos_token if it's None. Otherwise, use model config.
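The pad-token fallback this commit describes can be sketched as follows. This is an illustrative sketch, not the repository's actual helper: `ensure_pad_token` is an invented name, and `SimpleNamespace` stands in for a real Hugging Face tokenizer.

```python
# Sketch of the fallback: if a tokenizer ships without a pad token
# (as Llama-style tokenizers do), reuse its EOS token; otherwise keep
# whatever the model config already provides.
from types import SimpleNamespace

def ensure_pad_token(tokenizer):
    """Return the tokenizer with a usable pad_token set."""
    if tokenizer.pad_token is None:
        # No pad token configured: fall back to the EOS token.
        tokenizer.pad_token = tokenizer.eos_token
    return tokenizer

llama_like = SimpleNamespace(pad_token=None, eos_token="</s>")
chatglm_like = SimpleNamespace(pad_token="<pad>", eos_token="</s>")

print(ensure_pad_token(llama_like).pad_token)    # "</s>"
print(ensure_pad_token(chatglm_like).pad_token)  # "<pad>"
```

Without some pad token, batched training with padding raises an error at tokenization time, which is why the fallback runs before the trainer is constructed.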
						
						
							 
							
							
								 
							 
							
						 
					 
				
					
						
							
								
								
									 
120a0035ac | Qiyuan Gong | 2024-05-24 14:14:30 +08:00
Fix type mismatch in eval for Baichuan2 QLora example (#11117)
* During the evaluation stage, Baichuan2 will raise a type mismatch error when training with bfloat16. Fix this issue by modifying modeling_baichuan.py. Add doc about how to modify this file.

f6c9ffe4dc | Qiyuan Gong | 2024-05-22 15:20:53 +08:00
Add WANDB_MODE and HF_HUB_OFFLINE to XPU finetune README (#11097)
* Add WANDB_MODE=offline to avoid multi-GPU finetune errors.
* Add HF_HUB_OFFLINE=1 to avoid Hugging Face related errors.
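A minimal sketch of the two environment variables this commit documents (not taken from the README itself): `WANDB_MODE=offline` keeps Weights & Biases from making network calls during multi-GPU finetuning, and `HF_HUB_OFFLINE=1` stops the Hugging Face Hub client from attempting downloads. They are normally exported in the shell before launching; setting them from Python works only if done before `wandb` or `transformers` is imported.

```python
# Set both variables before any wandb / Hugging Face import so the
# libraries pick them up at import time.
import os

os.environ["WANDB_MODE"] = "offline"     # log W&B runs locally, no network
os.environ["HF_HUB_OFFLINE"] = "1"       # use only locally cached models/data

print(os.environ["WANDB_MODE"], os.environ["HF_HUB_OFFLINE"])
```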
						
						
							 
							
							
								 
							 
							
						 
					 
				
					
						
							
								
								
									 
492ed3fd41 | Qiyuan Gong | 2024-05-21 15:49:15 +08:00
Add verified models to GPU finetune README (#11088)
* Add verified models to GPU finetune README

1210491748 | Qiyuan Gong | 2024-05-21 15:29:43 +08:00
ChatGLM3, Baichuan2 and Qwen1.5 QLoRA example (#11078)
* Add chatglm3, qwen15-7b and baichuan-7b QLoRA alpaca example
* Remove unnecessary tokenization setting.

7d3791c819 | Ziteng Zhang | 2024-05-15 09:17:32 +08:00
[LLM] Add llama3 alpaca qlora example (#11011)
* Add llama3 finetune example based on alpaca qlora example

c957ea3831 | Qiyuan Gong | 2024-05-14 13:43:59 +08:00
Add axolotl main support and axolotl Llama-3-8B QLoRA example (#10984)
* Support axolotl main (796a085).
* Add axolotl Llama-3-8B QLoRA example.
* Change `sequence_len` to 256 for alpaca, and revert `lora_r` value.
* Add example to quick_start.

164e6957af | Qiyuan Gong | 2024-05-08 09:34:02 +08:00
Refine axolotl quickstart (#10957)
* Add default accelerate config for axolotl quickstart.
* Fix requirement link.
* Upgrade peft to 0.10.0 in requirement.

c11170b96f | Qiyuan Gong | 2024-05-07 15:12:26 +08:00
Upgrade Peft to 0.10.0 in finetune examples and docker (#10930)
* Upgrade Peft to 0.10.0 in finetune examples.
* Upgrade Peft to 0.10.0 in docker.

d7ca5d935b | Qiyuan Gong | 2024-05-07 15:09:14 +08:00
Upgrade Peft version to 0.10.0 for LLM finetune (#10886)
* Upgrade Peft version to 0.10.0
* Upgrade Peft version in ARC unit test and HF-Peft example.

5494aa55f6 | Qiyuan Gong | 2024-04-23 09:41:58 +08:00
Downgrade datasets in axolotl example (#10849)
* Downgrade datasets to 2.15.0 to address axolotl prepare issue https://github.com/OpenAccess-AI-Collective/axolotl/issues/1544
Thanks to @kwaa for providing the solution in https://github.com/intel-analytics/ipex-llm/issues/10821#issuecomment-2068861571

fc33aa3721 | Heyang Sun | 2024-04-22 14:34:52 +08:00
fix missing import (#10839)

e90e31719f | Qiyuan Gong | 2024-04-18 16:38:32 +08:00
axolotl lora example (#10789)
* Add axolotl lora example
* Modify readme
* Add comments in yml

ff040c8f01 | Ziteng Zhang | 2024-04-18 13:48:10 +08:00
LISA Finetuning Example (#10743)
* enabling xetla only supports qtype=SYM_INT4 or FP8E5
* LISA Finetuning Example on gpu
* update readme
* add licence
* Explain parameters of lisa & Move backend codes to src dir
* fix style
* fix style
* update readme
* support chatglm
* fix style
* fix style
* update readme
* fix
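The LISA technique behind this example (Layerwise Importance Sampled AdamW) keeps most transformer layers frozen and, at a fixed interval, unfreezes a freshly sampled subset so only those layers receive updates. A hypothetical, torch-free sketch of that sampling loop follows; `Layer`, `resample_active_layers`, and the interval/count values are invented for illustration and do not mirror the example's actual code.

```python
# Sketch of LISA-style layer sampling: freeze everything, then every
# `interval` steps unfreeze a new random subset of layers. Plain objects
# with a `requires_grad` flag stand in for torch parameters.
import random

class Layer:
    def __init__(self, name):
        self.name = name
        self.requires_grad = False   # frozen by default

def resample_active_layers(layers, n_active, rng):
    """Freeze all layers, then unfreeze `n_active` randomly chosen ones."""
    for layer in layers:
        layer.requires_grad = False
    for layer in rng.sample(layers, n_active):
        layer.requires_grad = True

layers = [Layer(f"block.{i}") for i in range(8)]
rng = random.Random(0)

for step in range(100):
    if step % 20 == 0:               # resample every 20 optimizer steps
        resample_active_layers(layers, n_active=2, rng=rng)
    # ... forward/backward pass; only the 2 active layers get gradients ...

print(sum(l.requires_grad for l in layers))  # 2
```

In the full method the embedding and output layers typically stay trainable throughout; only the middle blocks are sampled.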
						
						
							 
							
							
								 
							 
							
						 
					 
				
					
						
							
								
								
									 
581ebf6104 | Heyang Sun | 2024-04-18 13:47:41 +08:00
GaLore Finetuning Example (#10722)
* GaLore Finetuning Example
* Update README.md
* Update README.md
* change data to HuggingFaceH4/helpful_instructions
* Update README.md
* Update README.md
* shrink train size and delete cache before starting training to save memory
* Update README.md
* Update galore_finetuning.py
* change model to llama2 3b
* Update README.md

9e5069437f | Qiyuan Gong | 2024-04-17 10:23:43 +08:00
Fix gradio version in axolotl example (#10776)
* Change to gradio>=4.19.2

f2e923b3ca | Qiyuan Gong | 2024-04-17 09:49:11 +08:00
Axolotl v0.4.0 support (#10773)
* Add Axolotl 0.4.0, remove legacy 0.3.0 support.
* replace is_torch_bf16_gpu_available
* Add HF_HUB_OFFLINE=1
* Move transformers out of requirement
* Refine readme and qlora.yml

d30b22a81b | Qiyuan Gong | 2024-04-16 14:47:45 +08:00
Refine axolotl 0.3.0 documents and links (#10764)
* Refine axolotl 0.3 based on comments
* Rename requirements to requirement-xpu
* Add comments for paged_adamw_32bit
* change lora_r from 8 to 16

2d64630757 | Qiyuan Gong | 2024-04-11 14:02:31 +08:00
Remove transformers version in axolotl example (#10736)
* Remove transformers version in axolotl requirements.txt

b727767f00 | Qiyuan Gong | 2024-04-10 14:38:29 +08:00
Add axolotl v0.3.0 with ipex-llm on Intel GPU (#10717)
* Add axolotl v0.3.0 support on Intel GPU.
* Add finetune example on llama-2-7B with Alpaca dataset.

f37a1f2a81 | Shaojun Liu | 2024-04-09 17:41:17 +08:00
Upgrade to python 3.11 (#10711)
* create conda env with python 3.11
* recommend to use Python 3.11
* update

10ee786920 | Jin Qiao | 2024-04-07 13:29:51 +08:00
Replace with IPEX-LLM in example comments (#10671)
* Replace with IPEX-LLM in example comments
* More replacement
* revert some changes

52a2135d83 | ZehuaCao | 2024-03-28 13:54:40 +08:00
Replace ipex with ipex-llm (#10554)
* fix ipex with ipex_llm
* fix ipex with ipex_llm
* update
* update
* update
* update
* update
* update
* update
* update

1c5eb14128 | Cheen Hau, 俊豪 | 2024-03-28 09:56:23 +08:00
Update pip install to use --extra-index-url for ipex package (#10557)
* Change to 'pip install .. --extra-index-url' for readthedocs
* Change to 'pip install .. --extra-index-url' for examples
* Change to 'pip install .. --extra-index-url' for remaining files
* Fix URL for ipex
* Add links for ipex US and CN servers
* Update ipex cpu url
* remove readme
* Update for github actions
* Update for dockerfiles