aabfc06977  add gemma example (#10224)
  Author: Xin Qiu | Date: 2024-02-23 15:20:57 +08:00
  * add gemma gpu example
  * Update README.md
  * add cpu example
  * Update README.md
  * Update README.md
  * Update generate.py
  * Update generate.py

a2c1675546  Add CPU and GPU examples for Yuan2-2B-hf (#9946)
  Author: yb-peng | Date: 2024-02-23 14:09:30 +08:00
  * Add a new CPU example of Yuan2-2B-hf
  * Add a new CPU generate.py of Yuan2-2B-hf example
  * Add a new GPU example of Yuan2-2B-hf
  * Add Yuan2 to README table
  * In CPU example: 1. Use English as default prompt; 2. Provide modified files in yuan2-2B-instruct
  * In GPU example: 1. Use English as default prompt; 2. Provide modified files
  * GPU example: update README
  * update Yuan2-2B-hf in README table
  * Add CPU example for Yuan2-2B in Pytorch-Models
  * Add GPU example for Yuan2-2B in Pytorch-Models
  * Add license in generate.py; Modify README
  * In GPU Add license in generate.py; Modify README
  * In CPU yuan2 modify README
  * In GPU yuan2 modify README
  * In CPU yuan2 modify README
  * In GPU example, updated the readme for Windows GPU supports
  * In GPU torch example, updated the readme for Windows GPU supports
  * GPU hf example README modified
  * GPU example README modified

f1f4094a09  Add CPU and GPU examples of phi-2 (#10014)
  Author: yb-peng | Date: 2024-02-23 14:05:53 +08:00
  * Add CPU and GPU examples of phi-2
  * In GPU hf example, updated the readme for Windows GPU supports
  * In GPU torch example, updated the readme for Windows GPU supports
  * update the table in BigDL/README.md
  * update the table in BigDL/python/llm/README.md

63681af97e  falcon for transformers 4.36 (#9960)
  Author: Guoqiong Song | Date: 2024-02-22 17:04:40 -08:00
  * falcon for transformers 4.36

f445217d02  LLM: Update IPEX to 2.2.0+cpu and Refactor for _ipex_optimize (#10189)
  Author: Xiangyu Tian | Date: 2024-02-22 16:01:11 +08:00
  Update IPEX to 2.2.0+cpu and refactor for _ipex_optimize.

c7e839e66c  Add Qwen1.5-7B-Chat (#10113)
  Author: Zhicun | Date: 2024-02-21 13:29:29 +08:00
  * add Qwen1.5-7B-Chat
  * modify Qwen1.5 example
  * update README
  * update prompt format
  * update folder name and example README
  * add Chinese prompt sample output
  * update link in README
  * correct the link
  * update transformer version

276ef0e885  Speculative Ziya on CPU (#10160)
  Author: Ziteng Zhang | Date: 2024-02-21 10:30:39 +08:00
  * Speculative Ziya on CPU
  * Without part of Accelerate with BIGDL_OPT_IPEX

add3899311  Add ziya CPU example (#10114)
  Author: Zhicun | Date: 2024-02-20 13:59:52 +08:00
  * ziya on CPU
  * add README for ziya
  * specify use_cache
  * add arc CPU
  * update prompt format
  * update link
  * add comments to emphasize use_cache
  * update pip cmd

d3591383d5  LLM: Add CPU chatglm3 speculative example (#10004)
  Author: Wang, Jian4 | Date: 2024-02-19 13:38:52 +08:00
  * init chatglm
  * update
  * update

177273c1a4  IPEX Speculative Support for Baichuan2 7B (#10112)
  Author: Heyang Sun | Date: 2024-02-19 09:12:57 +08:00
  * IPEX Speculative Support for Baichuan2 7B
  * fix license problems
  * refine

11fe5a87ec  LLM: add Modelscope model example (#10126)
  Author: binbin Deng | Date: 2024-02-08 11:18:07 +08:00

7d2be7994f  add phixtral and optimize phi-moe (#10052)
  Author: Zhicun | Date: 2024-02-05 11:12:47 +08:00

90f004b80b  remove benchmarkwrapper from deepspeed example (#10079)
  Author: Heyang Sun | Date: 2024-02-04 15:42:15 +08:00

428b7105f6  Add HF and PyTorch example InternLM2 (#10061)
  Author: ivy-lv11 | Date: 2024-02-04 10:25:55 +08:00

601024f418  Mistral CPU example of speculative decoding (#10024)
  Author: Heyang Sun | Date: 2024-02-01 10:52:32 +08:00
  * Mistral CPU example of speculative decoding
  * update transformers version
  * update example
  * Update README.md

95636cad97  Add AutoGen CPU and XPU Example (#9980)
  Author: Yuxuan Xia | Date: 2024-01-31 11:31:18 +08:00
  * Add AutoGen example
  * Adjust AutoGen README
  * Adjust AutoGen README
  * Change AutoGen README
  * Change AutoGen README

7284edd9b7  Vicuna CPU example of speculative decoding (#10018)
  Author: Heyang Sun | Date: 2024-01-31 11:23:50 +08:00
  * Vicuna CPU example of speculative decoding
  * Update speculative.py
  * Update README.md
  * add requirements for ipex
  * Update README.md
  * Update speculative.py
  * Update speculative.py

fb53b994f8  LLM: Add llama ipex optimized (#10046)
  Author: Wang, Jian4 | Date: 2024-01-31 10:38:46 +08:00
  * init ipex
  * remove padding

b1ff28ceb6  LLama2 CPU example of speculative decoding (#9962)
  Author: Heyang Sun | Date: 2024-01-31 09:45:20 +08:00
  * LLama2 example of speculative decoding
  * add docs
  * Update speculative.py
  * Update README.md
  * Update README.md
  * Update speculative.py
  * remove autocast

9978089796  [LLM] Enable BIGDL_OPT_IPEX in speculative baichuan2 13b example (#10028)
  Author: Xiangyu Tian | Date: 2024-01-30 17:11:37 +08:00
  Enable BIGDL_OPT_IPEX in speculative baichuan2 13b example

cc3f122f6a  Baichuan2 CPU example of speculative decoding (#10003)
  Author: Heyang Sun | Date: 2024-01-29 14:21:09 +08:00
  * Baichuan2 CPU example of speculative decoding
  * Update generate.py
  * Update README.md
  * Update generate.py
  * Update generate.py
  * Update generate.py
  * fix default model
  * fix wrong Chinese encoding
  * Update generate.py
  * update prompt
  * update sample outputs
  * baichuan 7b needs transformers==4.31.0
  * rename example file's name

171fb2d185  LLM: reorganize GPU finetuning examples (#9952)
  Author: binbin Deng | Date: 2024-01-25 19:02:38 +08:00

093e6f8f73  LLM: Add qwen CPU speculative example (#9985)
  Author: Wang, Jian4 | Date: 2024-01-25 17:01:34 +08:00
  * init from gpu
  * update for cpu
  * update
  * update
  * fix xpu readme
  * update
  * update example prompt
  * update prompt and add 72b
  * update
  * update

ec2d9de0ea  Fix README.md for solar (#9957)
  Author: Jinyi Wan | Date: 2024-01-24 15:50:54 +08:00

5184f400f9  Fix Mixtral GGUF Wrong Output Issue (#9930)
  Author: Heyang Sun | Date: 2024-01-18 14:11:27 +08:00
  * Fix Mixtral GGUF Wrong Output Issue
  * fix style
  * fix style

07485eff5a  Add SOLAR-10.7B to README (#9869)
  Author: Jinyi Wan | Date: 2024-01-11 14:28:41 +08:00

146076bdb5  Support llm-awq backend (#9856)
  Author: ZehuaCao | Date: 2024-01-09 13:07:32 +08:00
  * Support for LLM-AWQ Backend
  * fix
  * Update README.md
  * Add awqconfig
  * modify init
  * update
  * support llm-awq
  * fix style
  * fix style
  * update
  * fix AwqBackendPackingMethod not found error
  * fix style
  * update README
  * fix style
  Co-authored-by: Uxito-Ada <414416158@qq.com>
  Co-authored-by: Heyang Sun <60865256+Uxito-Ada@users.noreply.github.com>
  Co-authored-by: cyita <yitastudy@gmail.com>

ed81baa35e  LLM: Use default typing-extension in LangChain examples (#9857)
  Author: Mingyu Wei | Date: 2024-01-08 16:50:55 +08:00
  * remove typing extension downgrade in readme; minor fixes of code
  * fix typos in README
  * change default question of docqa.py

3147ebe63d  Add cpu and gpu examples for SOLAR-10.7B (#9821)
  Author: Jinyi Wan | Date: 2024-01-05 09:50:28 +08:00

4ceefc9b18  LLM: Support bitsandbytes config on qlora finetune (#9715)
  Author: Wang, Jian4 | Date: 2024-01-04 11:23:16 +08:00
  * test support bitsandbytesconfig
  * update style
  * update cpu example
  * update example
  * update readme
  * update unit test
  * use bfloat16
  * update logic
  * use int4
  * set default bnb_4bit_use_double_quant
  * update
  * update example
  * update model.py
  * update
  * support lora example

a54cd767b1  LLM: Add gguf falcon (#9801)
  Author: Wang, Jian4 | Date: 2024-01-03 14:49:02 +08:00
  * init falcon
  * update convert.py
  * update style

6584539c91  LLM: fix installation of codellama (#9813)
  Author: binbin Deng | Date: 2024-01-02 14:32:50 +08:00

7ed9538b9f  LLM: support gguf mpt (#9773)
  Author: Wang, Jian4 | Date: 2023-12-28 09:22:39 +08:00
  * add gguf mpt
  * update

361781bcd0  Update readme (#9788)
  Author: Jason Dai | Date: 2023-12-26 19:46:11 +08:00

44b4a0c9c5  [LLM] Correct prompt format of Yi, Llama2 and Qwen in generate.py (#9786)
  Author: Ziteng Zhang | Date: 2023-12-26 16:57:55 +08:00
  * correct prompt format of Yi
  * correct prompt format of llama2 in cpu generate.py
  * correct prompt format of Qwen in GPU example

66e286a73d  Support for Mixtral AWQ (#9775)
  Author: Heyang Sun | Date: 2023-12-25 16:08:09 +08:00
  * Support for Mixtral AWQ
  * Update README.md
  * Update README.md
  * Update awq_config.py
  * Update README.md
  * Update README.md

984697afe2  LLM: Add bloom gguf support (#9734)
  Author: Wang, Jian4 | Date: 2023-12-21 14:06:25 +08:00
  * init
  * update bloom add merges
  * update
  * update readme
  * update for llama error
  * update

1fa7793fc0  Load Mixtral GGUF Model (#9690)
  Author: Heyang Sun | Date: 2023-12-19 13:54:38 +08:00
  * Load Mixtral GGUF Model
  * refactor
  * fix empty tensor when to cpu
  * update gpu and cpu readmes
  * add dtype when set tensor into module

b8437a1c1e  LLM: Add gguf mistral model support (#9691)
  Author: Wang, Jian4 | Date: 2023-12-15 13:37:39 +08:00
  * add mistral support
  * need to upgrade transformers version
  * update

496bb2e845  LLM: Support load BaiChuan model family gguf model (#9685)
  Author: Wang, Jian4 | Date: 2023-12-15 13:34:33 +08:00
  * support baichuan model family gguf model
  * update gguf generate.py
  * add verify models
  * add support model_family
  * update
  * update style
  * update type
  * update readme
  * update
  * remove support model_family

3afed99216  fix path issue (#9696)
  Author: Lilac09 | Date: 2023-12-15 11:21:49 +08:00

21c7503a42  [LLM] Correct prompt format of Qwen in generate.py (#9678)
  Author: Ziteng Zhang | Date: 2023-12-14 14:01:30 +08:00
  * Change qwen prompt format to chatml

223c9622f7  [LLM] Mixtral CPU examples (#9673)
  Author: Qiyuan Gong | Date: 2023-12-14 10:35:11 +08:00
  * Mixtral CPU PyTorch and Hugging Face examples, based on #9661 and #9671

877229f3be  [LLM] Add Yi-34B-AWQ to verified AWQ models (#9676)
  Author: ZehuaCao | Date: 2023-12-14 09:55:47 +08:00
  * verify Yi-34B-AWQ
  * update

503880809c  verify codeLlama (#9668)
  Author: ZehuaCao | Date: 2023-12-13 15:39:31 +08:00

c64e2248ef  fix str returned by get_int_from_str rather than expected int (#9667)
  Author: Heyang Sun | Date: 2023-12-13 11:01:21 +08:00

45721f3473  verify llava (#9649)
  Author: ZehuaCao | Date: 2023-12-11 14:26:05 +08:00

9f02f96160  [LLM] support for Yi AWQ model (#9648)
  Author: Heyang Sun | Date: 2023-12-11 14:07:34 +08:00

6eca8a8bb5  update transformer version (#9631)
  Author: ZehuaCao | Date: 2023-12-08 09:36:00 +08:00

3811cf43c9  [LLM] update AWQ documents (#9623)
  Author: Heyang Sun | Date: 2023-12-07 16:02:20 +08:00
  * [LLM] update AWQ and verified models' documents
  * refine
  * refine links
  * refine