commit b3f6faa038
Author: Xiangyu Tian
Date:   2024-05-24 09:16:59 +08:00

    LLM: Add CPU vLLM entrypoint (#11083)

    Add CPU vLLM entrypoint and update CPU vLLM serving example.

commit 842d6dfc2d
Author: ZehuaCao
Date:   2024-05-21 13:55:47 +08:00

    Further Modify CPU example (#11081)

    * modify CPU example
    * update

commit f37a1f2a81
Author: Shaojun Liu
Date:   2024-04-09 17:41:17 +08:00

    Upgrade to python 3.11 (#10711)

    * create conda env with python 3.11
    * recommend to use Python 3.11
    * update

commit 16b2ef49c6
Author: Wang, Jian4
Date:   2024-03-25 10:06:02 +08:00

    Update_document by heyang (#30)

commit 9df70d95eb
Author: Wang, Jian4
Date:   2024-03-22 15:41:21 +08:00

    Refactor bigdl.llm to ipex_llm (#24)

    * Rename bigdl/llm to ipex_llm
    * rm python/llm/src/bigdl
    * from bigdl.llm to from ipex_llm

commit 2d930bdca8
Author: Guancheng Fu
Date:   2024-02-29 16:33:42 +08:00

    Add vLLM bf16 support (#10278)

    * add argument load_in_low_bit
    * add docs
    * modify gpu doc
    * done

    ---------

    Co-authored-by: ivy-lv11 <lvzc@lamda.nju.edu.cn>

commit 963a5c8d79
Author: Guancheng Fu
Date:   2023-11-28 09:44:03 +08:00

    Add vLLM-XPU version's README/examples (#9536)

    * test
    * test
    * fix last kv cache
    * add xpu readme
    * remove numactl for xpu example
    * fix link error
    * update max_num_batched_tokens logic
    * add explanation
    * add xpu environment version requirement
    * refine gpu memory
    * fix
    * fix style

commit b6c3520748
Author: Guancheng Fu
Date:   2023-11-27 11:21:25 +08:00

    Remove xformers from vLLM-CPU (#9535)

commit b3178d449f
Author: Jason Dai
Date:   2023-11-23 21:45:20 +08:00

    Update README.md (#9525)

commit bf579507c2
Author: Guancheng Fu
Date:   2023-11-23 16:46:45 +08:00

    Integrate vllm (#9310)

    * done
    * Rename structure
    * add models
    * Add structure/sampling_params,sequence
    * add input_metadata
    * add outputs
    * Add policy,logger
    * add and update
    * add parallelconfig back
    * core/scheduler.py
    * Add llm_engine.py
    * Add async_llm_engine.py
    * Add tested entrypoint
    * fix minor error
    * Fix everything
    * fix kv cache view
    * fix
    * fix
    * fix
    * format&refine
    * remove logger from repo
    * try to add token latency
    * remove logger
    * Refine config.py
    * finish worker.py
    * delete utils.py
    * add license
    * refine
    * refine sequence.py
    * remove sampling_params.py
    * finish
    * add license
    * format
    * add license
    * refine
    * refine
    * Refine line too long
    * remove exception
    * so dumb style-check
    * refine
    * refine
    * refine
    * refine
    * refine
    * refine
    * add README
    * refine README
    * add warning instead error
    * fix padding
    * add license
    * format
    * format
    * format fix
    * Refine vllm dependency (#1)
      vllm dependency clear
    * fix licence
    * fix format
    * fix format
    * fix
    * adapt LLM engine
    * fix
    * add license
    * fix format
    * fix
    * Moving README.md to the correct position
    * Fix readme.md
    * done
    * guide for adding models
    * fix
    * Fix README.md
    * Add new model readme
    * remove ray-logic
    * refactor arg_utils.py
    * remove distributed_init_method logic
    * refactor entrypoints
    * refactor input_metadata
    * refactor model_loader
    * refactor utils.py
    * refactor models
    * fix api server
    * remove vllm.stucture
    * revert by txy 1120
    * remove utils
    * format
    * fix license
    * add bigdl model
    * Refer to a specific commit
    * Change code base
    * add comments
    * add async_llm_engine comment
    * refine
    * formatted
    * add worker comments
    * add comments
    * add comments
    * fix style
    * add changes

    ---------

    Co-authored-by: xiangyuT <xiangyu.tian@intel.com>
    Co-authored-by: Xiangyu Tian <109123695+xiangyuT@users.noreply.github.com>
    Co-authored-by: leonardozcm <leonardo1997zcm@gmail.com>