Author | Commit | Message | Date
Yishuo Wang | 2d210817ff | add phi3 optimization (#10871) | 2024-04-24 15:17:40 +08:00
Yishuo Wang | fe5a082b84 | add phi-2 optimization (#10843) | 2024-04-22 18:56:47 +08:00
Ruonan Wang | 439c834ed3 | LLM: add mixed precision for lm_head (#10795) | 2024-04-18 19:11:31 +08:00
  * add mixed_quantization
  * meet code review
  * update
  * fix style
  * meet review
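
A minimal sketch of how this mixed-precision option is picked up at load time, assuming a `mixed_precision` flag on `from_pretrained` named after the commit title; the model path is illustrative:

```python
from ipex_llm.transformers import AutoModelForCausalLM

# Assumption: `mixed_precision=True` keeps lm_head at a higher precision
# (e.g. 8-bit) while the remaining layers use the 4-bit format below.
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf",  # illustrative model
    load_in_low_bit="sym_int4",
    mixed_precision=True,
)
```
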
Guancheng Fu | cbe7b5753f | Add vLLM[xpu] related code (#10779) | 2024-04-18 15:29:20 +08:00
  * Add ipex-llm side change
  * add runnable offline_inference
  * refactor to call vllm2
  * Verified async server
  * add new v2 example
  * add README
  * fix
  * change dir
  * refactor readme.md
  * add experimental
  * fix
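
A sketch of the offline-inference pattern the example added here follows. Only `SamplingParams` and the `generate` call are standard vLLM API; the ipex-llm engine import path and the "xpu" device string are assumptions, not the exact API added in this PR:

```python
from vllm import SamplingParams
# Assumption: ipex-llm wraps vLLM's LLM engine for XPU under a path like this.
from ipex_llm.vllm.engine import IPEXLLMClass as LLM

llm = LLM(model="meta-llama/Llama-2-7b-chat-hf", device="xpu")
params = SamplingParams(temperature=0.8, max_tokens=64)
for out in llm.generate(["What is IPEX-LLM?"], params):
    print(out.outputs[0].text)
```
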
Wang, Jian4 | 209c3501e6 | LLM: Optimize qwen1.5 moe model (#10706) | 2024-04-18 14:54:05 +08:00
  * update moe block
  * fix style
  * enable optimize MLP
  * enable kv_cache
  * enable fuse rope
  * enable fused qkv
  * enable flash_attention
  * error sdp quantize
  * use old api
  * use fuse
  * use xetla
  * fix python style
  * update moe_blocks num
  * fix output error
  * add cpu sdpa
  * update (×3)
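
A minimal sketch of loading the model so these fused-kernel optimizations apply, assuming the standard ipex-llm loading path; the model id and device are illustrative:

```python
from ipex_llm.transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen1.5-MoE-A2.7B",  # illustrative model id
    load_in_4bit=True,         # low-bit weights
    optimize_model=True,       # apply the fused rope/qkv/attention paths
    trust_remote_code=True,
).to("xpu")
```
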
binbin Deng | 0a62933d36 | LLM: fix qwen AutoTP (#10766) | 2024-04-16 09:56:17 +08:00
Wang, Jian4 | c9e6d42ad1 | LLM: Fix chatglm3-6b-32k error (#10719) | 2024-04-10 11:24:06 +08:00
  * fix chatglm3-6b-32k
  * update style
binbin Deng | 44922bb5c2 | LLM: support baichuan2-13b using AutoTP (#10691) | 2024-04-09 14:06:01 +08:00
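
Both AutoTP entries refer to DeepSpeed's automatic tensor parallelism. A sketch of the usual pattern, with the parallel degree and dtype as illustrative assumptions (the script would be launched with a 2-rank deepspeed/mpirun run):

```python
import torch
import deepspeed
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "baichuan-inc/Baichuan2-13B-Chat",
    trust_remote_code=True,
    torch_dtype=torch.float16,
)
# Assumption: mp_size=2 shards the model across two ranks via AutoTP.
model = deepspeed.init_inference(model, mp_size=2, dtype=torch.float16)
```
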
Ovo233 | dcb2038aad | Enable optimization for sentence_transformers (#10679) | 2024-04-09 12:33:46 +08:00
  * enable optimization for sentence_transformers
  * fix python style check failure
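
A sketch of how this optimization is typically enabled, assuming the generic `ipex_llm.optimize_model` entry point covers sentence_transformers models after this PR; the model id is illustrative:

```python
from sentence_transformers import SentenceTransformer
from ipex_llm import optimize_model

model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative model
model = optimize_model(model)                    # apply low-bit optimizations
embeddings = model.encode(["IPEX-LLM accelerates embedding models."])
print(embeddings.shape)
```
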
Xin Qiu | 1274cba79b | stablelm fp8 kv cache (#10672) | 2024-04-08 15:16:46 +08:00
  * stablelm fp8 kvcache
  * update
  * fix
  * change to fp8 matmul
  * fix style
  * fix (×2)
  * meet code review
  * add comment
Xin Qiu | 3a9ab8f1ae | fix stablelm logits diff (#10636) | 2024-04-03 15:08:12 +08:00
  * fix logits diff
  * Small fixes
  Co-authored-by: Yuwen Hu <yuwen.hu@intel.com>
Yuwen Hu | fd384ddfb8 | Optimize StableLM (#10619) | 2024-04-02 18:58:38 +08:00
  * Initial commit for stablelm optimizations
  * Small style fix
  * add dependency
  * Add mlp optimizations
  * Small fix
  * add attention forward
  * Remove quantize kv for now as head_dim=80
  * Add merged qkv
  * fix license
  * Python style fix
  Co-authored-by: qiuxin2012 <qiuxin2012cs@gmail.com>
Yishuo Wang | ba8cc6bd68 | optimize starcoder2-3b (#10625) | 2024-04-02 17:16:29 +08:00
Ruonan Wang | bfc1caa5e5 | LLM: support iq1s for llama2-70b-hf (#10596) | 2024-04-01 13:13:13 +08:00
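
IQ1_S is llama.cpp's ~1.5-bit importance-matrix quantization format. A sketch assuming it is exposed through the existing `load_in_low_bit` argument; the exact string "iq1_s" is an assumption from the commit title:

```python
from ipex_llm.transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-70b-hf",
    load_in_low_bit="iq1_s",  # ~1.5-bit quantization; value is an assumption
)
```
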
Xin Qiu | 5963239b46 | Fix qwen's position_ids not being long enough (#10572) | 2024-03-28 17:05:49 +08:00
  * fix position_ids (×2)
ZehuaCao | 52a2135d83 | Replace ipex with ipex-llm (#10554) | 2024-03-28 13:54:40 +08:00
  * fix ipex with ipex_llm (×2)
  * update (×8)
Wang, Jian4 | 9df70d95eb | Refactor bigdl.llm to ipex_llm (#24) | 2024-03-22 15:41:21 +08:00
  * Rename bigdl/llm to ipex_llm
  * rm python/llm/src/bigdl
  * from bigdl.llm to from ipex_llm
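
The user-facing effect of this rename is an import-path change, shown here on the transformers-style entry point; the model path is a placeholder:

```python
# Before the refactor (bigdl-llm):
#   from bigdl.llm.transformers import AutoModelForCausalLM

# After the refactor (ipex-llm):
from ipex_llm.transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("path/to/model", load_in_4bit=True)
```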