Cengguang Zhang | cfed76b2ed | 2024-05-10 16:40:15 +08:00
LLM: add long-context support for Qwen1.5-7B/Baichuan2-7B/Mistral-7B. (#10937)
* LLM: add split tensor support for baichuan2-7b and qwen1.5-7b.
* fix style.
* fix style.
* fix style.
* add support for mistral and fix condition threshold.
* fix style.
* fix comments.

Yishuo Wang | e753125880 | 2024-05-09 17:02:59 +08:00
use fp16_sdp when head_dim=96 (#10976)
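The fp16 SDP entry above refers to a fused scaled-dot-product-attention kernel. The math it fuses can be sketched in NumPy; the shapes, the fp32 upcast for the softmax, and `head_dim=96` are illustrative assumptions, not the library's actual kernel.

```python
import numpy as np

def sdp_fp16(q, k, v):
    """Minimal scaled-dot-product attention computed in fp16.

    q, k, v: (seq_len, head_dim) arrays. A sketch of the math only;
    the real fp16 SDP kernel fuses these steps on the device.
    """
    q, k, v = (x.astype(np.float16) for x in (q, k, v))
    scale = 1.0 / np.sqrt(q.shape[-1])
    scores = (q @ k.T) * scale                            # (seq, seq) scores
    scores = scores - scores.max(axis=-1, keepdims=True)  # stabilize softmax
    weights = np.exp(scores.astype(np.float32))           # upcast for softmax
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights.astype(np.float16) @ v                 # (seq, head_dim)

out = sdp_fp16(np.random.rand(8, 96), np.random.rand(8, 96), np.random.rand(8, 96))
```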
Yishuo Wang | 697ca79eca | 2024-05-09 15:16:18 +08:00
use quantize kv and sdp in phi3-mini (#10973)

Yishuo Wang | 2ebec0395c | 2024-05-08 16:33:17 +08:00
optimize phi-3-mini-128 (#10959)

Yishuo Wang | c801c37bc6 | 2024-05-07 17:26:19 +08:00
optimize phi3 again: use quantize kv if possible (#10953)

Yishuo Wang | aa2fa9fde1 | 2024-05-07 15:53:08 +08:00
optimize phi3 again: use sdp if possible (#10951)

Wang, Jian4 | 191b184341 | 2024-05-07 10:19:50 +08:00
LLM: Optimize cohere model (#10878)
* use mlp and rms
* optimize kv_cache
* add fuse qkv
* add flash attention and fp16 sdp
* error fp8 sdp
* fix optimized
* fix style
* update
* add for pp

Cengguang Zhang | 75dbf240ec | 2024-04-30 17:07:21 +08:00
LLM: update split tensor conditions. (#10872)
* LLM: update split tensor condition.
* add cond for split tensor.
* update priority of env.
* fix style.
* update env name.
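The split-tensor work above conditions on input length to bound peak memory for long contexts. One way to read the idea is chunking the attention computation along the query dimension, sketched below under that assumption; the function name, chunk size, and the exact enabling condition are illustrative, not the library's.

```python
import numpy as np

def attention_split(q, k, v, split_size=4):
    """Attention computed over the query sequence in chunks.

    Splitting the (seq, seq) score matrix along the query axis keeps
    peak memory bounded for long inputs while producing the same result,
    since each softmax row is independent. Illustrative sketch only.
    """
    scale = 1.0 / np.sqrt(q.shape[-1])
    outs = []
    for start in range(0, q.shape[0], split_size):
        chunk = q[start:start + split_size]
        scores = (chunk @ k.T) * scale
        scores -= scores.max(axis=-1, keepdims=True)
        w = np.exp(scores)
        w /= w.sum(axis=-1, keepdims=True)
        outs.append(w @ v)
    return np.concatenate(outs, axis=0)

q = k = v = np.random.rand(16, 64)
full = attention_split(q, k, v, split_size=16)   # single chunk == unsplit
split = attention_split(q, k, v, split_size=4)
```

Because softmax rows are independent, the chunked and unsplit results agree up to floating-point noise.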
Yishuo Wang | d884c62dc4 | 2024-04-29 10:31:50 +08:00
remove new_layout parameter (#10906)

Guancheng Fu | c9fac8c26b | 2024-04-28 22:02:14 +08:00
Fix sdp logic (#10896)
* fix
* fix

Cengguang Zhang | 9752ffe979 | 2024-04-26 18:47:35 +08:00
LLM: update split qkv native sdp. (#10895)
* LLM: update split qkv native sdp.
* fix typo.

Yishuo Wang | 46ba962168 | 2024-04-26 14:42:17 +08:00
use new quantize kv (#10888)
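A quantized KV cache, as in the entries above, stores keys and values in a low-bit format plus scales and dequantizes on use. A minimal sketch of the idea with symmetric per-row int8 quantization; the actual cache layout and quantization scheme in the library may differ.

```python
import numpy as np

def quantize_kv(cache):
    """Symmetric int8 quantization of a KV-cache tensor, one scale per row.

    Stores int8 values plus an fp32 scale per token row; sketch of the
    general technique, not the library's exact format.
    """
    scale = np.abs(cache).max(axis=-1, keepdims=True) / 127.0
    scale = np.where(scale == 0, 1.0, scale)          # avoid divide-by-zero
    q = np.clip(np.round(cache / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_kv(q, scale):
    """Reconstruct an approximate fp32 cache from int8 values and scales."""
    return q.astype(np.float32) * scale

kv = np.random.randn(8, 128).astype(np.float32)
q8, scale = quantize_kv(kv)
restored = dequantize_kv(q8, scale)
```

The roundtrip error is bounded by half a quantization step per element, which is what makes the memory savings cheap in practice.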
Yina Chen | 8811f268ff | 2024-04-25 19:23:37 +08:00
Use new fp16 sdp in Qwen and modify the constraint (#10882)

Yina Chen | dc27b3bc35 | 2024-04-24 17:24:01 +08:00
Use sdp when rest token seq_len > 1 in llama & mistral (for lookup & spec) (#10790)
* update sdp condition
* update
* fix
* update & test llama
* mistral
* fix style
* update
* fix style
* remove pvc constraint
* update ds on arc
* fix style

binbin Deng | c9feffff9a | 2024-04-24 16:02:27 +08:00
LLM: support Qwen1.5-MoE-A2.7B-Chat pipeline parallel inference (#10864)

Yishuo Wang | 2d210817ff | 2024-04-24 15:17:40 +08:00
add phi3 optimization (#10871)

Cengguang Zhang | 763413b7e1 | 2024-04-23 16:13:25 +08:00
LLM: support llama split tensor for long context in transformers>=4.36. (#10844)
* LLM: support llama split tensor for long context in transformers>=4.36.
* fix dtype.
* fix style.
* fix style.
* fix style.
* fix style.
* fix dtype.
* fix style.

Wang, Jian4 | 18c032652d | 2024-04-23 10:05:51 +08:00
LLM: Add mixtral speculative CPU example (#10830)
* init mixtral sp example
* use different prompt_format
* update output
* update

Yishuo Wang | fe5a082b84 | 2024-04-22 18:56:47 +08:00
add phi-2 optimization (#10843)

Guancheng Fu | caf75beef8 | 2024-04-19 17:33:18 +08:00
Disable sdpa (#10814)
Yishuo Wang | 08458b4f74 | 2024-04-19 13:57:48 +08:00
remove rms norm copy (#10793)

Ruonan Wang | 754b0ffecf | 2024-04-18 10:44:57 -07:00
Fix pvc llama (#10798)
* fix
* update

Wang, Jian4 | 14ca42a048 | 2024-04-18 15:56:52 +08:00
LLM: Fix moe indexes error on cpu (#10791)

Wang, Jian4 | 209c3501e6 | 2024-04-18 14:54:05 +08:00
LLM: Optimize qwen1.5 moe model (#10706)
* update moe block
* fix style
* enable optimize MLP
* enable kv_cache
* enable fuse rope
* enable fused qkv
* enable flash_attention
* error sdp quantize
* use old api
* use fuse
* use xetla
* fix python style
* update moe_blocks num
* fix output error
* add cpu sdpa
* update
* update
* update

Ziteng Zhang | ff040c8f01 | 2024-04-18 13:48:10 +08:00
LISA Finetuning Example (#10743)
* enabling xetla only supports qtype=SYM_INT4 or FP8E5
* LISA Finetuning Example on gpu
* update readme
* add license
* Explain parameters of lisa & Move backend code to src dir
* fix style
* fix style
* update readme
* support chatglm
* fix style
* fix style
* update readme
* fix

Yang Wang | 952e517db9 | 2024-04-17 20:39:11 -07:00
use config rope_theta (#10787)
* use config rope_theta
* fix style
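The `rope_theta` change above reads the RoPE base from the model config rather than hard-coding 10000, which matters for long-context variants that ship a larger base. A sketch of how the base determines the rotary frequencies; the function name and dimensions are illustrative.

```python
import numpy as np

def rope_inv_freq(head_dim, rope_theta=10000.0):
    """Rotary-embedding inverse frequencies for a given base.

    RoPE rotates each channel pair at a frequency derived from the base
    `rope_theta`; a larger base slows the rotation, extending the usable
    context, so the value must come from the config, not a constant.
    """
    return 1.0 / (rope_theta ** (np.arange(0, head_dim, 2) / head_dim))

default = rope_inv_freq(128)                        # classic base 10000
long_ctx = rope_inv_freq(128, rope_theta=1_000_000.0)
```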
Guancheng Fu | 31ea2f9a9f | 2024-04-18 11:07:27 +08:00
Fix wrong output for Llama models on CPU (#10742)

Xin Qiu | e764f9b1b1 | 2024-04-18 10:03:53 +08:00
Disable fast fused rope on UHD (#10780)
* use decoding fast path
* update
* update
* cleanup

Wang, Jian4 | a20271ffe4 | 2024-04-17 16:49:59 +08:00
LLM: Fix yi-6b fp16 error on pvc (#10781)
* update for yi fp16
* update
* update

Cengguang Zhang | 3e2662c87e | 2024-04-16 09:32:30 +08:00
LLM: fix get env KV_CACHE_ALLOC_BLOCK_LENGTH type. (#10771)
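The type fix above is the classic pitfall that `os.environ` values are always strings. A hedged sketch of the safe pattern; the function name and the default of 256 are assumptions for illustration, not the library's actual values.

```python
import os

def get_kv_cache_alloc_block_length(default=256):
    """Read KV_CACHE_ALLOC_BLOCK_LENGTH from the environment as an int.

    Environment values arrive as strings, so arithmetic on the raw value
    silently misbehaves; coerce to int explicitly and fail loudly on junk.
    The default of 256 is illustrative only.
    """
    raw = os.environ.get("KV_CACHE_ALLOC_BLOCK_LENGTH")
    if raw is None:
        return default
    try:
        return int(raw)
    except ValueError:
        raise ValueError(
            f"KV_CACHE_ALLOC_BLOCK_LENGTH must be an integer, got {raw!r}")

os.environ["KV_CACHE_ALLOC_BLOCK_LENGTH"] = "512"
block_len = get_kv_cache_alloc_block_length()
```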
binbin Deng | c3fc8f4b90 | 2024-04-12 15:40:25 +08:00
LLM: add bs limitation for llama softmax upcast to fp32 (#10752)

Yishuo Wang | 8086554d33 | 2024-04-12 10:49:02 +08:00
use new fp16 sdp in llama and mistral (#10734)

Yang Wang | 019293e1b9 | 2024-04-11 10:12:55 -07:00
Fuse MOE indexes computation (#10716)
* try moe
* use c++ cpu to compute indexes
* fix style

Cengguang Zhang | 4b024b7aac | 2024-04-10 16:59:06 +08:00
LLM: optimize chatglm2 8k input. (#10723)
* LLM: optimize chatglm2 8k input.
* rename.

Keyan (Kyrie) Zhang | 585c174e92 | 2024-04-10 10:48:46 +08:00
Read the value of KV_CACHE_ALLOC_BLOCK_LENGTH from the environment variables (#10707)
* Read the value of KV_CACHE_ALLOC_BLOCK_LENGTH from the environment variables.
* Fix style

Jiao Wang | 878a97077b | 2024-04-09 13:47:07 -07:00
Fix llava example to support transformers 4.36 (#10614)
* fix llava example
* update

Yishuo Wang | 8f45e22072 | 2024-04-09 17:28:37 +08:00
fix llama2 (#10710)

Yishuo Wang | e438f941f2 | 2024-04-09 16:42:11 +08:00
disable rwkv5 fp16 (#10699)

Yina Chen | c7422712fc | 2024-04-09 13:50:33 +08:00
mistral 4.36 use fp16 sdp (#10704)

Yang Wang | 5a1f446d3c | 2024-04-08 13:22:09 -07:00
support fp8 in xetla (#10555)
* support fp8 in xetla
* change name
* adjust model file
* support convert back to cpu
* factor
* fix bug
* fix style
Cengguang Zhang | 7c43ac0164 | 2024-04-08 17:48:11 +08:00
LLM: optimize llama native sdp for split qkv tensor (#10693)
* LLM: optimize llama native sdp for split qkv tensor.
* fix block real size.
* fix comment.
* fix style.
* refactor.

Xin Qiu | 1274cba79b | 2024-04-08 15:16:46 +08:00
stablelm fp8 kv cache (#10672)
* stablelm fp8 kvcache
* update
* fix
* change to fp8 matmul
* fix style
* fix
* fix
* meet code review
* add comment

Cengguang Zhang | c0cd238e40 | 2024-04-08 11:43:15 +08:00
LLM: support llama2 8k input with w4a16. (#10677)
* LLM: support llama2 8k input with w4a16.
* fix comment and style.
* fix style.
* fix comments and split tensor to quantized attention forward.
* fix style.
* refactor name.
* fix style.
* fix style.
* fix style.
* refactor checker name.
* refactor native sdp split qkv tensor name.
* fix style.
* fix comment rename variables.
* fix co-existence of intermediate results.

Cengguang Zhang | 1a9b8204a4 | 2024-04-07 09:39:21 +08:00
LLM: support int4 fp16 chatglm2-6b 8k input. (#10648)

Jiao Wang | 69bdbf5806 | 2024-04-05 15:08:13 -07:00
Fix vllm print error message issue (#10664)
* update chatglm readme
* Add condition to invalidInputError
* update
* update
* style

Xin Qiu | 4c3e493b2d | 2024-04-03 22:15:32 +08:00
fix stablelm2 1.6b (#10656)
* fix stablelm2 1.6b
* meet code review

Yishuo Wang | 702e686901 | 2024-04-03 15:27:02 +08:00
optimize starcoder normal kv cache (#10642)

Xin Qiu | 3a9ab8f1ae | 2024-04-03 15:08:12 +08:00
fix stablelm logits diff (#10636)
* fix logits diff
* Small fixes
Co-authored-by: Yuwen Hu <yuwen.hu@intel.com>

Kai Huang | c875b3c858 | 2024-04-03 12:05:13 +08:00
Add seq len check for llama softmax upcast to fp32 (#10629)
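The seq-len and batch-size checks above gate when the attention softmax is upcast: fp16 softmax loses precision over long rows, but the fp32 copy costs memory, so the upcast is enabled only past a threshold. A minimal sketch of the pattern; the threshold value is an assumption, not the library's actual cutoff.

```python
import numpy as np

SEQ_LEN_THRESHOLD = 512  # illustrative cutoff, not the library's actual value

def softmax_maybe_upcast(scores_fp16):
    """Softmax over fp16 attention scores, upcast to fp32 for long rows.

    Long rows accumulate enough fp16 rounding error that the softmax is
    computed in fp32 and cast back; short rows skip the extra copy.
    """
    seq_len = scores_fp16.shape[-1]
    x = scores_fp16.astype(np.float32) if seq_len > SEQ_LEN_THRESHOLD else scores_fp16
    x = x - x.max(axis=-1, keepdims=True)
    e = np.exp(x)
    probs = e / e.sum(axis=-1, keepdims=True)
    return probs.astype(scores_fp16.dtype)

short = softmax_maybe_upcast(np.random.randn(4, 128).astype(np.float16))
long_row = softmax_maybe_upcast(np.random.randn(4, 1024).astype(np.float16))
```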
Jiao Wang | 23e33a0ca1 | 2024-04-02 18:41:38 -07:00
Fix qwen-vl style (#10633)
* update
* update