Yuwen Hu
9d65dcd7ef
Fix deepseek coder with linear rope type support on GPU ( #12709 )
* Fix deepseek coder with linear rope type
* Style fix
* Move to optimize_pre
* Small fix
* Small fix
* Small fix to not affect other cases
* Style fixes
* Update function name
* Small fix
* Small fix
* Small fix
* Fix for low transformers version first
* Style fix
* Small fix
2025-01-15 21:12:34 +08:00
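For context on the fix above: HuggingFace's "linear" `rope_scaling` stretches positions by dividing them by a constant factor. A minimal sketch of that behavior, assuming an HF-style config with a `rope_scaling` dict; the helper name is illustrative, not the actual ipex-llm `optimize_pre` code:

```python
import torch

def scale_positions_for_linear_rope(position_ids: torch.Tensor, config) -> torch.Tensor:
    # HF linear RoPE scaling: pos -> pos / factor, stretching the usable
    # context window. `rope_scaling` follows the HF config convention.
    rope_scaling = getattr(config, "rope_scaling", None)
    if rope_scaling is not None and rope_scaling.get("type") == "linear":
        return position_ids.float() / rope_scaling["factor"]
    return position_ids
```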
Yishuo Wang
7234c9b27b
update quantize kv cache condition ( #12681 )
2025-01-09 15:23:04 +08:00
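The "condition" in this commit decides when the KV cache is stored quantized. A hedged sketch of what such a predicate can look like; the env variable name, the GQA heuristic, and the batch threshold are illustrative assumptions, not the actual ipex-llm logic:

```python
import os

def should_quantize_kv_cache(config, batch_size: int) -> bool:
    # An explicit env override wins; otherwise fall back to a heuristic,
    # e.g. only quantize for grouped-query-attention models at small batch.
    env = os.environ.get("IPEX_LLM_QUANTIZE_KV_CACHE")  # hypothetical name
    if env is not None:
        return env.lower() in ("1", "true")
    is_gqa = config.num_key_value_heads < config.num_attention_heads
    return is_gqa and batch_size <= 8
```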
Yishuo Wang
c11f5f0fcd
also convert SdpaAttention in optimize_model ( #12673 )
2025-01-08 16:48:03 +08:00
Yishuo Wang
34dbdb8ee3
small fix ( #12623 )
2024-12-27 10:19:27 +08:00
Yishuo Wang
1604b4ead8
small fix ( #12616 )
2024-12-26 11:35:12 +08:00
Yishuo Wang
6249c1e373
rewrite llama optimization ( #12609 )
2024-12-25 17:04:32 +08:00
Yishuo Wang
5ae0006103
remove old rope usage ( #12552 )
2024-12-16 15:59:36 +08:00
Yishuo Wang
324bcb057e
refactor to reduce old rope usage ( #12219 )
2024-10-17 14:45:09 +08:00
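The "old rope usage" retired across the two commits above refers to pre-refactor rotary-embedding code paths. Upstream transformers applies RoPE with the rotate-half formulation, which this sketch mirrors:

```python
import torch

def rotate_half(x: torch.Tensor) -> torch.Tensor:
    # Split the head dim in half and rotate: (x1, x2) -> (-x2, x1).
    x1, x2 = x.chunk(2, dim=-1)
    return torch.cat((-x2, x1), dim=-1)

def apply_rotary_pos_emb(q, k, cos, sin):
    # Standard rotate-half RoPE, as in upstream transformers' llama code.
    return (q * cos) + (rotate_half(q) * sin), (k * cos) + (rotate_half(k) * sin)
```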
Yishuo Wang
6cedb601e4
remove some useless code ( #12035 )
2024-09-06 17:51:08 +08:00
Guoqiong Song
8803242f5c
fix llama on cpu ( #12018 )
2024-09-04 19:17:54 -07:00
Ruonan Wang
4a61f7d20d
update mlp of llama ( #11897 )
* update mlp of llama
* relax threshold of mlp test
* revert code
2024-08-22 20:34:53 +08:00
Yina Chen
c3c058373f
Update compresskv model forward type logic ( #11868 )
* update
* fix
2024-08-20 18:11:37 +08:00
Yina Chen
3cd4e87168
Support compress KV with quantize KV ( #11812 )
* update llama
* support llama 4.41
* fix style
* support minicpm
* support qwen2
* support minicpm & update
* support chatglm4
* support chatglm
* remove print
* add DynamicCompressFp8Cache & support qwen
* support llama
* support minicpm phi3
* update chatglm2/4
* small fix & support qwen 4.42
* remove print
2024-08-19 15:32:32 +08:00
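Conceptually this commit composes two KV-cache memory savers: drop low-importance entries (compress), then store the survivors in FP8 (quantize). A toy sketch of the composition; `DynamicCompressFp8Cache` from the bullet list presumably packages this, but the helper below is an illustration, not the real class:

```python
import torch

def compress_then_quantize(key_states: torch.Tensor, keep_idx: torch.Tensor):
    # key_states: [batch, heads, seq, head_dim]; keep_idx: positions to keep.
    kept = key_states.index_select(2, keep_idx)
    # FP8 (e5m2) storage halves memory versus fp16.
    return kept.to(torch.float8_e5m2)
```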
Yina Chen
7cd6ec9723
MiniCPM-V support compresskv ( #11779 )
* fix check error
* fix other models
* remove print
2024-08-13 19:03:40 +08:00
Yina Chen
841dbcdf3a
Fix compresskv with lookahead issue ( #11767 )
* fix compresskv + lookahead attn_mask qwen2
* support llama chatglm
* support mistral & chatglm
* address comments
* revert run.py
2024-08-12 18:53:55 +08:00
Ruonan Wang
00a5574c8a
Use merge_qkv to replace fused_qkv for llama2 ( #11727 )
* update 4.38
* support new versions
* update
* fix style
* fix style
* update rope
* temp test sdpa
* fix style
* fix cpu ut
2024-08-07 18:04:01 +08:00
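merge_qkv fuses the three attention projections into one matmul. A sketch under the HF llama module layout (`q_proj`/`k_proj`/`v_proj`, no bias); the fused attribute name is an assumption:

```python
import torch
from torch import nn

def merge_qkv(attn: nn.Module) -> None:
    out = attn.q_proj.out_features + attn.k_proj.out_features + attn.v_proj.out_features
    qkv = nn.Linear(attn.q_proj.in_features, out, bias=False)
    # One weight matrix, so q, k and v come out of a single matmul.
    qkv.weight.data = torch.cat(
        [attn.q_proj.weight.data, attn.k_proj.weight.data, attn.v_proj.weight.data], dim=0
    )
    attn.qkv_proj = qkv  # split the output with torch.split at runtime
```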
Yina Chen
a71ae7c22b
Support minicpm compresskv & modify default compresskv config & enable compresskv by default on MTL (2.5k~4.5k) ( #11726 )
* support minicpm & modify default & enable by default on mtl 2.5k~4.5k
* fix style
2024-08-07 11:35:39 +08:00
Yina Chen
8d1e0bd2f4
add sdp causal support in llama ( #11705 )
2024-08-02 10:27:40 +08:00
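The causal SDP path maps naturally onto PyTorch's built-in kernel, which applies the triangular mask inside the fused op instead of materializing it. A minimal illustration (the actual ipex-llm dispatch to its own XPU kernels differs):

```python
import torch.nn.functional as F

def sdp_causal(query, key, value):
    # is_causal=True applies the causal mask inside the fused kernel.
    return F.scaled_dot_product_attention(query, key, value, is_causal=True)
```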
Ruonan Wang
736a7ef72e
add sdp_causal for mistral 4.36 ( #11686 )
* add sdp_causal for mistral
* fix
* update
2024-08-01 18:57:31 +08:00
Yina Chen
670ad887fc
Qwen support compress kv ( #11680 )
* Qwen support compress kv
* fix style
* fix
2024-07-30 11:16:42 +08:00
hxsz1997
9b36877897
disable default quantize_kv of GQA on MTL ( #11679 )
* disable default quantize_kv of GQA on MTL
* fix style
2024-07-30 09:38:46 +08:00
Ruonan Wang
c11d5301d7
add sdp fp8 for llama ( #11671 )
* add sdp fp8 for llama
* fix style
* refactor
2024-07-29 13:46:22 +08:00
Yina Chen
fc7f8feb83
Support compress kv ( #11642 )
* mistral snapkv
* update
* mtl update
* update
* update
* update
* add comments
* style fix
* fix style
* support llama
* llama use compress kv
* support mistral 4.40
* fix style
* support diff transformers versions
* move snapkv util to kv
* fix style
* address comments & small fix
* revert all in one
* fix indent
---------
Co-authored-by: leonardozcm <leonardo1997zcm@gmail.com>
2024-07-26 16:02:00 +08:00
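The bullet list above mentions SnapKV ("mistral snapkv"): score prompt KV positions by the attention that a trailing observation window pays them, then keep only the top-scoring entries. A rough sketch of the selection step, assuming pre-computed attention weights; shapes and names are illustrative:

```python
import torch

def snapkv_keep_indices(attn_weights: torch.Tensor, window: int, keep: int) -> torch.Tensor:
    # attn_weights: [heads, window, kv_len] -- attention of the last `window`
    # queries over the whole prompt. Sum into one importance score per position.
    scores = attn_weights.sum(dim=(0, 1))
    scores[-window:] = float("inf")  # the observation window is always kept
    return scores.topk(keep).indices.sort().values
```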
Cengguang Zhang
70ab1a6f1a
LLM: unify memory optimization env variables. ( #11549 )
* LLM: unify memory optimization env variables.
* fix comments.
2024-07-11 11:01:28 +08:00
Guoqiong Song
7507000ef2
Fix #1383: Llama model on transformers==4.41 [WIP] ( #11280 )
2024-06-21 11:24:10 -07:00
Yina Chen
b6b70d1ba0
Divide core-xe packages ( #11131 )
* temp
* add batch
* fix style
* update package name
* fix style
* add workflow
* use temp version to run uts
* trigger performance test
* trigger win igpu perf
* revert workflow & setup
2024-05-28 12:00:18 +08:00
SONG Ge
192ae35012
Add support for llama2 quantize_kv with transformers 4.38.0 ( #11054 )
* add support for llama2 quantize_kv with transformers 4.38.0
* fix code style
* fix code style
2024-05-16 22:23:39 +08:00
SONG Ge
16b2a418be
hotfix native_sdp ut ( #11046 )
* hotfix native_sdp
* update
2024-05-16 17:15:37 +08:00
SONG Ge
9942a4ba69
[WIP] Support llama2 with transformers==4.38.0 ( #11024 )
* support llama2 with transformers==4.38.0
* add support for quantize_qkv
* add original support for 4.38.0 now
* code style fix
2024-05-15 18:07:00 +08:00
Yishuo Wang
170e3d65e0
use new sdp and fp32 sdp ( #11007 )
2024-05-14 14:29:18 +08:00
Cengguang Zhang
75dbf240ec
LLM: update split tensor conditions. ( #10872 )
* LLM: update split tensor condition.
* add cond for split tensor.
* update priority of env.
* fix style.
* update env name.
2024-04-30 17:07:21 +08:00
Yishuo Wang
d884c62dc4
remove new_layout parameter ( #10906 )
2024-04-29 10:31:50 +08:00
Guancheng Fu
c9fac8c26b
Fix sdp logic ( #10896 )
* fix
* fix
2024-04-28 22:02:14 +08:00
Cengguang Zhang
9752ffe979
LLM: update split qkv native sdp. ( #10895 )
* LLM: update split qkv native sdp.
* fix typo.
2024-04-26 18:47:35 +08:00
Yina Chen
dc27b3bc35
Use sdp when rest token seq_len > 1 in llama & mistral (for lookup & spec) ( #10790 )
* update sdp condition
* update
* fix
* update & test llama
* mistral
* fix style
* update
* fix style
* remove pvc constrain
* update ds on arc
* fix style
2024-04-24 17:24:01 +08:00
Cengguang Zhang
763413b7e1
LLM: support llama split tensor for long context in transformers>=4.36. ( #10844 )
* LLM: support llama split tensor for long context in transformers>=4.36.
* fix dtype.
* fix style.
* fix style.
* fix style.
* fix style.
* fix dtype.
* fix style.
2024-04-23 16:13:25 +08:00
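"Split tensor" here targets the peak-memory spike of long-context prefill: the full [q_len, kv_len] score matrix never has to exist at once if queries are processed in chunks. A simplified sketch; causal masking is omitted for brevity and the chunk size is illustrative:

```python
import torch

def chunked_sdp(query, key, value, chunk: int = 1024):
    # Only a [*, chunk, kv_len] score tensor is alive per iteration.
    scale = query.shape[-1] ** -0.5
    outs = []
    for q in query.split(chunk, dim=-2):
        scores = (q @ key.transpose(-1, -2)) * scale
        outs.append(torch.softmax(scores, dim=-1) @ value)
    return torch.cat(outs, dim=-2)
```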
Guancheng Fu
caf75beef8
Disable sdpa ( #10814 )
2024-04-19 17:33:18 +08:00
Yishuo Wang
08458b4f74
remove rms norm copy ( #10793 )
2024-04-19 13:57:48 +08:00
Ruonan Wang
754b0ffecf
Fix pvc llama ( #10798 )
* fix
* update
2024-04-18 10:44:57 -07:00
Ziteng Zhang
ff040c8f01
LISA Finetuning Example ( #10743 )
* enabling xetla only supports qtype=SYM_INT4 or FP8E5
* LISA Finetuning Example on gpu
* update readme
* add licence
* Explain parameters of LISA & move backend code to src dir
* fix style
* fix style
* update readme
* support chatglm
* fix style
* fix style
* update readme
* fix
2024-04-18 13:48:10 +08:00
Yang Wang
952e517db9
use config rope_theta ( #10787 )
* use config rope_theta
* fix style
2024-04-17 20:39:11 -07:00
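Before this fix the RoPE base was effectively hardcoded to 10000, but models such as CodeLlama ship a different `rope_theta` in their config. The standard inverse-frequency construction, reading the base from the config:

```python
import torch

def build_inv_freq(config, head_dim: int) -> torch.Tensor:
    # Fall back to the classic 10000 when the config has no rope_theta.
    base = getattr(config, "rope_theta", 10000.0)
    return 1.0 / (base ** (torch.arange(0, head_dim, 2).float() / head_dim))
```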
Guancheng Fu
31ea2f9a9f
Fix wrong output for Llama models on CPU ( #10742 )
2024-04-18 11:07:27 +08:00
Xin Qiu
e764f9b1b1
Disable fast fused rope on UHD ( #10780 )
* use decoding fast path
* update
* update
* cleanup
2024-04-18 10:03:53 +08:00
Wang, Jian4
a20271ffe4
LLM: Fix yi-6b fp16 error on pvc ( #10781 )
* update for yi fp16
* update
* update
2024-04-17 16:49:59 +08:00
Cengguang Zhang
3e2662c87e
LLM: fix get env KV_CACHE_ALLOC_BLOCK_LENGTH type. ( #10771 )
2024-04-16 09:32:30 +08:00
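The bug class behind this fix: `os.environ` values are strings, so using one as a number needs an explicit cast. The variable name comes from the commit title; the default value is illustrative:

```python
import os

# Without int(), comparisons and arithmetic on the value misbehave or raise.
KV_CACHE_ALLOC_BLOCK_LENGTH = int(os.environ.get("KV_CACHE_ALLOC_BLOCK_LENGTH", 256))
```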
binbin Deng
c3fc8f4b90
LLM: add bs limitation for llama softmax upcast to fp32 ( #10752 )
2024-04-12 15:40:25 +08:00
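Upcasting attention scores to fp32 before softmax buys numerical stability at the cost of memory that grows with batch size; gating the upcast on batch size is the "bs limitation". A sketch with an assumed threshold:

```python
import torch

def softmax_maybe_fp32(scores: torch.Tensor, batch_size: int, max_bs: int = 8):
    if batch_size <= max_bs:
        # fp32 softmax, then cast back to the compute dtype.
        return torch.softmax(scores, dim=-1, dtype=torch.float32).to(scores.dtype)
    return torch.softmax(scores, dim=-1)
```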
Yishuo Wang
8086554d33
use new fp16 sdp in llama and mistral ( #10734 )
2024-04-12 10:49:02 +08:00
Jiao Wang
878a97077b
Fix llava example to support transformers 4.36 ( #10614 )
* fix llava example
* update
2024-04-09 13:47:07 -07:00
Yishuo Wang
8f45e22072
fix llama2 ( #10710 )
2024-04-09 17:28:37 +08:00
Yang Wang
5a1f446d3c
support fp8 in xetla ( #10555 )
* support fp8 in xetla
* change name
* adjust model file
* support convert back to cpu
* factor
* fix bug
* fix style
2024-04-08 13:22:09 -07:00