Ruonan Wang | 736a7ef72e | 2024-08-01 18:57:31 +08:00
add sdp_causal for mistral 4.36 (#11686)
* add sdp_causal for mistral
* fix
* update

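For context on what the commit adds: "sdp_causal" means running fused scaled-dot-product attention with the causal mask applied inside the kernel rather than materializing a full mask tensor. A minimal PyTorch sketch of the idea (shapes are illustrative; the actual ipex-llm path dispatches to an XPU-specific kernel):

```python
import torch
import torch.nn.functional as F

# Toy shapes: (batch, heads, seq_len, head_dim), illustrative only.
q = torch.randn(1, 8, 128, 64)
k = torch.randn(1, 8, 128, 64)
v = torch.randn(1, 8, 128, 64)

# is_causal=True lets the fused kernel mask future positions itself,
# avoiding an explicit (seq_len x seq_len) attention-mask tensor.
out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
```
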
Yina Chen | 670ad887fc | 2024-07-30 11:16:42 +08:00
Qwen support compress kv (#11680)
* Qwen support compress kv
* fix style
* fix

hxsz1997 | 9b36877897 | 2024-07-30 09:38:46 +08:00
disable default quantize_kv of GQA on MTL (#11679)
* disable default quantize_kv of GQA on MTL
* fix style

Ruonan Wang | c11d5301d7 | 2024-07-29 13:46:22 +08:00
add sdp fp8 for llama (#11671)
* add sdp fp8 for llama
* fix style
* refactor

Yina Chen | fc7f8feb83 | 2024-07-26 16:02:00 +08:00
Support compress kv (#11642)
* mistral snapkv
* update
* mtl update
* update
* update
* update
* add comments
* style fix
* fix style
* support llama
* llama use compress kv
* support mistral 4.40
* fix style
* support diff transformers versions
* move snapkv util to kv
* fix style
* meet comments & small fix
* revert all in one
* fix indent
Co-authored-by: leonardozcm <leonardo1997zcm@gmail.com>

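The bullets reference SnapKV-style compression: attention scores from a small window of the most recent queries decide which prefix positions in the KV cache are worth keeping, and the rest are dropped. A simplified sketch of that selection step (window and budget sizes here are illustrative, not ipex-llm's actual defaults):

```python
import torch

def compress_kv(key, value, query, window=32, budget=512):
    # key/value/query: (batch, heads, seq_len, head_dim)
    seq_len = key.shape[2]
    if seq_len <= budget + window:
        return key, value  # nothing to compress yet

    # Score prefix positions by attention from the last `window` queries.
    obs_q = query[:, :, -window:, :]
    prefix_k = key[:, :, :-window, :]
    scores = torch.matmul(obs_q, prefix_k.transpose(-1, -2))
    scores = scores.softmax(dim=-1).sum(dim=2)  # (batch, heads, prefix_len)

    # Keep the top-`budget` prefix positions plus the recent window.
    idx = scores.topk(budget, dim=-1).indices.sort(dim=-1).values
    idx = idx.unsqueeze(-1).expand(-1, -1, -1, key.shape[-1])
    prefix_v = value[:, :, :-window, :]
    new_k = torch.cat([prefix_k.gather(2, idx), key[:, :, -window:, :]], dim=2)
    new_v = torch.cat([prefix_v.gather(2, idx), value[:, :, -window:, :]], dim=2)
    return new_k, new_v
```
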
Cengguang Zhang | 70ab1a6f1a | 2024-07-11 11:01:28 +08:00
LLM: unify memory optimization env variables. (#11549)
* LLM: unify memory optimization env variables.
* fix comments.

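The unification here replaces per-feature flags with one switch that several memory optimizations consult. A hypothetical sketch of that pattern (the names IPEX_LLM_LOW_MEM and IPEX_LLM_SPLIT_TENSOR are illustrative assumptions, not necessarily what the commit introduced):

```python
import os

# Hypothetical unified switch gating several memory optimizations.
_LOW_MEM = os.environ.get("IPEX_LLM_LOW_MEM", "0") == "1"

def use_split_tensor():
    # A feature-specific variable (hypothetical name) can still
    # override the unified one; otherwise fall back to it.
    override = os.environ.get("IPEX_LLM_SPLIT_TENSOR")
    return override == "1" if override is not None else _LOW_MEM
```
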
Guoqiong Song | 7507000ef2 | 2024-06-21 11:24:10 -07:00
Fix #1383: Llama model on transformers 4.41 [WIP] (#11280)

Yina Chen | b6b70d1ba0 | 2024-05-28 12:00:18 +08:00
Divide core-xe packages (#11131)
* temp
* add batch
* fix style
* update package name
* fix style
* add workflow
* use temp version to run uts
* trigger performance test
* trigger win igpu perf
* revert workflow & setup

SONG Ge | 192ae35012 | 2024-05-16 22:23:39 +08:00
Add support for llama2 quantize_kv with transformers 4.38.0 (#11054)
* add support for llama2 quantize_kv with transformers 4.38.0
* fix code style
* fix code style

SONG Ge | 16b2a418be | 2024-05-16 17:15:37 +08:00
hotfix native_sdp ut (#11046)
* hotfix native_sdp
* update

SONG Ge | 9942a4ba69 | 2024-05-15 18:07:00 +08:00
[WIP] Support llama2 with transformers==4.38.0 (#11024)
* support llama2 with transformers==4.38.0
* add support for quantize_qkv
* add original support for 4.38.0 now
* code style fix

Yishuo Wang | 170e3d65e0 | 2024-05-14 14:29:18 +08:00
use new sdp and fp32 sdp (#11007)

Cengguang Zhang | 75dbf240ec | 2024-04-30 17:07:21 +08:00
LLM: update split tensor conditions. (#10872)
* LLM: update split tensor condition.
* add cond for split tensor.
* update priority of env.
* fix style.
* update env name.

Yishuo Wang | d884c62dc4 | 2024-04-29 10:31:50 +08:00
remove new_layout parameter (#10906)

Guancheng Fu | c9fac8c26b | 2024-04-28 22:02:14 +08:00
Fix sdp logic (#10896)
* fix
* fix

Cengguang Zhang | 9752ffe979 | 2024-04-26 18:47:35 +08:00
LLM: update split qkv native sdp. (#10895)
* LLM: update split qkv native sdp.
* fix typo.

Yina Chen | dc27b3bc35 | 2024-04-24 17:24:01 +08:00
Use sdp when rest token seq_len > 1 in llama & mistral (for lookup & spec) (#10790)
* update sdp condition
* update
* fix
* update & test llama
* mistral
* fix style
* update
* fix style
* remove pvc constrain
* update ds on arc
* fix style

Cengguang Zhang | 763413b7e1 | 2024-04-23 16:13:25 +08:00
LLM: support llama split tensor for long context in transformers>=4.36. (#10844)
* LLM: support llama split tensor for long context in transformers>=4.36.
* fix dtype.
* fix style.
* fix style.
* fix style.
* fix style.
* fix dtype.
* fix style.

Guancheng Fu | caf75beef8 | 2024-04-19 17:33:18 +08:00
Disable sdpa (#10814)

Yishuo Wang | 08458b4f74 | 2024-04-19 13:57:48 +08:00
remove rms norm copy (#10793)

Ruonan Wang | 754b0ffecf | 2024-04-18 10:44:57 -07:00
Fix pvc llama (#10798)
* fix
* update

Ziteng Zhang | ff040c8f01 | 2024-04-18 13:48:10 +08:00
LISA Finetuning Example (#10743)
* enabling xetla only supports qtype=SYM_INT4 or FP8E5
* LISA finetuning example on GPU
* update readme
* add license
* explain parameters of LISA & move backend code to src dir
* fix style
* fix style
* update readme
* support chatglm
* fix style
* fix style
* update readme
* fix

Yang Wang | 952e517db9 | 2024-04-17 20:39:11 -07:00
use config rope_theta (#10787)
* use config rope_theta
* fix style

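Reading rope_theta from the model config, rather than hardcoding the 10000.0 default, is what lets rotary embeddings follow models trained with a different base frequency. A sketch of the pattern against a Hugging Face config (function name is illustrative; the inverse-frequency formula is the standard RoPE one):

```python
import torch

def build_rope_inv_freq(config, head_dim):
    # Older configs may lack the field; 10000.0 is the original RoPE base.
    base = getattr(config, "rope_theta", 10000.0)
    return 1.0 / (base ** (torch.arange(0, head_dim, 2).float() / head_dim))
```
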
Guancheng Fu | 31ea2f9a9f | 2024-04-18 11:07:27 +08:00
Fix wrong output for Llama models on CPU (#10742)

Xin Qiu | e764f9b1b1 | 2024-04-18 10:03:53 +08:00
Disable fast fused rope on UHD (#10780)
* use decoding fast path
* update
* update
* cleanup

Wang, Jian4 | a20271ffe4 | 2024-04-17 16:49:59 +08:00
LLM: Fix yi-6b fp16 error on pvc (#10781)
* update for yi fp16
* update
* update

Cengguang Zhang | 3e2662c87e | 2024-04-16 09:32:30 +08:00
LLM: fix get env KV_CACHE_ALLOC_BLOCK_LENGTH type. (#10771)

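The bug class behind this fix is easy to hit: os.environ values are always strings, so an env-controlled size must be cast before any arithmetic or comparison. A sketch of the fix pattern (the default of 256 is illustrative, not necessarily the value ipex-llm uses):

```python
import os

# os.environ.get returns a str when set; cast before computing on it.
# The 256 fallback is an assumed default for illustration.
KV_CACHE_ALLOC_BLOCK_LENGTH = int(os.environ.get("KV_CACHE_ALLOC_BLOCK_LENGTH", 256))
```
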
binbin Deng | c3fc8f4b90 | 2024-04-12 15:40:25 +08:00
LLM: add bs limitation for llama softmax upcast to fp32 (#10752)

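This commit and the seq-len check in #10629 further down the log guard the same optimization: upcasting attention weights to fp32 before softmax improves numerical accuracy but costs memory, so it is applied only under size limits. A sketch of the guarded upcast (the thresholds are illustrative, not the actual limits):

```python
import torch

def softmax_maybe_fp32(attn_weights, bsz, seq_len, max_bsz=8, max_seq=2048):
    # Upcast to fp32 only while the extra (bsz x heads x seq x seq)
    # fp32 buffer stays affordable; otherwise stay in the input dtype.
    if bsz <= max_bsz and seq_len <= max_seq:
        return torch.softmax(attn_weights, dim=-1,
                             dtype=torch.float32).to(attn_weights.dtype)
    return torch.softmax(attn_weights, dim=-1)
```
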
Yishuo Wang | 8086554d33 | 2024-04-12 10:49:02 +08:00
use new fp16 sdp in llama and mistral (#10734)

Jiao Wang | 878a97077b | 2024-04-09 13:47:07 -07:00
Fix llava example to support transformers 4.36 (#10614)
* fix llava example
* update

Yishuo Wang | 8f45e22072 | 2024-04-09 17:28:37 +08:00
fix llama2 (#10710)

Yang Wang | 5a1f446d3c | 2024-04-08 13:22:09 -07:00
support fp8 in xetla (#10555)
* support fp8 in xetla
* change name
* adjust model file
* support convert back to cpu
* factor
* fix bug
* fix style

Cengguang Zhang | 7c43ac0164 | 2024-04-08 17:48:11 +08:00
LLM: optimize llama native sdp for split qkv tensor (#10693)
* LLM: optimize llama native sdp for split qkv tensor.
* fix block real size.
* fix comment.
* fix style.
* refactor.

Cengguang Zhang | c0cd238e40 | 2024-04-08 11:43:15 +08:00
LLM: support llama2 8k input with w4a16. (#10677)
* LLM: support llama2 8k input with w4a16.
* fix comment and style.
* fix style.
* fix comments and split tensor to quantized attention forward.
* fix style.
* refactor name.
* fix style.
* fix style.
* fix style.
* refactor checker name.
* refactor native sdp split qkv tensor name.
* fix style.
* fix comment rename variables.
* fix co-existence of intermediate results.

Kai Huang | c875b3c858 | 2024-04-03 12:05:13 +08:00
Add seq len check for llama softmax upcast to fp32 (#10629)

binbin Deng | 2bbd8a1548 | 2024-04-03 09:28:04 +08:00
LLM: fix llama2 FP16 & bs>1 & autotp on PVC and ARC (#10611)

Shaojun Liu | a10f5a1b8d | 2024-04-02 16:17:56 +08:00
add python style check (#10620)
* add python style check
* fix style checks
* update runner
* add ipex-llm-finetune-qlora-cpu-k8s to manually_build workflow
* update tag to 2.1.0-SNAPSHOT

Cengguang Zhang | e567956121 | 2024-04-02 09:07:50 +08:00
LLM: add memory optimization for llama. (#10592)
* add initial memory optimization.
* fix logic.
* fix logic.
* remove env var check in mlp split.

Ruonan Wang | ea4bc450c4 | 2024-03-26 19:04:40 +08:00
LLM: add esimd sdp for pvc (#10543)
* add esimd sdp for pvc
* update
* fix
* fix batch

Xin Qiu | 1dd40b429c | 2024-03-26 08:34:00 +08:00
enable fp4 fused mlp and qkv (#10531)
* enable fp4 fused mlp and qkv
* update qwen
* update qwen2

Wang, Jian4 | 9df70d95eb | 2024-03-22 15:41:21 +08:00
Refactor bigdl.llm to ipex_llm (#24)
* Rename bigdl/llm to ipex_llm
* rm python/llm/src/bigdl
* from bigdl.llm to from ipex_llm

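In practice the rename only changes the import root, so existing bigdl-llm code migrates mechanically, as the last bullet indicates (the model id below is just an illustrative example):

```python
# Before the refactor:
#   from bigdl.llm.transformers import AutoModelForCausalLM
# After (#24):
from ipex_llm.transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf",
                                             load_in_4bit=True)
```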