Qiyuan Gong | f2e923b3ca | 2024-04-17 09:49:11 +08:00
Axolotl v0.4.0 support (#10773)
* Add Axolotl 0.4.0, remove legacy 0.3.0 support.
* Replace is_torch_bf16_gpu_available.
* Add HF_HUB_OFFLINE=1.
* Move transformers out of requirements.
* Refine README and qlora.yml.

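For context on the HF_HUB_OFFLINE=1 change above: this is a standard Hugging Face Hub environment variable, and a minimal sketch of the usual pattern follows (the surrounding code is illustrative, not taken from the PR):

```python
import os

# HF_HUB_OFFLINE=1 tells huggingface_hub/transformers to resolve models
# and tokenizers from the local cache only, without network access; it
# must be set before those libraries are imported.
os.environ["HF_HUB_OFFLINE"] = "1"

from transformers import AutoTokenizer  # cache-only from here on
```
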
Yina Chen | 899d392e2f | 2024-04-16 16:52:38 +08:00
Support prompt lookup in ipex-llm (#10768)
* lookup init
* add lookup
* fix style
* remove redundant code
* change param name
* fix style

binbin Deng | 0a62933d36 | 2024-04-16 09:56:17 +08:00
LLM: fix qwen AutoTP (#10766)

Cengguang Zhang | 3e2662c87e | 2024-04-16 09:32:30 +08:00
LLM: fix get env KV_CACHE_ALLOC_BLOCK_LENGTH type. (#10771)

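This commit and #10707 below both concern reading KV_CACHE_ALLOC_BLOCK_LENGTH from the environment; since os.environ yields strings, the value needs an explicit cast, which is presumably what the type fix addresses. A minimal sketch of the pattern, with an assumed default:

```python
import os

# Environment variables always arrive as strings, so the value must be
# cast to int before use as a block length; the default of 256 here is
# illustrative, not necessarily the library's actual default.
KV_CACHE_ALLOC_BLOCK_LENGTH = int(os.environ.get("KV_CACHE_ALLOC_BLOCK_LENGTH", 256))
```
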
binbin Deng | c3fc8f4b90 | 2024-04-12 15:40:25 +08:00
LLM: add bs limitation for llama softmax upcast to fp32 (#10752)

Yishuo Wang | 8086554d33 | 2024-04-12 10:49:02 +08:00
use new fp16 sdp in llama and mistral (#10734)

Yang Wang | 019293e1b9 | 2024-04-11 10:12:55 -07:00
Fuse MoE index computation (#10716)
* try moe
* use c++ cpu to compute indexes
* fix style

binbin Deng | 70ed9397f9 | 2024-04-11 17:03:56 +08:00
LLM: fix AttributeError of FP16Linear (#10740)

Cengguang Zhang | 4b024b7aac | 2024-04-10 16:59:06 +08:00
LLM: optimize chatglm2 8k input. (#10723)
* rename.

Wang, Jian4 | c9e6d42ad1 | 2024-04-10 11:24:06 +08:00
LLM: Fix chatglm3-6b-32k error (#10719)
* fix chatglm3-6b-32k
* update style

Keyan (Kyrie) Zhang | 585c174e92 | 2024-04-10 10:48:46 +08:00
Read the value of KV_CACHE_ALLOC_BLOCK_LENGTH from the environment (#10707)
* Fix style

Jiao Wang | 878a97077b | 2024-04-09 13:47:07 -07:00
Fix llava example to support transformers 4.36 (#10614)
* fix llava example
* update

Zhicun | b4147a97bb | 2024-04-09 17:50:33 +08:00
Fix dtype mismatch error (#10609)
* fix llama
* fix
* fix code style
* add torch type in model.py
Co-authored-by: arda <arda@arda-arc19.sh.intel.com>

Yishuo Wang | 8f45e22072 | 2024-04-09 17:28:37 +08:00
fix llama2 (#10710)

Yishuo Wang | e438f941f2 | 2024-04-09 16:42:11 +08:00
disable rwkv5 fp16 (#10699)

binbin Deng | 44922bb5c2 | 2024-04-09 14:06:01 +08:00
LLM: support baichuan2-13b using AutoTP (#10691)

Yina Chen | c7422712fc | 2024-04-09 13:50:33 +08:00
mistral 4.36 use fp16 sdp (#10704)

Ovo233 | dcb2038aad | 2024-04-09 12:33:46 +08:00
Enable optimization for sentence_transformers (#10679)
* fix python style check failure

Yang Wang | 5a1f446d3c | 2024-04-08 13:22:09 -07:00
support fp8 in xetla (#10555)
* change name
* adjust model file
* support convert back to cpu
* factor
* fix bug
* fix style

Cengguang Zhang | 7c43ac0164 | 2024-04-08 17:48:11 +08:00
LLM: optimize llama native sdp for split qkv tensor (#10693)
* fix block real size.
* fix comment.
* fix style.
* refactor.

Xin Qiu | 1274cba79b | 2024-04-08 15:16:46 +08:00
stablelm fp8 kv cache (#10672)
* update
* fix
* change to fp8 matmul
* fix style
* fix
* meet code review
* add comment

Cengguang Zhang | c0cd238e40 | 2024-04-08 11:43:15 +08:00
LLM: support llama2 8k input with w4a16. (#10677)
* fix comment and style.
* fix style.
* fix comments and split tensor to quantized attention forward.
* fix style.
* refactor name.
* fix style.
* refactor checker name.
* refactor native sdp split qkv tensor name.
* fix style.
* fix comment, rename variables.
* fix coexistence of intermediate results.

Wang, Jian4 | 47cabe8fcc | 2024-04-07 15:27:58 +08:00
LLM: Fix missing return_last_logit when running bigdl_ipex chatglm3 (#10678)
* fix missing return_last_logit
* update only for chatglm

Cengguang Zhang | 1a9b8204a4 | 2024-04-07 09:39:21 +08:00
LLM: support int4 fp16 chatglm2-6b 8k input. (#10648)

Jiao Wang | 69bdbf5806 | 2024-04-05 15:08:13 -07:00
Fix vllm print error message issue (#10664)
* update chatglm readme
* Add condition to invalidInputError
* update
* style

Xin Qiu | 4c3e493b2d | 2024-04-03 22:15:32 +08:00
fix stablelm2 1.6b (#10656)
* meet code review

Yishuo Wang | 702e686901 | 2024-04-03 15:27:02 +08:00
optimize starcoder normal kv cache (#10642)

Xin Qiu | 3a9ab8f1ae | 2024-04-03 15:08:12 +08:00
fix stablelm logits diff (#10636)
* fix logits diff
* Small fixes
Co-authored-by: Yuwen Hu <yuwen.hu@intel.com>

Kai Huang | c875b3c858 | 2024-04-03 12:05:13 +08:00
Add seq len check for llama softmax upcast to fp32 (#10629)

Jiao Wang | 23e33a0ca1 | 2024-04-02 18:41:38 -07:00
Fix qwen-vl style (#10633)
* update

binbin Deng | 2bbd8a1548 | 2024-04-03 09:28:04 +08:00
LLM: fix llama2 FP16 & bs>1 & autotp on PVC and ARC (#10611)

Jiao Wang | 654dc5ba57 | 2024-04-02 12:17:30 -07:00
Fix Qwen-VL example problem (#10582)
* update

Yuwen Hu | fd384ddfb8 | 2024-04-02 18:58:38 +08:00
Optimize StableLM (#10619)
* Initial commit for stablelm optimizations
* Small style fix
* add dependency
* Add mlp optimizations
* Small fix
* add attention forward
* Remove quantize kv for now as head_dim=80
* Add merged qkv
* fix license
* Python style fix
Co-authored-by: qiuxin2012 <qiuxin2012cs@gmail.com>

Yishuo Wang | ba8cc6bd68 | 2024-04-02 17:16:29 +08:00
optimize starcoder2-3b (#10625)

Shaojun Liu | a10f5a1b8d | 2024-04-02 16:17:56 +08:00
add python style check (#10620)
* fix style checks
* update runner
* add ipex-llm-finetune-qlora-cpu-k8s to manually_build workflow
* update tag to 2.1.0-SNAPSHOT

Cengguang Zhang | 58b57177e3 | 2024-04-02 15:41:08 +08:00
LLM: support bigdl quantize kv cache env and add warning. (#10623)
* fix style.
* fix comments.

Kai Huang | 0a95c556a1 | 2024-04-02 09:21:38 +08:00
Fix starcoder first token perf (#10612)
* add bias check
* update

Cengguang Zhang | e567956121 | 2024-04-02 09:07:50 +08:00
LLM: add memory optimization for llama. (#10592)
* add initial memory optimization.
* fix logic.
* remove env var check in mlp split.

Ruonan Wang | bfc1caa5e5 | 2024-04-01 13:13:13 +08:00
LLM: support iq1_s for llama2-70b-hf (#10596)

Yishuo Wang | 437a349dd6 | 2024-03-29 17:56:45 +08:00
fix rwkv with pip installer (#10591)

Ruonan Wang | 0136fad1d4 | 2024-03-29 09:43:55 +08:00
LLM: support iq1_s (#10564)
* init version
* update utils
* remove unused code

Qiyuan Gong | f4537798c1 | 2024-03-29 09:43:42 +08:00
Enable kv cache quantization by default for flex when 1 < batch <= 8 (#10584)
* Change upper bound from <8 to <=8.

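A minimal sketch of the gating condition described in #10584; the helper name and flag are hypothetical, and only the 1 < batch <= 8 bound comes from the commit:

```python
def quantize_kv_by_default(batch_size: int, is_flex_gpu: bool) -> bool:
    # Hypothetical helper: enable kv cache quantization by default on
    # Flex GPUs for 1 < batch <= 8; the follow-up commit relaxed the
    # upper bound from batch < 8 to batch <= 8.
    return is_flex_gpu and 1 < batch_size <= 8
```
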
Cengguang Zhang | b44f7adbad | 2024-03-28 22:55:48 +08:00
LLM: Disable esimd sdp for PVC GPU when batch size > 1 (#10579)
* fix logic.
* avoid calling get device name twice.

Xin Qiu | 5963239b46 | 2024-03-28 17:05:49 +08:00
Fix qwen's position_ids not being long enough (#10572)
* fix position_ids

ZehuaCao | 52a2135d83 | 2024-03-28 13:54:40 +08:00
Replace ipex with ipex-llm (#10554)
* fix ipex with ipex_llm
* update

binbin Deng | 92dfed77be | 2024-03-28 09:35:48 +08:00
LLM: fix abnormal output of fp16 deepspeed autotp (#10558)

Xiangyu Tian | 51d34ca68e | 2024-03-27 18:21:07 +08:00
Fix wrong import in speculative (#10562)

Ruonan Wang | ea4bc450c4 | 2024-03-26 19:04:40 +08:00
LLM: add esimd sdp for pvc (#10543)
* update
* fix
* fix batch

Xiangyu Tian | 11550d3f25 | 2024-03-26 17:47:10 +08:00
LLM: Add length check for IPEX-CPU speculative decoding (#10529)

Yishuo Wang | 69a28d6b4c | 2024-03-26 16:01:00 +08:00
fix chatglm (#10540)

binbin Deng | 0a3e4e788f | 2024-03-26 10:55:44 +08:00
LLM: fix mistral hidden_size setting for deepspeed autotp (#10527)

Xin Qiu | 1dd40b429c | 2024-03-26 08:34:00 +08:00
enable fp4 fused mlp and qkv (#10531)
* update qwen
* update qwen2

Wang, Jian4 | 9df70d95eb | 2024-03-22 15:41:21 +08:00
Refactor bigdl.llm to ipex_llm (#24)
* Rename bigdl/llm to ipex_llm
* rm python/llm/src/bigdl
* change import statements from bigdl.llm to ipex_llm

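For reference, a sketch of the import migration that #24 applies across the tree; the class and load arguments shown are a typical usage assumption, not taken from the PR itself:

```python
# Before the refactor (bigdl-llm):
#   from bigdl.llm.transformers import AutoModelForCausalLM
# After the refactor (ipex-llm):
from ipex_llm.transformers import AutoModelForCausalLM

# Typical 4-bit load; the model path and flag are illustrative.
model = AutoModelForCausalLM.from_pretrained("path/to/model", load_in_4bit=True)
```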