Cengguang Zhang
763413b7e1
LLM: support llama split tensor for long context in transformers>=4.36. ( #10844 )
...
* LLM: support llama split tensor for long context in transformers>=4.36.
* fix dtype.
* fix style.
* fix style.
* fix style.
* fix style.
* fix dtype.
* fix style.
2024-04-23 16:13:25 +08:00
ZehuaCao
92ea54b512
Fix speculative decoding bug ( #10855 )
2024-04-23 14:28:31 +08:00
Wang, Jian4
18c032652d
LLM: Add mixtral speculative CPU example ( #10830 )
...
* init mixtral sp example
* use different prompt_format
* update output
* update
2024-04-23 10:05:51 +08:00
Yishuo Wang
fe5a082b84
add phi-2 optimization ( #10843 )
2024-04-22 18:56:47 +08:00
Guancheng Fu
47bd5f504c
[vLLM] Remove vllm-v1, refactor v2 ( #10842 )
...
* remove vllm-v1
* fix format
2024-04-22 17:51:32 +08:00
Wang, Jian4
23c6a52fb0
LLM: Fix ipex torchscript=True error ( #10832 )
...
* remove
* update
* remove torchscript
2024-04-22 15:53:09 +08:00
Yina Chen
3daad242b8
Fix No module named 'transformers.cache_utils' with transformers < 4.36 ( #10835 )
...
* update sdp condition
* update
* fix
* fix 431 error
* revert sdp & style fix
* fix
* meet comments
2024-04-22 14:05:50 +08:00
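Note: transformers.cache_utils only exists in transformers >= 4.36, so a fix of this kind amounts to guarding that import by version. A minimal, hypothetical sketch of such a guard (not the actual patch from #10835; exact fallback behavior is an assumption):

```python
# Hypothetical version guard: transformers.cache_utils is only available
# in transformers >= 4.36, so older releases must skip the import.
from packaging import version
import transformers

if version.parse(transformers.__version__) >= version.parse("4.36.0"):
    from transformers.cache_utils import Cache, DynamicCache
else:
    # assumed fallback: older releases use the legacy tuple-based KV cache path
    Cache = DynamicCache = None
```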
Guancheng Fu
caf75beef8
Disable sdpa ( #10814 )
2024-04-19 17:33:18 +08:00
Yishuo Wang
57edf2033c
fix lookahead with transformers >= 4.36 ( #10808 )
2024-04-19 16:24:56 +08:00
Ovo233
1a885020ee
Updated importing of top_k_top_p_filtering for transformers>=4.39.0 ( #10794 )
...
* In transformers>=4.39.0, the top_k_top_p_filtering function has been deprecated and moved to the Hugging Face trl package, so for versions >= 4.39.0 the function is imported from trl instead.
2024-04-19 15:34:39 +08:00
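A hypothetical sketch of the version-dependent import described in #10794 (exact import paths should be double-checked against the PR):

```python
# Sketch only: pick the import source based on the installed transformers version.
from packaging import version
import transformers

if version.parse(transformers.__version__) >= version.parse("4.39.0"):
    # deprecated in transformers and moved to the trl package
    from trl.core import top_k_top_p_filtering
else:
    from transformers import top_k_top_p_filtering
```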
Yishuo Wang
08458b4f74
remove rms norm copy ( #10793 )
2024-04-19 13:57:48 +08:00
Ruonan Wang
754b0ffecf
Fix pvc llama ( #10798 )
...
* fix
* update
2024-04-18 10:44:57 -07:00
Ruonan Wang
439c834ed3
LLM: add mixed precision for lm_head ( #10795 )
...
* add mixed_quantization
* meet code review
* update
* fix style
* meet review
2024-04-18 19:11:31 +08:00
Yina Chen
8796401b08
Support q4k in ipex-llm ( #10796 )
...
* support q4k
* update
2024-04-18 18:55:28 +08:00
Ruonan Wang
0e8aac19e3
add q6k precision in ipex-llm ( #10792 )
...
* add q6k
* add initial 16k
* update
* fix style
2024-04-18 16:52:09 +08:00
Wang, Jian4
14ca42a048
LLM: Fix moe indexes error on CPU ( #10791 )
2024-04-18 15:56:52 +08:00
Guancheng Fu
cbe7b5753f
Add vLLM[xpu] related code ( #10779 )
...
* Add ipex-llm side change
* add runnable offline_inference
* refactor to call vllm2
* Verified async server
* add new v2 example
* add README
* fix
* change dir
* refactor readme.md
* add experimental
* fix
2024-04-18 15:29:20 +08:00
Wang, Jian4
209c3501e6
LLM: Optimize qwen1.5 moe model ( #10706 )
...
* update moe block
* fix style
* enable MLP optimization
* enable kv_cache
* enable fuse rope
* enable fused qkv
* enable flash_attention
* error sdp quantize
* use old api
* use fuse
* use xetla
* fix python style
* update moe_blocks num
* fix output error
* add cpu sdpa
* update
* update
* update
2024-04-18 14:54:05 +08:00
Ziteng Zhang
ff040c8f01
LISA Finetuning Example ( #10743 )
...
* enabling xetla is only supported for qtype=SYM_INT4 or FP8E5
* LISA Finetuning Example on gpu
* update readme
* add licence
* Explain parameters of lisa & Move backend codes to src dir
* fix style
* fix style
* update readme
* support chatglm
* fix style
* fix style
* update readme
* fix
2024-04-18 13:48:10 +08:00
Yang Wang
952e517db9
use config rope_theta ( #10787 )
...
* use config rope_theta
* fix style
2024-04-17 20:39:11 -07:00
Guancheng Fu
31ea2f9a9f
Fix wrong output for Llama models on CPU ( #10742 )
2024-04-18 11:07:27 +08:00
Xin Qiu
e764f9b1b1
Disable fast fused rope on UHD ( #10780 )
...
* use decoding fast path
* update
* update
* cleanup
2024-04-18 10:03:53 +08:00
Yina Chen
ea5b373a97
Add lookahead GPU example ( #10785 )
...
* Add lookahead example
* fix style & attn mask
* fix typo
* address comments
2024-04-17 17:41:55 +08:00
Wang, Jian4
a20271ffe4
LLM: Fix yi-6b fp16 error on pvc ( #10781 )
...
* update for yi fp16
* update
* update
2024-04-17 16:49:59 +08:00
Yina Chen
766fe45222
Fix spec error caused by lookup pr ( #10777 )
...
* Fix spec error
* remove
* fix style
2024-04-17 11:27:35 +08:00
Qiyuan Gong
f2e923b3ca
Axolotl v0.4.0 support ( #10773 )
...
* Add Axolotl 0.4.0, remove legacy 0.3.0 support.
* replace is_torch_bf16_gpu_available
* Add HF_HUB_OFFLINE=1
* Move transformers out of requirement
* Refine readme and qlora.yml
2024-04-17 09:49:11 +08:00
Yina Chen
899d392e2f
Support prompt lookup in ipex-llm ( #10768 )
...
* lookup init
* add lookup
* fix style
* remove redundant code
* change param name
* fix style
2024-04-16 16:52:38 +08:00
binbin Deng
0a62933d36
LLM: fix qwen AutoTP ( #10766 )
2024-04-16 09:56:17 +08:00
Cengguang Zhang
3e2662c87e
LLM: fix get env KV_CACHE_ALLOC_BLOCK_LENGTH type. ( #10771 )
2024-04-16 09:32:30 +08:00
binbin Deng
3d561b60ac
LLM: add enable_xetla parameter for optimize_model API ( #10753 )
2024-04-15 12:18:25 +08:00
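A hypothetical usage sketch of the new parameter; the model id, low_bit value, and keyword handling below are assumptions for illustration only and should be checked against #10753 (sym_int4 matches the xetla restriction noted in the LISA example entry above):

```python
from transformers import AutoModelForCausalLM
from ipex_llm import optimize_model

# Assumed usage of the enable_xetla flag added in #10753; not the PR's own example.
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf",
                                             torch_dtype="auto")
model = optimize_model(model, low_bit="sym_int4", enable_xetla=True)
```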
binbin Deng
c3fc8f4b90
LLM: add bs limitation for llama softmax upcast to fp32 ( #10752 )
2024-04-12 15:40:25 +08:00
Yishuo Wang
8086554d33
use new fp16 sdp in llama and mistral ( #10734 )
2024-04-12 10:49:02 +08:00
Yang Wang
019293e1b9
Fuse MOE indexes computation ( #10716 )
...
* try moe
* use c++ cpu to compute indexes
* fix style
2024-04-11 10:12:55 -07:00
binbin Deng
70ed9397f9
LLM: fix AttributeError of FP16Linear ( #10740 )
2024-04-11 17:03:56 +08:00
Cengguang Zhang
4b024b7aac
LLM: optimize chatglm2 8k input. ( #10723 )
...
* LLM: optimize chatglm2 8k input.
* rename.
2024-04-10 16:59:06 +08:00
Wang, Jian4
c9e6d42ad1
LLM: Fix chatglm3-6b-32k error ( #10719 )
...
* fix chatglm3-6b-32k
* update style
2024-04-10 11:24:06 +08:00
Keyan (Kyrie) Zhang
585c174e92
Read the value of KV_CACHE_ALLOC_BLOCK_LENGTH from the environment variables ( #10707 )
...
* Read the value of KV_CACHE_ALLOC_BLOCK_LENGTH from the environment variables.
* Fix style
2024-04-10 10:48:46 +08:00
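Together with the later type fix in #10771 (listed above), this change amounts to reading the block length from the environment and casting it to an integer. A minimal sketch, with the default value assumed for illustration:

```python
import os

# Sketch only: the fallback of 256 is an assumed default, not necessarily
# the value ipex-llm ships with.
KV_CACHE_ALLOC_BLOCK_LENGTH = int(os.environ.get("KV_CACHE_ALLOC_BLOCK_LENGTH", 256))
```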
Jiao Wang
878a97077b
Fix llava example to support transformers 4.36 ( #10614 )
...
* fix llava example
* update
2024-04-09 13:47:07 -07:00
Zhicun
b4147a97bb
Fix dtype mismatch error ( #10609 )
...
* fix llama
* fix
* fix code style
* add torch type in model.py
---------
Co-authored-by: arda <arda@arda-arc19.sh.intel.com>
2024-04-09 17:50:33 +08:00
Yishuo Wang
8f45e22072
fix llama2 ( #10710 )
2024-04-09 17:28:37 +08:00
Yishuo Wang
e438f941f2
disable rwkv5 fp16 ( #10699 )
2024-04-09 16:42:11 +08:00
binbin Deng
44922bb5c2
LLM: support baichuan2-13b using AutoTP ( #10691 )
2024-04-09 14:06:01 +08:00
Yina Chen
c7422712fc
mistral 4.36 use fp16 sdp ( #10704 )
2024-04-09 13:50:33 +08:00
Ovo233
dcb2038aad
Enable optimization for sentence_transformers ( #10679 )
...
* enable optimization for sentence_transformers
* fix python style check failure
2024-04-09 12:33:46 +08:00
Yang Wang
5a1f446d3c
support fp8 in xetla ( #10555 )
...
* support fp8 in xetla
* change name
* adjust model file
* support convert back to cpu
* factor
* fix bug
* fix style
2024-04-08 13:22:09 -07:00
Cengguang Zhang
7c43ac0164
LLM: optimize llama native sdp for split qkv tensor ( #10693 )
...
* LLM: optimize llama native sdp for split qkv tensor.
* fix block real size.
* fix comment.
* fix style.
* refactor.
2024-04-08 17:48:11 +08:00
Xin Qiu
1274cba79b
stablelm fp8 kv cache ( #10672 )
...
* stablelm fp8 kvcache
* update
* fix
* change to fp8 matmul
* fix style
* fix
* fix
* meet code review
* add comment
2024-04-08 15:16:46 +08:00
Cengguang Zhang
c0cd238e40
LLM: support llama2 8k input with w4a16. ( #10677 )
...
* LLM: support llama2 8k input with w4a16.
* fix comment and style.
* fix style.
* fix comments and split tensor to quantized attention forward.
* fix style.
* refactor name.
* fix style.
* fix style.
* fix style.
* refactor checker name.
* refactor native sdp split qkv tensor name.
* fix style.
* fix comment rename variables.
* fix co-existence of intermediate results.
2024-04-08 11:43:15 +08:00
Wang, Jian4
47cabe8fcc
LLM: Fix no return_last_logit when running bigdl_ipex chatglm3 ( #10678 )
...
* fix no return_last_logits
* update only for chatglm
2024-04-07 15:27:58 +08:00
Zhicun
9d8ba64c0d
Llamaindex: add tokenizer_id and support chat ( #10590 )
...
* add tokenizer_id
* fix
* modify
* add from_model_id and from_model_id_low_bit
* fix typo and add comment
* fix python code style
---------
Co-authored-by: pengyb2001 <284261055@qq.com>
2024-04-07 13:51:34 +08:00