Yina Chen
7cd6ec9723
MiniCPM-V support compresskv ( #11779 )
* fix check error
* fix other models
* remove print
2024-08-13 19:03:40 +08:00
Qiyuan Gong
3998de14f0
Fix mistral forward_qkv in q4_0 ( #11781 )
* Fix mistral forward_qkv without self.rotary_emb.base in q4_0.
* Replace apply_rotary_pos_emb_no_cache_xpu with rotary_half_inplaced.
* Revert https://github.com/intel-analytics/ipex-llm/pull/11765
2024-08-13 16:48:19 +08:00
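The rotary path swapped in here (rotary_half_inplaced) is presumably the standard half-rotation RoPE. A minimal sketch, with illustrative function names rather than ipex-llm's exact internals:

```python
import torch

def rotate_half(x: torch.Tensor) -> torch.Tensor:
    # Split the head dim in half and rotate: (x1, x2) -> (-x2, x1).
    x1, x2 = x.chunk(2, dim=-1)
    return torch.cat((-x2, x1), dim=-1)

def apply_rotary_half(q, k, cos, sin):
    # Half-rotation RoPE: q' = q*cos + rotate_half(q)*sin, likewise for k.
    return q * cos + rotate_half(q) * sin, k * cos + rotate_half(k) * sin
```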
Heyang Sun
70c828b87c
deepspeed zero3 QLoRA finetuning ( #11625 )
* deepspeed zero3 QLoRA finetuning
* Update convert.py
* Update low_bit_linear.py
* Update utils.py
* Update qlora_finetune_llama2_13b_arch_2_card.sh
* Update low_bit_linear.py
* Update alpaca_qlora_finetuning.py
* Update low_bit_linear.py
* Update utils.py
* Update convert.py
* Update alpaca_qlora_finetuning.py
* Update alpaca_qlora_finetuning.py
* Update low_bit_linear.py
* Update deepspeed_zero3.json
* Update qlora_finetune_llama2_13b_arch_2_card.sh
* Update low_bit_linear.py
* Update low_bit_linear.py
* Update utils.py
* fix style
* fix style
* Update alpaca_qlora_finetuning.py
* Update qlora_finetune_llama2_13b_arch_2_card.sh
* Update convert.py
* Update low_bit_linear.py
* Update model.py
* Update alpaca_qlora_finetuning.py
* Update low_bit_linear.py
* Update low_bit_linear.py
* Update low_bit_linear.py
2024-08-13 16:15:29 +08:00
Yishuo Wang
a184b120c9
fix minicpm-v 2.5 ( #11780 )
2024-08-13 16:14:00 +08:00
Qiyuan Gong
a88c132e54
Reduce Mistral softmax memory only in low memory mode ( #11775 )
* Reduce Mistral softmax memory only in low memory mode
2024-08-13 14:50:54 +08:00
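A sketch of what gating a memory-saving softmax behind a low-memory mode can look like; IPEX_LLM_LOW_MEM is an assumed flag name, and the in-place variant simply avoids allocating a second attention-sized tensor:

```python
import os
import torch

LOW_MEM = os.environ.get("IPEX_LLM_LOW_MEM", "0") == "1"  # assumed flag name

def attn_softmax(scores: torch.Tensor) -> torch.Tensor:
    if LOW_MEM:
        # In-place softmax: no second [batch, heads, q, k] tensor is allocated.
        scores.sub_(scores.amax(dim=-1, keepdim=True))
        scores.exp_()
        scores.div_(scores.sum(dim=-1, keepdim=True))
        return scores
    return torch.softmax(scores, dim=-1)
```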
Yishuo Wang
aa861df066
use new fp32 softmax kernel ( #11776 )
2024-08-13 14:48:11 +08:00
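The kernel itself lives in device code, but the numerics it targets are the usual fp32 upcast. A sketch of the equivalent eager-mode computation:

```python
import torch

def softmax_fp32(scores: torch.Tensor) -> torch.Tensor:
    # Upcast to fp32 for the reduction, then cast back: fp16 exponentials
    # can overflow/underflow, fp32 keeps the softmax numerically stable.
    return torch.softmax(scores.float(), dim=-1).to(scores.dtype)
```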
binbin Deng
23d3acdc77
Add experimental support of fused decoder layer for llama2 ( #11768 )
2024-08-13 14:41:36 +08:00
Yishuo Wang
a1eb793f70
optimize minicpm v 2_6 first token perf ( #11770 )
2024-08-13 09:51:18 +08:00
Yina Chen
841dbcdf3a
Fix compresskv with lookahead issue ( #11767 )
* fix compresskv + lookahead attn_mask qwen2
* support llama chatglm
* support mistral & chatglm
* address comments
* revert run.py
2024-08-12 18:53:55 +08:00
Xu, Shuo
1b05caba2b
Set mistral fuse rope to false except fp6 & fp16 ( #11765 )
* set mistral fuse rope to false except fp6 & fp16
* lint
* lint
---------
Co-authored-by: ATMxsp01 <shou.xu@intel.com>
2024-08-12 17:25:07 +08:00
Ruonan Wang
8db34057b4
optimize lookahead init time ( #11769 )
2024-08-12 17:19:12 +08:00
Yishuo Wang
57d177738d
optimize minicpm-v-2_6 repetition penalty ( #11763 )
2024-08-12 14:10:10 +08:00
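The optimized kernel is not shown in the log, but the quantity it computes is the standard CTRL-style repetition penalty. A reference sketch of that algorithm:

```python
import torch

def apply_repetition_penalty(logits: torch.Tensor,
                             generated_ids: torch.Tensor,
                             penalty: float = 1.05) -> torch.Tensor:
    # CTRL-style penalty: damp the logits of every token already generated.
    # logits: [batch, vocab]; generated_ids: [batch, seq].
    score = logits.gather(-1, generated_ids)
    score = torch.where(score > 0, score / penalty, score * penalty)
    return logits.scatter(-1, generated_ids, score)
```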
Ruonan Wang
66fe2ee464
initial support of IPEX_LLM_PERFORMANCE_MODE ( #11754 )
* add perf mode
* update
* fix style
2024-08-09 19:04:09 +08:00
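A plausible opt-in sketch, assuming the mode is toggled through the environment variable named in the title and read before the model is loaded:

```python
import os
os.environ["IPEX_LLM_PERFORMANCE_MODE"] = "1"  # assumed opt-in; set before load

from ipex_llm.transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf",  # any supported model
    load_in_4bit=True,
)
```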
Yina Chen
4b9c57cc60
Support compress kv with lookahead ( #11752 )
* support compress kv with lookahead
* enough kv miss param
2024-08-09 17:39:57 +08:00
Yishuo Wang
93455aac09
fix minicpm V 2.6 repeat output ( #11753 )
2024-08-09 17:39:24 +08:00
Ruonan Wang
7e917d6cfb
fix gptq of llama ( #11749 )
* fix gptq of llama
* small fix
2024-08-09 16:39:25 +08:00
Yina Chen
dd46c141bd
Phi3 support compresskv ( #11733 )
* phi3 support compresskv
* fix phi3 mtl error
* fix conflict with quant kv
* fix abnormal on mtl
* fix style
* use sliding window size to compress kv
* support sliding window
* fix style
* fix style
* temp: partial support quant kv
* support quant kv with compress kv, todo: model check
* temp
* fix style
* fix style
* remove prepare
* address comment
* default -> 1.8k
2024-08-09 15:43:43 +08:00
Qiyuan Gong
d8808cc2e3
Mistral apply_rotary_pos_emb_no_cache_xpu use rope_theta from config ( #11747 )
mistral-7B-instruct-v0.2 and mistral-7B-instruct-v0.1 use different rope_theta values (v0.2 uses 1e6, v0.1 uses 1e4). Pass self.config.rope_theta into apply_rotary_pos_emb_no_cache_xpu to avoid output differences.
2024-08-09 10:35:51 +08:00
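A sketch of deriving the RoPE tables from the configured base instead of a hard-coded 10000.0; rope_cos_sin is an illustrative helper, not the ipex-llm function:

```python
import torch

def rope_cos_sin(position_ids: torch.Tensor, head_dim: int, rope_theta: float):
    # Build RoPE tables from the model's configured base; v0.1 and v0.2
    # of Mistral-7B-Instruct differ exactly here.
    inv_freq = 1.0 / (rope_theta ** (torch.arange(0, head_dim, 2).float() / head_dim))
    freqs = position_ids.float()[..., None] * inv_freq  # [..., seq, head_dim // 2]
    emb = torch.cat((freqs, freqs), dim=-1)
    return emb.cos(), emb.sin()

# usage: cos, sin = rope_cos_sin(position_ids, head_dim, self.config.rope_theta)
```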
Yishuo Wang
54cc9353db
support and optimize minicpm-v-2_6 ( #11738 )
2024-08-07 18:21:16 +08:00
Yina Chen
e956e71fc1
fix conflict with quant kv ( #11737 )
2024-08-07 18:10:30 +08:00
Ruonan Wang
00a5574c8a
Use merge_qkv to replace fused_qkv for llama2 ( #11727 )
* update 4.38
* support new versions
* update
* fix style
* fix style
* update rope
* temp test sdpa
* fix style
* fix cpu ut
2024-08-07 18:04:01 +08:00
Yina Chen
d2abc9711b
Fix MTL 4k input qwen2 compresskv error ( #11734 )
* fix
* fix style
2024-08-07 16:21:57 +08:00
Yina Chen
a71ae7c22b
Support minicpm compresskv & modify default compresskv config & default enable compresskv on mtl 2.5k~4.5k ( #11726 )
* support minicpm & modify default & default enable on mtl 2.5k~4.5k
* fix style
2024-08-07 11:35:39 +08:00
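A purely illustrative sketch of such a default gate, with the thresholds taken from the commit title; the function name and exact bounds are assumptions:

```python
def default_enable_compress_kv(seq_len: int, is_mtl: bool) -> bool:
    # Illustrative gate only: on Meteor Lake, compress-kv defaults on for
    # prompts of roughly 2.5k-4.5k tokens.
    return is_mtl and 2500 <= seq_len <= 4500
```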
Yishuo Wang
c093f7d980
fix phi3 ( #11729 )
2024-08-07 09:39:46 +08:00
Yishuo Wang
929675aa6b
support latest phi3 ( #11721 )
2024-08-06 15:52:55 +08:00
Yishuo Wang
bbdff6edeb
optimize internvl2 4b performance ( #11720 )
2024-08-06 14:25:08 +08:00
Yishuo Wang
f44b732aa8
support internvl2-4b ( #11718 )
2024-08-06 13:36:32 +08:00
Ruonan Wang
aa98ef96fe
change mixed_precision to q6_k ( #11706 )
2024-08-02 15:55:16 +08:00
Xiangyu Tian
1baa3efe0e
Optimizations for Pipeline Parallel Serving ( #11702 )
Optimizations for Pipeline Parallel Serving
2024-08-02 12:06:59 +08:00
Yina Chen
8d1e0bd2f4
add sdp causal support in llama ( #11705 )
2024-08-02 10:27:40 +08:00
Ruonan Wang
736a7ef72e
add sdp_causal for mistral 4.36 ( #11686 )
* add sdp_causal for mistral
* fix
* update
2024-08-01 18:57:31 +08:00
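Causal SDPA maps onto PyTorch's fused attention kernel; with is_causal=True the kernel applies the triangular mask itself, so no [seq, seq] mask tensor is ever materialized. A minimal example:

```python
import torch
import torch.nn.functional as F

# q, k, v: [batch, heads, seq, head_dim]
q = torch.randn(1, 32, 1024, 128)
k, v = torch.randn_like(q), torch.randn_like(q)
out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
```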
Yina Chen
45c730ff39
Chatglm support compresskv ( #11690 )
* chatglm4 support compresskv
* fix
* fix style
* support chatglm2
* fix quantkv conflict
* fix style
2024-08-01 18:20:20 +08:00
Guancheng Fu
afeca38a47
Fix import vllm condition ( #11682 )
2024-07-31 13:50:01 +08:00
Ruonan Wang
54bf3a23a6
add fallback for unsupported k-quants ( #11691 )
* add fallback
* fix style
* fix
2024-07-31 11:39:58 +08:00
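A sketch of the fallback idea: k-quants pack weights into 256-element superblocks, so shapes whose row width is not a multiple of 256 need a 32-element block format instead. Helper name and chosen fallback type are illustrative:

```python
def pick_qtype(in_features: int, requested: str = "q6_k") -> str:
    # Illustrative fallback: k-quants need rows divisible by 256;
    # otherwise drop back to a 32-element block format such as q4_0.
    if requested.endswith("_k") and in_features % 256 != 0:
        return "q4_0"
    return requested
```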
Yina Chen
670ad887fc
Qwen support compress kv ( #11680 )
* Qwen support compress kv
* fix style
* fix
2024-07-30 11:16:42 +08:00
hxsz1997
9b36877897
disable default quantize_kv of GQA on MTL ( #11679 )
* disable default quantize_kv of gqa in mtl
* fix style
* fix style
* fix style
* fix style
* fix style
* fix style
2024-07-30 09:38:46 +08:00
Yishuo Wang
c02003925b
add mlp for gemma2 ( #11678 )
2024-07-29 16:10:23 +08:00
Yishuo Wang
6f999e6e90
add sdp for gemma2 ( #11677 )
2024-07-29 15:15:47 +08:00
Ruonan Wang
c11d5301d7
add sdp fp8 for llama ( #11671 )
* add sdp fp8 for llama
* fix style
* refactor
2024-07-29 13:46:22 +08:00
Yishuo Wang
7f88ce23cd
add more gemma2 optimization ( #11673 )
2024-07-29 11:13:00 +08:00
Yishuo Wang
3e8819734b
add basic gemma2 optimization ( #11672 )
2024-07-29 10:46:51 +08:00
Heyang Sun
ba01b85c13
empty cache only for the 1st token, not for subsequent tokens, to speed up ( #11665 )
2024-07-26 16:46:21 +08:00
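A sketch of the gating, assuming an XPU build of PyTorch: empty_cache() returns allocator blocks to the driver and is costly, so it pays to run it once after prefill and skip it in the decode loop.

```python
import torch

def maybe_empty_cache(is_first_token: bool) -> None:
    # Only release cached allocator blocks after the prefill (1st token).
    if is_first_token and hasattr(torch, "xpu") and torch.xpu.is_available():
        torch.xpu.empty_cache()
```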
Yina Chen
fc7f8feb83
Support compress kv ( #11642 )
* mistral snapkv
* update
* mtl update
* update
* update
* update
* add comments
* style fix
* fix style
* support llama
* llama use compress kv
* support mistral 4.40
* fix style
* support diff transformers versions
* move snapkv util to kv
* fix style
* meet comments & small fix
* revert all in one
* fix indent
---------
Co-authored-by: leonardozcm <leonardo1997zcm@gmail.com>
2024-07-26 16:02:00 +08:00
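The commit body mentions SnapKV; a rough sketch of that selection idea (shapes are simplified, the pooling step is omitted, and all names are illustrative):

```python
import torch

def snapkv_compress(key, value, attn_probs, window: int = 32, keep: int = 1024):
    # SnapKV-style selection: score each prefix position by the attention it
    # receives from the last `window` queries, keep the top `keep` positions
    # plus the window itself. key/value: [b, h, s, d]; attn_probs: [b, h, q, s].
    votes = attn_probs[:, :, -window:, :-window].sum(dim=2)  # [b, h, s-window]
    idx = votes.topk(min(keep, votes.shape[-1]), dim=-1).indices.sort(dim=-1).values
    idx = idx[..., None].expand(-1, -1, -1, key.shape[-1])
    k_sel = torch.gather(key[:, :, :-window], 2, idx)
    v_sel = torch.gather(value[:, :, :-window], 2, idx)
    return (torch.cat([k_sel, key[:, :, -window:]], dim=2),
            torch.cat([v_sel, value[:, :, -window:]], dim=2))
```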
Yishuo Wang
6bcdc6cc8f
fix qwen2 cpu ( #11663 )
2024-07-26 13:41:51 +08:00
Guancheng Fu
a4d30a8211
Change logic for detecting if vllm is available ( #11657 )
* fix
* fix
2024-07-25 15:24:19 +08:00
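One common way to probe availability without importing the package (and so without triggering its heavy device initialization) is importlib's find_spec; a sketch:

```python
import importlib.util

def is_vllm_available() -> bool:
    # find_spec checks whether the package is installed without importing it.
    return importlib.util.find_spec("vllm") is not None
```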
Xiangyu Tian
4499d25c26
LLM: Fix ParallelLMHead convert in vLLM cpu ( #11654 )
2024-07-25 13:07:19 +08:00
binbin Deng
777e61d8c8
Fix qwen2 & int4 on NPU ( #11646 )
2024-07-24 13:14:39 +08:00
Yishuo Wang
1b3b46e54d
fix chatglm new model ( #11639 )
2024-07-23 13:44:56 +08:00
Xiangyu Tian
060792a648
LLM: Refine Pipeline Parallel FastAPI ( #11587 )
Refine Pipeline Parallel FastAPI
2024-07-22 15:52:05 +08:00
Wang, Jian4
1eed0635f2
Add lightweight serving and support TGI parameters ( #11600 )
* init tgi request
* update openai api
* update for pp
* update and add readme
* add to docker
* add start bash
* update
* update
* update
2024-07-19 13:15:56 +08:00
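A hedged sketch of what a lightweight endpoint accepting TGI-style parameters can look like; the parameter names follow text-generation-inference's /generate API, while the handler body is purely illustrative:

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class TGIParams(BaseModel):
    # Subset of TGI-style generation parameters.
    max_new_tokens: int = 64
    temperature: float = 1.0
    top_p: float = 1.0
    repetition_penalty: float = 1.0

class GenerateRequest(BaseModel):
    inputs: str
    parameters: TGIParams = TGIParams()

@app.post("/generate")
async def generate(req: GenerateRequest):
    # A real handler would map req.parameters onto the backend's sampling
    # arguments and call model.generate(...); this stub just echoes input.
    return {"generated_text": f"<echo> {req.inputs}"}
```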