binbin Deng
bec00e2015
Improve baichuan2 NPU performance ( #11942 )
2024-08-27 18:37:08 +08:00
Zijie Li
90f692937d
Update npu baichuan2 ( #11939 )
2024-08-27 16:56:26 +08:00
Jiao Wang
b4b6ddf73c
NPU Baichuan2 Multi-Process example ( #11928 )
2024-08-27 15:25:49 +08:00
SONG Ge
e211a5b076
update minicpm to meet latest refactor ( #11937 )
2024-08-27 15:08:01 +08:00
binbin Deng
7c8c9a0670
Update benchmark script for NPU ( #11932 )
2024-08-27 14:41:14 +08:00
Zijie Li
6c3eb1e1e8
refactor from_pretrained API for NPU ( #11927 )
2024-08-27 09:50:30 +08:00
Xiangyu Tian
7ca557aada
LLM: Fix vLLM CPU convert error ( #11926 )
2024-08-27 09:22:19 +08:00
Yuwen Hu
c1d07bc626
Support streaming for lookup generation ( #11922 )
* Support streaming for lookup generation
* Small update
* Style fixes
* Add original generate fallback for batch inference and beam search; support input length threshold judgement for direct input with input_ids
* Fix lookup stream generate with eos token
* Small fixes
* Small fix
* index fix
* Small fix
2024-08-26 19:33:31 +08:00
SONG Ge
019f725d4d
[NPU] Add support for running mp minicpm model on npu ( #11909 )
* add initial support for npu minicpm mp
* fix minicpm-1b abnormal output error
2024-08-26 17:52:55 +08:00
Yuwen Hu
24c279e0ae
Update IPEX_LLM_PERFORMANCE_MODE with input length threshold ( #11908 )
* Update IPEX_LLM_PERFORMANCE_MODE with input length threshold
* Update based on comments. Add judgement for inputs_embeds
* Fix for benchmarking purposes
* Update based on comments
* Small fix
2024-08-23 20:49:15 +08:00
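The commit above gates IPEX_LLM_PERFORMANCE_MODE behind an input length threshold. A minimal sketch of that kind of check, assuming a hypothetical helper (the function name and threshold value are illustrative, not the actual ipex-llm internals):

```python
import os

def performance_mode_enabled(input_length: int, threshold: int = 256) -> bool:
    """Hypothetical gate: only enable performance mode when the env var is
    set and the prompt is short enough, mirroring the judgement in #11908."""
    if os.environ.get("IPEX_LLM_PERFORMANCE_MODE", "0") != "1":
        return False
    # Input-length judgement: long prompts skip the performance path.
    return input_length <= threshold

os.environ["IPEX_LLM_PERFORMANCE_MODE"] = "1"
print(performance_mode_enabled(64))    # short prompt passes the gate
print(performance_mode_enabled(1024))  # long prompt does not
```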
binbin Deng
303a090a6b
Add lm_head optimization on NPU ( #11903 )
2024-08-23 15:51:07 +08:00
Yina Chen
23631cd357
disable lm_head opt for baichuan2-13b ( #11905 )
2024-08-23 15:39:47 +08:00
hxsz1997
650e6e6ce4
Merge pull request #11891 from hxsz1997/baichuan2-compresskv
Add compress_kv for Baichuan2
2024-08-23 06:09:58 +03:00
Ruonan Wang
4a61f7d20d
update mlp of llama ( #11897 )
* update mlp of llama
* relax threshold of mlp test
* revert code
2024-08-22 20:34:53 +08:00
Yuwen Hu
420ce7d164
Fix non-stop at eos token problem for lookup generation ( #11896 )
* Fix non-stop by eos_token_id problem for lookup
* Small fix
* Add judgement when generation_config.eos_token_id is None
* Fix based on comments
2024-08-22 18:55:59 +08:00
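Commit #11896 above fixes lookup generation not stopping at the EOS token, including the case where generation_config.eos_token_id is None. A minimal sketch of such a stopping check (the function name and shapes are hypothetical):

```python
def should_stop(generated_tokens, eos_token_id):
    """Stop when the last generated token matches a configured EOS id.
    Handles eos_token_id being None (no EOS configured), a single int,
    or a list of ids, mirroring the judgement added in #11896."""
    if eos_token_id is None or not generated_tokens:
        return False
    eos_ids = eos_token_id if isinstance(eos_token_id, (list, tuple)) else [eos_token_id]
    return generated_tokens[-1] in eos_ids

print(should_stop([5, 7, 2], eos_token_id=2))     # True
print(should_stop([5, 7, 2], eos_token_id=None))  # False
```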
Huang, Xinshengzi
4cf03d6212
update baichuan-7b
2024-08-22 18:16:33 +08:00
Guancheng Fu
278b191dc1
Fix optimize lm head error ( #11899 )
2024-08-22 17:45:26 +08:00
Wang, Jian4
5c4ed00593
Add lightweight-serving whisper asr example ( #11847 )
* add asr init
* update for pp
* update style
* update readme
* update readme
2024-08-22 15:46:28 +08:00
Huang, Xinshengzi
eb1e65f8a9
add comment
2024-08-22 15:14:47 +08:00
Huang, Xinshengzi
a2be3d7501
add comment of compress kv in attention forward
2024-08-22 15:11:55 +08:00
Huang, Xinshengzi
ce7de77085
add comment of change in model forward
2024-08-22 14:29:27 +08:00
Huang, Xinshengzi
42398a0045
add comment
2024-08-22 13:17:13 +08:00
Huang, Xinshengzi
48a827aa07
fix typos
2024-08-22 11:35:47 +08:00
Huang, Xinshengzi
8a5df93de2
fix typos
2024-08-22 11:33:07 +08:00
Huang, Xinshengzi
01ed397e7a
fix typos
2024-08-22 11:31:25 +08:00
Huang, Xinshengzi
c6ed1c412d
fix typos
2024-08-22 11:26:49 +08:00
Huang, Xinshengzi
2a0aa9271b
fix typos
2024-08-22 11:23:22 +08:00
Huang, Xinshengzi
4adadddbbc
fix typos
2024-08-22 11:12:23 +08:00
Huang, Xinshengzi
6a5ca17afc
fix typos
2024-08-22 11:09:58 +08:00
binbin Deng
72a7bf624b
Support qwen2-1.5b with fused decoderlayer optimization on NPU ( #11888 )
2024-08-22 11:09:12 +08:00
Huang, Xinshengzi
6bb9035788
fix typos
2024-08-22 11:08:48 +08:00
Huang, Xinshengzi
86248b0505
add compress_kv for baichuan2
2024-08-22 10:59:08 +08:00
Yina Chen
cc27321441
support chatglm4 in lookup ( #11855 )
2024-08-21 15:53:17 +08:00
Yina Chen
0236de3ac2
set IPEX_LLM_LAST_LM_HEAD=1 as default ( #11885 )
2024-08-21 15:06:12 +08:00
Yang Wang
209d42ab79
Refactor npu mp to make it easier to integrate new models ( #11873 )
* Refactor npu mp to make it easier to integrate new models
* fix style
* move layer functions to base
2024-08-20 20:58:47 -07:00
Guancheng Fu
537c0d2767
fix vllm qwen2 models ( #11879 )
2024-08-21 11:05:24 +08:00
Yishuo Wang
bd1e490d62
fix phi3 ( #11878 )
2024-08-21 10:31:41 +08:00
Yang Wang
bdaeee1d63
Fix run_decoders bug ( #11871 )
2024-08-20 12:04:59 -07:00
Yina Chen
c3c058373f
Update compresskv model forward type logic ( #11868 )
* update
* fix
2024-08-20 18:11:37 +08:00
Yishuo Wang
d4ee0a89f3
optimize phi3 memory usage ( #11867 )
2024-08-20 17:32:51 +08:00
Yishuo Wang
2946420e14
add minicpmv 2.6 load_low_bit workaround ( #11856 )
2024-08-20 11:16:02 +08:00
Yang Wang
99b05ba1dc
separate prefill into a process ( #11787 )
* separate prefill into a process
* using model.share_memory()
* might work
* worked
* use long prompt
* refactor
* cleanup
* fix bug
* clean up
* changeable inter- and intra-process stages
* refactor
* add max output len
* fix npu_model changes that may break generate
* fix npu_model generate import error
* fix generate forward error
---------
Co-authored-by: sgwhat <ge.song@intel.com>
2024-08-19 17:53:36 +08:00
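Commit #11787 above splits prefill into its own worker that hands results to the decode stage (the real change uses a separate process with model.share_memory(); every name below is a stand-in). A minimal sketch of that two-stage pipeline, using threads and a queue for portability:

```python
import queue
import threading

def prefill_worker(prompts, handoff):
    """Stage 1: 'prefill' each prompt (placeholder for the model's prompt
    pass) and hand the resulting state to the decode stage."""
    for prompt in prompts:
        kv_state = f"kv({prompt})"  # placeholder for the real KV cache
        handoff.put((prompt, kv_state))
    handoff.put(None)  # sentinel: no more work

def decode_worker(handoff, results):
    """Stage 2: consume prefill outputs and run token-by-token decode."""
    while (item := handoff.get()) is not None:
        prompt, kv_state = item
        results.append(f"decoded {prompt} from {kv_state}")

handoff, results = queue.Queue(), []
t1 = threading.Thread(target=prefill_worker, args=(["a", "b"], handoff))
t2 = threading.Thread(target=decode_worker, args=(handoff, results))
t1.start(); t2.start(); t1.join(); t2.join()
print(results)
```

The point of the split is that prefill (compute-bound, whole prompt at once) and decode (latency-bound, one token at a time) can be scheduled independently; the commit makes the stage boundaries configurable.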
Yishuo Wang
9490781aec
optimize phi3 memory usage again ( #11848 )
2024-08-19 17:26:59 +08:00
Yina Chen
3cd4e87168
Support compress KV with quantize KV ( #11812 )
* update llama
* support llama 4.41
* fix style
* support minicpm
* support qwen2
* support minicpm & update
* support chatglm4
* support chatglm
* remove print
* add DynamicCompressFp8Cache & support qwen
* support llama
* support minicpm phi3
* update chatglm2/4
* small fix & support qwen 4.42
* remove print
2024-08-19 15:32:32 +08:00
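Commit #11812 above combines KV-cache compression with a quantized cache (DynamicCompressFp8Cache). As a rough illustration of the quantize/dequantize round-trip involved — using symmetric int8 rather than FP8, since FP8 needs device support, and with hypothetical function names:

```python
def quantize_int8(values):
    """Symmetric per-tensor quantization: scale by the max magnitude so
    every value maps into [-127, 127]."""
    scale = (max(abs(v) for v in values) / 127.0) or 1.0  # avoid scale == 0
    return [round(v / scale) for v in values], scale

def dequantize_int8(q, scale):
    """Recover approximate originals from the stored ints and scale."""
    return [x * scale for x in q]

kv = [0.5, -1.27, 0.0, 1.27]   # stand-in for KV-cache entries
q, scale = quantize_int8(kv)
print(q, dequantize_int8(q, scale))
```

Storing the cache as 8-bit ints plus one scale roughly quarters its memory footprint versus fp32, at the cost of small rounding error on dequantize.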
Zhao Changmin
6841a9ac8f
fix load low bit com dtype ( #11832 )
2024-08-19 13:43:19 +08:00
Yuwen Hu
96796f95cb
Update all-in-one benchmark prompts for continuation task & lookup update for minicpmv ( #11827 )
* Update all-in-one benchmark prompts for continuation task
* Small fix
* Add pure-text benchmark support for minicpm-v-2_6
* Support lookahead for model.llm generate of minicpmv
* Add prompt reference
* Small update
* Small fix
2024-08-16 17:16:35 +08:00
Yishuo Wang
e966e85df8
force lm_head optimization in any model if set environment variable ( #11830 )
2024-08-16 16:48:45 +08:00
Yishuo Wang
17a0beb21f
optimize qwen2-audio again ( #11825 )
2024-08-16 11:11:35 +08:00
Yuwen Hu
9e9086cc2a
Update IPEX_LLM_PERFORMANCE_MODE ( #11823 )
2024-08-16 09:48:36 +08:00
Wang, Jian4
5a80fd2633
Fix lightweight-serving no streaming resp on mtl ( #11822 )
2024-08-16 09:43:03 +08:00