binbin Deng
dd303776cf
Add troubleshooting about transpose value setting
2024-08-26 16:06:32 +08:00
Yuwen Hu
24c279e0ae
Update IPEX_LLM_PERFORMANCE_MODE with input length threshold ( #11908 )
* Update IPEX_LLM_PERFORMANCE_MODE with input length threshold
* Update based on comments. And add judgement for inputs_embeds
* Fix for benchmarking purposes
* Update based on comments
* Small fix
2024-08-23 20:49:15 +08:00
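IPEX_LLM_PERFORMANCE_MODE is an environment flag, so exercising the change above only takes one line before generation runs. A minimal sketch, assuming the usual 0/1 convention for the value (the input length threshold itself is internal and not configured here):

    import os

    # Opt in to performance mode; "1" as the enabling value is an assumption
    # based on the common 0/1 env-flag convention.
    os.environ["IPEX_LLM_PERFORMANCE_MODE"] = "1"
    # Per the commit above, inputs longer than an internal length threshold
    # now fall back to the normal generation path automatically.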
binbin Deng
303a090a6b
Add lm_head optimization on NPU ( #11903 )
2024-08-23 15:51:07 +08:00
Yina Chen
23631cd357
disable lm_head opt for baichuan2-13b ( #11905 )
2024-08-23 15:39:47 +08:00
hxsz1997
650e6e6ce4
Merge pull request #11891 from hxsz1997/baichuan2-compresskv
Add compress_kv for Baichuan2
2024-08-23 06:09:58 +03:00
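KV-cache compression is typically switched on through an environment flag before the model is loaded. A minimal sketch; the flag name below is an assumption for illustration and should be checked against the ipex-llm docs:

    import os

    # Hypothetical flag name, not confirmed by this log.
    os.environ["IPEX_LLM_COMPRESS_KV_CACHE"] = "1"
    # With the commit above, Baichuan2's attention forward can then take the
    # compressed-KV path.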
Ruonan Wang
4a61f7d20d
update mlp of llama ( #11897 )
* update mlp of llama
* relax threshold of mlp test
* revert code
2024-08-22 20:34:53 +08:00
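The "relax threshold of mlp test" bullet is the usual follow-up when an optimized kernel introduces small, harmless numeric drift. A sketch of such a comparison, with hypothetical shapes and tolerance values:

    import torch

    def check_mlp_close(optimized_mlp, reference_mlp):
        x = torch.randn(1, 32, 4096)  # hypothetical batch/seq/hidden sizes
        # Relaxed atol/rtol (placeholder values) so minor numeric drift from
        # the updated llama MLP does not fail the test.
        torch.testing.assert_close(optimized_mlp(x), reference_mlp(x),
                                   atol=2e-2, rtol=2e-2)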
Yuwen Hu
420ce7d164
Fix non-stop at eos token problem for lookup generation ( #11896 )
* Fix non-stop by eos_token_id problem for lookup
* Small fix
* Add judgement when generation_config.eos_token_id is None
* Fix based on comments
2024-08-22 18:55:59 +08:00
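The bullets above describe a stop-condition guard for lookup generation. A sketch of the shape of that check, following Hugging Face's GenerationConfig semantics (eos_token_id may be None, a single int, or a list of ints); the surrounding lookup loop is elided:

    def should_stop(next_token_id, generation_config):
        eos = generation_config.eos_token_id
        if eos is None:           # the added judgement: no EOS configured
            return False
        if isinstance(eos, int):  # normalize a single id to a list
            eos = [eos]
        return next_token_id in eos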
Huang, Xinshengzi
4cf03d6212
update baichuan-7b
2024-08-22 18:16:33 +08:00
Zijie Li
794abe2ce8
update npu-readme ( #11900 )
2024-08-22 17:49:35 +08:00
Guancheng Fu
278b191dc1
Fix optimize lm head error ( #11899 )
2024-08-22 17:45:26 +08:00
Shaojun Liu
c5b51d41fb
Update pypi tag to 2.2.0.dev0 ( #11895 )
2024-08-22 16:48:09 +08:00
Jinhe
18662dca1c
change 5 pytorch/huggingface models to fp16 ( #11894 )
2024-08-22 16:12:09 +08:00
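Switching an example model to fp16 usually amounts to changing the dtype at load time. A minimal sketch with a placeholder model id, assuming the standard Hugging Face load path used by the examples:

    import torch
    from transformers import AutoModelForCausalLM

    model = AutoModelForCausalLM.from_pretrained(
        "meta-llama/Llama-2-7b-chat-hf",  # placeholder model id
        torch_dtype=torch.float16,        # fp16 weights instead of fp32
    )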
Wang, Jian4
5c4ed00593
Add lightweight-serving whisper asr example ( #11847 )
* add asr init
* update for pp
* update style
* update readme
2024-08-22 15:46:28 +08:00
Huang, Xinshengzi
eb1e65f8a9
add comment
2024-08-22 15:14:47 +08:00
Huang, Xinshengzi
a2be3d7501
add comment on compress kv in attention forward
2024-08-22 15:11:55 +08:00
Jinhe
a8e2573421
added tokenization file for codegeex2-6b in pytorch-models ( #11875 )
* added tokenization file
* tokenization file readme update
* optional
2024-08-22 14:37:56 +08:00
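Because the tokenization file ships with the model repository rather than with transformers, loading it goes through trust_remote_code. A minimal sketch; substitute a local path if the tokenization file was vendored into the example directory:

    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(
        "THUDM/codegeex2-6b",
        trust_remote_code=True,  # picks up the model's custom tokenization file
    )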
Huang, Xinshengzi
ce7de77085
add comment on change in model forward
2024-08-22 14:29:27 +08:00
Huang, Xinshengzi
42398a0045
add comment
2024-08-22 13:17:13 +08:00
Huang, Xinshengzi
48a827aa07
fix typos
2024-08-22 11:35:47 +08:00
Huang, Xinshengzi
8a5df93de2
fix typos
2024-08-22 11:33:07 +08:00
Huang, Xinshengzi
01ed397e7a
fix typos
2024-08-22 11:31:25 +08:00
Huang, Xinshengzi
c6ed1c412d
fix typos
2024-08-22 11:26:49 +08:00
Huang, Xinshengzi
2a0aa9271b
fix typos
2024-08-22 11:23:22 +08:00
Huang, Xinshengzi
4adadddbbc
fix typos
2024-08-22 11:12:23 +08:00
Huang, Xinshengzi
6a5ca17afc
fix typos
2024-08-22 11:09:58 +08:00
binbin Deng
72a7bf624b
Support qwen2-1.5b with fused decoderlayer optimization on NPU ( #11888 )
2024-08-22 11:09:12 +08:00
Huang, Xinshengzi
6bb9035788
fix typos
2024-08-22 11:08:48 +08:00
Huang, Xinshengzi
86248b0505
add compress_kv for baichuan2
2024-08-22 10:59:08 +08:00
Zijie Li
bdbe995b01
Update README.md ( #11889 )
Set datasets version to 2.16.1. Remove the transformers version requirement.
2024-08-22 09:40:16 +08:00
Yina Chen
cc27321441
support chatglm4 in lookup ( #11855 )
2024-08-21 15:53:17 +08:00
Yina Chen
0236de3ac2
set IPEX_LLM_LAST_LM_HEAD=1 as default ( #11885 )
2024-08-21 15:06:12 +08:00
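With the flag now on by default, enabling it needs no action; opting out becomes the explicit step. A minimal sketch, where "0" as the disabling value is an assumption based on the 0/1 convention:

    import os

    # Override the new default; "0" as the opt-out value is an assumption.
    os.environ["IPEX_LLM_LAST_LM_HEAD"] = "0"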
SONG Ge
8c5c7f32dd
Update doc for running npu generate example with ipex-llm[npu] ( #11876 )
* update doc for running npu generate example with ipex-llm[npu]
* switch max_prompt_len to 512 to fix compile error on mtl
2024-08-21 13:45:29 +08:00
Yang Wang
209d42ab79
Refactor npu mp to make it easier to integrate new models ( #11873 )
* Refactor npu mp to make it easier to integrate new models
* fix style
* move layer functions to base
2024-08-20 20:58:47 -07:00
Guancheng Fu
537c0d2767
fix vllm qwen2 models ( #11879 )
2024-08-21 11:05:24 +08:00
Yishuo Wang
bd1e490d62
fix phi3 ( #11878 )
2024-08-21 10:31:41 +08:00
Yuwen Hu
eab6f6dde4
SPR perf small fix ( #11874 )
2024-08-21 09:35:26 +08:00
Yang Wang
bdaeee1d63
Fix run_decoders bug ( #11871 )
2024-08-20 12:04:59 -07:00
Chu,Youcheng
32f0a77846
feat: update readme for ppl test ( #11865 )
* feat: update readme for ppl test
* fix: textual adjustments
* fix: textual adjustments
* Add ipex-llm npu option in setup.py (#11858 )
* add ipex-llm npu release
* update example doc
* meet latest release changes
* optimize phi3 memory usage (#11867 )
* Update `ipex-llm` default transformers version to 4.37.0 (#11859 )
* Update default transformers version to 4.37.0
* Add dependency requirements for qwen and qwen-vl
* Temp fix transformers version for these not yet verified models
* Skip qwen test in UT for now as it requires transformers<4.37.0
* Update performance test regarding updated default `transformers==4.37.0` (#11869 )
* Update igpu performance from transformers 4.36.2 to 4.37.0 (#11841 )
* upgrade arc perf test to transformers 4.37 (#11842 )
* fix load low bit com dtype (#11832 )
* feat: add mixed_precision argument on ppl longbench evaluation
* fix: delete extra code
* feat: upgrade arc perf test to transformers 4.37
* fix: add missing codes
* fix: keep perf test for qwen-vl-chat in transformers 4.36
* fix: remove extra space
* fix: resolve pr comment
* fix: add empty line
* fix: add pip install for spr and core test
* fix: delete extra comments
* fix: remove python -m for pip
* Revert "fix load low bit com dtype (#11832 )"
This reverts commit 6841a9ac8f.
---------
Co-authored-by: Zhao Changmin <changmin.zhao@intel.com>
Co-authored-by: Jinhe Tang <jin.tang1337@gmail.com>
* add transformers==4.36 for qwen vl in igpu-perf (#11846 )
* add transformers==4.36.2 for qwen-vl
* Small update
---------
Co-authored-by: Yuwen Hu <yuwen.hu@intel.com>
* fix: remove qwen-7b on core test (#11851 )
* fix: remove qwen-7b on core test
* fix: change delete to comment
---------
Co-authored-by: Jinhe Tang <jin.tang1337@gmail.com>
* replace filename (#11854 )
* fix: remove qwen-7b on core test
* fix: change delete to comment
* fix: replace filename
---------
Co-authored-by: Jinhe Tang <jin.tang1337@gmail.com>
* fix: delete extra comments (#11863 )
* Remove transformers installation for temp test purposes
* Small fix
* Small update
---------
Co-authored-by: Chu,Youcheng <70999398+cranechu0131@users.noreply.github.com>
Co-authored-by: Zhao Changmin <changmin.zhao@intel.com>
Co-authored-by: Jinhe Tang <jin.tang1337@gmail.com>
Co-authored-by: Zijie Li <michael20001122@gmail.com>
Co-authored-by: Chu,Youcheng <1340390339@qq.com>
* Pytorch models transformers version update (#11860 )
* yi sync
* delete 4.34 constraint
* delete 4.34 constraint
* delete 4.31 constraint
* delete 4.34 constraint
* delete 4.35 constraint
* added <=4.33.3 constraint
* added <=4.33.3 constraint
* switched to chinese prompt
* Update compresskv model forward type logic (#11868 )
* update
* fix
* Update local import for ppl (#11866 )
Co-authored-by: jenniew <jenniewang123@gmail.com>
* fix: textual adjustment
---------
Co-authored-by: SONG Ge <38711238+sgwhat@users.noreply.github.com>
Co-authored-by: Yishuo Wang <yishuo.wang@intel.com>
Co-authored-by: Yuwen Hu <54161268+Oscilloscope98@users.noreply.github.com>
Co-authored-by: Zhao Changmin <changmin.zhao@intel.com>
Co-authored-by: Jinhe Tang <jin.tang1337@gmail.com>
Co-authored-by: Zijie Li <michael20001122@gmail.com>
Co-authored-by: Yina Chen <33650826+cyita@users.noreply.github.com>
Co-authored-by: RyuKosei <70006706+RyuKosei@users.noreply.github.com>
Co-authored-by: jenniew <jenniewang123@gmail.com>
2024-08-20 20:13:54 +08:00
RyuKosei
5df00869de
Update local import for ppl ( #11866 )
Co-authored-by: jenniew <jenniewang123@gmail.com>
2024-08-20 18:50:00 +08:00
Yina Chen
c3c058373f
Update compresskv model forward type logic ( #11868 )
* update
* fix
2024-08-20 18:11:37 +08:00
Jinhe
3ee194d983
Pytorch models transformers version update ( #11860 )
* yi sync
* delete 4.34 constraint
* delete 4.34 constraint
* delete 4.31 constraint
* delete 4.34 constraint
* delete 4.35 constraint
* added <=4.33.3 constraint
* added <=4.33.3 constraint
* switched to chinese prompt
2024-08-20 18:01:42 +08:00
Yuwen Hu
0d58c2fdf9
Update performance test regarding updated default transformers==4.37.0 ( #11869 )
* Update igpu performance from transformers 4.36.2 to 4.37.0 (#11841 )
* upgrade arc perf test to transformers 4.37 (#11842 )
* fix load low bit com dtype (#11832 )
* feat: add mixed_precision argument on ppl longbench evaluation
* fix: delete extra code
* feat: upgrade arc perf test to transformers 4.37
* fix: add missing codes
* fix: keep perf test for qwen-vl-chat in transformers 4.36
* fix: remove extra space
* fix: resolve pr comment
* fix: add empty line
* fix: add pip install for spr and core test
* fix: delete extra comments
* fix: remove python -m for pip
* Revert "fix load low bit com dtype (#11832 )"
This reverts commit 6841a9ac8f.
---------
Co-authored-by: Zhao Changmin <changmin.zhao@intel.com>
Co-authored-by: Jinhe Tang <jin.tang1337@gmail.com>
* add transformers==4.36 for qwen vl in igpu-perf (#11846 )
* add transformers==4.36.2 for qwen-vl
* Small update
---------
Co-authored-by: Yuwen Hu <yuwen.hu@intel.com>
* fix: remove qwen-7b on core test (#11851 )
* fix: remove qwen-7b on core test
* fix: change delete to comment
---------
Co-authored-by: Jinhe Tang <jin.tang1337@gmail.com>
* replace filename (#11854 )
* fix: remove qwen-7b on core test
* fix: change delete to comment
* fix: replace filename
---------
Co-authored-by: Jinhe Tang <jin.tang1337@gmail.com>
* fix: delete extra comments (#11863 )
* Remove transformers installation for temp test purposes
* Small fix
* Small update
---------
Co-authored-by: Chu,Youcheng <70999398+cranechu0131@users.noreply.github.com>
Co-authored-by: Zhao Changmin <changmin.zhao@intel.com>
Co-authored-by: Jinhe Tang <jin.tang1337@gmail.com>
Co-authored-by: Zijie Li <michael20001122@gmail.com>
Co-authored-by: Chu,Youcheng <1340390339@qq.com>
2024-08-20 17:59:28 +08:00
Yuwen Hu
5e8286f72d
Update ipex-llm default transformers version to 4.37.0 ( #11859 )
* Update default transformers version to 4.37.0
* Add dependency requirements for qwen and qwen-vl
* Temp fix transformers version for these not yet verified models
* Skip qwen test in UT for now as it requires transformers<4.37.0
2024-08-20 17:37:58 +08:00
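A quick sketch for confirming an existing environment matches the new default; the authoritative pin lives in ipex-llm's setup metadata, not here:

    import transformers
    from packaging import version  # shipped as a transformers dependency

    assert version.parse(transformers.__version__) >= version.parse("4.37.0"), (
        "ipex-llm now defaults to transformers 4.37.0; note that qwen and "
        "qwen-vl still need older pins per the commit above"
    )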
Yishuo Wang
d4ee0a89f3
optimize phi3 memory usage ( #11867 )
2024-08-20 17:32:51 +08:00
SONG Ge
5b83493b1a
Add ipex-llm npu option in setup.py ( #11858 )
* add ipex-llm npu release
* update example doc
* meet latest release changes
2024-08-20 17:29:49 +08:00
Heyang Sun
ee6852c915
Fix typo ( #11862 )
2024-08-20 16:38:11 +08:00
Yishuo Wang
2946420e14
add minicpmv 2.6 load_low_bit workaround ( #11856 )
2024-08-20 11:16:02 +08:00
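load_low_bit is the loading half of ipex-llm's save_low_bit/load_low_bit round trip for quantized checkpoints, which is where this workaround applies. A minimal sketch; the paths are placeholders, and load_in_4bit as the quantization choice is an assumption:

    from ipex_llm.transformers import AutoModel

    model = AutoModel.from_pretrained("openbmb/MiniCPM-V-2_6",
                                      load_in_4bit=True,
                                      trust_remote_code=True)
    model.save_low_bit("minicpmv26-low-bit")   # quantized checkpoint on disk
    model = AutoModel.load_low_bit("minicpmv26-low-bit",
                                   trust_remote_code=True)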
SONG Ge
7380823f3f
Update Llama2 multi-processes example ( #11852 )
* update llama2 multi-processes examples
* update
* update readme
* update
2024-08-19 19:49:01 +08:00
Yang Wang
99b05ba1dc
separate prefill into a process ( #11787 )
* separate prefill into a process
* using model.share_memory()
* might work
* worked
* use long prompt
* refactor
* cleanup
* fix bug
* clean up
* changeable inter- and intra-process stages
* refactor
* add max output len
* fix npu_model changes that may break generate
* fix npu_model generate import error
* fix generate forward error
---------
Co-authored-by: sgwhat <ge.song@intel.com>
2024-08-19 17:53:36 +08:00
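The bullets above sketch the design: the prompt (prefill) stage runs in its own process, with weights shared via model.share_memory() rather than copied. A heavily simplified sketch of that structure; all names are hypothetical, and scheduling and error handling are elided:

    import torch.multiprocessing as mp

    def prefill_worker(model, prompt_q, kv_q):
        # Prefill runs in its own process; weights are shared, not copied,
        # because the parent called model.share_memory() before spawning.
        while True:
            input_ids = prompt_q.get()
            if input_ids is None:          # sentinel: shut the worker down
                break
            out = model(input_ids, use_cache=True)
            kv_q.put(out.past_key_values)  # hand the KV cache to decode

    def start_prefill(model):
        model.share_memory()               # share weights across processes
        prompt_q, kv_q = mp.Queue(), mp.Queue()
        proc = mp.Process(target=prefill_worker,
                          args=(model, prompt_q, kv_q))
        proc.start()
        return proc, prompt_q, kv_q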
Jinhe
da3d7a3a53
delete transformers version requirement ( #11845 )
* delete transformers version requirement
* delete transformers version requirement
2024-08-19 17:53:02 +08:00