Yishuo Wang
ccf618ff4a
Remove all ipex usage ( #12666 )
2025-01-08 10:31:18 +08:00
Zijie Li
8fd2dcba86
Add benchmark_util for transformers >= 4.47.0 ( #12644 )
2025-01-03 10:48:29 +08:00
binbin Deng
534566e290
[NPU] Support minicpm-v with python cpp backend ( #12637 )
2025-01-02 11:13:15 +08:00
Xu, Shuo
fd9cf767ed
All-in-one Benchmark run.py: Ignore error if import BenchmarkWrapper failed. ( #12526 )
2024-12-11 16:20:55 +08:00
binbin Deng
f56a111aa2
[NPU] Fix load-low-bit benchmark script ( #12502 )
2024-12-05 10:01:32 +08:00
Yuwen Hu
ef4028ac2d
[NPU] Support split lm_head for Qwen2 with CPP ( #12491 )
...
* Use split for Qwen2 lm_head instead of slice in optimize_pre
* Support split lm_head in Qwen2 python cpp backend
* Fit with Python acc lib pipeline
* Removed default mixed_precision=True in all-in-one and related examples
* Small fix
* Style fix
* Fix based on comments
* Fix based on comments
* Style fix
2024-12-04 14:41:08 +08:00
binbin Deng
ab01753b1c
[NPU] update save-load API usage ( #12473 )
2024-12-03 09:46:15 +08:00
binbin Deng
f99f188023
Hotfix of benchmark script ( #12467 )
2024-11-29 14:00:59 +08:00
binbin Deng
c911026f03
[NPU C++] Update model support & examples & benchmark ( #12466 )
2024-11-29 13:35:58 +08:00
binbin Deng
7a97fbb779
Support vpm and resampler module of minicpm-v on NPU ( #12375 )
2024-11-12 15:59:55 +08:00
Yuwen Hu
8fe294e01f
Small fix to all-in-one benchmark ( #12362 )
2024-11-07 18:56:34 +08:00
Jinhe
79f2877413
add minicpm-v models to transformers_int4_npu_win api ( #12352 )
...
* add minicpm npu
* optimize model
2024-11-07 10:05:10 +08:00
Ruonan Wang
c267355b35
fix three NPU benchmark issues ( #12350 )
...
* fix three issues
* limit mixed_precision for CW only
2024-11-06 19:01:01 +08:00
Jin, Qiao
7240c283a3
Add dummy model in iGPU perf ( #12341 )
...
* Add dummy model in iGPU perf
* Add dummy model in iGPU perf
* Fix
2024-11-05 17:56:10 +08:00
Ch1y0q
e54af44ed6
Add transformers_int4_npu_pipeline_win in all-in-one benchmark ( #12325 )
...
* add transformers_int4_npu_pipeline_win
* bugfix
* bugfix: wrong actual_output_len
* fix format
* bugfix & update `README.md`
2024-11-04 16:00:20 +08:00
Yuwen Hu
20755e8077
Small fix to all-in-one benchmark scripts ( #12317 )
2024-11-01 19:16:25 +08:00
Ch1y0q
48123af463
add npu_group_size for transformers_int4_npu_win in all-in-one benchmark api ( #12316 )
...
* add `npu_group_size` for `transformers_int4_npu_win`
small bugfix
* update
2024-11-01 18:44:27 +08:00
Ruonan Wang
3fe2ea3081
[NPU] Reuse prefill of acc lib for pipeline ( #12279 )
...
* first commit
* update example
* fix style
* update example
* embedding as const
* fix generate
* code refactor
* meet code review
* fix style
* change max_output_len to max_context_len
* fix all-in-one
* fix example
* add check for new tokens
2024-10-28 16:05:49 +08:00
Yuwen Hu
e713296090
Update all-in-one benchmark ( #12272 )
...
* Update all-in-one benchmark
* Small fix
* Small fix
* Small fix
2024-10-25 16:52:59 +08:00
Yuwen Hu
93895b2ac2
Openvino all in one benchmark small fix ( #12269 )
...
* Small update for all-in-one benchmark readme to support OpenVINO tests
* Small fix
2024-10-25 14:13:52 +08:00
Zijie Li
f7f62a3fef
Add OpenVINO performance tests to all-in-one benchmark ( #12238 )
...
* add-openvino-to-all-in-one
* update on openvino API
* Update save_openvino.py
* Update save_openvino.py
* Update save_openvino.py
* update on run.py and save_openvino
* update references
* Create openvino-requirements.txt
* fix on comments
* Small updates
* Small fix
* Fix
---------
Co-authored-by: Yuwen Hu <yuwen.hu@intel.com>
2024-10-25 13:53:53 +08:00
Yina Chen
e37f951cce
[NPU] Groupwise ( #12241 )
...
* dq divide
* fix
* support attn divide
* update qwen2 7b
* divide down_proj & other linear
* use concat & reduce sum
* support scale after
* support qwen2
* w/ mm
* update reshape
* spda
* split
* split 2+
* update
* lm head-> 28
* no scale
* update
* update
* update
* fix style
* fix style
* to split linear
* update
* update code
* address comments
* fix style & remove redundant code & revert benchmark scripts
* fix style & remove code
* update save & load
---------
Co-authored-by: Yang Wang <yang3.wang@intel.com>
2024-10-23 14:10:58 +08:00
Chu,Youcheng
f17cc4fdee
feat: add llama3.2-11b-vision in all in one ( #12207 )
...
* feat: add llama3.2-11b-vision in all in one
* fix: change model
* fix: change name
* fix: add a space
* fix: switch import
2024-10-16 10:32:11 +08:00
Jinhe
02399021d6
add npu load_low_bit api in all-in-one benchmark ( #12103 )
2024-09-20 17:56:08 +08:00
Ch1y0q
9650bf616a
add transpose_value_cache for NPU benchmark ( #12092 )
...
* add `transpose_value_cache`
* update
* update
2024-09-19 18:45:05 +08:00
binbin Deng
7f7f6c89f5
Quick fix benchmark script ( #11938 )
2024-08-27 15:29:27 +08:00
binbin Deng
7c8c9a0670
Update benchmark script for NPU ( #11932 )
2024-08-27 14:41:14 +08:00
Yuwen Hu
a0bbd8e28d
All-in-one benchmark update regarding performance mode for input length threshold ( #11920 )
...
* All-in-one benchmark update regarding performance mode input length threshold
* typo fix
2024-08-26 18:52:13 +08:00
Ruonan Wang
a0fbda5bc8
add MiniCPM-Llama3-V-2_5 into all-in-one benchmark ( #11849 )
2024-08-19 17:51:16 +08:00
Yuwen Hu
cfc959defa
Fixes regarding utf-8 in all-in-one benchmark ( #11839 )
2024-08-19 10:38:00 +08:00
Jin, Qiao
9f17234f3b
Add MiniCPM-V-2_6 to iGPU Perf ( #11810 )
...
* Add MiniCPM-V-2_6 to iGPU Perf
* keep last model in yaml
* fix MINICPM_V_IDS
* Restore tested model list
* Small fix
---------
Co-authored-by: Yuwen Hu <yuwen.hu@intel.com>
2024-08-16 18:41:21 +08:00
Yuwen Hu
96796f95cb
Update all-in-one benchmark prompts for continuation task & lookup update for minicpmv ( #11827 )
...
* Update all-in-one benchmark prompts for continuation task
* Small fix
* Add pure-text benchmark support for minicpm-v-2_6
* Support lookahead for model.llm generate of minicpmv
* Add prompt reference
* Small update
* Small fix
2024-08-16 17:16:35 +08:00
Yuwen Hu
356281cb80
Further all-in-one benchmark update continuation task ( #11784 )
...
* Further update prompt for continuation task, and disable lookup candidate update strategy on MTL
* style fix
2024-08-14 14:39:34 +08:00
Yuwen Hu
81824ff8c9
Fix stdout in all-in-one benchmark to utf-8 ( #11772 )
2024-08-13 10:51:08 +08:00
Yuwen Hu
f97a77ea4e
Update all-in-one benchmark for continuation task input preparation ( #11760 )
...
* All use 8192.txt for prompt preparation for now
* Small fix
* Fix text encoding mode to utf-8
* Small update
2024-08-12 17:49:45 +08:00
Jin, Qiao
05989ad0f9
Update npu example and all in one benchmark ( #11766 )
2024-08-12 16:46:46 +08:00
Ruonan Wang
66fe2ee464
initial support of IPEX_LLM_PERFORMANCE_MODE ( #11754 )
...
* add perf mode
* update
* fix style
2024-08-09 19:04:09 +08:00
Zijie Li
8fb36b9f4a
add new benchmark_util.py ( #11713 )
...
* add new benchmark_util.py
2024-08-05 16:18:48 +08:00
Qiyuan Gong
0c6e0b86c0
Refine continuation get input_str ( #11652 )
...
* Remove duplicate code in continuation get input_str.
* Avoid infinite loop in all-in-one due to test_length not in the list.
2024-07-25 14:41:19 +08:00
Xu, Shuo
7f80db95eb
Change run.py in benchmark to support phi-3-vision in arc-perf ( #11638 )
...
Co-authored-by: ATMxsp01 <shou.xu@intel.com>
2024-07-23 09:51:36 +08:00
Zhao Changmin
06745e5742
Add npu benchmark all-in-one script ( #11571 )
...
* npu benchmark
2024-07-15 10:42:37 +08:00
Xu, Shuo
1355b2ce06
Add model Qwen-VL-Chat to iGPU-perf ( #11558 )
...
* Add model Qwen-VL-Chat to iGPU-perf
* small fix
---------
Co-authored-by: ATMxsp01 <shou.xu@intel.com>
2024-07-11 15:39:02 +08:00
Xu, Shuo
028ad4f63c
Add model phi-3-vision-128k-instruct to iGPU-perf benchmark ( #11554 )
...
* try to improve MiniCPM performance
* Add model phi-3-vision-128k-instruct to iGPU-perf benchmark
---------
Co-authored-by: ATMxsp01 <shou.xu@intel.com>
2024-07-10 17:26:30 +08:00
Cengguang Zhang
fa81dbefd3
LLM: update multi gpu write csv in all-in-one benchmark. ( #11538 )
2024-07-09 11:14:17 +08:00
Jun Wang
1efb6ebe93
[ADD] add transformer_int4_fp16_loadlowbit_gpu_win api ( #11511 )
...
* [ADD] add transformer_int4_fp16_loadlowbit_gpu_win api
* [UPDATE] add int4_fp16_lowbit config and description
* [FIX] fix run.py mistake
* [FIX] fix run.py mistake
* [FIX] fix indent; change dtype=float16 to model.half()
2024-07-05 16:38:41 +08:00
Cengguang Zhang
d0b801d7bc
LLM: change write mode in all-in-one benchmark. ( #11444 )
...
* LLM: change write mode in all-in-one benchmark.
* update output style.
2024-06-27 19:36:38 +08:00
RyuKosei
05a8d051f6
Fix run.py run_ipex_fp16_gpu ( #11361 )
...
* fix a bug on run.py
* Update run.py
fixed the format problem
---------
Co-authored-by: sgwhat <ge.song@intel.com>
2024-06-20 10:29:32 +08:00
hxsz1997
44f22cba70
add config and default value ( #11344 )
...
* add config and default value
* add config in yaml
* remove lookahead and max_matching_ngram_size in config
* remove streaming and use_fp16_torch_dtype in test yaml
* update task in readme
* update commit of task
2024-06-18 15:28:57 +08:00
hxsz1997
99b309928b
Add lookahead in test_api: transformer_int4_fp16_gpu ( #11337 )
...
* add lookahead in test_api:transformer_int4_fp16_gpu
* change the short prompt of summarize
* change short prompt to cnn_64
* change short prompt of summarize
2024-06-17 17:41:41 +08:00
binbin Deng
6ea1e71af0
Update PP inference benchmark script ( #11323 )
2024-06-17 09:59:36 +08:00