Commit graph

74 commits

Author | SHA1 | Message | Date
WeiguangHan
fd81d66047 LLM: Compress some models to save space (#10315)
* LLM: compress some models to save space

* add deleted comments
2024-03-04 17:53:03 +08:00
Shaojun Liu
bab2ee5f9e update nightly spr perf test (#10178)
* update nightly spr perf test

* update

* update runner label

* update

* update

* update folder

* revert
2024-03-04 13:46:33 +08:00
Jin Qiao
5d7243067c LLM: add Baichuan2-13B-Chat 2048-256 to MTL perf (#10273) 2024-02-29 13:48:55 +08:00
hxsz1997
cba61a2909 Add html report of ppl (#10218)
* remove the include and language options; select the corresponding dataset based on the model name in Run

* change the nightly test time

* change the nightly test time of harness and ppl

* save the ppl result to json file

* generate csv file and print table result

* generate html

* modify the way to get parent folder

* update html in parent folder

* add llm-ppl-summary and llm-ppl-summary-html

* modify echo single result

* remove download fp16.csv

* change model name of PR

* move ppl nightly related files to llm/test folder

* reformat

* separate make_table from make_table_and_csv.py

* separate make_csv from make_table_and_csv.py

* update llm-ppl-html

* remove comment

* add Download fp16.results
2024-02-27 17:37:08 +08:00
Yuwen Hu
38ae4b372f Add yuan2-2b to win igpu perf test (#10250) 2024-02-27 11:08:33 +08:00
Jin Qiao
3e6d188553 LLM: add baichuan2-13b to mtl perf (#10238) 2024-02-26 15:55:56 +08:00
Chen, Zhentao
f315c7f93a Move harness nightly related files to llm/test folder (#10209)
* move harness nightly files to test folder

* change workflow file path accordingly

* use arc01 when pr

* fix path

* fix fp16 csv path
2024-02-23 11:12:36 +08:00
Yuwen Hu
21de2613ce [LLM] Add model loading time record for all-in-one benchmark (#10201)
* Add model loading time record in csv for all-in-one benchmark

* Small fix

* Small fix to number after .
2024-02-22 13:57:18 +08:00
WeiguangHan
6c09aed90d LLM: add qwen_1.5_7b model for arc perf test (#10166)
* LLM: add qwen_1.5_7b model for arc perf test

* small fix

* revert some codes
2024-02-19 17:21:00 +08:00
Chen, Zhentao
14ba2c5135 Harness: remove deprecated files (#10165) 2024-02-19 14:27:49 +08:00
Yuwen Hu
81ed65fbe7 [LLM] Add qwen1.5-7B in iGPU perf (#10127)
* Add qwen1.5 test config yaml with transformers 4.37.0

* Update for yaml file
2024-02-07 22:31:20 +08:00
dingbaorong
36c9442c6d Arc Stable version test (#10087)
* add batch_size in stable version test

* add batch_size in excludes

* add excludes for batch_size

* fix ci

* trigger regression test

* fix xpu version

* disable ci

* address Kai's comment

---------

Co-authored-by: Ariadne <wyn2000330@126.com>
2024-02-06 10:23:50 +08:00
WeiguangHan
0aecd8637b LLM: small fix for the html script (#10094) 2024-02-05 17:27:34 +08:00
WeiguangHan
d2d3f6b091 LLM: ensure the result of daily arc perf test (#10016)
* ensure the result of daily arc perf test

* small fix

* small fix

* small fix

* small fix

* small fix

* small fix

* small fix

* small fix

* small fix

* small fix

* concat more csvs

* small fix

* revert some files
2024-01-31 18:26:21 +08:00
WeiguangHan
9724939499 temporarily disable bloom 2k input (#10056) 2024-01-31 17:49:12 +08:00
Jin Qiao
8c8fc148c9 LLM: add rwkv 5 (#10048) 2024-01-31 15:54:55 +08:00
Yuwen Hu
1eaaace2dc Update perf test all-in-one config for batch_size arg (#10012) 2024-01-26 16:46:36 +08:00
WeiguangHan
be5836bee1 LLM: fix outlier value (#9945)
* fix outlier value

* small fix
2024-01-23 17:04:13 +08:00
Cheen Hau, 俊豪
947b1e27b7 Add readme for Whisper Test (#9944)
* Fix local data path

* Remove non-essential files

* Add readme

* Minor fixes to script

* Bugfix, refactor

* Add references to original source. Bugfixes.

* Reviewer comments

* Properly print and explain output

* Move files to dev/benchmark

* Fixes
2024-01-22 15:11:33 +08:00
WeiguangHan
100e0a87e5 LLM: add compressed chatglm3 model (#9892)
* LLM: add compressed chatglm3 model

* small fix

* revert github action
2024-01-18 17:48:15 +08:00
Yuwen Hu
9e2ac5291b Add rwkv v4 back for igpu perf test 32-512 (#9938) 2024-01-18 17:15:28 +08:00
Yuwen Hu
0c498a7b64 Add llama2-13b to igpu perf test (#9920) 2024-01-17 14:58:45 +08:00
Yuwen Hu
8643b62521 [LLM] Support longer context in iGPU perf tests (2048-256) (#9910) 2024-01-16 17:48:37 +08:00
WeiguangHan
ad6b182916 LLM: change the color of peak diff (#9836) 2024-01-04 19:30:32 +08:00
WeiguangHan
9a14465560 LLM: add peak diff (#9789)
* add peak diff

* small fix

* revert yml file
2024-01-03 18:18:19 +08:00
dingbaorong
f5752ead36 Add whisper test (#9808)
* add whisper benchmark code

* add librispeech_asr.py

* add bigdl license
2024-01-02 16:36:05 +08:00
Kai Huang
4d01069302 Temp remove baichuan2-13b 1k from arc perf test (#9810) 2023-12-29 12:54:13 +08:00
dingbaorong
a8baf68865 fix csv_to_html (#9802) 2023-12-28 14:58:51 +08:00
Shaojun Liu
a5e5c3daec set warm_up: 3 num_trials: 50 for cpu stress test (#9799) 2023-12-28 08:55:43 +08:00
dingbaorong
f6bb4ab313 Arc stress test (#9795)
* add arc stress test

* trigger ci

* trigger CI

* trigger ci

* disable ci
2023-12-27 21:02:41 +08:00
Kai Huang
40eaf76ae3 Add baichuan2-13b to Arc perf (#9794)
* add baichuan2-13b

* fix indent

* revert
2023-12-27 19:38:53 +08:00
Shaojun Liu
6c75c689ea bigdl-llm stress test for stable version (#9781)
* 1k-512 2k-512 baseline

* add cpu stress test

* update yaml name

* update

* update

* clean up

* test

* update

* update

* update

* test

* update
2023-12-27 15:40:53 +08:00
dingbaorong
5cfb4c4f5b Arc stable version performance regression test (#9785)
* add arc stable version regression test

* empty gpu mem between different models

* trigger ci

* comment spr test

* trigger ci

* address Kai's comments and disable ci

* merge fp8 and int4

* disable ci
2023-12-27 11:01:56 +08:00
Yuwen Hu
c38e18f2ff [LLM] Migrate iGPU perf tests to new machine (#9784)
* Move 1024 test just after 32-32 test, and enable all models for 1024-128

* Make sure Python output encoding is UTF-8 so that redirecting to txt always succeeds

* Upload results to ftp

* Small fix
2023-12-26 19:15:57 +08:00
WeiguangHan
c05d7e1532 LLM: add star_corder_15.5b model (#9772)
* LLM: add star_corder_15.5b model

* revert llm_performance_tests.yml
2023-12-26 18:55:56 +08:00
Shaojun Liu
b6222404b8 bigdl-llm stable version: let the perf test fail if the difference between perf and baseline is greater than 5% (#9750)
* test

* test

* test

* update

* revert
2023-12-25 13:47:11 +08:00
Yuwen Hu
02436c6cce [LLM] Enable more long context in-out pairs for iGPU perf tests (#9765)
* Add test for 1024-128 and enable more tests for 512-64

* Fix date in results csv name to the time when the performance is triggered

* Small fix

* Small fix

* further fixes
2023-12-22 18:18:23 +08:00
Chen, Zhentao
86a69e289c fix harness runner label of manual trigger (#9754)
* fix runner

* update golden
2023-12-22 15:09:22 +08:00
Shaojun Liu
bb52239e0a bigdl-llm stable version release & test (#9732)
* stable version test

* trigger spr test

* update

* trigger

* test

* test

* test

* test

* test

* refine

* release linux first
2023-12-21 22:55:33 +08:00
WeiguangHan
d4d2ccdd9d LLM: remove startcorder-15.5b (#9748) 2023-12-21 18:52:52 +08:00
WeiguangHan
474c099559 LLM: using separate threads to do inference (#9727)
* using separate threads to do inference

* resolve some comments

* resolve some comments

* revert llm_performance_tests.yml file
2023-12-21 17:56:43 +08:00
WeiguangHan
34bb804189 LLM: check csv and its corresponding yaml file (#9702)
* LLM: check csv and its corresponding yaml file

* run PR arc perf test

* modify the name of some variables

* execute the check results script in right place

* use cp to replace mv command

* resolve some comments

* resolve more comments

* revert the llm_performance_test.yaml file
2023-12-21 09:54:33 +08:00
WeiguangHan
3aa8b66bc3 LLM: remove starcoder-15.5b model temporarily (#9720) 2023-12-19 20:14:46 +08:00
Kai Huang
4c112ee70c Rename qwen in model name for arc perf test (#9712) 2023-12-18 20:34:31 +08:00
Chen, Zhentao
b3647507c0 Fix harness workflow (#9704)
* error when larger than 0.001

* fix env setup

* fix typo

* fix typo
2023-12-18 15:42:10 +08:00
WeiguangHan
1f0245039d LLM: check the final csv results for arc perf test (#9684)
* LLM: check the final csv results for arc perf test

* delete useless python script

* change threshold

* revert the llm_performance_tests.yml
2023-12-14 19:46:08 +08:00
Yuwen Hu
82ac2dbf55 [LLM] Small fixes for win igpu test for ipex 2.1 (#9686)
* Fixes to install for igpu performance tests

* Small update for core performance tests model lists
2023-12-14 15:39:51 +08:00
Yuwen Hu
cbdd49f229 [LLM] win igpu performance for ipex 2.1 and oneapi 2024.0 (#9679)
* Change igpu win tests for ipex 2.1 and oneapi 2024.0

* Qwen model repo id updates; updates model list for 512-64

* Add .eval for win igpu all-in-one benchmark for best performance
2023-12-13 18:52:29 +08:00
Mingyu Wei
16febc949c [LLM] Add exclude option in all-in-one performance test (#9632)
* add exclude option in all-in-one perf test

* update arc-perf-test.yaml

* Exclude in_out_pairs in main function

* fix some bugs

* address Kai's comments

* define excludes at the beginning

* add bloomz:2048 to exclude
2023-12-13 18:13:06 +08:00
Yuwen Hu
d272b6dc47 [LLM] Enable generation of html again for win igpu tests (#9652)
* Enable generation of html again and comment out rwkv for 32-512 as it is not very stable

* Small fix
2023-12-11 19:15:17 +08:00