Wang, Jian4
51f2effb05
Add xpu-tgi manually_build ( #11556 )
2024-07-11 10:35:40 +08:00
Yuwen Hu
8982ab73d5
Add Yi-6B and StableLM to iGPU perf test ( #11546 )
...
* Add transformers 4.38.2 test to igpu benchmark (#11529 )
* add transformers 4.38.1 test to igpu benchmark
* use transformers 4.38.2 & fix csv name error in 4.38 workflow
* add model Yi-6B-Chat & remove temporarily most models
---------
Co-authored-by: ATMxsp01 <shou.xu@intel.com>
* filter some errorlevel (#11541 )
Co-authored-by: ATMxsp01 <shou.xu@intel.com>
* Restore the temporarily removed models in iGPU-perf (#11544 )
* filter some errorlevel
* restore the temporarily removed models in iGPU-perf
---------
Co-authored-by: ATMxsp01 <shou.xu@intel.com>
---------
Co-authored-by: Xu, Shuo <100334393+ATMxsp01@users.noreply.github.com>
Co-authored-by: ATMxsp01 <shou.xu@intel.com>
2024-07-09 18:51:23 +08:00
Xu, Shuo
64cfed602d
Add new models to benchmark ( #11505 )
...
* Add new models to benchmark
* remove Qwen/Qwen-VL-Chat to pass the validation
---------
Co-authored-by: ATMxsp01 <shou.xu@intel.com>
2024-07-08 10:35:55 +08:00
Yuwen Hu
8f376e5192
Change igpu perf to mainly test int4+fp16 ( #11513 )
2024-07-05 17:12:33 +08:00
Shaojun Liu
932ef78131
Update Workflow Inputs, Runner, and PR Validation Process ( #11501 )
...
* update check-artifact runner label to Shire
* update github.event.inputs to inputs
* update PR template
2024-07-03 16:49:54 +08:00
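The "update github.event.inputs to inputs" change above reflects the newer `inputs` context for manually dispatched workflows. A minimal sketch of that pattern, with a hypothetical `checkout-ref` input that is not taken from the actual workflow files:

```yaml
# Illustrative sketch only -- not the repository's actual workflow file.
name: manual-example
on:
  workflow_dispatch:
    inputs:
      checkout-ref:                       # hypothetical input name
        description: "Branch, tag, or SHA to check out"
        required: false
        default: "main"

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      # `inputs.checkout-ref` replaces the older `github.event.inputs.checkout-ref`.
      - uses: actions/checkout@v4
        with:
          ref: ${{ inputs.checkout-ref }}
```

The `inputs` context also resolves when the same workflow is invoked via `workflow_call`, which is one reason it is generally preferred over `github.event.inputs`.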
Shaojun Liu
e7ab93b55c
Update pull_request_template.md ( #11484 )
...
* Update pull_request_template.md
* refine
2024-07-03 11:13:16 +08:00
Jun Wang
18c973dc3e
Wang jun/ipex llm workflow ( #11499 )
...
* [update] merge manually build for testing function to manually build
* [FIX] change public type to string
* [FIX] change public type to string
* [FIX] remove github.event prefix for inputs
2024-07-03 10:13:42 +08:00
Yuwen Hu
e53bd4401c
Small typo fixes in binary build workflow ( #11494 )
2024-07-02 19:11:43 +08:00
Yuwen Hu
4e32c92979
Further fix for triggering perf test from commit ( #11493 )
...
* Further fix for triggering perf test from commit
* Small fix
2024-07-02 18:56:53 +08:00
Jun Wang
6352c718f3
[update] merge manually build for testing function to manually build ( #11491 )
2024-07-02 16:28:15 +08:00
Yuwen Hu
986b10e397
Further fix for performance tests triggered by pr ( #11488 )
2024-07-02 15:29:42 +08:00
Yuwen Hu
bb6953c19e
Support pr validate perf test ( #11486 )
...
* Support triggering performance tests through commits
* Small fix
* Small fix
* Small fixes
2024-07-02 15:20:42 +08:00
Shaojun Liu
a1164e45b6
Enable Release Pypi workflow to be called in another repo ( #11483 )
2024-07-01 19:48:21 +08:00
Yuwen Hu
fb4774b076
Update pull request template for manually-triggered Unit tests ( #11482 )
2024-07-01 19:06:29 +08:00
Yuwen Hu
ca24794dd0
Fixes for performance test triggering ( #11481 )
2024-07-01 18:39:54 +08:00
Yuwen Hu
6bdc562f4c
Enable triggering nightly tests/performance tests from another repo ( #11480 )
...
* Enable triggering from another workflow for nightly tests and example tests
* Enable triggering from another workflow for nightly performance tests
2024-07-01 17:45:42 +08:00
Yuwen Hu
dbba51f455
Enable LLM UT workflow to be called in another repo ( #11475 )
...
* Enable LLM UT workflow to be called in another repo
* Small fixes
* Small fix
2024-07-01 15:26:17 +08:00
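Making the LLM UT workflow callable from another repo, as in #11475 above, generally means adding a `workflow_call` trigger to it. A minimal sketch, reusing the llm_unit_tests.yml file name mentioned elsewhere in this log and placeholder job contents:

```yaml
# Illustrative sketch of a reusable workflow, e.g. .github/workflows/llm_unit_tests.yml
name: LLM Unit Tests
on:
  workflow_call:        # lets another workflow invoke this one via `uses:`
  workflow_dispatch:    # keeps manual runs available

jobs:
  unit-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: echo "run the LLM unit tests here"   # placeholder for the real test steps
```

A workflow in another repository could then reference it with `uses: <owner>/<repo>/.github/workflows/llm_unit_tests.yml@<ref>` in one of its jobs (owner, repo, and ref are placeholders).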
Shaojun Liu
13f59ae6b4
Fix llm binary build linux-build-avxvnni failure ( #11447 )
...
* skip gpg check failure
* skip gpg check
2024-06-27 14:12:14 +08:00
Yuwen Hu
75f836f288
Add extra warmup for THUDM/glm-4-9b-chat in igpu-performance test ( #11417 )
2024-06-24 18:08:05 +08:00
Shaojun Liu
5e823ef2ce
Fix nightly arc perf ( #11404 )
...
* pip install pytest for arc perf test
* trigger test
2024-06-24 15:58:41 +08:00
Shaojun Liu
5aa3e427a9
Fix docker images ( #11362 )
...
* Fix docker images
* add-apt-repository requires gnupg, gpg-agent, software-properties-common
* update
* avoid importing ipex again
2024-06-20 15:44:55 +08:00
Wenjing Margaret Mao
c0e86c523a
Add qwen-moe batch1 to nightly perf ( #11369 )
...
* add moe
* reduce 437 models
* rename
* fix syntax
* add moe check result
* add 430 + 437
* all modes
* 4-37-4 exclude
* revert & comment
---------
Co-authored-by: Yishuo Wang <yishuo.wang@intel.com>
2024-06-20 14:17:41 +08:00
Wenjing Margaret Mao
b2f62a8561
Add batch 4 perf test ( #11355 )
...
* copy files to this branch
* add tasks
* comment one model
* change the model to test the 4.36
* only test batch-4
* typo
* typo
* typo
* typo
* typo
* typo
* add 4.37-batch4
* change the file name
* revert yaml file
* no print
* add batch4 task
* revert
---------
Co-authored-by: Yishuo Wang <yishuo.wang@intel.com>
2024-06-20 09:48:52 +08:00
Qiyuan Gong
de4bb97b4f
Remove accelerate 0.23.0 install command in readme and docker ( #11333 )
...
* ipex-llm's accelerate has been upgraded to 0.23.0. Remove the accelerate 0.23.0 install command from README and docker.
2024-06-17 17:52:12 +08:00
Yuwen Hu
a2a5890b48
Make manually-triggered perf test able to choose which test to run ( #11324 )
2024-06-17 10:23:13 +08:00
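Letting a manually triggered run choose which test to execute (#11324 above) is commonly done with a `choice` input that gates jobs through `if:` conditions. A sketch under assumed input and job names, not the repository's actual ones:

```yaml
# Illustrative sketch -- input and job names are assumptions.
on:
  workflow_dispatch:
    inputs:
      test-to-run:
        description: "Which performance test to run"
        type: choice
        default: all
        options: [all, igpu-perf, arc-perf]

jobs:
  igpu-perf:
    if: ${{ inputs.test-to-run == 'all' || inputs.test-to-run == 'igpu-perf' }}
    runs-on: ubuntu-latest
    steps:
      - run: echo "run the iGPU performance test"
  arc-perf:
    if: ${{ inputs.test-to-run == 'all' || inputs.test-to-run == 'arc-perf' }}
    runs-on: ubuntu-latest
    steps:
      - run: echo "run the Arc performance test"
```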
Yuwen Hu
1978f63f6b
Fix igpu performance guide regarding html generation ( #11328 )
2024-06-17 10:21:30 +08:00
Wenjing Margaret Mao
b61f6e3ab1
Add update_parent_folder for nightly_perf_test ( #11287 )
...
* add update_parent_folder and change the workflow file
* add update_parent_folder and change the workflow file
* move to pr mode and comment the test
* use one model per config
* revert
---------
Co-authored-by: Yishuo Wang <yishuo.wang@intel.com>
2024-06-12 17:58:13 +08:00
Wenjing Margaret Mao
70b17c87be
Merge multiple batches ( #11264 )
...
* add merge steps
* move to pr mode
* remove build + add merge.py
* add tohtml and change cp
* change test_batch folder path
* change merge_temp path
* change to html folder
* revert
* change place
* revert 437
* revert space
---------
Co-authored-by: Yishuo Wang <yishuo.wang@intel.com>
2024-06-07 18:38:45 +08:00
Shaojun Liu
8aabb5bac7
Enable CodeQL Check for CT39 ( #11242 )
...
* Create codeql.yml
* Update codeql.yml
* Update codeql.yml
* Update codeql.yml
* Update codeql.yml
2024-06-06 17:41:12 +08:00
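The codeql.yml created in #11242 above is normally built on GitHub's CodeQL action. A minimal sketch, with the language list and schedule as assumptions:

```yaml
# Illustrative sketch of a codeql.yml -- language list and schedule are assumptions.
name: CodeQL
on:
  push:
    branches: [main]
  pull_request:
    branches: [main]
  schedule:
    - cron: "0 2 * * 1"               # weekly scan; time is illustrative

jobs:
  analyze:
    runs-on: ubuntu-latest
    permissions:
      contents: read                  # needed for checkout
      security-events: write          # needed to upload CodeQL results
    steps:
      - uses: actions/checkout@v4
      - uses: github/codeql-action/init@v3
        with:
          languages: python           # assumed language for this repository
      - uses: github/codeql-action/analyze@v3
```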
Wenjing Margaret Mao
c825a7e1e9
change the workflow file to test ftp ( #11241 )
...
* change the workflow to test ftp
* comment some models
* revert file
---------
Co-authored-by: Yishuo Wang <yishuo.wang@intel.com>
2024-06-06 16:53:19 +08:00
Wenjing Margaret Mao
231b968aba
Modify the check_results.py to support batch 2&4 ( #11133 )
...
* add batch 2&4 and exclude to perf_test
* modify the perf-test&437 yaml
* modify llm_performance_test.yml
* remove batch 4
* modify check_results.py to support batch 2&4
* change the batch_size format
* remove genxir
* add str(batch_size)
* change actual_test_cases in check_results file to support batch_size
* change html highlight
* less models to test html and html_path
* delete the moe model
* split batch html
* split
* use installing from pypi
* use installing from pypi - batch2
* revert cpp
* revert cpp
* merge two jobs into one, test batch_size in one job
* merge two jobs into one, test batch_size in one job
* change file directory in workflow
* use try/except to deal with odd files without batch_size
* modify pandas version
* change the dir
* organize the code
* organize the code
* remove Qwen-MOE
* modify based on feedback
* modify based on feedback
* modify based on second round of feedback
* modify based on second round of feedback + change run-arc.sh mode
* modify based on second round of feedback + revert config
* modify based on second round of feedback + revert config
* modify based on second round of feedback + remove comments
* modify based on second round of feedback + remove comments
* modify based on second round of feedback + revert arc-perf-test
* modify based on third round of feedback
* change error type
* change error type
* modify check_results.html
* split batch into two folders
* add all models
* move csv_name
* revert pr test
* revert pr test
---------
Co-authored-by: Yishuo Wang <yishuo.wang@intel.com>
2024-06-05 15:04:55 +08:00
Shaojun Liu
dc4fea7e3f
always cleanup conda env after build ( #11211 )
2024-06-05 13:46:30 +08:00
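The "always cleanup conda env after build" change in #11211 above maps naturally onto a step guarded by `if: always()`, which runs even when an earlier step fails. A sketch with a hypothetical env name and build command:

```yaml
# Illustrative sketch -- env name, runner label, and build command are assumptions.
name: build-with-cleanup
on: workflow_dispatch

jobs:
  build:
    runs-on: [self-hosted]                            # conda assumed to be on the runner
    steps:
      - uses: actions/checkout@v4
      - name: Build inside a throwaway conda env
        run: |
          conda create -n build-env -y python=3.11    # hypothetical env name
          conda run -n build-env pip wheel .          # hypothetical build command
      - name: Clean up conda env
        if: always()                                  # runs even if the build step failed
        run: conda remove -n build-env --all -y
```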
Yuwen Hu
9f8074c653
Add extra warmup for chatglm3-6b in igpu-performance test ( #11197 )
...
* Add extra warmup for chatglm3-6b to record more stable performance (int4+fp32)
* Small updates
2024-06-04 14:06:09 +08:00
Shaojun Liu
401013a630
Remove chatglm_C Module to Eliminate LGPL Dependency ( #11178 )
...
* remove chatglm_C.**.pyd to solve ngsolve weak copyright vuln
* fix style check error
* remove chatglm native int4 from langchain
2024-05-31 17:03:11 +08:00
Jin Qiao
25b6402315
Add Windows GPU unit test ( #11050 )
...
* Add Windows GPU UT
* temporarily remove ut on arc
* retry
* retry
* retry
* fix
* retry
* retry
* fix
* retry
* retry
* retry
* retry
* retry
* retry
* retry
* retry
* retry
* retry
* retry
* retry
* retry
* fix
* retry
* retry
* retry
* retry
* retry
* retry
* merge into single workflow
* retry inference test
* retry
* retrigger
* try to fix inference test
* retry
* retry
* retry
* retry
* retry
* retry
* retry
* retry
* retry
* retry
* retry
* check lower_bound
* retry
* retry
* try example test
* try fix example test
* retry
* fix
* separate function into shell script
* remove cygpath
* try remove all cygpath
* retry
* retry
* Revert "try remove all cygpath"
This reverts commit 7ceeff3e48f08429062ecef548c1a3ad3488756f.
* Revert "retry"
This reverts commit 40ea2457843bff6991b8db24316cde5de1d35418.
* Revert "retry"
This reverts commit 817d0db3e5aec3bd449d3deaf4fb01d3ecfdc8a3.
* enable ut
* fix
* retrigger
* retrigger
* update download url
* fix
* fix
* retry
* add comment
* fix
2024-05-28 13:29:47 +08:00
Yina Chen
b6b70d1ba0
Divide core-xe packages ( #11131 )
...
* temp
* add batch
* fix style
* update package name
* fix style
* add workflow
* use temp version to run uts
* trigger performance test
* trigger win igpu perf
* revert workflow & setup
2024-05-28 12:00:18 +08:00
Jiao Wang
0a06a6e1d4
Update tests for transformers 4.36 ( #10858 )
...
* update unit test
* update
* update
* update
* update
* update
* fix gpu attention test
* update
* update
* update
* update
* update
* update
* update example test
* replace replit code
* update
* update
* update
* update
* set safe_serialization false
* perf test
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* delete
* update
* update
* update
* update
* update
* update
* revert
* update
2024-05-24 10:26:38 +08:00
Yuwen Hu
1c5ed9b6cf
Fix arc ut ( #11096 )
2024-05-22 14:13:13 +08:00
Yuwen Hu
b3027e2d60
Update for cpu install option in performance tests ( #11060 )
2024-05-17 10:33:43 +08:00
Yuwen Hu
fff067d240
Make install ut for cpu exactly the same as what we want for users ( #11051 )
2024-05-17 10:11:01 +08:00
Shaojun Liu
c62e828281
Create release-ipex-llm.yaml ( #11039 )
2024-05-16 11:10:10 +08:00
Qiyuan Gong
4638682140
Fix xpu finetune image path in action ( #11037 )
...
* Fix xpu finetune image path in action
2024-05-16 10:48:02 +08:00
Xiangyu Tian
612a365479
LLM: Install CPU version torch with extras [all] ( #10868 )
...
Modify setup.py to install CPU version torch with extras [all]
2024-05-16 10:39:55 +08:00
Wang, Jian4
86cec80b51
LLM: Add llm inference_cpp_xpu_docker ( #10933 )
...
* test_cpp_docker
* update
* update
* update
* update
* add sudo
* update nodejs version
* no need npm
* remove blinker
* new cpp docker
* restore
* add line
* add manually_build
* update and add mtl
* update for workdir llm
* add benchmark part
* update readme
* update 1024-128
* update readme
* update
* fix
* update
* update
* update readme too
* update readme
* no change
* update dir_name
* update readme
2024-05-15 11:10:22 +08:00
Qiyuan Gong
1e00bd7bbe
Re-org XPU finetune images ( #10971 )
...
* Rename xpu finetune image from `ipex-llm-finetune-qlora-xpu` to `ipex-llm-finetune-xpu`.
* Add axolotl to xpu finetune image.
* Upgrade peft to 0.10.0, transformers to 4.36.0.
* Add accelerate default config to home.
2024-05-15 09:42:43 +08:00
Yuwen Hu
8010af700f
Update igpu performance test to use pypi installed oneAPI ( #11010 )
2024-05-14 14:05:33 +08:00
Kai Huang
f8dd2e52ad
Fix Langchain upstream ut ( #10985 )
...
* Fix Langchain upstream ut
* Small fix
* Install bigdl-llm
* Update run-langchain-upstream-tests.sh
* Update run-langchain-upstream-tests.sh
* Update llm_unit_tests.yml
* Update run-langchain-upstream-tests.sh
* Update llm_unit_tests.yml
* Update run-langchain-upstream-tests.sh
* fix git checkout
* fix
---------
Co-authored-by: Zhangky11 <2321096202@qq.com>
Co-authored-by: Keyan (Kyrie) Zhang <79576162+Zhangky11@users.noreply.github.com>
2024-05-11 14:40:37 +08:00
Yuwen Hu
9f6358e4c2
Deprecate support for pytorch 2.0 on Linux for ipex-llm >= 2.1.0b20240511 ( #10986 )
...
* Remove xpu_2.0 option in setup.py
* Disable xpu_2.0 test in UT and nightly
* Update docs for deprecated pytorch 2.0
* Small doc update
2024-05-11 12:33:35 +08:00
Wang, Jian4
459b764406
Remove manually_build_for_test push outside ( #10968 )
2024-05-09 10:40:34 +08:00
Zephyr1101
7e7d969dcb
An experimental fix for workflow abuse, step 1: fix a typo ( #10965 )
...
* Update llm_unit_tests.yml
* Update README.md
* Update llm_unit_tests.yml
* Update llm_unit_tests.yml
2024-05-08 17:12:50 +08:00
Qiyuan Gong
d7ca5d935b
Upgrade Peft version to 0.10.0 for LLM finetune ( #10886 )
...
* Upgrade Peft version to 0.10.0
* Upgrade Peft version in ARC unit test and HF-Peft example.
2024-05-07 15:09:14 +08:00
Wang, Jian4
33b8f524c2
Add cpp docker manually_test ( #10946 )
...
* add cpp docker
* update
2024-05-07 11:23:28 +08:00
Yuwen Hu
c936ba3b64
Small fix for supporting workflow dispatch in nightly perf ( #10908 )
2024-04-29 13:25:14 +08:00
Yuwen Hu
94b4e96fa6
Small updates for workflow-dispatch triggered nightly perf ( #10902 )
...
* Small fix for workflow-dispatch triggered nightly perf
* Small fix
2024-04-28 11:27:20 +08:00
Yuwen Hu
7c290d3f92
Add workflow dispatch trigger to nightly perf ( #10900 )
2024-04-28 09:54:30 +08:00
Yuwen Hu
c7235e34a8
Small update to ut ( #10804 )
2024-04-19 10:59:00 +08:00
Zhicun
88463cbf47
fix transformer version ( #10788 )
...
* fix transformer version
* uninstall sentence transformer
* uninstall
* uninstall
2024-04-18 17:37:21 +08:00
Wenjing Margaret Mao
63a9a736be
Merge branch 'intel-analytics:main' into MargarettMao-parent_folder
2024-04-11 07:18:19 +08:00
Wenjing Margaret Mao
50dfcaa8fa
Update llm-ppl-evaluation.yml -- Update HTML file: change from ppl/update_in_parent_folder to harness/update_in_parent_folder
...
The ppl test and the harness test use the same update_in_parent_folder file. To reduce repetition, point the ppl HTML update at the same file under the harness folder and delete the HTML file under the ppl folder.
2024-04-11 07:15:18 +08:00
Yuwen Hu
97db2492c8
Update setup.py for bigdl-core-xe-esimd-21 on Windows ( #10705 )
...
* Support bigdl-core-xe-esimd-21 for windows in setup.py
* Update setup-llm-env accordingly
2024-04-09 18:21:21 +08:00
Shaojun Liu
e10040b7f1
upgrade to python 3.11 ( #10695 )
2024-04-09 17:04:42 +08:00
Chen, Zhentao
d59e0cce5c
Migrate harness to ipexllm ( #10703 )
...
* migrate to ipexlm
* fix workflow
* fix run_multi
* fix precision map
* rename ipexlm to ipexllm
* rename bigdl to ipex in comments
2024-04-09 15:48:53 +08:00
Zhicun
f03c029914
pydantic version>=2.0.0 for llamaindex ( #10694 )
...
* pydantic version
* pydantic version
* upgrade version
2024-04-09 09:48:42 +08:00
Shaojun Liu
db7c5cb78f
update model path for spr perf test ( #10687 )
...
* update model path for spr perf test
* revert
2024-04-08 10:21:56 +08:00
Shaojun Liu
d18dbfb097
update spr perf test ( #10644 )
2024-04-03 15:53:55 +08:00
Shaojun Liu
0779ca3db0
Bump ossf/scorecard-action to v2.3.1 ( #10639 )
...
* Bump ossf/scorecard-action to v2.3.1
* revert
2024-04-03 11:14:18 +08:00
Shaojun Liu
dfcf08c58a
update ossf/scorecard-action to fix TUF invalid key bug ( #10635 )
2024-04-03 09:55:32 +08:00
Shaojun Liu
a10f5a1b8d
add python style check ( #10620 )
...
* add python style check
* fix style checks
* update runner
* add ipex-llm-finetune-qlora-cpu-k8s to manually_build workflow
* update tag to 2.1.0-SNAPSHOT
2024-04-02 16:17:56 +08:00
Shaojun Liu
20a5e72da0
refine and verify ipex-llm-serving-xpu docker document ( #10615 )
...
* refine serving on cpu/xpu
* minor fix
* replace localhost with 0.0.0.0 so that service can be accessed through ip address
2024-04-02 11:45:45 +08:00
Shaojun Liu
c4b533f0e1
nightly build docker images ( #10585 )
...
* nightly build docker images
2024-03-29 16:12:28 +08:00
Cheen Hau, 俊豪
1c5eb14128
Update pip install to use --extra-index-url for ipex package ( #10557 )
...
* Change to 'pip install .. --extra-index-url' for readthedocs
* Change to 'pip install .. --extra-index-url' for examples
* Change to 'pip install .. --extra-index-url' for remaining files
* Fix URL for ipex
* Add links for ipex US and CN servers
* Update ipex cpu url
* remove readme
* Update for github actions
* Update for dockerfiles
2024-03-28 09:56:23 +08:00
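The change in #10557 above moves installs to `--extra-index-url`, which keeps PyPI as the primary index and adds the IPEX wheel index as a secondary one. A sketch of such an install step; the index URL is a placeholder and the `[xpu]` extra is shown only for illustration:

```yaml
# Illustrative step fragment -- the index URL below is a placeholder, not the real one.
steps:
  - name: Install ipex-llm with an extra wheel index
    run: |
      # --extra-index-url adds a secondary index; PyPI stays the default,
      # so ordinary dependencies still resolve from PyPI.
      pip install --pre --upgrade "ipex-llm[xpu]" \
        --extra-index-url https://example.com/ipex-wheel-index/
```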
Shaojun Liu
924e01b842
Create scorecard.yml ( #10559 )
2024-03-27 16:51:10 +08:00
Shaojun Liu
bb9be70105
replace bigdl-llm with ipex-llm ( #10545 )
2024-03-26 15:12:38 +08:00
Shaojun Liu
c563b41491
add nightly_build workflow ( #10533 )
...
* add nightly_build workflow
* add create-job-status-badge action
* update
* update
* update
* update setup.py
* release
* revert
2024-03-26 12:47:38 +08:00
Shaojun Liu
93e6804bfe
update nightly test ( #10520 )
...
* trigger nightly test
* trigger perf test
* update bigdl-llm to ipex-llm
* revert
2024-03-25 18:22:05 +08:00
Wang, Jian4
a1048ca7f6
Update setup.py, add new actions, and add compatible mode ( #25 )
...
* update setup.py
* add new action
* add compatible mode
2024-03-22 15:44:59 +08:00
Yuwen Hu
1579ee4421
[LLM] Add nightly igpu perf test for INT4+FP16 1024-128 ( #10496 )
2024-03-21 16:07:06 +08:00
Shaojun Liu
a57fd52a5b
pip install notebook ( #10444 )
2024-03-18 13:56:34 +08:00
Keyan (Kyrie) Zhang
444b11af22
Add LangChain upstream ut test for ipynb ( #10387 )
...
* Add LangChain upstream ut test for ipynb
* Integrate unit test for LangChain upstream ut and ipynb into one file
* Modify file name
* Remove LangChain version update in unit test
* Move Langchain upstream ut job to arc
* Modify path in .yml file
* Modify path in llm_unit_tests.yml
* Avoid create directory repeatedly
2024-03-15 16:31:01 +08:00
Yuxuan Xia
a90e9b6ec2
Fix C-Eval Workflow ( #10359 )
...
* Fix Baichuan2 prompt format
* Fix ceval workflow errors
* Fix ceval workflow error
* Fix ceval error
* Fix ceval error
* Test ceval
* Fix ceval
* Fix ceval
* Fix ceval
* Fix ceval
* Fix ceval
* Fix ceval
* Fix ceval
* Fix ceval
* Fix ceval
* Fix ceval
* Fix ceval
* Fix ceval
* Add ceval dependency test
* Fix ceval
* Fix ceval
* Test full ceval
* Test full ceval
* Fix ceval
* Fix ceval
2024-03-13 17:23:17 +08:00
Keyan (Kyrie) Zhang
7cf01e6ec8
Add LangChain upstream ut test ( #10349 )
...
* Add LangChain upstream ut test
* Add LangChain upstream ut test
* Specify version numbers in yml script
* Correct langchain-community version
2024-03-13 09:52:45 +08:00
binbin Deng
df3bcc0e65
LLM: remove english_quotes dataset ( #10370 )
2024-03-12 16:57:40 +08:00
WeiguangHan
17bdb1a60b
LLM: add whisper models into nightly test ( #10193 )
...
* LLM: add whisper models into nightly test
* small fix
* small fix
* add more whisper models
* test all cases
* test specific cases
* collect the csv
* store the result
* to html
* small fix
* small test
* test all cases
* modify whisper_csv_to_html
2024-03-11 20:00:47 +08:00
Chen, Zhentao
a425eaabfc
fix from_pretrained when device_map=None ( #10361 )
...
* pr trigger
* fix error when device_map=None
* fix device_map=None
2024-03-11 16:06:12 +08:00
Keyan (Kyrie) Zhang
f1825d7408
Add RMSNorm unit test ( #10190 )
2024-03-08 15:51:03 +08:00
Yuxuan Xia
0c8d3c9830
Add C-Eval HTML report ( #10294 )
...
* Add C-Eval HTML report
* Fix C-Eval workflow pr trigger path
* Fix C-Eval workflow typos
* Add permissions to C-Eval workflow
* Fix C-Eval workflow typo
* Add pandas dependency
* Fix C-Eval workflow typo
2024-03-07 16:44:49 +08:00
hxsz1997
b7db21414e
Update llamaindex ut ( #10338 )
...
* add test_llamaindex of gpu
* add llamaindex gpu tests bash
* add llamaindex cpu tests bash
* update name of Run LLM langchain GPU test
* import llama_index in llamaindex gpu ut
* update the dependency of test_llamaindex
* add Run LLM llamaindex GPU test
* modify import dependency of llamaindex cpu test
* add Run LLM llamaindex test
* update llama_model_path
* delete unused model path
* add LLAMA2_7B_ORIGIN_PATH in llamaindex cpu test
2024-03-07 10:06:16 +08:00
dingbaorong
fc7f10cd12
add langchain gpu example ( #10277 )
...
* first draft
* fix
* add readme for transformer_int4_gpu
* fix doc
* check device_map
* add arc ut test
* fix ut test
* fix langchain ut
* Refine README
* fix gpu mem too high
* fix ut test
---------
Co-authored-by: Ariadne <wyn2000330@126.com>
2024-03-05 13:33:57 +08:00
Yuwen Hu
5dbbe1a826
[LLM] Support for new arc ut runner ( #10311 )
...
* Support for new arc ut runner
* Comment unnecessary OMP_NUM_THREADS related settings for arc uts
2024-03-04 18:42:02 +08:00
Yuwen Hu
d45e577d8c
[LLM] Test load_low_bit in iGPU perf test on Windows ( #10313 )
2024-03-04 18:03:57 +08:00
Shaojun Liu
bab2ee5f9e
update nightly spr perf test ( #10178 )
...
* update nightly spr perf test
* update
* update runner label
* update
* update
* update folder
* revert
2024-03-04 13:46:33 +08:00
Shaojun Liu
57e211dab4
topLevel 'contents' permission set to 'read' ( #10295 )
2024-03-04 10:33:19 +08:00
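Setting the top-level `contents` permission to `read` (#10295 above) narrows the default token scope for every job unless a job overrides it. In workflow syntax it is a single top-level stanza:

```yaml
# Illustrative sketch: top-level permissions apply to all jobs
# unless a job declares its own `permissions:` block.
permissions:
  contents: read

on:
  push:
    branches: [main]

jobs:
  check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4   # checking out code only needs read access
```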
hxsz1997
925aff730e
Integrate the result of ppl and harness ( #10265 )
...
* modify NIGHTLY_MATRIX_PRECISION
* change ACC_FOLDER of harness
* change ACC_FOLDER of ppl
2024-02-28 17:53:02 +08:00
Yuwen Hu
d85f7c78df
Small fix for better trail ( #10256 )
2024-02-27 20:00:40 +08:00
hxsz1997
cba61a2909
Add html report of ppl ( #10218 )
...
* remove include and language option, select the corresponding dataset based on the model name in Run
* change the nightly test time
* change the nightly test time of harness and ppl
* save the ppl result to json file
* generate csv file and print table result
* generate html
* modify the way to get parent folder
* update html in parent folder
* add llm-ppl-summary and llm-ppl-summary-html
* modify echo single result
* remove download fp16.csv
* change model name of PR
* move ppl nightly related files to llm/test folder
* reformat
* separate make_table from make_table_and_csv.py
* separate make_csv from make_table_and_csv.py
* update llm-ppl-html
* remove comment
* add Download fp16.results
2024-02-27 17:37:08 +08:00
hxsz1997
15ad2fd72e
Merge pull request #10226 from zhentaocc/fix_harness
...
Fix harness
2024-02-26 16:49:27 +08:00
Chen, Zhentao
5ad752bae8
Separate llmcpp build of linux and windows ( #10136 )
...
* separate linux and windows llmcpp build
* harness run on linux only
* fix platform
* skip error
* change to linux only build
* add judgement of platform
* add download args
* remove ||true
2024-02-26 15:04:29 +08:00
Chen, Zhentao
62350a36f0
fix if in update html
2024-02-26 13:39:59 +08:00
Yuxuan Xia
0c6aef0f47
Add einops dependency for C-Eval ( #10234 )
...
* Add c-eval workflow and modify running files
* Modify the chatglm evaluator file
* Modify the ceval workflow for triggering test
* Modify the ceval workflow file
* Modify the ceval workflow file
* Modify ceval workflow
* Adjust the ceval dataset download
* Add ceval workflow dependencies
* Modify ceval workflow dataset download
* Add ceval test dependencies
* Add ceval test dependencies
* Correct the result print
* Fix the nightly test trigger time
* Fix ChatGLM loading issue
* Add einops dependency
2024-02-26 10:13:10 +08:00
Chen, Zhentao
85d13c65de
run one job only if triggered by pr
2024-02-24 00:33:33 +08:00