Commit graph

279 commits

Author SHA1 Message Date
Chen, Zhentao
f36d7b2d59 Fix harness stuck (#9435)
* remove env to avoid being stuck

* use small model for test
2023-11-13 15:29:53 +08:00
Yuwen Hu
4faf5af8f1 [LLM] Add perf test for core on Windows (#9397)
* temporary stop other perf test

* Add framework for core performance test with one test model

* Small fix and add platform control

* Comment out lp for now

* Add missing yaml file

* Small fix

* Fix sed contents

* Small fix

* Small path fixes

* Small fix

* Add update to ftp

* Small upload fix

* add chatglm3-6b

* LLM: add model names

* Keep repo id same as ftp and temporarily make baichuan2 first priority

* change order

* Remove temp if false and separate pr and nightly results

* Small fix

---------

Co-authored-by: jinbridge <2635480475@qq.com>
2023-11-13 13:58:40 +08:00
WeiguangHan
2cfef5ef1e LLM: store the nightly test and pr results separately (#9404)
* LLM: store the csv results separately

* modify the trigger files of LLM Performance Test
2023-11-11 06:35:27 +08:00
Yuwen Hu
3d107f6d25 [LLM] Separate windows build UT and build runner (#9403)
* Separate windows build UT and build runner

* Small fix
2023-11-09 18:47:38 +08:00
WeiguangHan
34449cb4bb LLM: add remaining models to the arc perf test (#9384)
* add remaining models

* modify the filepath which stores the test result on ftp server

* resolve some comments
2023-11-09 14:28:42 +08:00
Yuwen Hu
d4b248fcd4 Add windows binary build label AVX_VNNI (#9387) 2023-11-08 18:13:35 +08:00
Chen, Zhentao
298b64217e add auto triggered acc test (#9364)
* add auto triggered acc test

* use llama 7b instead

* fix env

* debug download

* fix download prefix

* add cut dirs

* fix env of model path

* fix dataset download

* full job

* source xpu env vars

* use matrix to trigger model run

* reset batch=1

* remove redirect

* remove some trigger

* add task matrix

* add precision list

* test llama-7b-chat

* use /mnt/disk1 to store model and datasets

* remove installation test

* correct downloading path

* fix HF vars

* add bigdl-llm env vars

* rename file

* fix hf_home

* fix script path

* rename as harness evaluation

* rerun
2023-11-08 10:22:27 +08:00
WeiguangHan
84ab614aab LLM: add more models and skip runtime error (#9349)
* add more models and skip runtime error

* upgrade transformers

* temporarily removed Mistral-7B-v0.1

* temporarily disable the upload of arc perf result
2023-11-08 09:45:53 +08:00
Shaojun Liu
833e4dbc8d fix llm-performance-test-on-arc bug (#9357) 2023-11-06 10:00:25 +08:00
ZehuaCao
ef83c3302e Use to test llm-performance on spr-perf (#9316)
* Update llm_performance_tests.yml

* Update llm_performance_tests.yml

* Update action.yml

* Create cpu-perf-test.yaml

* Update action.yml

* Update action.yml

* Update llm_performance_tests.yml

* Update llm_performance_tests.yml

* Update llm_performance_tests.yml

* Update llm_performance_tests.yml

* Update llm_performance_tests.yml

* Update llm_performance_tests.yml

* Update llm_performance_tests.yml

* Update llm_performance_tests.yml

* Update llm_performance_tests.yml

* Update llm_performance_tests.yml

* Update llm_performance_tests.yml
2023-11-03 11:17:16 +08:00
Cheen Hau, 俊豪
8f23fb04dc Add inference test for Whisper model on Arc (#9330)
* Add inference test for Whisper model

* Remove unnecessary inference time measurement
2023-11-03 10:15:52 +08:00
Ziteng Zhang
dd3cf2f153 LLM: Add python 3.10 & 3.11 UT
2023-11-02 14:09:29 +08:00
Jasonzzt
d1bdc0ef72 spr & arc ut with python 3.9 & 3.10 & 3.11 2023-11-01 22:57:48 +08:00
Jasonzzt
687da21467 test 3.11 2023-11-01 19:14:53 +08:00
WeiguangHan
9722e811be LLM: add more models to the arc perf test (#9297)
* LLM: add more models to the arc perf test

* remove some old models

* install some dependencies
2023-11-01 16:56:32 +08:00
Jasonzzt
3c3329010d add conda update -n base conda 2023-11-01 16:36:35 +08:00
Jasonzzt
2fff0e8c21 use runner avx2 with linux 2023-11-01 16:28:29 +08:00
Jasonzzt
964a8e6dc1 update conda 2023-11-01 16:20:19 +08:00
Jasonzzt
cb7ef38e86 rerun 2023-11-01 15:30:34 +08:00
Jasonzzt
8f6e979fad test again 2023-11-01 15:10:11 +08:00
Jasonzzt
b66584f23b test 2023-11-01 14:51:23 +08:00
Jasonzzt
ba148ff3ff test py311 2023-11-01 14:08:49 +08:00
Jasonzzt
6f1cee90a4 test 2023-11-01 13:58:03 +08:00
Jasonzzt
d51821e264 test 2023-11-01 13:49:32 +08:00
Jasonzzt
7c7a7f2ec1 spr & arc ut with python 3.9 & 3.10 & 3.11 2023-11-01 13:17:13 +08:00
Jasonzzt
4f9fd0dffd arc-ut with 3.10 & 3.11 2023-11-01 10:51:57 +08:00
Cengguang Zhang
d4ab5904ef LLM: Add python 3.10 llm UT (#9302)
* add py310 test for llm-unit-test.

* add py310 llm-unit-tests

* add llm-cpp-build-py310

* test

* test

* test.

* test

* test

* fix deactivate.

* fix

* fix.

* fix

* test

* test

* test

* add build chatglm for win.

* test.

* fix
2023-11-01 10:15:32 +08:00
WeiguangHan
03aa368776 LLM: add the comparison between latest arc perf test and last one (#9296)
* add the comparison between latest test and last one to html

* resolve some comments

* modify some code logics
2023-11-01 09:53:02 +08:00
Cheen Hau, 俊豪
d638b93dfe Add test script and workflow for qlora fine-tuning (#9295)
* Add test script and workflow for qlora fine-tuning

* Test fix export model

* Download dataset

* Fix export model issue

* Reduce number of training steps

* Rename script

* Correction
2023-11-01 09:39:53 +08:00
Yuwen Hu
21631209a9 [LLM] Skip CPU performance test for now (#9291)
* Skip llm cpu performance test for now

* Add install for wheel package
2023-10-27 12:55:04 +08:00
Ziteng Zhang
46ab0419b8 Merge pull request #9279 from Jasonzzt/main
Add bigdl-llm-finetune-cpu to manually_build to upload image on hub
2023-10-27 09:55:08 +08:00
Yuwen Hu
733df28a2b [LLM] Migrate Arc UT to another runner (#9286)
* Separate arc llm ut to another runner

* Add dependency for einops
2023-10-26 19:08:57 +08:00
Ziteng Zhang
916ccc0779 Update manually_build_for_testing.yml 2023-10-26 16:26:14 +08:00
Ziteng Zhang
14a23015f8 Update manually_build.yml 2023-10-26 16:24:03 +08:00
Jasonzzt
37b1708d16 Add bigdl-llm-finetune-cpu to manually_build 2023-10-26 15:53:44 +08:00
Lilac09
4ed7f066d3 add bigdl-llm-finetune-xpu to manually_build (#9278) 2023-10-26 15:30:05 +08:00
Cheen Hau, 俊豪
ab40607b87 Enable unit test workflow on Arc (#9213)
* Add gpu workflow and a transformers API inference test

* Set device-specific env variables in script instead of workflow

* Fix status message

---------

Co-authored-by: sgwhat <ge.song@intel.com>
2023-10-25 15:17:18 +08:00
SONG Ge
160a1e5ee7 [WIP] Add UT for Mistral Optimized Model (#9248)
* add ut for mistral model

* update

* fix model path

* upgrade transformers version for mistral model

* refactor correctness ut for mistral model

* refactor mistral correctness ut

* revert test_optimize_model back

* remove mistral from test_optimize_model

* add to revert transformers version back to 4.31.0
2023-10-25 15:14:17 +08:00
WeiguangHan
ec9195da42 LLM: using html to visualize the perf result for Arc (#9228)
* LLM: using html to visualize the perf result for Arc

* deploy the html file

* add python license

* resolve some comments
2023-10-24 18:05:25 +08:00
Guancheng Fu
f37547249d Refine README/CICD (#9253) 2023-10-24 12:56:03 +08:00
Guancheng Fu
9faa2f1eef Fix bigdl-llm-serving-tdx image (#9251) 2023-10-24 10:49:35 +08:00
Guancheng Fu
6cb884d82d Fix missing manually_build_for_testing entry (#9245) 2023-10-23 16:35:09 +08:00
Guancheng Fu
2ead3f7d54 add manually build (#9244) 2023-10-23 15:53:30 +08:00
WeiguangHan
f87f67ee1c LLM: arc perf test for some popular models (#9188) 2023-10-19 15:56:15 +08:00
ZehuaCao
65dd73b62e Update manually_build.yml (#9138)
* Update manually_build.yml

fix llm-serving-tdx image build dir

* Update manually_build.yml
2023-10-11 15:07:09 +08:00
Yuwen Hu
dc70fc7b00 Update performance tests for dependency of bigdl-core-xe-esimd (#9124) 2023-10-10 19:32:17 +08:00
Yuwen Hu
0e09dd926b [LLM] Fix example test (#9118)
* Update llm example test link due to example layout change

* Add better change detect
2023-10-10 13:24:18 +08:00
Zhengjin Wang
0dbb3a283e amend manually_build 2023-10-10 10:03:23 +08:00
Zhengjin Wang
bb3bb46400 add llm-serving-xpu on github action 2023-10-10 09:48:58 +08:00
Yuwen Hu
65212451cc [LLM] Small update to performance tests (#9106)
* small updates to llm performance tests regarding model handling

* Small fix
2023-10-09 16:55:25 +08:00
ZehuaCao
aad68100ae Add trusted-bigdl-llm-serving-tdx image. (#9093)
* add entrypoint in cpu serving

* kubernetes support for fastchat cpu serving

* Update Readme

* add image to manually_build action

* update manually_build.yml

* update README.md

* update manually_build.yaml

* update attestation_cli.py

* update manually_build.yml

* update Dockerfile

* rename

* update trusted-bigdl-llm-serving-tdx Dockerfile
2023-10-08 10:13:51 +08:00
ZehuaCao
b773d67dd4 Add Kubernetes support for BigDL-LLM-serving CPU. (#9071) 2023-10-07 09:37:48 +08:00
Lilac09
c91b2bd574 fix:modify indentation (#9070)
* modify Dockerfile

* add README.md

* add README.md

* Modify Dockerfile

* Add bigdl inference cpu image build

* Add bigdl llm cpu image build

* Add bigdl llm cpu image build

* Add bigdl llm cpu image build

* Modify Dockerfile

* Add bigdl inference cpu image build

* Add bigdl inference cpu image build

* Add bigdl llm xpu image build

* manually build

* recover file

* manually build

* recover file

* modify indentation
2023-09-27 14:53:52 +08:00
Lilac09
ecee02b34d Add bigdl llm xpu image build (#9062)
* modify Dockerfile

* add README.md

* add README.md

* Modify Dockerfile

* Add bigdl inference cpu image build

* Add bigdl llm cpu image build

* Add bigdl llm cpu image build

* Add bigdl llm cpu image build

* Modify Dockerfile

* Add bigdl inference cpu image build

* Add bigdl inference cpu image build

* Add bigdl llm xpu image build
2023-09-26 14:29:03 +08:00
Lilac09
9ac950fa52 Add bigdl llm cpu image build (#9047)
* modify Dockerfile

* add README.md

* add README.md

* Modify Dockerfile

* Add bigdl inference cpu image build

* Add bigdl llm cpu image build

* Add bigdl llm cpu image build

* Add bigdl llm cpu image build
2023-09-26 13:22:11 +08:00
Yuwen Hu
c389e1323d fix xpu performance tests by making sure that latest bigdl-core-xe is installed (#9001) 2023-09-19 17:33:30 +08:00
Wang Jian
7563b26ca9 Occlum fastchat build: use nocache and update order (#8972) 2023-09-14 14:05:15 +08:00
Yuwen Hu
ca35c93825 [LLM] Fix langchain UT (#8929)
* Change dependency version for langchain uts

* Downgrade pandas version instead; and update example readme accordingly
2023-09-08 13:51:04 +08:00
xingyuan li
704a896e90 [LLM] Add perf test on xpu for bigdl-llm (#8866)
* add xpu latency job
* update install way
* remove duplicated workflow
* add perf upload
2023-09-05 17:36:24 +09:00
xingyuan li
de6c6bb17f [LLM] Downgrade amx build gcc version and remove avx flag display (#8856)
* downgrade to gcc 11
* remove avx display
2023-08-31 14:08:13 +09:00
Shengsheng Huang
7b566bf686 [LLM] add new API for optimize any pytorch models (#8827)
* add new API for optimize any pytorch models

* change test util name

* revise API and update UT

* fix python style

* update ut config, change default value

* change defaults, disable ut transcribe
2023-08-30 19:41:53 +08:00
Wang Jian
954ef954b6 [PPML] Add occlum llm image manually build (#8849) 2023-08-30 11:31:47 +08:00
xingyuan li
67052198eb [LLM] Build with multiprocess (#8797)
* build with multiprocess
2023-08-29 10:49:52 +09:00
xingyuan li
6a902b892e [LLM] Add amx build step (#8822)
* add amx build step
2023-08-28 17:41:18 +09:00
Song Jiaming
b8b1b6888b [LLM] Performance test (#8796) 2023-08-25 14:31:45 +08:00
SONG Ge
d2926c7672 [LLM] Unify Langchain Native and Transformers LLM API (#8752)
* deprecate BigDLNativeTransformers and add specific LMEmbedding method

* deprecate and add LM methods for langchain llms

* add native params to native langchain

* new imple for embedding

* move ut from bigdlnative to causal llm

* rename embeddings api and examples update align with usage updating

* docqa example hot-fix

* add more api docs

* add langchain ut for starcoder

* support model_kwargs for transformer methods when calling causalLM and add ut

* ut fix for transformers embedding

* update for langchain causal supporting transformers

* remove model_family in readme doc

* add model_families params to support more models

* update api docs and remove chatglm embeddings for now

* remove chatglm embeddings in examples

* new refactor for ut to add bloom and transformers llama ut

* disable llama transformers embedding ut
2023-08-25 11:14:21 +08:00
xingyuan li
9537194b4b [LLM] Fix llm test workflow repeatedly download model files 2023-08-25 11:20:46 +09:00
Jin Hanyu
a73a3e5ff9 Fix bugs in manually_build_for_testing.yml. (#8792) 2023-08-23 15:49:23 +08:00
xingyuan li
c94bdd3791 [LLM] Merge windows & linux nightly test (#8756)
* fix download statement
* add check before build wheel
* use curl to upload files
* windows unittest won't upload converted model
* split llm-cli test into windows & linux versions
* update tempdir create way
* fix nightly converted model name
* windows llm-cli starcoder test temporarily disabled
* remove taskset dependency
* rename llm_unit_tests_linux to llm_unit_tests
2023-08-23 12:48:41 +09:00
Shaojun Liu
394304b918 Reorganize llm test (#8766)
* run llm-example-test in llm-nightly-test.yml

* comment out the schedule event
2023-08-17 09:42:25 +08:00
Shaojie Cui
0a8db3abe0 [PPML]refactor python toolkit (#8740)
* add dependency and example

* fix stage 3

* downgrade protobuf

* reduce epc memory

* add script

* Readme reduction

* delete unused note
2023-08-15 10:11:53 +08:00
xingyuan li
1cb8f5abbd [LLM] Revert compile OS for llm build workflow (#8732)
* use almalinux to build
2023-08-11 17:47:45 +09:00
xingyuan li
33d9ad234f [LLM] Linux vnni build with ubuntu 18.04 (#8710)
* move from almalinux
2023-08-10 19:04:03 +09:00
Song Jiaming
e717e304a6 LLM first example test and template (#8658) 2023-08-10 10:03:11 +08:00
Yishuo Wang
710b9b8982 [LLM] add linux chatglm pybinding binary file (#8698) 2023-08-08 11:16:30 +08:00
xingyuan li
4482ccb329 [LLM] Change build system from centos7 to ubuntu18.04 (#8686)
* centos7 to ubuntu18
* ubuntu git version 2.17 need to update
* use almalinux8 to build avx2 binaries
2023-08-07 19:09:58 +09:00
Yishuo Wang
5837cc424a [LLM] add chatglm pybinding binary file release (#8677) 2023-08-04 11:45:27 +08:00
xingyuan li
bc4cdb07c9 Remove conda for llm workflow (#8671) 2023-08-04 12:09:42 +09:00
xingyuan li
110cfb5546 [LLM] Remove old windows nightly test code (#8668)
Remove old Windows nightly test code triggered by task scheduler
Add new Windows nightly workflow for nightly testing
2023-08-03 17:12:23 +09:00
Yina Chen
bd177ab612 [LLM] llm binary build linux add avx & avx2 (#8665)
* llm add linux avx & avx2 release

* fix name

* update check
2023-08-03 14:38:31 +08:00
xingyuan li
610084e3c0 [LLM] Complete windows unittest (#8611)
* add windows nightly test workflow
* use github runner to run pr test
* model load should use lowbit
* remove tmp dir after testing
2023-08-03 14:48:42 +09:00
Xin Qiu
0714888705 build windows avx dll (#8657)
* windows avx

* add to actions
2023-08-03 02:06:24 +08:00
Yina Chen
15b3adc7ec [LLM] llm linux binary make -> cmake (#8656)
* llm linux make -> cmake

* update

* update
2023-08-02 16:41:54 +08:00
xingyuan li
769209b7f0 Chatglm unittest disabled due to missing instruction (#8650) 2023-08-02 10:28:42 +09:00
xingyuan li
cdfbe652ca [LLM] Add chatglm support for llm-cli (#8641)
* add chatglm build
* add llm-cli support
* update git
* install cmake
* add ut for chatglm
* add files to setup
* fix bug causing permission error when sf lacks file
2023-08-01 14:30:17 +09:00
xingyuan li
3361b66449 [LLM] Revert llm-cli to disable selecting executables on Windows (#8630)
* revert vnni file select
* revert setup.py
* add model-api.dll
2023-07-31 11:15:44 +09:00
xingyuan li
919791e406 Add needs to make sure run in order (#8621) 2023-07-26 14:16:57 +09:00
xingyuan li
e3418d7e61 [LLM] Remove concurrency group for binary build workflow (#8619)
* remove concurrency group for nightly test
2023-07-26 12:15:53 +09:00
xingyuan li
a98b3fe961 Fix cancel flag causing nightly builds to fail (#8618) 2023-07-26 11:11:08 +09:00
xingyuan li
7d45233825 fix trigger enable flag (#8616) 2023-07-26 10:53:03 +09:00
Guancheng Fu
07d1aee825 [PPML] add fastchat image for tdx (#8610) 2023-07-25 15:23:41 +08:00
Song Jiaming
650b82fa6e [LLM] add CausalLM and Speech UT (#8597) 2023-07-25 11:22:36 +08:00
xingyuan li
9c897ac7db [LLM] Merge redundant code in workflow (#8596)
* modify workflow concurrency group
* Add build check to avoid repeated compilation
* remove redundant code
2023-07-25 12:12:00 +09:00
Yuwen Hu
bbde423349 [LLM] Add current Linux UT inference tests to nightly tests (#8578)
* Add current inference uts to nightly tests

* Change test model from chatglm-6b to chatglm2-6b

* Add thread num env variable for nightly test

* Fix urls

* Small fix
2023-07-21 13:26:38 +08:00
Yuwen Hu
2266ca7d2b [LLM] Small updates to transformers int4 ut (#8574)
* Small fix to transformers int4 ut

* Small fix
2023-07-20 13:20:25 +08:00
xingyuan li
2eeb653c75 fix llm build workflow misspell (#8575) 2023-07-20 12:08:54 +09:00
Song Jiaming
411d896636 LLM first transformers UT (#8514)
* ut

* transformers api first ut

* name

* dir issue

* use chatglm instead of chatglm2

* omp

* set omp in sh

* source

* taskset

* test

* test omp

* add test
2023-07-20 10:16:27 +08:00
Yishuo Wang
3bd1420b71 LLM: use MSVC to build avx-vnni binary files (#8570) 2023-07-19 17:38:14 +08:00
Guancheng Fu
4f287df664 Fix manually_build_for_testing (#8556) 2023-07-18 16:21:39 +08:00
Guancheng Fu
3e0e370898 [PPML] Add bigdl-llm-demo dependencies to TDX image (#8551)
* add bigdl-llm-demo dependencies to tdx image

* use only one RUN command

* Add bigdl-ppml

* done
2023-07-18 14:23:07 +08:00