Commit graph

21 commits

xingyuan li | bc4cdb07c9 | 2023-08-04 12:09:42 +09:00
Remove conda for llm workflow (#8671)

xingyuan li | 610084e3c0 | 2023-08-03 14:48:42 +09:00
[LLM] Complete windows unittest (#8611)
* add windows nightly test workflow
* use github runner to run pr test
* model load should use lowbit
* remove tmp dir after testing

xingyuan li | 769209b7f0 | 2023-08-02 10:28:42 +09:00
Chatglm unittest disable due to missing instruction (#8650)

xingyuan li | cdfbe652ca | 2023-08-01 14:30:17 +09:00
[LLM] Add chatglm support for llm-cli (#8641)
* add chatglm build
* add llm-cli support
* update git
* install cmake
* add ut for chatglm
* add files to setup
* fix bug cause permission error when sf lack file

xingyuan li | 7d45233825 | 2023-07-26 10:53:03 +09:00
fix trigger enable flag (#8616)

Song Jiaming | 650b82fa6e | 2023-07-25 11:22:36 +08:00
[LLM] add CausalLM and Speech UT (#8597)

xingyuan li | 9c897ac7db | 2023-07-25 12:12:00 +09:00
[LLM] Merge redundant code in workflow (#8596)
* modify workflow concurrency group
* Add build check to avoid repeated compilation
* remove redundant code

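The "workflow concurrency group" mentioned in commit 9c897ac7db is a GitHub Actions feature for cancelling superseded runs. A minimal sketch of what such a group looks like (the group expression and trigger here are illustrative assumptions, not the repository's actual workflow contents):

```yaml
# Hypothetical sketch: a concurrency group for a CI workflow like this one.
# Cancelling an in-progress run when a newer push arrives on the same ref
# avoids the repeated compilation the commit message refers to.
name: LLM Unit Tests
on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}  # one run per workflow+branch
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
```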
Yuwen Hu | bbde423349 | 2023-07-21 13:26:38 +08:00
[LLM] Add current Linux UT inference tests to nightly tests (#8578)
* Add current inference uts to nightly tests
* Change test model from chatglm-6b to chatglm2-6b
* Add thread num env variable for nightly test
* Fix urls
* Small fix

Yuwen Hu | 2266ca7d2b | 2023-07-20 13:20:25 +08:00
[LLM] Small updates to transformers int4 ut (#8574)
* Small fix to transformers int4 ut
* Small fix

Song Jiaming | 411d896636 | 2023-07-20 10:16:27 +08:00
LLM first transformers UT (#8514)
* ut
* transformers api first ut
* name
* dir issue
* use chatglm instead of chatglm2
* omp
* set omp in sh
* source
* taskset
* test
* test omp
* add test

Yuwen Hu | df97d39e29 | 2023-07-14 10:46:03 +08:00
Change thread_num in Linux inference actions (#8528)

xingyuan li | 4f152b4e3a | 2023-07-13 16:34:24 +09:00
[LLM] Merge the llm.cpp build and the pypi release (#8503)
* checkout llm.cpp to build new binary
* use artifact to get latest built binary files
* rename quantize
* modify all release workflow

xingyuan li | 04f2f04410 | 2023-07-10 13:16:18 +08:00
Add workflow_dispatch for llm unittest workflow (#8485)

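The `workflow_dispatch` trigger added in commit 04f2f04410 is what makes a workflow manually runnable from the Actions tab. A minimal sketch of such a trigger block (the sibling triggers and input are illustrative assumptions, not the actual file's contents):

```yaml
# Hypothetical sketch: adding manual triggering to a unit-test workflow.
on:
  pull_request:        # existing automatic trigger (illustrative)
  schedule:
    - cron: "0 16 * * *"   # nightly run, time is an assumption
  workflow_dispatch:   # enables the "Run workflow" button in the Actions UI
    inputs:
      reason:          # example optional input, not from the real file
        description: "Why this manual run was started"
        required: false
```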
Yuwen Hu | 936d21635f | 2023-07-05 10:09:10 +08:00
[LLM] Extract tests to .github/actions to improve reusability (#8457)
* Extract tests to .github/actions for better reusing in nightly tests
* Small fix
* Small fix

Yuwen Hu | 372c775cb4 | 2023-07-04 14:53:03 +08:00
[LLM] Change default runner for LLM Linux tests to the ones with AVX512 (#8448)
* Basic change for AVX512 runner
* Remove conda channel and action rename
* Small fix
* Small fix and reduce peak convert disk space
* Define n_threads based on runner status
* Small thread num fix
* Define thread_num for cli
* test
* Add self-hosted label and other small fix

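Commit 372c775cb4 defines n_threads from the runner's status. One way that could look in a workflow step, sketched under stated assumptions (the runner labels, env-variable names, and core-count policy below are illustrative, not the repository's actual code):

```yaml
# Hypothetical sketch: deriving the thread count on a self-hosted AVX512 runner.
jobs:
  llm-inference-test:
    runs-on: [self-hosted, AVX512]   # self-hosted label per the commit message
    steps:
      - name: Set thread num from runner CPUs
        run: |
          # nproc reports the cores available to this runner; the exact
          # policy for reserving cores is an assumption here.
          echo "THREAD_NUM=$(nproc)" >> "$GITHUB_ENV"
      - name: Run inference UTs
        run: |
          export OMP_NUM_THREADS="$THREAD_NUM"
          echo "Running inference tests with $OMP_NUM_THREADS threads"
```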
Ruonan Wang | 4be784a49d | 2023-06-27 12:12:11 +08:00
LLM: add UT for starcoder (convert, inference) update examples and readme (#8379)
* first commit to add path
* update example and readme
* update path
* fix
* update based on comment

Shengsheng Huang | c113ecb929 | 2023-06-25 17:38:00 +08:00
[LLM] langchain bloom, UT's, default parameters (#8357)
* update langchain default parameters to align w/ api
* add ut's for llm and embeddings
* update inference test script to install langchain deps
* update tests workflows
Co-authored-by: leonardozcm <changmin.zhao@intel.com>

binbin Deng | ab1a833990 | 2023-06-19 10:25:51 +08:00
LLM: add basic uts related to inference (#8346)

xingyuan li | daae7bd4e4 | 2023-06-16 17:42:24 +08:00
[LLM] Unittest for llm-cli (#8343)
* add llm-cli test shell

Yuwen Hu | 1aa33d35d5 | 2023-06-16 15:22:48 +08:00
[LLM] Refactor LLM Linux tests (#8349)
* Small name fix
* Add convert nightly tests, and for other llm tests, use stable ckpt
* Small fix and ftp fix
* Small fix
* Small fix

Yuwen Hu | 50dd9dd1c5 | 2023-06-15 16:22:41 +08:00
[LLM] Small improve for LLM base actions (#8344)
* Hide ftp url for now
* Small file name fix

Renamed from .github/workflows/llm_unit_test_linux.yml