ipex-llm/.github/workflows
Yuwen Hu c38e18f2ff [LLM] Migrate iGPU perf tests to new machine (#9784) 2023-12-26 19:15:57 +08:00

* Move the 1024 test to just after the 32-32 test, and enable all models for 1024-128

* Ensure Python output is encoded in UTF-8 so that redirecting it to a txt file always succeeds

* Upload results to FTP

* Small fix
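The UTF-8 bullet above addresses a common Windows pitfall: when Python output is redirected to a file, the stream inherits the platform's default code page, and non-ASCII text (e.g. Chinese model names) raises UnicodeEncodeError. A minimal sketch of the idea, with hypothetical data (the actual workflow change is not shown here); an alternative is setting `PYTHONIOENCODING=utf-8` before `python script.py > result.txt`:

```python
from pathlib import Path

# Hypothetical benchmark line that may contain non-ASCII characters.
result = "model: Qwen-7B, latency: 12.3 ms, note: 测试"

# Writing with an explicit encoding avoids UnicodeEncodeError on platforms
# whose default encoding is not UTF-8 (e.g. cp936 on Chinese Windows).
out = Path("result.txt")
out.write_text(result, encoding="utf-8")

# Reading it back with the same explicit encoding round-trips cleanly.
print(out.read_text(encoding="utf-8"))
```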
llm-binary-build.yml [LLM] Separate windows build UT and build runner (#9403) 2023-11-09 18:47:38 +08:00
llm-harness-evaluation.yml fix harness manual run env typo (#9763) 2023-12-22 18:42:35 +08:00
llm-nightly-test.yml [LLM] Separate windows build UT and build runner (#9403) 2023-11-09 18:47:38 +08:00
llm_example_tests.yml [LLM] Fix example test (#9118) 2023-10-10 13:24:18 +08:00
llm_performance_tests.yml [LLM] Migrate iGPU perf tests to new machine (#9784) 2023-12-26 19:15:57 +08:00
llm_performance_tests_stable_version.yml bigdl-llm stable version: let the perf test fail if the difference between perf and baseline is greater than 5% (#9750) 2023-12-25 13:47:11 +08:00
llm_unit_tests.yml [LLM] Separate windows build UT and build runner (#9403) 2023-11-09 18:47:38 +08:00
manually_build.yml Add qlora cpu docker manually build (#9501) 2023-11-21 14:39:16 +08:00
manually_build_for_testing.yml Add qlora cpu docker manually build (#9501) 2023-11-21 14:39:16 +08:00
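The stable-version perf workflow listed above gates on a 5% difference between measured performance and a baseline (#9750). A minimal sketch of such a check, assuming a metric where a higher value means a regression; the function name and calling convention are illustrative, not the workflow's actual code:

```python
# 5% tolerance, per the commit message for llm_performance_tests_stable_version.yml.
THRESHOLD = 0.05


def exceeds_baseline(perf: float, baseline: float, threshold: float = THRESHOLD) -> bool:
    """Return True when perf regresses by more than `threshold` relative to baseline.

    Assumes a higher metric value is worse (e.g. latency in ms).
    """
    return (perf - baseline) / baseline > threshold


print(exceeds_baseline(1.06, 1.00))  # → True: just over a 5% regression
print(exceeds_baseline(1.03, 1.00))  # → False: within tolerance
```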