ipex-llm/.github/workflows

Latest commit: Yuwen Hu, c998f5f2ba — [LLM] iGPU long context tests (#9598)
* Temp enable PR

* Enable tests for 256-64

* Try again 128-64

* Empty cache after each iteration for igpu benchmark scripts

* Try tests for 512

* change order for 512

* Skip chatglm3 and llama2 for now

* Separate tests for 512-64

* Small fix

* Further fixes

* Change back to nightly again
2023-12-06 10:19:20 +08:00
| File | Last commit | Date |
| --- | --- | --- |
| llm-binary-build.yml | [LLM] Separate windows build UT and build runner (#9403) | 2023-11-09 18:47:38 +08:00 |
| llm-harness-evaluation.yml | Add harness summary job (#9457) | 2023-12-05 10:04:10 +08:00 |
| llm-nightly-test.yml | [LLM] Separate windows build UT and build runner (#9403) | 2023-11-09 18:47:38 +08:00 |
| llm_example_tests.yml | [LLM] Fix example test (#9118) | 2023-10-10 13:24:18 +08:00 |
| llm_performance_tests.yml | [LLM] iGPU long context tests (#9598) | 2023-12-06 10:19:20 +08:00 |
| llm_unit_tests.yml | [LLM] Separate windows build UT and build runner (#9403) | 2023-11-09 18:47:38 +08:00 |
| manually_build.yml | Add qlora cpu docker manually build (#9501) | 2023-11-21 14:39:16 +08:00 |
| manually_build_for_testing.yml | Add qlora cpu docker manually build (#9501) | 2023-11-21 14:39:16 +08:00 |