ipex-llm/.github/workflows
Latest commit: cbdd49f229 by Yuwen Hu, 2023-12-13 18:52:29 +08:00
[LLM] win igpu performance for ipex 2.1 and oneapi 2024.0 (#9679)

* Change win iGPU tests for ipex 2.1 and oneapi 2024.0
* Update Qwen model repo id; update the model list for 512-64
* Add .eval for the win iGPU all-in-one benchmark for best performance
| File | Last commit | Date |
| --- | --- | --- |
| llm-binary-build.yml | [LLM] Separate windows build UT and build runner (#9403) | 2023-11-09 18:47:38 +08:00 |
| llm-harness-evaluation.yml | gsm8k OOM workaround (#9597) | 2023-12-08 18:47:25 +08:00 |
| llm-nightly-test.yml | [LLM] Separate windows build UT and build runner (#9403) | 2023-11-09 18:47:38 +08:00 |
| llm_example_tests.yml | [LLM] Fix example test (#9118) | 2023-10-10 13:24:18 +08:00 |
| llm_performance_tests.yml | [LLM] win igpu performance for ipex 2.1 and oneapi 2024.0 (#9679) | 2023-12-13 18:52:29 +08:00 |
| llm_unit_tests.yml | [LLM] Separate windows build UT and build runner (#9403) | 2023-11-09 18:47:38 +08:00 |
| manually_build.yml | Add qlora cpu docker manually build (#9501) | 2023-11-21 14:39:16 +08:00 |
| manually_build_for_testing.yml | Add qlora cpu docker manually build (#9501) | 2023-11-21 14:39:16 +08:00 |