ipex-llm/python/llm/test/benchmark
Latest commit: Yuwen Hu, c998f5f2ba, [LLM] iGPU long context tests (#9598)
* Temp enable PR

* Enable tests for 256-64

* Try again 128-64

* Empty cache after each iteration for igpu benchmark scripts

* Try tests for 512

* change order for 512

* Skip chatglm3 and llama2 for now

* Separate tests for 512-64

* Small fix

* Further fixes

* Change back to nightly again
2023-12-06 10:19:20 +08:00
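One commit above adds cache-clearing after each iteration of the iGPU benchmark scripts. The pattern can be sketched as below; this is a minimal illustration, not the repo's actual script. On an Intel iGPU the cache call would be `torch.xpu.empty_cache()`, but here it is injected as a callable (a hypothetical `empty_cache` parameter) so the sketch runs without a GPU stack installed:

```python
import time

def run_benchmark(run_once, num_trials, empty_cache=None):
    """Time `run_once` for num_trials iterations, clearing the device
    cache after each one (hypothetical helper, not the repo's API)."""
    latencies = []
    for _ in range(num_trials):
        start = time.perf_counter()
        run_once()
        latencies.append(time.perf_counter() - start)
        # In the real scripts this would be torch.xpu.empty_cache(),
        # so long-context runs don't accumulate allocator pressure
        # between iterations on the shared iGPU memory.
        if empty_cache is not None:
            empty_cache()
    return latencies
```

With a real model, `run_once` would wrap a `generate()` call and `empty_cache` would be `torch.xpu.empty_cache()`.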
Name                            Last commit message                                                        Date
harness_nightly                 Fix harness nightly (#9586)                                                2023-12-04 11:45:00 +08:00
32-igpu-perf-test-434.yaml      [LLM] iGPU long context tests (#9598)                                      2023-12-06 10:19:20 +08:00
32-igpu-perf-test.yaml          [LLM] iGPU long context tests (#9598)                                      2023-12-06 10:19:20 +08:00
512-igpu-perf-test-434.yaml     [LLM] iGPU long context tests (#9598)                                      2023-12-06 10:19:20 +08:00
512-igpu-perf-test.yaml         [LLM] iGPU long context tests (#9598)                                      2023-12-06 10:19:20 +08:00
arc-perf-test.yaml              [LLM] Fix performance tests (#9596)                                        2023-12-05 10:59:28 +08:00
arc-perf-transformers-434.yaml  [LLM] Fix performance tests (#9596)                                        2023-12-05 10:59:28 +08:00
concat_csv.py                   LLM: enable previous models (#9505)                                        2023-11-28 10:21:07 +08:00
core-perf-test.yaml             [LLM] Fix performance tests (#9596)                                        2023-12-05 10:59:28 +08:00
cpu-perf-test.yaml              [LLM] Fix performance tests (#9596)                                        2023-12-05 10:59:28 +08:00
csv_to_html.py                  LLM: modify the script to generate html results more accurately (#9445)   2023-11-16 19:50:23 +08:00