ipex-llm/.github
Yuwen Hu c998f5f2ba [LLM] iGPU long context tests (#9598)
* Temp enable PR

* Enable tests for 256-64

* Try again 128-64

* Empty cache after each iteration for igpu benchmark scripts

* Try tests for 512

* change order for 512

* Skip chatglm3 and llama2 for now

* Separate tests for 512-64

* Small fix

* Further fixes

* Change back to nightly again
2023-12-06 10:19:20 +08:00
Name                      | Last commit                                      | Date
actions/llm               | try to fix deps installation of bigdl (#9578)    | 2023-12-01 15:25:47 +08:00
workflows                 | [LLM] iGPU long context tests (#9598)            | 2023-12-06 10:19:20 +08:00
CODEOWNERS                | Update CODEOWNERS                                | 2022-08-18 16:22:14 +08:00
pull_request_template.md  | Update PR template (#5136)                       | 2022-07-20 10:07:46 +08:00