Name | Last commit | Date
benchmark | [LLM] win igpu performance for ipex 2.1 and oneapi 2024.0 (#9679) | 2023-12-13 18:52:29 +08:00
convert | LLM: Adapt transformers models for optimize model SL (#9022) | 2023-10-09 11:13:44 +08:00
inference | [WIP] Add UT for Mistral Optimized Model (#9248) | 2023-10-25 15:14:17 +08:00
inference_gpu | disable test_optimized_model.py temporarily due to out of memory on A730M (pr validation machine) (#9658) | 2023-12-12 17:13:52 +08:00
install | [LLM] Refactor LLM Linux tests (#8349) | 2023-06-16 15:22:48 +08:00
langchain | [LLM] Unify Langchain Native and Transformers LLM API (#8752) | 2023-08-25 11:14:21 +08:00
win | [LLM] Remove old windows nightly test code (#8668) | 2023-08-03 17:12:23 +09:00
__init__.py | [LLM] Enable UT workflow logics for LLM (#8243) | 2023-06-02 17:06:35 +08:00
run-llm-convert-tests.sh | [LLM] Change default runner for LLM Linux tests to the ones with AVX512 (#8448) | 2023-07-04 14:53:03 +08:00
run-llm-example-tests-gpu.sh | Add test script and workflow for qlora fine-tuning (#9295) | 2023-11-01 09:39:53 +08:00
run-llm-inference-tests-gpu.sh | disable test_optimized_model.py temporarily due to out of memory on A730M (pr validation machine) (#9658) | 2023-12-12 17:13:52 +08:00
run-llm-inference-tests.sh | [WIP] Add UT for Mistral Optimized Model (#9248) | 2023-10-25 15:14:17 +08:00
run-llm-install-tests.sh | [LLM] Refactor LLM Linux tests (#8349) | 2023-06-16 15:22:48 +08:00
run-llm-langchain-tests.sh | [LLM] langchain bloom, UT's, default parameters (#8357) | 2023-06-25 17:38:00 +08:00
run-llm-windows-tests.sh | LLM: fix langchain windows failure (#8417) | 2023-06-30 09:59:10 +08:00