Name | Last commit message | Last commit date
benchmark | [LLM] Add nightly igpu perf test for INT4+FP16 1024-128 (#10496) | 2024-03-21 16:07:06 +08:00
convert | LLM: Adapt transformers models for optimize model SL (#9022) | 2023-10-09 11:13:44 +08:00
inference | [LLM] Add UTs of load_low_bit for transformers-style API (#10001) | 2024-01-29 10:18:23 +08:00
inference_gpu | LLM: Add decoder/layernorm unit tests (#10211) | 2024-03-13 19:41:47 +08:00
install | [LLM] Refactor LLM Linux tests (#8349) | 2023-06-16 15:22:48 +08:00
langchain | LLM: modify transformersembeddings.embed() in langchain (#10051) | 2024-02-05 10:42:10 +08:00
langchain_gpu | add langchain gpu example (#10277) | 2024-03-05 13:33:57 +08:00
llamaindex | Update llamaindex ut (#10338) | 2024-03-07 10:06:16 +08:00
llamaindex_gpu | Update llamaindex ut (#10338) | 2024-03-07 10:06:16 +08:00
win | [LLM] Remove old windows nightly test code (#8668) | 2023-08-03 17:12:23 +09:00
__init__.py | [LLM] Enable UT workflow logics for LLM (#8243) | 2023-06-02 17:06:35 +08:00
run-langchain-upstream-tests.sh | Add LangChain upstream ut test for ipynb (#10387) | 2024-03-15 16:31:01 +08:00
run-llm-convert-tests.sh | [LLM] Change default runner for LLM Linux tests to the ones with AVX512 (#8448) | 2023-07-04 14:53:03 +08:00
run-llm-example-tests-gpu.sh | LLM: remove english_quotes dataset (#10370) | 2024-03-12 16:57:40 +08:00
run-llm-inference-tests-gpu-434.sh | Add RMSNorm unit test (#10190) | 2024-03-08 15:51:03 +08:00
run-llm-inference-tests-gpu.sh | LLM: Add decoder/layernorm unit tests (#10211) | 2024-03-13 19:41:47 +08:00
run-llm-inference-tests.sh | [LLM] Add UTs of load_low_bit for transformers-style API (#10001) | 2024-01-29 10:18:23 +08:00
run-llm-install-tests.sh | [LLM] Refactor LLM Linux tests (#8349) | 2023-06-16 15:22:48 +08:00
run-llm-langchain-tests-gpu.sh | Add LangChain upstream ut test for ipynb (#10387) | 2024-03-15 16:31:01 +08:00
run-llm-langchain-tests.sh | [LLM] langchain bloom, UT's, default parameters (#8357) | 2023-06-25 17:38:00 +08:00
run-llm-llamaindex-tests-gpu.sh | Update llamaindex ut (#10338) | 2024-03-07 10:06:16 +08:00
run-llm-llamaindex-tests.sh | Update llamaindex ut (#10338) | 2024-03-07 10:06:16 +08:00
run-llm-windows-tests.sh | LLM: fix langchain windows failure (#8417) | 2023-06-30 09:59:10 +08:00
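
The run-llm-*-tests*.sh scripts above are the per-suite entry points used by the CI workflows. A minimal sketch of invoking one of them locally, assuming a bash shell, that the current directory is this test folder, and that any model or data paths the scripts expect have already been exported (the variable name below is hypothetical, for illustration only):

    # Hypothetical local invocation sketch; the real scripts may expect
    # additional environment variables that are normally set by the CI workflow.
    export MODEL_PATH=/path/to/models    # hypothetical variable, not taken from the scripts
    bash run-llm-inference-tests.sh      # run the CPU inference test suite
    bash run-llm-inference-tests-gpu.sh  # run the GPU inference test suite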