ipex-llm/python/llm/test

Latest commit 231b968aba by Wenjing Margaret Mao:
Modify the check_results.py to support batch 2&4 (#11133)
* add batch 2&4 and exclude to perf_test

* modify the perf-test&437 yaml

* modify llm_performance_test.yml

* remove batch 4

* modify check_results.py to support batch 2&4

* change the batch_size format

* remove genxir

* add str(batch_size)

* change actual_test_cases in check_results file to support batch_size

* change html highlight

* fewer models to test html and html_path

* delete the moe model

* split batch html

* split

* use installing from pypi

* use installing from pypi - batch2

* revert cpp

* revert cpp

* merge two jobs into one, test batch_size in one job

* merge two jobs into one, test batch_size in one job

* change file directory in workflow

* try catch deal with odd file without batch_size

* modify pandas version

* change the dir

* organize the code

* organize the code

* remove Qwen-MOE

* modify based on feedback

* modify based on feedback

* modify based on second round of feedback

* modify based on second round of feedback + change run-arc.sh mode

* modify based on second round of feedback + revert config

* modify based on second round of feedback + revert config

* modify based on second round of feedback + remove comments

* modify based on second round of feedback + remove comments

* modify based on second round of feedback + revert arc-perf-test

* modify based on third round of feedback

* change error type

* change error type

* modify check_results.html

* split batch into two folders

* add all models

* move csv_name

* revert pr test

* revert pr test

---------

Co-authored-by: Yishuo Wang <yishuo.wang@intel.com>
2024-06-05 15:04:55 +08:00
benchmark | Modify the check_results.py to support batch 2&4 (#11133) | 2024-06-05 15:04:55 +08:00
convert | Refactor bigdl.llm to ipex_llm (#24) | 2024-03-22 15:41:21 +08:00
inference | Update tests for transformers 4.36 (#10858) | 2024-05-24 10:26:38 +08:00
inference_gpu | Add Windows GPU unit test (#11050) | 2024-05-28 13:29:47 +08:00
install | Refactor bigdl.llm to ipex_llm (#24) | 2024-03-22 15:41:21 +08:00
langchain | Update tests for transformers 4.36 (#10858) | 2024-05-24 10:26:38 +08:00
langchain_gpu | Refactor bigdl.llm to ipex_llm (#24) | 2024-03-22 15:41:21 +08:00
llamaindex | Fix llamaindex ut (#10673) | 2024-04-08 09:47:51 +08:00
llamaindex_gpu | Fix llamaindex ut (#10673) | 2024-04-08 09:47:51 +08:00
win | [LLM] Remove old windows nightly test code (#8668) | 2023-08-03 17:12:23 +09:00
__init__.py | [LLM] Enable UT workflow logics for LLM (#8243) | 2023-06-02 17:06:35 +08:00
run-langchain-upstream-tests.sh | Add Windows GPU unit test (#11050) | 2024-05-28 13:29:47 +08:00
run-llm-check-function.sh | Add Windows GPU unit test (#11050) | 2024-05-28 13:29:47 +08:00
run-llm-convert-tests.sh | [LLM] Change default runner for LLM Linux tests to the ones with AVX512 (#8448) | 2023-07-04 14:53:03 +08:00
run-llm-example-tests-gpu.sh | Add Windows GPU unit test (#11050) | 2024-05-28 13:29:47 +08:00
run-llm-inference-tests-gpu.sh | Add Windows GPU unit test (#11050) | 2024-05-28 13:29:47 +08:00
run-llm-inference-tests.sh | Update tests for transformers 4.36 (#10858) | 2024-05-24 10:26:38 +08:00
run-llm-install-tests.sh | [LLM] Refactor LLM Linux tests (#8349) | 2023-06-16 15:22:48 +08:00
run-llm-langchain-tests-gpu.sh | Add Windows GPU unit test (#11050) | 2024-05-28 13:29:47 +08:00
run-llm-langchain-tests.sh | [LLM] langchain bloom, UT's, default parameters (#8357) | 2023-06-25 17:38:00 +08:00
run-llm-llamaindex-tests-gpu.sh | Add Windows GPU unit test (#11050) | 2024-05-28 13:29:47 +08:00
run-llm-llamaindex-tests.sh | Update llamaindex ut (#10338) | 2024-03-07 10:06:16 +08:00
run-llm-windows-tests.sh | LLM: fix langchain windows failure (#8417) | 2023-06-30 09:59:10 +08:00