ipex-llm/python/llm/test
Yuwen Hu 0d58c2fdf9
Update performance test regarding updated default transformers==4.37.0 (#11869)
* Update igpu performance from transformers 4.36.2 to 4.37.0 (#11841)

* upgrade arc perf test to transformers 4.37 (#11842)

* fix load low bit com dtype (#11832)

* feat: add mixed_precision argument on ppl longbench evaluation

* fix: delete extra code

* feat: upgrade arc perf test to transformers 4.37

* fix: add missing codes

* fix: keep perf test for qwen-vl-chat in transformers 4.36

* fix: remove extra space

* fix: resolve pr comment

* fix: add empty line

* fix: add pip install for spr and core test

* fix: delete extra comments

* fix: remove python -m for pip

* Revert "fix load low bit com dtype (#11832)"

This reverts commit 6841a9ac8f.

---------

Co-authored-by: Zhao Changmin <changmin.zhao@intel.com>
Co-authored-by: Jinhe Tang <jin.tang1337@gmail.com>

* add transformers==4.36 for qwen vl in igpu-perf (#11846)

* add transformers==4.36.2 for qwen-vl

* Small update

---------

Co-authored-by: Yuwen Hu <yuwen.hu@intel.com>

* fix: remove qwen-7b on core test (#11851)

* fix: remove qwen-7b on core test

* fix: change delete to comment

---------

Co-authored-by: Jinhe Tang <jin.tang1337@gmail.com>

* replace filename (#11854)

* fix: remove qwen-7b on core test

* fix: change delete to comment

* fix: replace filename

---------

Co-authored-by: Jinhe Tang <jin.tang1337@gmail.com>

* fix: delete extra comments (#11863)

* Remove transformers installation for temp test purposes

* Small fix

* Small update

---------

Co-authored-by: Chu,Youcheng <70999398+cranechu0131@users.noreply.github.com>
Co-authored-by: Zhao Changmin <changmin.zhao@intel.com>
Co-authored-by: Jinhe Tang <jin.tang1337@gmail.com>
Co-authored-by: Zijie Li <michael20001122@gmail.com>
Co-authored-by: Chu,Youcheng <1340390339@qq.com>
2024-08-20 17:59:28 +08:00
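The commits above move the performance tests to a default of transformers 4.37.0 while keeping Qwen-VL-Chat pinned to 4.36.2. A minimal sketch of that per-model pinning logic is below; the `PINNED_VERSIONS` mapping and function name are illustrative assumptions, not the actual test-harness code:

```python
# Hypothetical sketch of per-model transformers version selection,
# mirroring the pinning described in the commit log above.
DEFAULT_TRANSFORMERS = "4.37.0"          # new default per #11869
PINNED_VERSIONS = {
    "Qwen-VL-Chat": "4.36.2",            # kept on 4.36 per #11846 / #11842
}

def transformers_version_for(model: str) -> str:
    """Return the transformers version a perf run should install for `model`."""
    return PINNED_VERSIONS.get(model, DEFAULT_TRANSFORMERS)
```

A perf script could then emit `pip install transformers==<version>` from this lookup before each model's run, so only the exceptional models deviate from the default.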
Name | Last commit | Date
benchmark | Update performance test regarding updated default transformers==4.37.0 (#11869) | 2024-08-20 17:59:28 +08:00
convert | Refactor bigdl.llm to ipex_llm (#24) | 2024-03-22 15:41:21 +08:00
inference | Use merge_qkv to replace fused_qkv for llama2 (#11727) | 2024-08-07 18:04:01 +08:00
inference_gpu | Update ipex-llm default transformers version to 4.37.0 (#11859) | 2024-08-20 17:37:58 +08:00
install | Refactor bigdl.llm to ipex_llm (#24) | 2024-03-22 15:41:21 +08:00
langchain | Update tests for transformers 4.36 (#10858) | 2024-05-24 10:26:38 +08:00
langchain_gpu | Refactor bigdl.llm to ipex_llm (#24) | 2024-03-22 15:41:21 +08:00
llamaindex | Fix llamaindex ut (#10673) | 2024-04-08 09:47:51 +08:00
llamaindex_gpu | Fix llamaindex ut (#10673) | 2024-04-08 09:47:51 +08:00
win | [LLM] Remove old windows nightly test code (#8668) | 2023-08-03 17:12:23 +09:00
__init__.py | [LLM] Enable UT workflow logics for LLM (#8243) | 2023-06-02 17:06:35 +08:00
run-langchain-upstream-tests.sh | Add Windows GPU unit test (#11050) | 2024-05-28 13:29:47 +08:00
run-llm-check-function.sh | Add Windows GPU unit test (#11050) | 2024-05-28 13:29:47 +08:00
run-llm-convert-tests.sh | [LLM] Change default runner for LLM Linux tests to the ones with AVX512 (#8448) | 2023-07-04 14:53:03 +08:00
run-llm-example-tests-gpu.sh | Add Windows GPU unit test (#11050) | 2024-05-28 13:29:47 +08:00
run-llm-inference-tests-gpu.sh | Add Windows GPU unit test (#11050) | 2024-05-28 13:29:47 +08:00
run-llm-inference-tests.sh | Update tests for transformers 4.36 (#10858) | 2024-05-24 10:26:38 +08:00
run-llm-install-tests.sh | [LLM] Refactor LLM Linux tests (#8349) | 2023-06-16 15:22:48 +08:00
run-llm-langchain-tests-gpu.sh | Add Windows GPU unit test (#11050) | 2024-05-28 13:29:47 +08:00
run-llm-langchain-tests.sh | [LLM] langchain bloom, UT's, default parameters (#8357) | 2023-06-25 17:38:00 +08:00
run-llm-llamaindex-tests-gpu.sh | Add Windows GPU unit test (#11050) | 2024-05-28 13:29:47 +08:00
run-llm-llamaindex-tests.sh | Update llamaindex ut (#10338) | 2024-03-07 10:06:16 +08:00
run-llm-windows-tests.sh | LLM: fix langchain windows failure (#8417) | 2023-06-30 09:59:10 +08:00