# ipex-llm/python/llm/test/benchmark/igpu-perf/2048-256_int4_fp16_443.yaml
repo_id:
- 'google/gemma-2-2b-it'
- 'google/gemma-2-9b-it'
local_model_hub: 'path to your local model hub'
warm_up: 1
num_trials: 3
num_beams: 1 # default to greedy search
low_bit: 'sym_int4' # default to use 'sym_int4' (i.e. symmetric int4)
batch_size: 1 # default to 1
in_out_pairs:
- '2048-256'
test_api:
- "transformer_int4_fp16_gpu_win" # on Intel GPU for Windows (catch GPU peak memory)
cpu_embedding: True # whether to put the embedding layer on CPU (currently only available for the gpu win related test_api)
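
A minimal sketch of how a config like this could be loaded and expanded into individual benchmark runs. This is not the repository's actual harness: the file name passed to `load_config`, the model-directory layout under `local_model_hub`, and the printed summary are assumptions for illustration only.

```python
# Sketch only: parse the benchmark YAML and enumerate the runs it describes.
import os
import yaml


def load_config(path):
    """Load the benchmark YAML into a plain dict."""
    with open(path, "r") as f:
        return yaml.safe_load(f)


def iter_runs(conf):
    """Yield one (model_path, in_len, out_len) tuple per model / in-out pair."""
    for repo_id in conf["repo_id"]:
        # Assumption: models are stored under local_model_hub by their short name.
        model_path = os.path.join(conf["local_model_hub"], repo_id.split("/")[-1])
        for pair in conf["in_out_pairs"]:
            in_len, out_len = (int(x) for x in pair.split("-"))
            yield model_path, in_len, out_len


if __name__ == "__main__":
    conf = load_config("2048-256_int4_fp16_443.yaml")  # assumed file name
    for model_path, in_len, out_len in iter_runs(conf):
        # In the real test, warm_up iterations are discarded and num_trials
        # iterations are measured; here we only print the planned runs.
        print(f"{model_path}: prompt={in_len} tokens, generate={out_len} tokens, "
              f"low_bit={conf['low_bit']}, batch_size={conf['batch_size']}")
```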