
# Perplexity

Perplexity (PPL) is one of the most common metrics for evaluating language models. This benchmark implementation is adapted from `transformers/perplexity` and `benchmark_patch_llm.py`.
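Conceptually, perplexity is the exponential of the average negative log-likelihood per token: lower is better, and 1.0 means the model predicts every token perfectly. A minimal sketch of the computation (standard library only; illustrative, not the exact logic of `ppl.py`):

```python
import math

def perplexity(token_nlls):
    """Compute perplexity from per-token negative log-likelihoods (natural log)."""
    if not token_nlls:
        raise ValueError("need at least one token")
    # PPL = exp(mean NLL); a perfect model has NLL 0 and PPL 1.0.
    return math.exp(sum(token_nlls) / len(token_nlls))

# A model that always assigns probability 1/2 to the correct token
# has NLL = ln(2) per token, so its perplexity is 2 (up to float rounding).
print(perplexity([math.log(2)] * 4))  # ≈ 2.0
```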

## Run on Wikitext

Install the dataset dependency first:

```bash
pip install datasets
```

An example to run perplexity on wikitext:

```bash
python run_wikitext.py --model_path meta-llama/Meta-Llama-3-8B --dataset path=wikitext,name=wikitext-2-raw-v1 --precision sym_int4 --device xpu --stride 512 --max_length 4096
```
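The `--stride`/`--max_length` pair follows the sliding-window scheme described in the Hugging Face perplexity guide: the model scores overlapping windows of at most `max_length` tokens, advancing by `stride` each step, and only the tokens not covered by a previous window contribute to the loss. A simplified sketch of the window bookkeeping (illustrative; `run_wikitext.py` may differ in detail):

```python
def stride_windows(seq_len, max_length, stride):
    """Yield (begin, end, n_target) spans for strided perplexity evaluation.

    Each window covers tokens [begin, end); only the last n_target tokens
    of the window are scored, so no token is counted twice.
    """
    prev_end = 0
    for begin in range(0, seq_len, stride):
        end = min(begin + max_length, seq_len)
        yield begin, end, end - prev_end  # n_target = tokens not yet scored
        prev_end = end
        if end == seq_len:
            break

# With seq_len=10, max_length=4, stride=2 the windows are
# (0, 4, 4), (2, 6, 2), (4, 8, 2), (6, 10, 2): all 10 tokens scored once.
for window in stride_windows(10, 4, 2):
    print(window)
```

A larger stride evaluates faster but gives each scored token less left context, so reported PPL can vary slightly with `--stride`.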

## Run on THUDM/LongBench dataset

Install the dataset dependency first:

```bash
pip install datasets
```

An example to run perplexity on chatglm3-6b using the default Chinese datasets ("multifieldqa_zh", "dureader", "vcsum", "lsht", "passage_retrieval_zh"):

```bash
python run_longbench.py --model_path THUDM/chatglm3-6b --precisions float16 sym_int4 --device xpu --language zh
```

## Notes

- If you want to test model perplexity on only a few selected datasets from LongBench, pass them explicitly in the format below:

  ```bash
  --datasets narrativeqa qasper ...
  ```

- The `language` argument only takes effect when `datasets` is not specified. Its choices are `en`, `zh`, and `all`, which select all the English datasets, all the Chinese datasets, and all datasets, respectively.
- If you want to test perplexity on pre-downloaded datasets, specify the `<path/to/dataset>` in the `dataset_path` argument of your command.
- You can run `python make_table.py <input_dir>` to summarize the results.
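If you prefer to inspect results programmatically, rendering per-precision perplexity numbers as a markdown table can be sketched as below. The column layout and score structure here are hypothetical; `make_table.py` defines the actual output format:

```python
def make_markdown_table(results):
    """Render nested PPL results as a markdown table.

    `results` maps precision name -> {dataset name -> perplexity}.
    """
    # Collect the union of dataset names across precisions for the header.
    datasets = sorted({d for scores in results.values() for d in scores})
    header = "| precision | " + " | ".join(datasets) + " |"
    separator = "|" + "---|" * (len(datasets) + 1)
    rows = [
        "| " + precision + " | "
        + " | ".join(f"{scores.get(d, float('nan')):.2f}" for d in datasets)
        + " |"
        for precision, scores in results.items()
    ]
    return "\n".join([header, separator] + rows)

# Hypothetical scores for two precisions on two Chinese datasets.
print(make_markdown_table({
    "float16": {"dureader": 11.21, "vcsum": 8.75},
    "sym_int4": {"dureader": 12.34, "vcsum": 8.90},
}))
```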