
Benchmark tool for transformers int4 (separate 1st token and rest)

benchmark_util.py provides a simple benchmark tool for transformers INT4 models: it measures the latency of the 1st generated token separately from the average latency of the remaining tokens, on both CPU and GPU.
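For intuition, here is a minimal sketch of one way to separate first-token latency from the rest, using a custom streamer passed to model.generate(). This is only an illustration of the idea, not how benchmark_util.py is implemented; TimingStreamer is a hypothetical helper:

import time

class TimingStreamer:
    # model.generate() accepts any object with put()/end() methods as `streamer`.
    # put() is called once with the prompt ids, then once per generated token,
    # so the gaps between timestamps give per-token latencies.
    def __init__(self):
        self.times = []

    def put(self, value):
        self.times.append(time.perf_counter())

    def end(self):
        pass

# Usage (assuming `model` and `input_ids` as in the examples below):
# streamer = TimingStreamer()
# model.generate(input_ids, do_sample=False, max_new_tokens=32, streamer=streamer)
# t = streamer.times
# first = t[1] - t[0]                   # 1st token latency
# rest = (t[-1] - t[1]) / (len(t) - 2)  # average latency of the rest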

CPU Usage

Copy this file into your benchmark directory, then wrap your transformers INT4 model with BenchmarkWrapper (model = BenchmarkWrapper(model)). Take chatglm-6b as an example:

import torch
from bigdl.llm.transformers import AutoModel
from transformers import AutoTokenizer
from benchmark_util import BenchmarkWrapper

model_path = 'THUDM/chatglm-6b'
# Load the model with INT4 (4-bit) weights
model = AutoModel.from_pretrained(model_path, trust_remote_code=True, load_in_4bit=True)
# Wrap the model so that generate() prints per-token timings
model = BenchmarkWrapper(model, do_print=True)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
prompt = "今天睡不着怎么办"  # "What should I do if I can't sleep?"

with torch.inference_mode():
    input_ids = tokenizer.encode(prompt, return_tensors="pt")
    output = model.generate(input_ids, do_sample=False, max_new_tokens=32)
    output_str = tokenizer.decode(output[0], skip_special_tokens=True)

The output will look like the following (with max_new_tokens=32, the 1st token is timed separately and the remaining 31 are averaged):

=========First token cost xx.xxxxs=========
=========Last token cost average xx.xxxxs (31 tokens in all)=========
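Single-run CPU timings can be noisy under system load, so you may want to repeat the measurement and compare the numbers BenchmarkWrapper prints for each run, mirroring the measurement loop in the GPU example below. A minimal sketch:

# Optional: repeat the measurement a few times for more stable CPU numbers
with torch.inference_mode():
    for _ in range(5):
        input_ids = tokenizer.encode(prompt, return_tensors="pt")
        output = model.generate(input_ids, do_sample=False, max_new_tokens=32)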

GPU Usage

Copy this file into your benchmark directory, then wrap your transformers INT4 model with BenchmarkWrapper (model = BenchmarkWrapper(model)) after moving the model to the 'xpu' device. Take chatglm-6b as an example:

import torch
import intel_extension_for_pytorch as ipex
from bigdl.llm.transformers import AutoModel
from transformers import AutoTokenizer
from benchmark_util import BenchmarkWrapper

model_path = 'THUDM/chatglm-6b'
# Load the model with INT4 (4-bit) weights and move it to the Intel GPU
model = AutoModel.from_pretrained(model_path, trust_remote_code=True, load_in_4bit=True)
model = model.to('xpu')
# Wrap the model so that generate() prints per-token timings
model = BenchmarkWrapper(model, do_print=True)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
prompt = "今天睡不着怎么办"  # "What should I do if I can't sleep?"

with torch.inference_mode():
    # Warm up twice, since the first runs with IPEX include one-time overhead
    for i in range(2):
        input_ids = tokenizer.encode(prompt, return_tensors="pt").to('xpu')
        output = model.generate(input_ids, do_sample=False, max_new_tokens=32)
        output_str = tokenizer.decode(output[0], skip_special_tokens=True)
    # Now collect performance data
    for i in range(5):
        input_ids = tokenizer.encode(prompt, return_tensors="pt").to('xpu')
        output = model.generate(input_ids, do_sample=False, max_new_tokens=32)
        output_str = tokenizer.decode(output[0], skip_special_tokens=True)

The output for each measured run will look like:

=========First token cost xx.xxxxs=========
=========Last token cost average xx.xxxxs (31 tokens in all)=========
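One caveat if you add your own timing around XPU runs (BenchmarkWrapper already handles this internally): XPU execution is asynchronous, so synchronize the device before reading a timer, and consider emptying the cache between iterations. A minimal sketch, assuming the torch.xpu namespace provided by intel_extension_for_pytorch:

import time

# Make sure all queued XPU work has finished before reading the clock;
# otherwise the measured time only covers kernel submission.
torch.xpu.synchronize()
start = time.perf_counter()
output = model.generate(input_ids, do_sample=False, max_new_tokens=32)
torch.xpu.synchronize()
elapsed = time.perf_counter() - start

# Optionally release cached device memory between iterations
torch.xpu.empty_cache()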