Add readme for Whisper Test (#9944)

* Fix local data path

* Remove non-essential files

* Add readme

* Minor fixes to script

* Bugfix, refactor

* Add references to original source. Bugfixes.

* Reviewer comments

* Properly print and explain output

* Move files to dev/benchmark

* Fixes
Cheen Hau, 俊豪 2024-01-22 15:11:33 +08:00 committed by GitHub
parent 6fb3f40f7e
commit 947b1e27b7
7 changed files with 74 additions and 222 deletions

@@ -0,0 +1,40 @@
# Whisper Test
The Whisper Test allows users to evaluate the performance and accuracy of [Whisper](https://huggingface.co/openai/whisper-base) speech-to-text models.
For accuracy, the model is tested on the [LibriSpeech](https://huggingface.co/datasets/librispeech_asr) dataset using the [Word Error Rate (WER)](https://github.com/huggingface/evaluate/tree/main/metrics/wer) metric.
Before running, make sure to have [bigdl-llm](../../../README.md) installed.
## Install Dependencies
```bash
pip install datasets evaluate soundfile librosa jiwer
```
## Run
```bash
python run_whisper.py --model_path /path/to/model --data_type other --device cpu
```
The LibriSpeech dataset contains 'clean' and 'other' splits.
You can choose which split to evaluate with `--data_type`; it defaults to `other`.
You can choose the device to run the test on with `--device`.
To run on an Intel GPU, set it to `xpu` and refer to the [GPU installation guide](https://bigdl.readthedocs.io/en/latest/doc/LLM/Overview/install_gpu.html) for details on installation and optimal configuration.
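
For context, `--device` simply controls where the quantized model is placed. The following is a condensed sketch of the loading path used by `run_whisper.py` (see its diff below); the `bigdl.llm.transformers` import is an assumption inferred from the `load_in_low_bit` call in that diff:
```python
# Condensed sketch of model loading and device placement (see the run_whisper.py diff below).
# The bigdl.llm.transformers import is assumed; only the from_pretrained call appears in the diff.
from bigdl.llm.transformers import AutoModelForSpeechSeq2Seq

device = "xpu"  # or "cpu"; corresponds to the --device argument
model = AutoModelForSpeechSeq2Seq.from_pretrained(
    "/path/to/model",             # corresponds to --model_path
    load_in_low_bit="sym_int4",   # BigDL-LLM 4-bit symmetric quantization
    optimize_model=True,
).eval().to(device)
```
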
> **Note**
>
> If you get the error message `ConnectionError: Couldn't reach http://www.openslr.org/resources/12/test-other.tar.gz (error 403)`, use a local copy of the dataset instead (see the next section).
## Using a local dataset
By default, the LibriSpeech dataset is downloaded at runtime from the Hugging Face Hub. If you prefer to use a local copy instead, set the following environment variable before running the evaluation script:
```bash
export LIBRISPEECH_DATASET_PATH=/path/to/dataset_folder
```
Make sure the local dataset folder contains 'dev-other.tar.gz', 'test-other.tar.gz', and 'train-other-500.tar.gz'. The files can be downloaded from http://www.openslr.org/resources/12/
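
As a quick sanity check, a small sketch like the following (a hypothetical helper, not part of the test) can confirm the folder and environment variable are set up before launching the script:
```python
# Sketch: verify the local LibriSpeech archives referenced above are present.
import os

dataset_dir = os.environ.get("LIBRISPEECH_DATASET_PATH", "")
required = ["dev-other.tar.gz", "test-other.tar.gz", "train-other-500.tar.gz"]

if not dataset_dir:
    print("LIBRISPEECH_DATASET_PATH is not set; archives will be downloaded from openslr.org")
else:
    missing = [f for f in required if not os.path.isfile(os.path.join(dataset_dir, f))]
    print("All local archives found." if not missing else f"Missing archives: {missing}")
```
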
## Printed metrics
Three metrics are printed:
- Realtime Factor (RTF): the total prediction time divided by the total duration of the speech samples.
- Realtime X (RTX): the inverse of RTF.
- Word Error Rate (WER): the average number of errors per reference word.
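
For reference, a minimal sketch of how these three numbers relate; the timing values below are hypothetical stand-ins for the accumulators in `run_whisper.py`:
```python
# Minimal sketch (hypothetical values): how RTF, RTX and WER are derived.
from evaluate import load

total_prediction_time = 120.0  # seconds spent transcribing (hypothetical)
total_speech_duration = 600.0  # total duration of the audio samples, in seconds (hypothetical)
references = ["hello world"]
predictions = ["hello word"]

rtf = total_prediction_time / total_speech_duration  # < 1.0 means faster than realtime
rtx = total_speech_duration / total_prediction_time  # inverse of RTF
wer = 100 * load("wer").compute(references=references, predictions=predictions)

print(f"RTF: {rtf:.4f}  RTX: {rtx:.2f}  WER: {wer:.2f}%")
```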

@@ -56,9 +56,9 @@ audiobooks from the LibriVox project, and has been carefully segmented and align
"""
_URL = "http://www.openslr.org/12"
#_DL_URL = "http://www.openslr.org/resources/12/"
_DL_URL = "./librispeech/"
_DL_URL = os.getenv("LIBRISPEECH_DATASET_PATH", default = "http://www.openslr.org/resources/12/")
_DL_URL = os.path.join(_DL_URL, '')
print(f'LibriSpeech dataset path: {_DL_URL}')
_DL_URLS = {
"clean": {
@@ -77,9 +77,9 @@ _DL_URLS = {
"dev.other": _DL_URL + "dev-other.tar.gz",
"test.clean": _DL_URL + "test-clean.tar.gz",
"test.other": _DL_URL + "test-other.tar.gz",
#"train.clean.100": _DL_URL + "train-clean-100.tar.gz",
#"train.clean.360": _DL_URL + "train-clean-360.tar.gz",
#"train.other.500": _DL_URL + "train-other-500.tar.gz",
"train.clean.100": _DL_URL + "train-clean-100.tar.gz",
"train.clean.360": _DL_URL + "train-clean-360.tar.gz",
"train.other.500": _DL_URL + "train-other-500.tar.gz",
},
}
@@ -198,29 +198,29 @@ class LibrispeechASR(datasets.GeneratorBasedBuilder):
)
]
elif self.config.name == "all":
#train_splits = [
# datasets.SplitGenerator(
# name="train.clean.100",
# gen_kwargs={
# "local_extracted_archive": local_extracted_archive.get("train.clean.100"),
# "files": dl_manager.iter_archive(archive_path["train.clean.100"]),
# },
# ),
# datasets.SplitGenerator(
# name="train.clean.360",
# gen_kwargs={
# "local_extracted_archive": local_extracted_archive.get("train.clean.360"),
# "files": dl_manager.iter_archive(archive_path["train.clean.360"]),
# },
# ),
# datasets.SplitGenerator(
# name="train.other.500",
# gen_kwargs={
# "local_extracted_archive": local_extracted_archive.get("train.other.500"),
# "files": dl_manager.iter_archive(archive_path["train.other.500"]),
# },
# ),
#]
train_splits = [
datasets.SplitGenerator(
name="train.clean.100",
gen_kwargs={
"local_extracted_archive": local_extracted_archive.get("train.clean.100"),
"files": dl_manager.iter_archive(archive_path["train.clean.100"]),
},
),
datasets.SplitGenerator(
name="train.clean.360",
gen_kwargs={
"local_extracted_archive": local_extracted_archive.get("train.clean.360"),
"files": dl_manager.iter_archive(archive_path["train.clean.360"]),
},
),
datasets.SplitGenerator(
name="train.other.500",
gen_kwargs={
"local_extracted_archive": local_extracted_archive.get("train.other.500"),
"files": dl_manager.iter_archive(archive_path["train.other.500"]),
},
),
]
dev_splits = [
datasets.SplitGenerator(
name="validation.clean",

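With the change above, the loader resolves archives from `LIBRISPEECH_DATASET_PATH` when it is set and falls back to openslr.org otherwise. For example, it can be exercised directly (a sketch mirroring the `load_dataset` call in `run_whisper.py`):
```python
# Sketch: load the 'other' test split through the patched loader script.
from datasets import load_dataset

ds = load_dataset("./librispeech_asr.py", name="other", split="test")
print(ds[0]["text"])  # reference transcription of the first sample
```
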
@@ -21,10 +21,7 @@ import torch
from evaluate import load
import time
import argparse
from bigdl.llm import optimize_model
import intel_extension_for_pytorch as ipex
# python whisper_bigdl.py --model_path ./whisper-pretrained-model/whisper-base --data_type other --device xpu
def get_args():
parser = argparse.ArgumentParser(description="Evaluate Whisper performance and accuracy")
parser.add_argument('--model_path', required=True, help='pretrained model path')
@@ -42,11 +39,8 @@ if __name__ == '__main__':
speech_dataset = load_dataset('./librispeech_asr.py', name=args.data_type, split='test').select(range(500))
processor = WhisperProcessor.from_pretrained(args.model_path)
forced_decoder_ids = processor.get_decoder_prompt_ids(language='en', task='transcribe')
# model = AutoModelForSpeechSeq2Seq.from_pretrained(args.model_path)
# model = optimize_model(model, low_bit="sym_int4", optimize_llm=False, modules_to_not_convert=[]).to(args.device)
model = AutoModelForSpeechSeq2Seq.from_pretrained(args.model_path, load_in_low_bit="sym_int4", optimize_model=True).eval().to(args.device)
# model = AutoModelForSpeechSeq2Seq.from_pretrained(args.model_path, load_in_low_bit="fp8_e5m2", optimize_model=True).eval().to(args.device)
model.config.forced_decoder_ids = None
def map_to_pred(batch):
@@ -70,9 +64,9 @@ if __name__ == '__main__':
return batch
result = speech_dataset.map(map_to_pred, keep_in_memory=True)
wer = load("wer")
wer = load("./wer")
speech_length = sum(result["length"][1:])
prc_time = sum(result["time"][1:])
print("Realtime Factor(RTF) is : %.4f" % (prc_time/speech_length))
print("Realtime X(RTX) is : %.2f" % (speech_length/prc_time))
print(100 * wer.compute(references=result["reference"], predictions=result["prediction"]))
print(f'WER is {100 * wer.compute(references=result["reference"], predictions=result["prediction"])}')

@@ -126,7 +126,7 @@ class WER(evaluate.Metric):
incorrect += measures["substitutions"] + measures["deletions"] + measures["insertions"]
total += measures["substitutions"] + measures["deletions"] + measures["hits"]
id += 1
print(id_max, max)
print(predictions[id_max])
print(references[id_max])
# print(id_max, max)
# print(predictions[id_max])
# print(references[id_max])
return incorrect / total

@@ -1,158 +0,0 @@
---
title: WER
emoji: 🤗
colorFrom: blue
colorTo: red
sdk: gradio
sdk_version: 3.0.2
app_file: app.py
pinned: false
tags:
- evaluate
- metric
description: >-
Word error rate (WER) is a common metric of the performance of an automatic speech recognition system.
The general difficulty of measuring performance lies in the fact that the recognized word sequence can have a different length from the reference word sequence (supposedly the correct one). The WER is derived from the Levenshtein distance, working at the word level instead of the phoneme level. The WER is a valuable tool for comparing different systems as well as for evaluating improvements within one system. This kind of measurement, however, provides no details on the nature of translation errors and further work is therefore required to identify the main source(s) of error and to focus any research effort.
This problem is solved by first aligning the recognized word sequence with the reference (spoken) word sequence using dynamic string alignment. Examination of this issue is seen through a theory called the power law that states the correlation between perplexity and word error rate.
Word error rate can then be computed as:
WER = (S + D + I) / N = (S + D + I) / (S + D + C)
where
S is the number of substitutions,
D is the number of deletions,
I is the number of insertions,
C is the number of correct words,
N is the number of words in the reference (N=S+D+C).
This value indicates the average number of errors per reference word. The lower the value, the better the
performance of the ASR system with a WER of 0 being a perfect score.
---
# Metric Card for WER
## Metric description
Word error rate (WER) is a common metric of the performance of an automatic speech recognition (ASR) system.
The general difficulty of measuring the performance of ASR systems lies in the fact that the recognized word sequence can have a different length from the reference word sequence (supposedly the correct one). The WER is derived from the [Levenshtein distance](https://en.wikipedia.org/wiki/Levenshtein_distance), working at the word level.
This problem is solved by first aligning the recognized word sequence with the reference (spoken) word sequence using dynamic string alignment. Examination of this issue is seen through a theory called the power law that states the correlation between [perplexity](https://huggingface.co/metrics/perplexity) and word error rate (see [this article](https://www.cs.cmu.edu/~roni/papers/eval-metrics-bntuw-9802.pdf) for further information).
Word error rate can then be computed as:
`WER = (S + D + I) / N = (S + D + I) / (S + D + C)`
where
`S` is the number of substitutions,
`D` is the number of deletions,
`I` is the number of insertions,
`C` is the number of correct words,
`N` is the number of words in the reference (`N=S+D+C`).
## How to use
The metric takes two inputs: references (a list of references for each speech input) and predictions (a list of transcriptions to score).
```python
from evaluate import load
wer = load("wer")
wer_score = wer.compute(predictions=predictions, references=references)
```
## Output values
This metric outputs a float representing the word error rate.
```
print(wer_score)
0.5
```
This value indicates the average number of errors per reference word.
The **lower** the value, the **better** the performance of the ASR system, with a WER of 0 being a perfect score.
### Values from popular papers
This metric is highly dependent on the content and quality of the dataset, and therefore users can expect very different values for the same model but on different datasets.
For example, datasets such as [LibriSpeech](https://huggingface.co/datasets/librispeech_asr) report a WER in the 1.8-3.3 range, whereas ASR models evaluated on [Timit](https://huggingface.co/datasets/timit_asr) report a WER in the 8.3-20.4 range.
See the leaderboards for [LibriSpeech](https://paperswithcode.com/sota/speech-recognition-on-librispeech-test-clean) and [Timit](https://paperswithcode.com/sota/speech-recognition-on-timit) for the most recent values.
## Examples
Perfect match between prediction and reference:
```python
from evaluate import load
wer = load("wer")
predictions = ["hello world", "good night moon"]
references = ["hello world", "good night moon"]
wer_score = wer.compute(predictions=predictions, references=references)
print(wer_score)
0.0
```
Partial match between prediction and reference:
```python
from evaluate import load
wer = load("wer")
predictions = ["this is the prediction", "there is an other sample"]
references = ["this is the reference", "there is another one"]
wer_score = wer.compute(predictions=predictions, references=references)
print(wer_score)
0.5
```
No match between prediction and reference:
```python
from evaluate import load
wer = load("wer")
predictions = ["hello world", "good night moon"]
references = ["hi everyone", "have a great day"]
wer_score = wer.compute(predictions=predictions, references=references)
print(wer_score)
1.0
```
## Limitations and bias
WER is a valuable tool for comparing different systems as well as for evaluating improvements within one system. This kind of measurement, however, provides no details on the nature of translation errors and further work is therefore required to identify the main source(s) of error and to focus any research effort.
## Citation
```bibtex
@inproceedings{woodard1982,
author = {Woodard, J.P. and Nelson, J.T.},
year = {1982},
journal = {Workshop on standardisation for speech I/O technology, Naval Air Development Center, Warminster, PA},
title = {An information theoretic measure of speech recognition performance}
}
```
```bibtex
@inproceedings{morris2004,
author = {Morris, Andrew and Maier, Viktoria and Green, Phil},
year = {2004},
month = {01},
pages = {},
title = {From WER and RIL to MER and WIL: improved evaluation measures for connected speech recognition.}
}
```
## Further References
- [Word Error Rate -- Wikipedia](https://en.wikipedia.org/wiki/Word_error_rate)
- [Hugging Face Tasks -- Automatic Speech Recognition](https://huggingface.co/tasks/automatic-speech-recognition)

@@ -1,22 +0,0 @@
#
# Copyright 2016 The BigDL Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import evaluate
from evaluate.utils import launch_gradio_widget
module = evaluate.load("wer")
launch_gradio_widget(module)

@@ -1,2 +0,0 @@
git+https://github.com/huggingface/evaluate@{COMMIT_PLACEHOLDER}
jiwer