Add cpu and gpu examples of distil-whisper (#9374)
* Add distil-whisper examples
* Fixes based on comments
* Minor fixes

Co-authored-by: Ariadne330 <wyn2000330@126.com>
This commit is contained in:
parent
ad81b5d838
commit
0674146cfb
10 changed files with 605 additions and 0 deletions
@@ -164,6 +164,7 @@ Over 20 models have been optimized/verified on `bigdl-llm`, including *LLaMA/LLa
| WizardCoder-Python | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/wizardcoder-python) | |
| CodeShell | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/CodeShell) | |
| Fuyu | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/fuyu) | |
| Distil-Whisper | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/distil-whisper) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/distil-whisper) |

***For more details, please refer to the `bigdl-llm` [Document](https://test-bigdl-llm.readthedocs.io/en/main/doc/LLM/index.html), [Readme](python/llm), [Tutorial](https://github.com/intel-analytics/bigdl-llm-tutorial) and [API Doc](https://bigdl.readthedocs.io/en/latest/doc/PythonAPI/LLM/index.html).***
@@ -71,6 +71,7 @@ Over 20 models have been optimized/verified on `bigdl-llm`, including *LLaMA/LLa
| WizardCoder-Python | [link](example/CPU/HF-Transformers-AutoModels/Model/wizardcoder-python) | |
| CodeShell | [link](example/CPU/HF-Transformers-AutoModels/Model/CodeShell) | |
| Fuyu | [link](example/CPU/HF-Transformers-AutoModels/Model/fuyu) | |
| Distil-Whisper | [link](example/CPU/HF-Transformers-AutoModels/Model/distil-whisper) | [link](example/GPU/HF-Transformers-AutoModels/Model/distil-whisper) |

### Working with `bigdl-llm`
@@ -0,0 +1,86 @@
# Distil-Whisper

In this directory, you will find examples of how you could apply BigDL-LLM INT4 optimizations on Distil-Whisper models. For illustration purposes, we utilize [distil-whisper/distil-large-v2](https://huggingface.co/distil-whisper/distil-large-v2) as a reference Distil-Whisper model.

## 0. Requirements
To run these examples with BigDL-LLM, we have some recommended requirements for your machine; please refer to [here](../README.md#recommended-requirements) for more information.

## Example: Recognize Tokens using `generate()` API
In the example [recognize.py](./recognize.py), we show a basic use case for a Distil-Whisper model to conduct transcription using the `pipeline()` API, with BigDL-LLM INT4 optimizations.
### 1. Install
We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).

After installing conda, create a Python environment for BigDL-LLM:
```bash
conda create -n llm python=3.9
conda activate llm

pip install --pre --upgrade bigdl-llm[all] # install the latest bigdl-llm nightly build with 'all' option
pip install datasets soundfile librosa # required by audio processing
```
### 2. Run
```bash
python ./recognize.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --repo-id-or-data-path REPO_ID_OR_DATA_PATH --language LANGUAGE --chunk-length CHUNK_LENGTH --batch-size BATCH_SIZE
```

Arguments info:
- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the huggingface repo id for the Distil-Whisper model to be downloaded, or the path to the huggingface checkpoint folder. It defaults to `'distil-whisper/distil-large-v2'`.
- `--repo-id-or-data-path REPO_ID_OR_DATA_PATH`: argument defining the huggingface repo id for the audio dataset to be downloaded, or the path to the huggingface dataset folder. It defaults to `'distil-whisper/librispeech_long'`.
- `--language LANGUAGE`: argument defining the language to be transcribed. It defaults to `english`.
- `--chunk-length CHUNK_LENGTH`: argument defining the maximum length, in seconds, of the chunks used to trim and pad longer or shorter audio sequences. For audio recordings shorter than 30 seconds, it can be set to 0 for better performance. It defaults to 15.
- `--batch-size BATCH_SIZE`: argument defining the batch size of pipeline inference; it usually equals the length of the audio divided by the chunk length (see the worked example after this list). It defaults to 16.
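For example, a 240-second recording transcribed with `--chunk-length 15` is split into roughly 240 / 15 = 16 chunks, so the default `--batch-size 16` lets the pipeline process all chunks in a single batch.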
> **Note**: When loading the model in 4-bit, BigDL-LLM converts linear layers in the model into INT4 format. In theory, a *X*B model saved in 16-bit will require approximately 2*X* GB of memory for loading, and ~0.5*X* GB memory for further inference.
>
> Please select the appropriate size of the Distil-Whisper model based on the capabilities of your machine.
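As a quick illustration, the 4-bit loading in [recognize.py](./recognize.py) is a one-line change over standard Hugging Face loading code; a minimal sketch:

```python
from bigdl.llm.transformers import AutoModelForSpeechSeq2Seq

# `load_in_4bit=True` converts the model's linear layers to INT4 format on load
model = AutoModelForSpeechSeq2Seq.from_pretrained("distil-whisper/distil-large-v2",
                                                  load_in_4bit=True)
```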
#### 2.1 Client
On client Windows machines, it is recommended to run directly with full utilization of all cores:
```powershell
python ./recognize.py
```
#### 2.2 Server
For optimal performance on servers, it is recommended to set several environment variables (refer to [here](../README.md#best-known-configuration-on-linux) for more information), and run the example with all the physical cores of a single socket.

E.g. on Linux,
```bash
# set BigDL-Nano env variables
source bigdl-nano-init

# e.g. for a server with 48 cores per socket
export OMP_NUM_THREADS=48
numactl -C 0-47 -m 0 python ./recognize.py
```
#### 2.3 Sample Output
##### 2.3.1 Short-Form Transcription

Model: [distil-whisper/distil-large-v2](https://huggingface.co/distil-whisper/distil-large-v2)

Command line:
```bash
python ./recognize.py --repo-id-or-data-path 'hf-internal-testing/librispeech_asr_dummy' --chunk-length 0
```
Output:
```log
Inference time: xxxx s
-------------------- Output --------------------
[' Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel.']
```

##### 2.3.2 Long-Form Transcription

Model: [distil-whisper/distil-large-v2](https://huggingface.co/distil-whisper/distil-large-v2)

Command line:
```bash
python ./recognize.py --repo-id-or-data-path 'distil-whisper/librispeech_long' --chunk-length 15
```
Output:
```log
inference time is xxxx s
Mr Quilter is the Apostle of the Middle classes, and we are glad to welcome his Gospel. Nor is Mr Quilter's manner less interesting than his matter. He tells us that at this festive season of the year, with Christmas and roast beef looming before us, similes drawn from eating and its results occur most readily to the mind. He has grave doubts whether Sir Frederick Leighton's work is really Greek after all, and can discover in it but little of rocky Ithaca. Linel's pictures are a sort of upguards and Adam paintings, and Mason's exquisite itels are as national as a Jingo poem. Mr Birkett Foster's landscapes smile at one much in the same way that Mr. Karker used to flash his teeth, and Mr. John Collier gives his sitter a cheerful slap on the back before he says, like a shampoo or a Turkish bath, next man.
```
@@ -0,0 +1,68 @@
#
# Copyright 2016 The BigDL Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

import time
import argparse

from bigdl.llm.transformers import AutoModelForSpeechSeq2Seq
from datasets import load_dataset
from transformers import pipeline
from transformers.models.whisper import WhisperFeatureExtractor, WhisperTokenizer


if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Transcribe audio using `pipeline()` API for Distil-Whisper model')
    parser.add_argument('--repo-id-or-model-path', type=str, default="distil-whisper/distil-large-v2",
                        help='The huggingface repo id for the Distil-Whisper model to be downloaded'
                             ', or the path to the huggingface checkpoint folder')
    parser.add_argument('--repo-id-or-data-path', type=str,
                        default="distil-whisper/librispeech_long",
                        help='The huggingface repo id for the audio dataset to be downloaded'
                             ', or the path to the huggingface dataset folder')
    parser.add_argument('--language', type=str, default="english",
                        help='language to be transcribed')
    parser.add_argument('--batch-size', type=int, default=16,
                        help='The batch size of pipeline inference; '
                             'it usually equals the length of the audio divided by the chunk length.')
    parser.add_argument('--chunk-length', type=int, default=15,
                        help="The maximum length in seconds of the chunks used to trim "
                             "and pad longer or shorter audio sequences. Defaults to 15s.")

    args = parser.parse_args()

    model_path = args.repo_id_or_model_path
    dataset_path = args.repo_id_or_data_path

    # Load dummy dataset and read audio files
    dataset = load_dataset(dataset_path, "clean", split="validation")
    audio = dataset[0]["audio"]

    # Load the model in 4-bit: `load_in_4bit=True` converts its linear layers to INT4 format
    model = AutoModelForSpeechSeq2Seq.from_pretrained(model_path, load_in_4bit=True)
    model.config.forced_decoder_ids = None

    # Wrap the optimized model in a standard Hugging Face ASR pipeline;
    # `chunk_length_s` controls how the audio is split for batched inference
    pipe = pipeline(
        "automatic-speech-recognition",
        model=model,
        feature_extractor=WhisperFeatureExtractor.from_pretrained(model_path),
        tokenizer=WhisperTokenizer.from_pretrained(model_path, language=args.language),
        chunk_length_s=args.chunk_length,
    )

    start = time.time()
    prediction = pipe(audio, batch_size=args.batch_size)["text"]
    print(f"inference time is {time.time()-start}")

    print(prediction)
@@ -0,0 +1,87 @@
# Distil-Whisper

In this directory, you will find examples of how you could apply BigDL-LLM INT4 optimizations on Distil-Whisper models. For illustration purposes, we utilize [distil-whisper/distil-large-v2](https://huggingface.co/distil-whisper/distil-large-v2) as a reference Distil-Whisper model.

## 0. Requirements
To run these examples with BigDL-LLM, we have some recommended requirements for your machine; please refer to [here](../README.md#recommended-requirements) for more information.

## Example: Recognize Tokens using `generate()` API
In the example [recognize.py](./recognize.py), we show a basic use case for a Distil-Whisper model to conduct transcription using the `pipeline()` API, with BigDL-LLM INT4 optimizations.
### 1. Install
We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).

After installing conda, create a Python environment for BigDL-LLM:
```bash
conda create -n llm python=3.9
conda activate llm

pip install --pre --upgrade bigdl-llm[all] # install the latest bigdl-llm nightly build with 'all' option
pip install datasets soundfile librosa # required by audio processing
```
### 2. Run
After setting up the Python environment, you can run the example with the following steps.

> **Note**: When loading the model in 4-bit, BigDL-LLM converts linear layers in the model into INT4 format. In theory, a *X*B model saved in 16-bit will require approximately 2*X* GB of memory for loading, and ~0.5*X* GB memory for further inference.
>
> Please select the appropriate size of the Distil-Whisper model based on the capabilities of your machine.
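Unlike the `HF-Transformers-AutoModels` example, this example first loads the model through the standard Hugging Face API and then applies BigDL-LLM optimization; a minimal sketch of the pattern used in [recognize.py](./recognize.py):

```python
from bigdl.llm import optimize_model
from transformers import AutoModelForSpeechSeq2Seq

# load with the vanilla Hugging Face API, then optimize in one line
# (optimize_model applies low-bit, INT4 by default, optimization)
model = AutoModelForSpeechSeq2Seq.from_pretrained("distil-whisper/distil-large-v2")
model = optimize_model(model)
```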
#### 2.1 Client
On client Windows machines, it is recommended to run directly with full utilization of all cores:
```powershell
python ./recognize.py
```
More information about arguments can be found in the [Arguments Info](#23-arguments-info) section. The expected output can be found in the [Sample Output](#24-sample-output) section.
#### 2.2 Server
For optimal performance on servers, it is recommended to set several environment variables (refer to [here](../README.md#best-known-configuration-on-linux) for more information), and run the example with all the physical cores of a single socket.

E.g. on Linux,
```bash
# set BigDL-Nano env variables
source bigdl-nano-init

# e.g. for a server with 48 cores per socket
export OMP_NUM_THREADS=48
numactl -C 0-47 -m 0 python ./recognize.py
```
More information about arguments can be found in the [Arguments Info](#23-arguments-info) section. The expected output can be found in the [Sample Output](#24-sample-output) section.
#### 2.3 Arguments Info
In the example, several arguments can be passed to satisfy your requirements:

- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the huggingface repo id for the Distil-Whisper model to be downloaded, or the path to the huggingface checkpoint folder. It defaults to `'distil-whisper/distil-large-v2'`.
- `--repo-id-or-data-path REPO_ID_OR_DATA_PATH`: argument defining the huggingface repo id for the audio dataset to be downloaded, or the path to the huggingface dataset folder. It defaults to `'distil-whisper/librispeech_long'`.
- `--language LANGUAGE`: argument defining the language to be transcribed. It defaults to `english`.
- `--chunk-length CHUNK_LENGTH`: argument defining the maximum length, in seconds, of the chunks used to trim and pad longer or shorter audio sequences. For audio recordings shorter than 30 seconds, it can be set to 0 for better performance. It defaults to 15.
- `--batch-size BATCH_SIZE`: argument defining the batch size of pipeline inference; it usually equals the length of the audio divided by the chunk length. It defaults to 16.
#### 2.4 Sample Output
##### 2.4.1 Short-Form Transcription

Model: [distil-whisper/distil-large-v2](https://huggingface.co/distil-whisper/distil-large-v2)

Command line:
```bash
python ./recognize.py --repo-id-or-data-path 'hf-internal-testing/librispeech_asr_dummy' --chunk-length 0
```
Output:
```log
Inference time: xxxx s
-------------------- Output --------------------
[' Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel.']
```

##### 2.4.2 Long-Form Transcription

Model: [distil-whisper/distil-large-v2](https://huggingface.co/distil-whisper/distil-large-v2)

Command line:
```bash
python ./recognize.py --repo-id-or-data-path 'distil-whisper/librispeech_long' --chunk-length 15
```
Output:
```log
inference time is xxxx s
Mr Quilter is the Apostle of the Middle classes, and we are glad to welcome his Gospel. Nor is Mr Quilter's manner less interesting than his matter. He tells us that at this festive season of the year, with Christmas and roast beef looming before us, similes drawn from eating and its results occur most readily to the mind. He has grave doubts whether Sir Frederick Leighton's work is really Greek after all, and can discover in it but little of rocky Ithaca. Linel's pictures are a sort of upguards and Adam paintings, and Mason's exquisite itels are as national as a Jingo poem. Mr Birkett Foster's landscapes smile at one much in the same way that Mr. Karker used to flash his teeth, and Mr. John Collier gives his sitter a cheerful slap on the back before he says, like a shampoo or a Turkish bath, next man.
```
@@ -0,0 +1,69 @@
#
# Copyright 2016 The BigDL Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

import time
import argparse

from bigdl.llm import optimize_model
from datasets import load_dataset
from transformers import AutoModelForSpeechSeq2Seq, pipeline
from transformers.models.whisper import WhisperFeatureExtractor, WhisperTokenizer


if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Transcribe audio using `pipeline()` API for Distil-Whisper model')
    parser.add_argument('--repo-id-or-model-path', type=str, default="distil-whisper/distil-large-v2",
                        help='The huggingface repo id for the Distil-Whisper model to be downloaded'
                             ', or the path to the huggingface checkpoint folder')
    parser.add_argument('--repo-id-or-data-path', type=str,
                        default="distil-whisper/librispeech_long",
                        help='The huggingface repo id for the audio dataset to be downloaded'
                             ', or the path to the huggingface dataset folder')
    parser.add_argument('--language', type=str, default="english",
                        help='language to be transcribed')
    parser.add_argument('--batch-size', type=int, default=16,
                        help='The batch size of pipeline inference; '
                             'it usually equals the length of the audio divided by the chunk length.')
    parser.add_argument('--chunk-length', type=int, default=15,
                        help="The maximum length in seconds of the chunks used to trim "
                             "and pad longer or shorter audio sequences. Defaults to 15s.")

    args = parser.parse_args()

    model_path = args.repo_id_or_model_path
    dataset_path = args.repo_id_or_data_path

    # Load dummy dataset and read audio files
    dataset = load_dataset(dataset_path, "clean", split="validation")
    audio = dataset[0]["audio"]

    # Load the model with the vanilla Hugging Face API, then apply
    # BigDL-LLM low-bit (INT4 by default) optimization in one line
    model = AutoModelForSpeechSeq2Seq.from_pretrained(model_path)
    model = optimize_model(model)
    model.config.forced_decoder_ids = None

    # Wrap the optimized model in a standard Hugging Face ASR pipeline;
    # `chunk_length_s` controls how the audio is split for batched inference
    pipe = pipeline(
        "automatic-speech-recognition",
        model=model,
        feature_extractor=WhisperFeatureExtractor.from_pretrained(model_path),
        tokenizer=WhisperTokenizer.from_pretrained(model_path, language=args.language),
        chunk_length_s=args.chunk_length,
    )

    start = time.time()
    prediction = pipe(audio, batch_size=args.batch_size)["text"]
    print(f"inference time is {time.time()-start}")

    print(prediction)
@@ -0,0 +1,75 @@
# Distil-Whisper

In this directory, you will find examples of how you could apply BigDL-LLM INT4 optimizations on Distil-Whisper models on [Intel GPUs](../README.md). For illustration purposes, we utilize [distil-whisper/distil-large-v2](https://huggingface.co/distil-whisper/distil-large-v2) as a reference Distil-Whisper model.

## 0. Requirements
To run these examples with BigDL-LLM on Intel GPUs, we have some recommended requirements for your machine; please refer to [here](../README.md#recommended-requirements) for more information.

## Example: Recognize Tokens using `generate()` API
In the example [recognize.py](./recognize.py), we show a basic use case for a Distil-Whisper model to conduct transcription using the `pipeline()` API for long audio input, with BigDL-LLM INT4 optimizations.
### 1. Install
We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).

After installing conda, create a Python environment for BigDL-LLM:
```bash
conda create -n llm python=3.9
conda activate llm
# the below command will install intel_extension_for_pytorch==2.0.110+xpu by default
# you can install a specific ipex/torch version for your need
pip install --pre --upgrade bigdl-llm[xpu] -f https://developer.intel.com/ipex-whl-stable-xpu
pip install datasets soundfile librosa # required by audio processing
```
### 2. Configure OneAPI Environment Variables
```bash
source /opt/intel/oneapi/setvars.sh
```
### 3. Run
For optimal performance on Arc, it is recommended to set several environment variables.

```bash
export USE_XETLA=OFF
export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
```

```bash
python ./recognize.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --repo-id-or-data-path REPO_ID_OR_DATA_PATH --language LANGUAGE --chunk-length CHUNK_LENGTH --batch-size BATCH_SIZE
```
Arguments info:
- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the huggingface repo id for the Distil-Whisper model to be downloaded, or the path to the huggingface checkpoint folder. It defaults to `'distil-whisper/distil-large-v2'`.
- `--repo-id-or-data-path REPO_ID_OR_DATA_PATH`: argument defining the huggingface repo id for the audio dataset to be downloaded, or the path to the huggingface dataset folder. It defaults to `'distil-whisper/librispeech_long'`.
- `--language LANGUAGE`: argument defining the language to be transcribed. It defaults to `english`.
- `--chunk-length CHUNK_LENGTH`: argument defining the maximum length, in seconds, of the chunks used to trim and pad longer or shorter audio sequences. For audio recordings shorter than 30 seconds, it can be set to 0 for better performance. It defaults to 15.
- `--batch-size BATCH_SIZE`: argument defining the batch size of pipeline inference; it usually equals the length of the audio divided by the chunk length. It defaults to 16.
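On Intel GPUs, the main additions over the CPU example are moving the optimized model to the `xpu` device and pointing the pipeline at it; a minimal sketch of the pattern used in [recognize.py](./recognize.py):

```python
import intel_extension_for_pytorch as ipex  # registers the 'xpu' device
from bigdl.llm.transformers import AutoModelForSpeechSeq2Seq

# load with INT4 optimization, then move the model to the Intel GPU
model = AutoModelForSpeechSeq2Seq.from_pretrained("distil-whisper/distil-large-v2",
                                                  load_in_4bit=True)
model = model.to('xpu')
```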
#### Sample Output
##### Short-Form Transcription

Model: [distil-whisper/distil-large-v2](https://huggingface.co/distil-whisper/distil-large-v2)

Command line:
```bash
python ./recognize.py --repo-id-or-data-path 'hf-internal-testing/librispeech_asr_dummy' --chunk-length 0
```
Output:
```log
Inference time: xxxx s
-------------------- Output --------------------
[' Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel.']
```

##### Long-Form Transcription

Model: [distil-whisper/distil-large-v2](https://huggingface.co/distil-whisper/distil-large-v2)

Command line:
```bash
python ./recognize.py --repo-id-or-data-path 'distil-whisper/librispeech_long' --chunk-length 15
```
Output:
```log
inference time is xxxx s
Mr Quilter is the Apostle of the Middle classes, and we are glad to welcome his Gospel. Nor is Mr Quilter's manner less interesting than his matter. He tells us that at this festive season of the year, with Christmas and roast beef looming before us, similes drawn from eating and its results occur most readily to the mind. He has grave doubts whether Sir Frederick Leighton's work is really Greek after all, and can discover in it but little of rocky Ithaca. Linel's pictures are a sort of upguards and Adam paintings, and Mason's exquisite itels are as national as a Jingo poem. Mr Birkett Foster's landscapes smile at one much in the same way that Mr. Karker used to flash his teeth, and Mr. John Collier gives his sitter a cheerful slap on the back before he says, like a shampoo or a Turkish bath, next man.
```
@@ -0,0 +1,71 @@
#
# Copyright 2016 The BigDL Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

import intel_extension_for_pytorch as ipex
import time
import argparse

from transformers import pipeline
from bigdl.llm.transformers import AutoModelForSpeechSeq2Seq
from transformers.models.whisper import WhisperFeatureExtractor, WhisperTokenizer
from datasets import load_dataset


if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Transcribe audio using `pipeline()` API for Distil-Whisper model')
    parser.add_argument('--repo-id-or-model-path', type=str, default="distil-whisper/distil-large-v2",
                        help='The huggingface repo id for the Distil-Whisper model to be downloaded'
                             ', or the path to the huggingface checkpoint folder')
    parser.add_argument('--repo-id-or-data-path', type=str,
                        default="distil-whisper/librispeech_long",
                        help='The huggingface repo id for the audio dataset to be downloaded'
                             ', or the path to the huggingface dataset folder')
    parser.add_argument('--language', type=str, default="english",
                        help='language to be transcribed')
    parser.add_argument('--batch-size', type=int, default=16,
                        help='The batch size of pipeline inference; '
                             'it usually equals the length of the audio divided by the chunk length.')
    parser.add_argument('--chunk-length', type=int, default=15,
                        help="The maximum length in seconds of the chunks used to trim "
                             "and pad longer or shorter audio sequences. Defaults to 15s.")

    args = parser.parse_args()

    model_path = args.repo_id_or_model_path
    dataset_path = args.repo_id_or_data_path

    # Load dummy dataset and read audio files
    dataset = load_dataset(dataset_path, "clean", split="validation")
    audio = dataset[0]["audio"]

    # Load the model in 4-bit (INT4) format, then move it to the Intel GPU
    model = AutoModelForSpeechSeq2Seq.from_pretrained(model_path, load_in_4bit=True)
    model.to('xpu')
    model.config.forced_decoder_ids = None

    # Run the ASR pipeline on the 'xpu' device;
    # `chunk_length_s` controls how the audio is split for batched inference
    pipe = pipeline(
        "automatic-speech-recognition",
        model=model,
        feature_extractor=WhisperFeatureExtractor.from_pretrained(model_path),
        tokenizer=WhisperTokenizer.from_pretrained(model_path, language=args.language),
        chunk_length_s=args.chunk_length,
        device='xpu'
    )

    start = time.time()
    prediction = pipe(audio, batch_size=args.batch_size)["text"]
    print(f"inference time is {time.time()-start}")

    print(prediction)
@@ -0,0 +1,75 @@
# Distil-Whisper

In this directory, you will find examples of how you could apply BigDL-LLM INT4 optimizations on Distil-Whisper models on [Intel GPUs](../README.md). For illustration purposes, we utilize [distil-whisper/distil-large-v2](https://huggingface.co/distil-whisper/distil-large-v2) as a reference Distil-Whisper model.

## 0. Requirements
To run these examples with BigDL-LLM on Intel GPUs, we have some recommended requirements for your machine; please refer to [here](../README.md#recommended-requirements) for more information.

## Example: Recognize Tokens using `generate()` API
In the example [recognize.py](./recognize.py), we show a basic use case for a Distil-Whisper model to conduct transcription using the `pipeline()` API for long audio input, with BigDL-LLM INT4 optimizations.
### 1. Install
We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).

After installing conda, create a Python environment for BigDL-LLM:
```bash
conda create -n llm python=3.9
conda activate llm
# the below command will install intel_extension_for_pytorch==2.0.110+xpu by default
# you can install a specific ipex/torch version for your need
pip install --pre --upgrade bigdl-llm[xpu] -f https://developer.intel.com/ipex-whl-stable-xpu
pip install datasets soundfile librosa # required by audio processing
```
### 2. Configure OneAPI Environment Variables
```bash
source /opt/intel/oneapi/setvars.sh
```
### 3. Run
For optimal performance on Arc, it is recommended to set several environment variables.

```bash
export USE_XETLA=OFF
export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
```

```bash
python ./recognize.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --repo-id-or-data-path REPO_ID_OR_DATA_PATH --language LANGUAGE --chunk-length CHUNK_LENGTH --batch-size BATCH_SIZE
```
Arguments info:
- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the huggingface repo id for the Distil-Whisper model to be downloaded, or the path to the huggingface checkpoint folder. It defaults to `'distil-whisper/distil-large-v2'`.
- `--repo-id-or-data-path REPO_ID_OR_DATA_PATH`: argument defining the huggingface repo id for the audio dataset to be downloaded, or the path to the huggingface dataset folder. It defaults to `'distil-whisper/librispeech_long'`.
- `--language LANGUAGE`: argument defining the language to be transcribed. It defaults to `english`.
- `--chunk-length CHUNK_LENGTH`: argument defining the maximum length, in seconds, of the chunks used to trim and pad longer or shorter audio sequences. For audio recordings shorter than 30 seconds, it can be set to 0 for better performance. It defaults to 15.
- `--batch-size BATCH_SIZE`: argument defining the batch size of pipeline inference; it usually equals the length of the audio divided by the chunk length. It defaults to 16.
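This variant applies BigDL-LLM's `optimize_model` to a standard Hugging Face model and then moves it to the `xpu` device; a minimal sketch of the pattern used in [recognize.py](./recognize.py):

```python
import intel_extension_for_pytorch as ipex  # registers the 'xpu' device
from bigdl.llm import optimize_model
from transformers import AutoModelForSpeechSeq2Seq

# vanilla Hugging Face load, one-line low-bit (INT4 by default) optimization,
# then move the optimized model to the Intel GPU
model = AutoModelForSpeechSeq2Seq.from_pretrained("distil-whisper/distil-large-v2")
model = optimize_model(model)
model = model.to('xpu')
```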
#### Sample Output
##### Short-Form Transcription

Model: [distil-whisper/distil-large-v2](https://huggingface.co/distil-whisper/distil-large-v2)

Command line:
```bash
python ./recognize.py --repo-id-or-data-path 'hf-internal-testing/librispeech_asr_dummy' --chunk-length 0
```
Output:
```log
Inference time: xxxx s
-------------------- Output --------------------
[' Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel.']
```

##### Long-Form Transcription

Model: [distil-whisper/distil-large-v2](https://huggingface.co/distil-whisper/distil-large-v2)

Command line:
```bash
python ./recognize.py --repo-id-or-data-path 'distil-whisper/librispeech_long' --chunk-length 15
```
Output:
```log
inference time is xxxx s
Mr Quilter is the Apostle of the Middle classes, and we are glad to welcome his Gospel. Nor is Mr Quilter's manner less interesting than his matter. He tells us that at this festive season of the year, with Christmas and roast beef looming before us, similes drawn from eating and its results occur most readily to the mind. He has grave doubts whether Sir Frederick Leighton's work is really Greek after all, and can discover in it but little of rocky Ithaca. Linel's pictures are a sort of upguards and Adam paintings, and Mason's exquisite itels are as national as a Jingo poem. Mr Birkett Foster's landscapes smile at one much in the same way that Mr. Karker used to flash his teeth, and Mr. John Collier gives his sitter a cheerful slap on the back before he says, like a shampoo or a Turkish bath, next man.
```
@@ -0,0 +1,72 @@
#
# Copyright 2016 The BigDL Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

import intel_extension_for_pytorch as ipex
import time
import argparse

from bigdl.llm import optimize_model
from datasets import load_dataset
from transformers import AutoModelForSpeechSeq2Seq, pipeline
from transformers.models.whisper import WhisperFeatureExtractor, WhisperTokenizer


if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Transcribe audio using `pipeline()` API for Distil-Whisper model')
    parser.add_argument('--repo-id-or-model-path', type=str, default="distil-whisper/distil-large-v2",
                        help='The huggingface repo id for the Distil-Whisper model to be downloaded'
                             ', or the path to the huggingface checkpoint folder')
    parser.add_argument('--repo-id-or-data-path', type=str,
                        default="distil-whisper/librispeech_long",
                        help='The huggingface repo id for the audio dataset to be downloaded'
                             ', or the path to the huggingface dataset folder')
    parser.add_argument('--language', type=str, default="english",
                        help='language to be transcribed')
    parser.add_argument('--batch-size', type=int, default=16,
                        help='The batch size of pipeline inference; '
                             'it usually equals the length of the audio divided by the chunk length.')
    parser.add_argument('--chunk-length', type=int, default=15,
                        help="The maximum length in seconds of the chunks used to trim "
                             "and pad longer or shorter audio sequences. Defaults to 15s.")

    args = parser.parse_args()

    model_path = args.repo_id_or_model_path
    dataset_path = args.repo_id_or_data_path

    # Load dummy dataset and read audio files
    dataset = load_dataset(dataset_path, "clean", split="validation")
    audio = dataset[0]["audio"]

    # Load the model with the vanilla Hugging Face API, apply BigDL-LLM
    # low-bit (INT4 by default) optimization, then move it to the Intel GPU
    model = AutoModelForSpeechSeq2Seq.from_pretrained(model_path)
    model = optimize_model(model)
    model.to('xpu')
    model.config.forced_decoder_ids = None

    # Run the ASR pipeline on the 'xpu' device;
    # `chunk_length_s` controls how the audio is split for batched inference
    pipe = pipeline(
        "automatic-speech-recognition",
        model=model,
        feature_extractor=WhisperFeatureExtractor.from_pretrained(model_path),
        tokenizer=WhisperTokenizer.from_pretrained(model_path, language=args.language),
        chunk_length_s=args.chunk_length,
        device='xpu'
    )

    start = time.time()
    prediction = pipe(audio, batch_size=args.batch_size)["text"]
    print(f"inference time is {time.time()-start}")

    print(prediction)