
# Bark
In this directory, you will find examples of how to use the BigDL-LLM `optimize_model` API to accelerate Bark models. For illustration purposes, we use [suno/bark](https://huggingface.co/suno/bark) as the reference Bark model.
## Requirements
To run these examples with BigDL-LLM, we have some recommended requirements for your machine; please refer to [here](../README.md#recommended-requirements) for more information.
## Example: Synthesize speech with the given input text
In the example [synthesize_speech.py](./synthesize_speech.py), we show a basic use case of the Bark model synthesizing speech from a given input text, with BigDL-LLM INT4 optimizations.
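At its core, the example loads the model as usual and then wraps it with `optimize_model`. The following is a minimal sketch of that pattern, assuming the Hugging Face `transformers` Bark implementation; the bundled script may load and drive the model differently:
```python
# Minimal sketch (not the bundled script): apply BigDL-LLM INT4
# optimizations to a Bark model loaded via Hugging Face `transformers`.
import scipy.io.wavfile
from transformers import AutoProcessor, BarkModel

from bigdl.llm import optimize_model

# 'bark/' is the local model folder downloaded in step 2 below
processor = AutoProcessor.from_pretrained('bark/')
model = BarkModel.from_pretrained('bark/')
model = optimize_model(model)  # BigDL-LLM INT4 optimization happens here

inputs = processor("This is an example text for synthesize speech.")
audio = model.generate(**inputs)

# save the generated waveform to a WAV file
sample_rate = model.generation_config.sample_rate
scipy.io.wavfile.write("bark_out.wav", rate=sample_rate,
                       data=audio.cpu().numpy().squeeze())
```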
### 1. Install
We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
After installing conda, create a Python environment for BigDL-LLM:
```bash
conda create -n llm python=3.9 # recommend to use Python 3.9
conda activate llm
pip install --pre --upgrade bigdl-llm[all] # install the latest bigdl-llm nightly build with 'all' option
pip install TTS scipy
```
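Optionally, you can sanity-check the installation by importing the `optimize_model` API:
```bash
# should print the message without raising ImportError
python -c "from bigdl.llm import optimize_model; print('bigdl-llm is ready')"
```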
### 2. Download Bark model
Before running the example, you need to download the Bark model to a local folder:
```python
from huggingface_hub import snapshot_download
model_path = snapshot_download(repo_id='suno/bark',
                               local_dir='bark/') # you can change the `local_dir` parameter to specify any local folder
```
Please refer to [here](https://huggingface.co/docs/huggingface_hub/guides/download#download-files-to-local-folder) for more information about `snapshot_download`.
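Alternatively, if your installed `huggingface_hub` version ships the `huggingface-cli download` command, the same snapshot can be fetched from the command line:
```bash
# fetch the model snapshot into the local folder `bark/`
huggingface-cli download suno/bark --local-dir bark/
```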
### 3. Run
After setting up the Python environment and downloading the Bark model, you can run the example following the steps below.
#### 3.1 Client
On client Windows machines, it is recommended to run directly with full utilization of all cores:
```powershell
# make sure `--model-path` corresponds to the local folder of downloaded model
python ./synthesize_speech.py --model-path 'bark/' --text "This is an example text for synthesize speech."
```
More information about arguments can be found in [Arguments Info](#33-arguments-info) section.
#### 3.2 Server
For optimal performance on a server, it is recommended to set several environment variables (refer to [here](../README.md#best-known-configuration-on-linux) for more information), and to run the example with all the physical cores of a single socket.
E.g. on Linux,
```bash
# set BigDL-Nano env variables
source bigdl-nano-init
# e.g. for a server with 48 cores per socket
export OMP_NUM_THREADS=48
# make sure `--model-path` corresponds to the local folder of downloaded model
numactl -C 0-47 -m 0 python ./synthesize_speech.py --model-path 'bark/' --text "This is an example text for synthesize speech."
```
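The core count used above (48 in `OMP_NUM_THREADS` and the `numactl -C 0-47` range) should match the number of physical cores in one socket of your server, which you can check with `lscpu`:
```bash
# shows "Core(s) per socket" and "Socket(s)" for this machine
lscpu | grep -E 'Core\(s\) per socket|Socket\(s\)'
```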
More information about arguments can be found in [Arguments Info](#33-arguments-info) section.
#### 3.3 Arguments Info
In the example, several arguments can be passed to satisfy your requirements:
- `--model-path MODEL_PATH`: **required**, argument defining the local path to the Bark model checkpoint folder.
- `--text TEXT`: argument defining the input text to synthesize into speech. It defaults to `"This is an example text for synthesize speech."`.