Nano: update PyTorch inference key feature doc (#6938)
* update installation * update * update runtime acceleration * update link in rst * add bf16 quantization and optimize() * update based on comment * update * update based on comment
This commit is contained in:
parent
3b6c56b505
commit
935fc48354
1 changed file with 158 additions and 97 deletions

BigDL-Nano provides several APIs which can help users easily apply optimizations on inference pipelines to improve latency and throughput. Currently, performance accelerations are achieved by integrating extra runtimes as inference backend engines or using quantization methods on full-precision trained models to reduce computation during inference. InferenceOptimizer (`bigdl.nano.pytorch.InferenceOptimizer`) provides the APIs for all optimizations that you need for inference.

For runtime acceleration, BigDL-Nano has enabled three kinds of graph mode format and corresponding runtime in `InferenceOptimizer.trace()`: ONNXRuntime, OpenVINO and TorchScript.

```eval_rst
.. warning::
    Please use ``bigdl.nano.pytorch.InferenceOptimizer.trace`` instead.
```

For quantization, BigDL-Nano provides only post-training quantization in `InferenceOptimizer.quantize()` for users to infer with models of 8-bit precision or 16-bit precision. Quantization-aware training is not available for now.

```eval_rst
.. warning::
    Please use ``bigdl.nano.pytorch.InferenceOptimizer.quantize`` instead.
```

Before you go ahead with these APIs, you have to make sure BigDL-Nano is correctly installed for PyTorch. If not, please follow [this](../Overview/nano.md) to set up your environment.

```eval_rst
.. note::
    You can install all required dependencies by

    ::

        pip install --pre --upgrade bigdl-nano[pytorch,inference]

    This will install all dependencies required by BigDL-Nano PyTorch inference.

    Or if you just want to use one of the supported optimizations:

    - `INC (Intel Neural Compressor) <https://github.com/intel/neural-compressor>`_: ``pip install neural-compressor``

    - `OpenVINO <https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/overview.html>`_: ``pip install openvino-dev``

    - `ONNXRuntime <https://onnxruntime.ai/>`_: ``pip install onnx onnxruntime onnxruntime-extensions onnxsim neural-compressor``

    We recommend installing all dependencies by ``pip install --pre --upgrade bigdl-nano[pytorch,inference]``, because you may run into version issues if you install dependencies manually.
```

## Graph Mode Acceleration
All available runtime accelerations are integrated in `InferenceOptimizer.trace(accelerator='onnxruntime'/'openvino'/'jit')` with different accelerator values. Let's take MobileNetV3 as an example model; here is a short script that you might have before applying any of BigDL-Nano's optimizations:
```python
from torchvision.models.mobilenetv3 import mobilenet_v3_small
import torch
from torch.utils.data.dataset import TensorDataset
from torch.utils.data.dataloader import DataLoader
from bigdl.nano.pytorch import InferenceOptimizer

# step 1: create your model
model = mobilenet_v3_small(num_classes=10)

# step 2: prepare your data and dataloader (dummy data for illustration)
x = torch.rand((10, 3, 256, 256))
y = torch.ones((10,), dtype=torch.long)
ds = TensorDataset(x, y)
dataloader = DataLoader(ds, batch_size=2)

# (Optional) step 3: Something else, like training ...
```
### ONNXRuntime Acceleration
You can simply append the following part to enable your [ONNXRuntime](https://onnxruntime.ai/) acceleration.
```python
# step 4: trace your model as an ONNXRuntime model
# if you have run `trainer.fit` before trace, then argument `input_sample` is not required.
ort_model = InferenceOptimizer.trace(model, accelerator='onnxruntime', input_sample=x)

# step 5: use returned model for transparent acceleration
# The usage is almost the same with any PyTorch module
y_hat = ort_model(x)
```
### OpenVINO Acceleration
The [OpenVINO](https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/overview.html) usage is quite similar to ONNXRuntime; the following usage is for OpenVINO:
```python
# step 4: trace your model as an OpenVINO model
# if you have run `trainer.fit` before trace, then argument `input_sample` is not required.
ov_model = InferenceOptimizer.trace(model, accelerator='openvino', input_sample=x)

# step 5: use returned model for transparent acceleration
# The usage is almost the same with any PyTorch module
y_hat = ov_model(x)
```
### TorchScript Acceleration
The [TorchScript](https://pytorch.org/docs/stable/jit.html) usage is a little different from the above two cases. In addition to specifying `accelerator='jit'`, you can also set `use_ipex=True` to enable the additional acceleration provided by [IPEX (Intel® Extension for PyTorch*)](https://www.intel.com/content/www/us/en/developer/tools/oneapi/extension-for-pytorch.html); we generally recommend the combination of `jit` and `ipex`. The following usage is for TorchScript:
```python
# step 4: trace your model as a JIT model
jit_model = InferenceOptimizer.trace(model, accelerator='jit', input_sample=x)

# or you can combine jit with ipex
jit_model = InferenceOptimizer.trace(model, accelerator='jit',
                                     use_ipex=True, input_sample=x)

# step 5: use returned model for transparent acceleration
# The usage is almost the same with any PyTorch module
y_hat = jit_model(x)
```
## Quantization
Quantization is widely used to compress models to a lower precision, which not only reduces the model size but also accelerates inference. For quantization precision, BigDL-Nano supports two common choices: `int8` and `bfloat16`. The usage of the two kinds of precision is quite different.

### Int8 Quantization
BigDL-Nano provides the `InferenceOptimizer.quantize()` API for users to quickly obtain an int8 quantized model with accuracy control by specifying a few arguments. Intel Neural Compressor (INC) and Post-training Optimization Tools (POT) from the OpenVINO toolkit are enabled as options.

To use INC as your quantization engine, you can choose accelerator as `None` or `'onnxruntime'`. Otherwise, `accelerator='openvino'` means using OpenVINO POT to do quantization.
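As a minimal sketch of this choice (reusing the `model` and `dataloader` defined in the example above; all other arguments are left at their defaults):
```python
# INC engine: either pure PyTorch (accelerator=None) or the ONNXRuntime backend
inc_q_model = InferenceOptimizer.quantize(model, accelerator=None, calib_data=dataloader)

# OpenVINO POT engine
pot_q_model = InferenceOptimizer.quantize(model, accelerator='openvino', calib_data=dataloader)
```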

By default, `InferenceOptimizer.quantize()` doesn't search the tuning space and returns the fully-quantized model without considering the accuracy drop. If you need to search the quantization tuning space for a model with accuracy control, you'll have to specify a few arguments to define the tuning space. More instructions can be found in [Quantization with Accuracy Control](#quantization-with-accuracy-control).
#### Quantization using Intel Neural Compressor
**Quantization without extra accelerator**

Without extra accelerator, `InferenceOptimizer.quantize()` returns a PyTorch module with desired precision and accuracy. Following the example in [Graph Mode Acceleration](#graph-mode-acceleration), you can add quantization as below:
```python
q_model = InferenceOptimizer.quantize(model, calib_data=dataloader)
# run simple prediction with transparent acceleration
y_hat = q_model(x)
```
This is the most basic usage to quantize a model with defaults: INT8 precision, and without searching the tuning space to control accuracy drop.

**Quantization with ONNXRuntime accelerator**

With the ONNXRuntime accelerator, `InferenceOptimizer.quantize()` will return a model with compressed precision that runs inference in the ONNXRuntime engine.
Still taking the example in [Graph Mode Acceleration](#graph-mode-acceleration), you can add quantization as below:
```python
ort_q_model = InferenceOptimizer.quantize(model, accelerator='onnxruntime', calib_data=dataloader)
# run simple prediction with transparent acceleration
y_hat = ort_q_model(x)
```
Using `accelerator='onnxruntime'` actually equals to first converting the model from PyTorch to ONNX and then doing quantization on the converted ONNX model:
```python
ort_model = InferenceOptimizer.trace(model, accelerator='onnxruntime', input_sample=x)
ort_q_model = InferenceOptimizer.quantize(ort_model, accelerator='onnxruntime', calib_data=dataloader)

# run inference with transparent acceleration
y_hat = ort_q_model(x)
```
#### Quantization using Post-training Optimization Tools
The POT (Post-training Optimization Tools) is provided by the OpenVINO toolkit.
Take the example in [Graph Mode Acceleration](#graph-mode-acceleration), and add quantization:
```python
ov_q_model = InferenceOptimizer.quantize(model, accelerator='openvino', calib_data=dataloader)
# run simple prediction with transparent acceleration
y_hat = ov_q_model(x)
```
Same as using the ONNXRuntime accelerator, it equals to first converting the model from PyTorch to OpenVINO and then doing quantization on the converted OpenVINO model:
```python
ov_model = InferenceOptimizer.trace(model, accelerator='openvino', input_sample=x)
ov_q_model = InferenceOptimizer.quantize(ov_model, accelerator='openvino', calib_data=dataloader)

# run inference with transparent acceleration
y_hat = ov_q_model(x)
```

#### Quantization with Accuracy Control
A set of arguments that helps to tune the results for both INC and POT quantization:

- `calib_data`: A calibration dataloader is required for static post-training quantization. And for POT, it's also used for evaluation
- `metric`: A metric of `torchmetric` to run evaluation and compare with baseline
- `accuracy_criterion`: A dictionary to specify the acceptable accuracy drop, e.g. `{'relative': 0.01, 'higher_is_better': True}`

**Accuracy Control with INC**
```python
from torchmetrics.classification import Accuracy

InferenceOptimizer.quantize(model,
                            precision='int8',
                            accelerator=None,
                            calib_data=dataloader,
                            metric=Accuracy(),
                            accuracy_criterion={'relative': 0.01, 'higher_is_better': True},
                            approach='static',
                            tuning_strategy='bayesian',
                            timeout=0,
                            max_trials=10,
                            )
```
**Accuracy Control with POT**
Similar to INC, we can run quantization like:
```python
from torchmetrics.classification import Accuracy

InferenceOptimizer.quantize(model,
                            precision='int8',
                            accelerator='openvino',
                            calib_data=dataloader,
                            metric=Accuracy(),
                            accuracy_criterion={'relative': 0.01, 'higher_is_better': True},
                            approach='static',
                            max_trials=10,
                            )
```
### BFloat16 Quantization

BigDL-Nano supports [mixed precision inference](https://pytorch.org/docs/stable/amp.html?highlight=mixed+precision) with BFloat16 and a series of additional performance tricks. BFloat16 mixed precision inference combines BFloat16 and FP32 during inference, which could lead to increased performance and reduced memory usage. Compared to FP16 mixed precision, BFloat16 mixed precision has better numerical stability.
It's quite easy to use BFloat16 quantization as below:
```python
bf16_model = InferenceOptimizer.quantize(model,
                                         precision='bf16')
# run simple prediction with transparent acceleration
with InferenceOptimizer.get_context(bf16_model):
    y_hat = bf16_model(x)
```
```eval_rst
.. note::
    For BFloat16 quantization, make sure your inference is under ``with InferenceOptimizer.get_context(bf16_model):``. Otherwise, the whole inference process will actually run in FP32 precision.

    For more details about the context manager provided by ``InferenceOptimizer.get_context()``, you could refer to the related `How-to guide <https://bigdl.readthedocs.io/en/latest/doc/Nano/Howto/Inference/PyTorch/pytorch_context_manager.html>`_.
```
#### Channels Last Memory Format
You can combine BFloat16 quantization with `channels_last=True` to use the channels-last memory format, i.e. NHWC (batch size, height, width, channels), as an alternative way to store tensors in the classic/contiguous NCHW order.
The usage for this is as below:
```python
bf16_model = InferenceOptimizer.quantize(model,
                                         precision='bf16',
                                         channels_last=True)
# run simple prediction with transparent acceleration
with InferenceOptimizer.get_context(bf16_model):
    y_hat = bf16_model(x)
```
#### Intel® Extension for PyTorch
[Intel Extension for PyTorch](https://github.com/intel/intel-extension-for-pytorch) (a.k.a. IPEX) extends PyTorch with optimizations for an extra performance boost on Intel hardware.

BigDL-Nano integrates IPEX through `InferenceOptimizer.quantize()`. Users can turn on IPEX by setting `use_ipex=True`:
```python
bf16_model = InferenceOptimizer.quantize(model,
                                         precision='bf16',
                                         use_ipex=True,
                                         channels_last=True)
# run simple prediction with transparent acceleration
with InferenceOptimizer.get_context(bf16_model):
    y_hat = bf16_model(x)
```
#### TorchScript Acceleration
[TorchScript](https://pytorch.org/docs/stable/jit.html) can also be used for BFloat16 quantization. We recommend you take advantage of IPEX with TorchScript for further optimizations. The following usage is for TorchScript:
```python
bf16_model = InferenceOptimizer.quantize(model,
                                         precision='bf16',
                                         accelerator='jit',
                                         input_sample=x,
                                         use_ipex=True,
                                         channels_last=True)
# run simple prediction with transparent acceleration
with InferenceOptimizer.get_context(bf16_model):
    y_hat = bf16_model(x)
```
## Automatically Choose the Best Optimization

If you have no idea which optimization to choose, or you just want to compare them and choose the best one, you can use `InferenceOptimizer.optimize()`.

Still taking the example in [Graph Mode Acceleration](#graph-mode-acceleration), you can use it as follows:
```python
# try all supported optimizations
opt = InferenceOptimizer()
opt.optimize(model, training_data=dataloader, thread_num=4)

# get the best optimization
best_model, option = opt.get_best_model()

# use the quantized model as before
with InferenceOptimizer.get_context(best_model):
    y_hat = best_model(x)
```

`InferenceOptimizer.optimize()` will try all supported optimizations, and `get_best_model()` then returns the best one.
The output table of `optimize()` looks like:
```bash
 -------------------------------- ---------------------- --------------
|             method             |        status        | latency(ms)  |
 -------------------------------- ---------------------- --------------
|            original            |      successful      |    9.337     |
|              bf16              |      successful      |    8.974     |
|          static_int8           |      successful      |    8.934     |
|         jit_fp32_ipex          |      successful      |    10.013    |
|  jit_fp32_ipex_channels_last   |      successful      |    4.955     |
|         jit_bf16_ipex          |      successful      |    2.563     |
|  jit_bf16_ipex_channels_last   |      successful      |    3.135     |
|         openvino_fp32          |      successful      |    1.727     |
|         openvino_int8          |      successful      |    1.635     |
|        onnxruntime_fp32        |      successful      |    3.801     |
|    onnxruntime_int8_qlinear    |      successful      |    4.727     |
 -------------------------------- ---------------------- --------------
* means we assume the accuracy of the traced model does not change, so we don't recompute accuracy to save time.
Optimization cost 58.3s in total.
```

For more details, you can refer to the [How-to guide](https://bigdl.readthedocs.io/en/latest/doc/Nano/Howto/Inference/PyTorch/inference_optimizer_optimize.html) and the [API Doc](https://bigdl.readthedocs.io/en/latest/doc/PythonAPI/Nano/pytorch.html#bigdl-nano-pytorch-inferenceoptimizer).
## Multi-instance Acceleration
BigDL-Nano also provides multi-instance inference. To use it, you should call `multi_model = InferenceOptimizer.to_multi_instance(model, num_processes=n)` first, where `num_processes` specifies the number of processes you want to use.
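
As a minimal sketch (reusing `model` and `x` from the example above; the exact input format accepted by the returned multi-instance model is an assumption here, so please check the API doc for details):
```python
# create a multi-instance model that runs inference in 2 processes
multi_model = InferenceOptimizer.to_multi_instance(model, num_processes=2)

# assumption: pass a list of input batches, and a list of predictions
# (one per batch) is computed in parallel across the processes
x_list = [x, x]
y_hat_list = multi_model(x_list)
```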