
# PyTorch Inference

BigDL-Nano provides several APIs which can help users easily apply optimizations on inference pipelines to improve latency and throughput. Currently, performance accelerations are achieved by integrating extra runtimes as inference backend engines or using quantization methods on full-precision trained models to reduce computation during inference. InferenceOptimizer (bigdl.nano.pytorch.InferenceOptimizer) provides the APIs for all optimizations that you need for inference.

For runtime acceleration, BigDL-Nano has enabled three kinds of runtimes for users in `InferenceOptimizer.trace()`: ONNXRuntime, OpenVINO and JIT.

.. warning::
    ``bigdl.nano.pytorch.Trainer.trace`` will be deprecated in future release.

    Please use ``bigdl.nano.pytorch.InferenceOptimizer.trace`` instead.

For quantization, BigDL-Nano currently provides only post-training quantization in `InferenceOptimizer.quantize()`, which lets you run inference with models of 8-bit or 16-bit precision. Quantization-aware training is not available for now. Conversion of a model to a 16-bit format such as BF16 is supported.
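For instance, a minimal sketch of BF16 conversion could look like the snippet below; this assumes `precision='bf16'` is accepted by `InferenceOptimizer.quantize()` and reuses the `model` and input `x` defined in the example script later in this guide:

```python
from bigdl.nano.pytorch import InferenceOptimizer

# minimal sketch (assumption): convert a trained FP32 model to BF16 precision
bf16_model = InferenceOptimizer.quantize(model, precision='bf16')

# the returned model is used like any other PyTorch module
y_hat = bf16_model(x)
```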

.. warning::
    ``bigdl.nano.pytorch.Trainer.quantize`` will be deprecated in future release.

    Please use ``bigdl.nano.pytorch.InferenceOptimizer.quantize`` instead.

Before you go ahead with these APIs, you have to make sure BigDL-Nano is correctly installed for PyTorch. If not, please follow the installation guide to set up your environment.

## Runtime Acceleration

All available runtime accelerations are integrated in `InferenceOptimizer.trace(accelerator='onnxruntime'/'openvino'/'jit')` with different `accelerator` values. Let's take MobileNetV3 as an example model; here is a short script that you might have before applying any of BigDL-Nano's optimizations:

```python
from torchvision.models.mobilenetv3 import mobilenet_v3_small
import torch
from torch.utils.data.dataset import TensorDataset
from torch.utils.data.dataloader import DataLoader
from bigdl.nano.pytorch import InferenceOptimizer, Trainer

# step 1: create your model
model = mobilenet_v3_small(num_classes=10)

# step 2: prepare your data and dataloader
x = torch.rand((10, 3, 256, 256))
y = torch.ones((10, ), dtype=torch.long)
ds = TensorDataset(x, y)
dataloader = DataLoader(ds, batch_size=2)

# (Optional) step 3: something else, like training ...
trainer = Trainer()
trainer.fit(model, dataloader)
...

# Inference/Prediction on the plain (not yet accelerated) model
trainer.validate(model, dataloader)
trainer.test(model, dataloader)
trainer.predict(model, dataloader)
```

### ONNXRuntime Acceleration

Before you start with the ONNXRuntime accelerator, you need to install some ONNX packages as follows to set up your environment for ONNXRuntime acceleration:

```bash
pip install onnx onnxruntime
```

When you're ready, you can simply append the following code to enable ONNXRuntime acceleration:

```python
# step 4: trace your model as an ONNXRuntime model
# if you have run `trainer.fit` before trace, then the argument `input_sample` is not required.
ort_model = InferenceOptimizer.trace(model, accelerator='onnxruntime', input_sample=x)

# step 5: use the returned model for transparent acceleration
# the usage is almost the same as with any PyTorch module
y_hat = ort_model(x)

# validate, predict, test in Trainer also support acceleration
trainer.validate(ort_model, dataloader)
trainer.test(ort_model, dataloader)
trainer.predict(ort_model, dataloader)
# note that `ort_model` is not trainable any more, so calls like the following are illegal:
# trainer.fit(ort_model, dataloader)
```

### OpenVINO Acceleration

To use OpenVINO acceleration, you have to install the OpenVINO toolkit:

```bash
pip install openvino-dev
```

The OpenVINO usage is quite similar to ONNXRuntime; the following shows the OpenVINO counterpart:

```python
# step 4: trace your model as an OpenVINO model
# if you have run `trainer.fit` before trace, then the argument `input_sample` is not required.
ov_model = InferenceOptimizer.trace(model, accelerator='openvino', input_sample=x)

# step 5: use the returned model for transparent acceleration
# the usage is almost the same as with any PyTorch module
y_hat = ov_model(x)

# validate, predict, test in Trainer also support acceleration
trainer = Trainer()
trainer.validate(ov_model, dataloader)
trainer.test(ov_model, dataloader)
trainer.predict(ov_model, dataloader)
# note that `ov_model` is not trainable any more, so calls like the following are illegal:
# trainer.fit(ov_model, dataloader)
```
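The third runtime mentioned above, JIT, follows the same pattern. Below is a minimal sketch, assuming `accelerator='jit'` accepts the same `input_sample` argument as the other backends:

```python
# step 4 (alternative, a sketch): trace your model with TorchScript JIT as the backend
jit_model = InferenceOptimizer.trace(model, accelerator='jit', input_sample=x)

# step 5: use the returned model for transparent acceleration
y_hat = jit_model(x)
```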

## Quantization

Quantization is widely used to compress models to a lower precision, which not only reduces the model size but also accelerates inference. BigDL-Nano provides the `InferenceOptimizer.quantize()` API for users to quickly obtain a quantized model with accuracy control by specifying a few arguments. Intel Neural Compressor (INC) and the Post-training Optimization Tools (POT) from the OpenVINO toolkit are enabled as options. Meanwhile, runtime acceleration is also included directly in the quantization pipeline when using `accelerator='onnxruntime'/'openvino'`, so you don't have to run `InferenceOptimizer.trace()` before quantization.

To use INC as your quantization engine, set `accelerator` to `None` or `'onnxruntime'`. Otherwise, `accelerator='openvino'` means using OpenVINO POT to do quantization.

By default, `InferenceOptimizer.quantize()` doesn't search the tuning space and returns the fully-quantized model without considering the accuracy drop. If you need to search the quantization tuning space for a model with accuracy control, you'll have to specify a few arguments to define the tuning space. See Quantization with Accuracy Control below for more instructions.

### Quantization using Intel Neural Compressor

By default, Intel Neural Compressor is not installed with BigDL-Nano. So if you decide to use it as your quantization backend, you'll need to install it first:

```bash
pip install neural-compressor==1.11.0
```

#### Quantization without extra accelerator

Without an extra accelerator, `InferenceOptimizer.quantize()` returns a PyTorch module with the desired precision and accuracy. Following the example in Runtime Acceleration, you can add quantization as below:

```python
q_model = InferenceOptimizer.quantize(model, calib_dataloader=dataloader)
# run simple prediction with transparent acceleration
y_hat = q_model(x)

# validate, predict, test in Trainer also support acceleration
trainer.validate(q_model, dataloader)
trainer.test(q_model, dataloader)
trainer.predict(q_model, dataloader)
```

This is the most basic usage: the model is quantized to INT8 precision with default settings, without searching the tuning space to control the accuracy drop.

#### Quantization with ONNXRuntime accelerator

With the ONNXRuntime accelerator, `InferenceOptimizer.quantize()` will return a model with compressed precision that runs inference in the ONNXRuntime engine. When using ONNXRuntime as the backend, you also need to install `onnxruntime-extensions` as a dependency of INC, in addition to the dependencies required in ONNXRuntime Acceleration:

```bash
pip install onnx onnxruntime onnxruntime-extensions
```

Still taking the example in Runtime Acceleration, you can add quantization as below:

```python
ort_q_model = InferenceOptimizer.quantize(model, accelerator='onnxruntime', calib_dataloader=dataloader)
# run simple prediction with transparent acceleration
y_hat = ort_q_model(x)

# validate, predict, test in Trainer also support acceleration
trainer.validate(ort_q_model, dataloader)
trainer.test(ort_q_model, dataloader)
trainer.predict(ort_q_model, dataloader)
```

Using `accelerator='onnxruntime'` is actually equivalent to first converting the model from PyTorch to ONNX and then quantizing the converted ONNX model:

```python
ort_model = InferenceOptimizer.trace(model, accelerator='onnxruntime', input_sample=x)
ort_q_model = InferenceOptimizer.quantize(ort_model, accelerator='onnxruntime', calib_dataloader=dataloader)

# run inference with transparent acceleration
y_hat = ort_q_model(x)
trainer.validate(ort_q_model, dataloader)
trainer.test(ort_q_model, dataloader)
trainer.predict(ort_q_model, dataloader)
```

### Quantization using Post-training Optimization Tools

POT (Post-training Optimization Tools) is provided by the OpenVINO toolkit. To use POT, you need to install OpenVINO just as in OpenVINO Acceleration:

```bash
pip install openvino-dev
```

Take the example in Runtime Acceleration, and add quantization:

```python
ov_q_model = InferenceOptimizer.quantize(model, accelerator='openvino', calib_dataloader=dataloader)
# run simple prediction with transparent acceleration
y_hat = ov_q_model(x)

# validate, predict, test in Trainer also support acceleration
trainer.validate(ov_q_model, dataloader)
trainer.test(ov_q_model, dataloader)
trainer.predict(ov_q_model, dataloader)
```

As with the ONNXRuntime accelerator, this is equivalent to first converting the model from PyTorch to OpenVINO and then quantizing the converted OpenVINO model:

```python
ov_model = InferenceOptimizer.trace(model, accelerator='openvino', input_sample=x)
ov_q_model = InferenceOptimizer.quantize(ov_model, accelerator='openvino', calib_dataloader=dataloader)

# run inference with transparent acceleration
y_hat = ov_q_model(x)
trainer.validate(ov_q_model, dataloader)
trainer.test(ov_q_model, dataloader)
trainer.predict(ov_q_model, dataloader)
```

### Quantization with Accuracy Control

A set of arguments helps to tune the results for both INC and POT quantization:

- `calib_dataloader`: A calibration dataloader is required for static post-training quantization. For POT, it is also used for evaluation.
- `metric`: A metric of `torchmetrics` to run evaluation and compare with the baseline.
- `accuracy_criterion`: A dictionary that specifies the acceptable accuracy drop, e.g. `{'relative': 0.01, 'higher_is_better': True}` (see the sketch after this list).
  - `relative` / `absolute`: Drop type, i.e. whether the accuracy drop is measured relative or absolute to the baseline.
  - `higher_is_better`: Indicates whether a larger metric value means better accuracy.
- `max_trials`: Maximum number of trials in the search; if the algorithm can't find a satisfying model, it will exit and raise an error.
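As an illustration of the two drop types, `accuracy_criterion` could be written as follows (a minimal sketch; the `'absolute'` key is an assumption based on the drop types listed above):

```python
# illustrative (assumed) accuracy_criterion settings
accuracy_criterion = {'relative': 0.01, 'higher_is_better': True}    # tolerate at most a 1% relative drop
accuracy_criterion = {'absolute': 0.005, 'higher_is_better': True}   # tolerate at most a 0.005 absolute drop
```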

**Accuracy Control with INC**

There are a few arguments required only by INC, and you should not specify or modify any of them if you use `accelerator='openvino'`.

- `tuning_strategy` (optional): specifies the algorithm used to search the tuning space. In most cases, you don't need to change it.
- `timeout`: Timeout of your tuning; the default of 0 means unlimited tuning time.

Here is an example of using INC with accuracy control. It will search for a model within a 1% accuracy drop, using at most 10 trials.

```python
from torchmetrics.classification import Accuracy

InferenceOptimizer.quantize(model,
                            precision='int8',
                            accelerator=None,
                            calib_dataloader=dataloader,
                            metric=Accuracy(),
                            accuracy_criterion={'relative': 0.01, 'higher_is_better': True},
                            approach='static',
                            method='fx',
                            tuning_strategy='bayesian',
                            timeout=0,
                            max_trials=10,
                            )
```

**Accuracy Control with POT**

Similar to INC, we can run quantization like:

```python
from torchmetrics.classification import Accuracy

InferenceOptimizer.quantize(model,
                            precision='int8',
                            accelerator='openvino',
                            calib_dataloader=dataloader,
                            metric=Accuracy(),
                            accuracy_criterion={'relative': 0.01, 'higher_is_better': True},
                            approach='static',
                            max_trials=10,
                            )
```