[Nano] Nano How-to Guides: Format & PyTorch Inference (#5480)
* Create doc tree index for Nano How-to Guides
* Add how-to guide for PyTorch inference using ONNXRuntime
* Add how-to guide for PyTorch inference using OpenVINO
* Update how-to guides for PyTorch inference using OpenVINO/ONNXRuntime
* Change current notebook to md and revise contents to be more concentrated
* Add how-to guide: Install BigDL-Nano in Google Colab (needs further updates)
* Revise wording in how-to guides for PyTorch inference using OpenVINO/ONNXRuntime
* Add how-to guide: Quantize PyTorch Model for Inference using Intel Neural Compressor
* Add how-to guide: Quantize PyTorch Model for Inference using Post-training Quantization Tools
* Add API doc links and small revisions
* Test: synchronization through marks in py files
* Test: synchronization through notebook with cells hidden from rendering in doc
* Remove test commits for runnable example <-> guides synchronization
* Enable rendering notebooks from locations outside the sphinx source root
* Update guide "How to accelerate a PyTorch inference pipeline through OpenVINO" to a notebook under the python folder
* Update guide "How to quantize your PyTorch model for inference using Intel Neural Compressor" to a notebook under the python folder
* Fix bug where markdown inside html tags was ignored by nbconvert, and revise notebook
* Update guide "How to quantize your PyTorch model for inference using Post-training Optimization Tools" to a notebook under the python folder
* Small updates to index and current guides
* Revision based on Junwei's comments
* Update how-to guide "How to install BigDL-Nano in Google Colab", and update index page
* Fix small typos
This commit is contained in:
parent f8f717a684
commit 30dd0bd6c2
8 changed files with 131 additions and 3 deletions

docs/readthedocs/source/_toc.yml
@@ -29,8 +29,9 @@ subtrees:
          - file: doc/Nano/QuickStart/tensorflow_train
          - file: doc/Nano/QuickStart/tensorflow_inference
          - file: doc/Nano/QuickStart/hpo
-         - file: doc/Nano/Overview/known_issues
          - file: doc/Nano/QuickStart/index
+         - file: doc/Nano/Howto/index
+         - file: doc/Nano/Overview/known_issues

  - caption: DLlib
    entries:

docs/readthedocs/source/conf.py
@@ -30,8 +30,6 @@ sys.path.insert(0, os.path.abspath("../../../python/orca/src/"))
sys.path.insert(0, os.path.abspath("../../../python/serving/src/"))
sys.path.insert(0, os.path.abspath("../../../python/nano/src/"))
-
-

# -- Project information -----------------------------------------------------
import sphinx_rtd_theme
html_theme = "sphinx_rtd_theme"

3 docs/readthedocs/source/doc/Nano/Howto/accelerate_pytorch_inference_onnx.nblink Normal file
@@ -0,0 +1,3 @@
{
    "path": "../../../../../../python/nano/tutorial/notebook/inference/pytorch/accelerate_pytorch_inference_onnx.ipynb"
}

3 docs/readthedocs/source/doc/Nano/Howto/accelerate_pytorch_inference_openvino.nblink Normal file
@@ -0,0 +1,3 @@
{
    "path": "../../../../../../python/nano/tutorial/notebook/inference/pytorch/accelerate_pytorch_inference_openvino.ipynb"
}

33 docs/readthedocs/source/doc/Nano/Howto/index.rst Normal file
@@ -0,0 +1,33 @@
Nano How-to Guides
=========================

.. note::
    This page is still a work in progress. We are adding more guides.

In Nano How-to Guides, you can expect to find multiple task-oriented, bite-sized, and executable examples. These examples show you various tasks that BigDL-Nano can help you accomplish smoothly.

PyTorch Inference
-------------------------

* `How to accelerate a PyTorch inference pipeline through ONNXRuntime <accelerate_pytorch_inference_onnx.html>`_
* `How to accelerate a PyTorch inference pipeline through OpenVINO <accelerate_pytorch_inference_openvino.html>`_
* `How to quantize your PyTorch model for inference using Intel Neural Compressor <quantize_pytorch_inference_inc.html>`_
* `How to quantize your PyTorch model for inference using OpenVINO Post-training Optimization Tools <quantize_pytorch_inference_pot.html>`_

.. toctree::
    :maxdepth: 1
    :hidden:

    accelerate_pytorch_inference_onnx
    accelerate_pytorch_inference_openvino
    quantize_pytorch_inference_inc
    quantize_pytorch_inference_pot

Install
-------------------------

* `How to install BigDL-Nano in Google Colab <install_in_colab.html>`_

.. toctree::
    :maxdepth: 1
    :hidden:

    install_in_colab

84 docs/readthedocs/source/doc/Nano/Howto/install_in_colab.md Normal file
@@ -0,0 +1,84 @@
# Install BigDL-Nano in Google Colab

```eval_rst
.. note::
    This page is still a work in progress.
```

In this guide, we will show you how to install BigDL-Nano in Google Colab, along with solutions to the version conflicts that the packages pre-installed in the Colab hosted runtime may cause.

Please follow the section that corresponds to your specific usage.

## PyTorch

For PyTorch users, you need to install BigDL-Nano for PyTorch first:

```eval_rst
.. tabs::

    .. tab:: Latest

        .. code-block:: python

            !pip install bigdl-nano[pytorch]

    .. tab:: Nightly-Built

        .. code-block:: python

            !pip install --pre --upgrade bigdl-nano[pytorch]
```
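
After the installation finishes, a quick sanity check in a new Colab cell never hurts; the import path below is the one used throughout the Nano how-to guides:

```python
# These imports should succeed after a successful installation
import torch
from bigdl.nano.pytorch import Trainer

print(torch.__version__)
```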

```eval_rst
.. warning::
    For the Google Colab hosted runtime, ``source bigdl-nano-init`` hardly takes effect, as environment variables need to be set before the Jupyter kernel is started.
```

To avoid version conflicts caused by `torchtext`, you should uninstall it:

```python
!pip uninstall -y torchtext
```

### ONNXRuntime

To enable ONNXRuntime acceleration, you need to install the corresponding ONNX packages:

```python
!pip install onnx onnxruntime
```
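
With both packages in place, accelerating an inference pipeline boils down to one call. A minimal sketch, assuming the ``Trainer.trace`` API covered in the linked guides (the model and input shape here are purely illustrative):

```python
import torch
from torchvision.models import resnet18
from bigdl.nano.pytorch import Trainer

model = resnet18(pretrained=True)
model.eval()

# Trace the PyTorch model into an ONNXRuntime-accelerated one;
# `input_sample` tells the tracer the expected input shape
ort_model = Trainer.trace(model, accelerator="onnxruntime",
                          input_sample=torch.rand(1, 3, 224, 224))

with torch.no_grad():
    y_hat = ort_model(torch.rand(1, 3, 224, 224))
```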

### OpenVINO / Post-training Optimization Tools (POT)

To enable OpenVINO acceleration, or to use POT for quantization, you need to install the OpenVINO toolkit:

```python
!pip install openvino-dev

# Please remember to restart the runtime to use the newly-installed package versions
```
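
Once `openvino-dev` is installed (and the runtime restarted), OpenVINO acceleration follows the same pattern as ONNXRuntime above. A minimal sketch, again assuming the ``Trainer.trace``/``Trainer.quantize`` APIs from the linked guides, with an illustrative model and calibration dataloader:

```python
import torch
from torchvision.models import resnet18
from bigdl.nano.pytorch import Trainer

model = resnet18(pretrained=True)
model.eval()

# FP32 inference accelerated by OpenVINO
ov_model = Trainer.trace(model, accelerator="openvino",
                         input_sample=torch.rand(1, 3, 224, 224))

# INT8 quantization through POT; `calib_dataloader` should yield
# samples representative of the deployment data
# ov_q_model = Trainer.quantize(model, accelerator="openvino",
#                               precision="int8",
#                               calib_dataloader=calib_dataloader)
```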

```eval_rst
.. note::
    If you meet ``ValueError: numpy.ndarray size changed, may indicate binary incompatibility. Expected 88 from C header, got 80 from PyObject`` when using the ``Trainer.trace`` or ``Trainer.quantize`` function, you could try to solve it by upgrading ``numpy``:

    .. code-block:: python

        !pip install --upgrade numpy

        # Please remember to restart the runtime to use the newly-installed numpy version
```

### Intel Neural Compressor (INC)

To use INC as your quantization backend, you need to install it:

```eval_rst
.. tabs::

    .. tab:: With no Extra Runtime Acceleration

        .. code-block:: python

            !pip install neural-compressor==1.11.0

    .. tab:: With Extra ONNXRuntime Acceleration

        .. code-block:: python

            !pip install neural-compressor==1.11.0 onnx onnxruntime onnxruntime_extensions
```
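
With INC installed, post-training quantization is driven from the same Trainer API. A minimal sketch, assuming ``Trainer.quantize`` with INC as the default backend (the toy calibration data and parameter names are illustrative):

```python
import torch
from torch.utils.data import DataLoader
from torchvision.models import resnet18
from bigdl.nano.pytorch import Trainer

model = resnet18(pretrained=True)
model.eval()

# A toy calibration set; replace it with data representative of your workload
calib_set = [(torch.rand(3, 224, 224), 0) for _ in range(8)]
calib_loader = DataLoader(calib_set, batch_size=4)

# INT8 post-training quantization with INC
q_model = Trainer.quantize(model, precision="int8",
                           calib_dataloader=calib_loader)
```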

3 docs/readthedocs/source/doc/Nano/Howto/quantize_pytorch_inference_inc.nblink Normal file
@@ -0,0 +1,3 @@
{
    "path": "../../../../../../python/nano/tutorial/notebook/inference/pytorch/quantize_pytorch_inference_inc.ipynb"
}

3 docs/readthedocs/source/doc/Nano/Howto/quantize_pytorch_inference_pot.nblink Normal file
@@ -0,0 +1,3 @@
{
    "path": "../../../../../../python/nano/tutorial/notebook/inference/pytorch/quantize_pytorch_inference_pot.ipynb"
}