diff --git a/docs/readthedocs/source/_toc.yml b/docs/readthedocs/source/_toc.yml
index 4c1dbdd6..d8c6774a 100644
--- a/docs/readthedocs/source/_toc.yml
+++ b/docs/readthedocs/source/_toc.yml
@@ -29,8 +29,9 @@ subtrees:
       - file: doc/Nano/QuickStart/tensorflow_train
       - file: doc/Nano/QuickStart/tensorflow_inference
       - file: doc/Nano/QuickStart/hpo
-      - file: doc/Nano/Overview/known_issues
       - file: doc/Nano/QuickStart/index
+      - file: doc/Nano/Howto/index
+      - file: doc/Nano/Overview/known_issues
 
   - caption: DLlib
     entries:
diff --git a/docs/readthedocs/source/conf.py b/docs/readthedocs/source/conf.py
index ec9e1f1b..e111cdbc 100644
--- a/docs/readthedocs/source/conf.py
+++ b/docs/readthedocs/source/conf.py
@@ -30,8 +30,6 @@
 sys.path.insert(0, os.path.abspath("../../../python/orca/src/"))
 sys.path.insert(0, os.path.abspath("../../../python/serving/src/"))
 sys.path.insert(0, os.path.abspath("../../../python/nano/src/"))
-
-
 # -- Project information -----------------------------------------------------
 import sphinx_rtd_theme
 html_theme = "sphinx_rtd_theme"
diff --git a/docs/readthedocs/source/doc/Nano/Howto/accelerate_pytorch_inference_onnx.nblink b/docs/readthedocs/source/doc/Nano/Howto/accelerate_pytorch_inference_onnx.nblink
new file mode 100644
index 00000000..6b834a20
--- /dev/null
+++ b/docs/readthedocs/source/doc/Nano/Howto/accelerate_pytorch_inference_onnx.nblink
@@ -0,0 +1,3 @@
+{
+    "path": "../../../../../../python/nano/tutorial/notebook/inference/pytorch/accelerate_pytorch_inference_onnx.ipynb"
+}
\ No newline at end of file
diff --git a/docs/readthedocs/source/doc/Nano/Howto/accelerate_pytorch_inference_openvino.nblink b/docs/readthedocs/source/doc/Nano/Howto/accelerate_pytorch_inference_openvino.nblink
new file mode 100644
index 00000000..493220f8
--- /dev/null
+++ b/docs/readthedocs/source/doc/Nano/Howto/accelerate_pytorch_inference_openvino.nblink
@@ -0,0 +1,3 @@
+{
+    "path": "../../../../../../python/nano/tutorial/notebook/inference/pytorch/accelerate_pytorch_inference_openvino.ipynb"
+}
\ No newline at end of file
diff --git a/docs/readthedocs/source/doc/Nano/Howto/index.rst b/docs/readthedocs/source/doc/Nano/Howto/index.rst
new file mode 100644
index 00000000..b96de11b
--- /dev/null
+++ b/docs/readthedocs/source/doc/Nano/Howto/index.rst
@@ -0,0 +1,33 @@
+Nano How-to Guides
+=========================
+.. note::
+    This page is still a work in progress. We are adding more guides.
+
+Nano How-to Guides are a collection of task-oriented, bite-sized, and executable examples that show how BigDL-Nano can help you accomplish various tasks smoothly.
+
+PyTorch Inference
+-------------------------
+
+* `How to accelerate a PyTorch inference pipeline through ONNXRuntime <accelerate_pytorch_inference_onnx.html>`_
+* `How to accelerate a PyTorch inference pipeline through OpenVINO <accelerate_pytorch_inference_openvino.html>`_
+* `How to quantize your PyTorch model for inference using Intel Neural Compressor <quantize_pytorch_inference_inc.html>`_
+* `How to quantize your PyTorch model for inference using OpenVINO Post-training Optimization Tools <quantize_pytorch_inference_pot.html>`_
+
+.. toctree::
+    :maxdepth: 1
+    :hidden:
+
+    accelerate_pytorch_inference_onnx
+    accelerate_pytorch_inference_openvino
+    quantize_pytorch_inference_inc
+    quantize_pytorch_inference_pot
+
+Install
+-------------------------
+* `How to install BigDL-Nano in Google Colab <install_in_colab.html>`_
+
+.. toctree::
+    :maxdepth: 1
+    :hidden:
+
+    install_in_colab
\ No newline at end of file
diff --git a/docs/readthedocs/source/doc/Nano/Howto/install_in_colab.md b/docs/readthedocs/source/doc/Nano/Howto/install_in_colab.md
new file mode 100644
index 00000000..c1ce8bb1
--- /dev/null
+++ b/docs/readthedocs/source/doc/Nano/Howto/install_in_colab.md
@@ -0,0 +1,84 @@
+# Install BigDL-Nano in Google Colab
+
+```eval_rst
+.. note::
+    This page is still a work in progress.
+```
+
+In this guide, we will show you how to install BigDL-Nano in Google Colab, along with solutions to the version conflicts that packages pre-installed in the Colab hosted runtime may cause.
+
+Please follow the section that corresponds to your specific usage.
+
+## PyTorch
+If you are a PyTorch user, install BigDL-Nano for PyTorch first:
+
+```eval_rst
+.. tabs::
+
+    .. tab:: Latest
+
+        .. code-block:: python
+
+            !pip install bigdl-nano[pytorch]
+
+    .. tab:: Nightly-Built
+
+        .. code-block:: python
+
+            !pip install --pre --upgrade bigdl-nano[pytorch]
+```
+
+```eval_rst
+.. warning::
+    In the Google Colab hosted runtime, ``source bigdl-nano-init`` can hardly take effect, as the environment variables it sets need to be in place before the Jupyter kernel is started.
+```
+
+To avoid version conflicts caused by `torchtext`, you should uninstall it:
+
+```python
+!pip uninstall -y torchtext
+```
+
+### ONNXRuntime
+To enable ONNXRuntime acceleration, you need to install the corresponding ONNX packages:
+
+```python
+!pip install onnx onnxruntime
+```
+
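+After the packages are installed, you could run a quick sanity check like the sketch below. It is only an illustration: the exact `Trainer.trace` parameters (`accelerator`, `input_sample`) and the toy `resnet18` model are assumptions that may differ across BigDL-Nano versions, so please refer to the ONNXRuntime acceleration how-to guide for the exact usage.
+
+```python
+# Illustrative sketch only -- parameter names are assumptions, not the exact Nano API.
+import torch
+from torchvision.models import resnet18
+from bigdl.nano.pytorch import Trainer
+
+model = resnet18(pretrained=False)
+# Trace the model into an ONNXRuntime-accelerated version for inference
+ort_model = Trainer.trace(model, accelerator="onnxruntime",
+                          input_sample=torch.rand(1, 3, 224, 224))
+with torch.no_grad():
+    print(ort_model(torch.rand(1, 3, 224, 224)).shape)
+```
+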
+### OpenVINO / Post-training Optimization Tools (POT)
+To enable OpenVINO acceleration, or to use POT for quantization, you need to install the OpenVINO toolkit:
+
+```python
+!pip install openvino-dev
+# Please remember to restart the runtime to use the newly-installed package version
+```
+
+```eval_rst
+.. note::
+    If you encounter ``ValueError: numpy.ndarray size changed, may indicate binary incompatibility. Expected 88 from C header, got 80 from PyObject`` when using the ``Trainer.trace`` or ``Trainer.quantize`` function, you could try to solve it by upgrading ``numpy``:
+
+    .. code-block:: python
+
+        !pip install --upgrade numpy
+        # Please remember to restart the runtime to use the newly-installed numpy version
+```
+
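+Similarly, you could check the OpenVINO/POT setup with a rough sketch like the following. Treat it as an illustration only: the `Trainer.quantize` calling convention and its parameters (`accelerator`, `calib_dataloader`) are assumptions that may vary between BigDL-Nano versions, so please refer to the POT quantization how-to guide for the exact usage.
+
+```python
+# Illustrative sketch only -- the quantize call and its parameters are assumptions.
+import torch
+from torch.utils.data import DataLoader, TensorDataset
+from torchvision.models import resnet18
+from bigdl.nano.pytorch import Trainer
+
+model = resnet18(pretrained=False)
+# A tiny random calibration set, only to illustrate the expected (input, target) format
+calib_loader = DataLoader(TensorDataset(torch.rand(8, 3, 224, 224),
+                                        torch.randint(0, 10, (8,))),
+                          batch_size=4)
+# Post-training quantization through OpenVINO POT
+q_model = Trainer.quantize(model, accelerator="openvino",
+                           calib_dataloader=calib_loader)
+with torch.no_grad():
+    print(q_model(torch.rand(1, 3, 224, 224)).shape)
+```
+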
+### Intel Neural Compressor (INC)
+To use INC as your quantization backend, you need to install it:
+
+```eval_rst
+.. tabs::
+
+    .. tab:: Without Extra Runtime Acceleration
+
+        .. code-block:: python
+
+            !pip install neural-compressor==1.11.0
+
+    .. tab:: With Extra ONNXRuntime Acceleration
+
+        .. code-block:: python
+
+            !pip install neural-compressor==1.11.0 onnx onnxruntime onnxruntime_extensions
+```
\ No newline at end of file
diff --git a/docs/readthedocs/source/doc/Nano/Howto/quantize_pytorch_inference_inc.nblink b/docs/readthedocs/source/doc/Nano/Howto/quantize_pytorch_inference_inc.nblink
new file mode 100644
index 00000000..70196d8d
--- /dev/null
+++ b/docs/readthedocs/source/doc/Nano/Howto/quantize_pytorch_inference_inc.nblink
@@ -0,0 +1,3 @@
+{
+    "path": "../../../../../../python/nano/tutorial/notebook/inference/pytorch/quantize_pytorch_inference_inc.ipynb"
+}
\ No newline at end of file
diff --git a/docs/readthedocs/source/doc/Nano/Howto/quantize_pytorch_inference_pot.nblink b/docs/readthedocs/source/doc/Nano/Howto/quantize_pytorch_inference_pot.nblink
new file mode 100644
index 00000000..02660864
--- /dev/null
+++ b/docs/readthedocs/source/doc/Nano/Howto/quantize_pytorch_inference_pot.nblink
@@ -0,0 +1,3 @@
+{
+    "path": "../../../../../../python/nano/tutorial/notebook/inference/pytorch/quantize_pytorch_inference_pot.ipynb"
+}
\ No newline at end of file