[Nano] Improve How-to Guides Navigation (#7396)

* Remove deprecated option enable_auto_doc_ref for recommonmark

* Add first level navigation structure for Nano how-to guides

* Update navigation for How-to Training part

* Update navigation for How-to Inference part

* Update navigation for How-to Preprocessing/Install part and other small fixes

* Fix wrong link paths caused by the relocation of the how-to install guides

* Small fix
This commit is contained in:
Yuwen Hu 2023-02-03 09:37:10 +08:00 committed by GitHub
parent bd29010854
commit c31136df0b
18 changed files with 252 additions and 42 deletions


@@ -101,43 +101,87 @@ subtrees:
title: "How-to Guides"
subtrees:
- entries:
- file: doc/Nano/Howto/Preprocessing/PyTorch/accelerate_pytorch_cv_data_pipeline
- file: doc/Nano/Howto/Training/PyTorchLightning/accelerate_pytorch_lightning_training_ipex
- file: doc/Nano/Howto/Training/PyTorchLightning/accelerate_pytorch_lightning_training_multi_instance
- file: doc/Nano/Howto/Training/PyTorchLightning/pytorch_lightning_training_channels_last
- file: doc/Nano/Howto/Training/PyTorchLightning/pytorch_lightning_training_bf16
- file: doc/Nano/Howto/Training/PyTorch/convert_pytorch_training_torchnano
- file: doc/Nano/Howto/Training/PyTorch/use_nano_decorator_pytorch_training
- file: doc/Nano/Howto/Training/PyTorch/accelerate_pytorch_training_ipex
- file: doc/Nano/Howto/Training/PyTorch/accelerate_pytorch_training_multi_instance
- file: doc/Nano/Howto/Training/PyTorch/pytorch_training_channels_last
- file: doc/Nano/Howto/Training/PyTorch/accelerate_pytorch_training_bf16
- file: doc/Nano/Howto/Training/TensorFlow/accelerate_tensorflow_training_multi_instance
- file: doc/Nano/Howto/Training/TensorFlow/tensorflow_training_embedding_sparseadam
- file: doc/Nano/Howto/Training/TensorFlow/tensorflow_training_bf16
- file: doc/Nano/Howto/Training/General/choose_num_processes_training
- file: doc/Nano/Howto/Inference/OpenVINO/openvino_inference
- file: doc/Nano/Howto/Inference/OpenVINO/openvino_inference_async
- file: doc/Nano/Howto/Inference/OpenVINO/accelerate_inference_openvino_gpu
- file: doc/Nano/Howto/Inference/PyTorch/accelerate_pytorch_inference_onnx
- file: doc/Nano/Howto/Inference/PyTorch/accelerate_pytorch_inference_openvino
- file: doc/Nano/Howto/Inference/PyTorch/accelerate_pytorch_inference_jit_ipex
- file: doc/Nano/Howto/Inference/PyTorch/multi_instance_pytorch_inference
- file: doc/Nano/Howto/Inference/PyTorch/quantize_pytorch_inference_inc
- file: doc/Nano/Howto/Inference/PyTorch/quantize_pytorch_inference_pot
- file: doc/Nano/Howto/Inference/PyTorch/pytorch_context_manager
- file: doc/Nano/Howto/Inference/PyTorch/pytorch_save_and_load_ipex
- file: doc/Nano/Howto/Inference/PyTorch/pytorch_save_and_load_jit
- file: doc/Nano/Howto/Inference/PyTorch/pytorch_save_and_load_onnx
- file: doc/Nano/Howto/Inference/PyTorch/pytorch_save_and_load_openvino
- file: doc/Nano/Howto/Inference/PyTorch/inference_optimizer_optimize
- file: doc/Nano/Howto/Inference/TensorFlow/accelerate_tensorflow_inference_onnx
- file: doc/Nano/Howto/Inference/TensorFlow/accelerate_tensorflow_inference_openvino
- file: doc/Nano/Howto/Inference/TensorFlow/tensorflow_inference_bf16
- file: doc/Nano/Howto/Inference/TensorFlow/tensorflow_save_and_load_onnx
- file: doc/Nano/Howto/Inference/TensorFlow/tensorflow_save_and_load_openvino
- file: doc/Nano/Howto/install_in_colab
- file: doc/Nano/Howto/windows_guide
- file: doc/Nano/Howto/Preprocessing/index
subtrees:
- entries:
- file: doc/Nano/Howto/Preprocessing/PyTorch/index
title: "PyTorch"
subtrees:
- entries:
- file: doc/Nano/Howto/Preprocessing/PyTorch/accelerate_pytorch_cv_data_pipeline
- file: doc/Nano/Howto/Training/index
subtrees:
- entries:
- file: doc/Nano/Howto/Training/PyTorchLightning/index
title: "PyTorch Lightning"
subtrees:
- entries:
- file: doc/Nano/Howto/Training/PyTorchLightning/accelerate_pytorch_lightning_training_ipex
- file: doc/Nano/Howto/Training/PyTorchLightning/accelerate_pytorch_lightning_training_multi_instance
- file: doc/Nano/Howto/Training/PyTorchLightning/pytorch_lightning_training_channels_last
- file: doc/Nano/Howto/Training/PyTorchLightning/pytorch_lightning_training_bf16
- file: doc/Nano/Howto/Training/PyTorch/index
title: "PyTorch"
subtrees:
- entries:
- file: doc/Nano/Howto/Training/PyTorch/convert_pytorch_training_torchnano
- file: doc/Nano/Howto/Training/PyTorch/use_nano_decorator_pytorch_training
- file: doc/Nano/Howto/Training/PyTorch/accelerate_pytorch_training_ipex
- file: doc/Nano/Howto/Training/PyTorch/accelerate_pytorch_training_multi_instance
- file: doc/Nano/Howto/Training/PyTorch/pytorch_training_channels_last
- file: doc/Nano/Howto/Training/PyTorch/accelerate_pytorch_training_bf16
- file: doc/Nano/Howto/Training/TensorFlow/index
title: "TensorFlow"
subtrees:
- entries:
- file: doc/Nano/Howto/Training/TensorFlow/accelerate_tensorflow_training_multi_instance
- file: doc/Nano/Howto/Training/TensorFlow/tensorflow_training_embedding_sparseadam
- file: doc/Nano/Howto/Training/TensorFlow/tensorflow_training_bf16
- file: doc/Nano/Howto/Training/General/index
title: "General"
subtrees:
- entries:
- file: doc/Nano/Howto/Training/General/choose_num_processes_training
- file: doc/Nano/Howto/Inference/index
subtrees:
- entries:
- file: doc/Nano/Howto/Inference/OpenVINO/index
title: "OpenVINO"
subtrees:
- entries:
- file: doc/Nano/Howto/Inference/OpenVINO/openvino_inference
- file: doc/Nano/Howto/Inference/OpenVINO/openvino_inference_async
- file: doc/Nano/Howto/Inference/OpenVINO/accelerate_inference_openvino_gpu
- file: doc/Nano/Howto/Inference/PyTorch/index
title: "PyTorch"
subtrees:
- entries:
- file: doc/Nano/Howto/Inference/PyTorch/accelerate_pytorch_inference_onnx
- file: doc/Nano/Howto/Inference/PyTorch/accelerate_pytorch_inference_openvino
- file: doc/Nano/Howto/Inference/PyTorch/accelerate_pytorch_inference_jit_ipex
- file: doc/Nano/Howto/Inference/PyTorch/multi_instance_pytorch_inference
- file: doc/Nano/Howto/Inference/PyTorch/quantize_pytorch_inference_inc
- file: doc/Nano/Howto/Inference/PyTorch/quantize_pytorch_inference_pot
- file: doc/Nano/Howto/Inference/PyTorch/pytorch_context_manager
- file: doc/Nano/Howto/Inference/PyTorch/pytorch_save_and_load_ipex
- file: doc/Nano/Howto/Inference/PyTorch/pytorch_save_and_load_jit
- file: doc/Nano/Howto/Inference/PyTorch/pytorch_save_and_load_onnx
- file: doc/Nano/Howto/Inference/PyTorch/pytorch_save_and_load_openvino
- file: doc/Nano/Howto/Inference/PyTorch/inference_optimizer_optimize
- file: doc/Nano/Howto/Inference/TensorFlow/index
title: "TensorFlow"
subtrees:
- entries:
- file: doc/Nano/Howto/Inference/TensorFlow/accelerate_tensorflow_inference_onnx
- file: doc/Nano/Howto/Inference/TensorFlow/accelerate_tensorflow_inference_openvino
- file: doc/Nano/Howto/Inference/TensorFlow/tensorflow_inference_bf16
- file: doc/Nano/Howto/Inference/TensorFlow/tensorflow_save_and_load_onnx
- file: doc/Nano/Howto/Inference/TensorFlow/tensorflow_save_and_load_openvino
- file: doc/Nano/Howto/Install/index
subtrees:
- entries:
- file: doc/Nano/Howto/Install/install_in_colab
- file: doc/Nano/Howto/Install/windows_guide
- file: doc/Nano/Overview/known_issues
title: "Tips and Known Issues"
- file: doc/Nano/Overview/troubshooting
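
The restructured sidebar above follows sphinx-external-toc's recursive schema: a `file` entry may carry an optional `title` override and a `subtrees` key, and each subtree holds its own `entries` list of children. Reduced to a single branch of this PR's tree, the nesting pattern looks like this (a sketch of the schema, not the full diff):

```yaml
# _toc.yml fragment: one branch of the nested How-to tree
- entries:
    - file: doc/Nano/Howto/Training/index          # section landing page
      subtrees:
        - entries:
            - file: doc/Nano/Howto/Training/PyTorch/index
              title: "PyTorch"                      # sidebar label override
              subtrees:
                - entries:
                    - file: doc/Nano/Howto/Training/PyTorch/accelerate_pytorch_training_ipex
```

Each `index` file doubles as a sidebar node and a landing page, which is why the PR adds one per category.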


@@ -269,8 +269,7 @@ def setup(app):
'auto_toc_tree_section': 'Contents',
'enable_math': False,
'enable_inline_math': False,
'enable_eval_rst': True,
'enable_auto_doc_ref': True,
'enable_eval_rst': True
}, True)
app.add_transform(AutoStructify)
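
With the deprecated key dropped, the resulting `setup()` in `conf.py` would read roughly as follows (a sketch assembled from the hunk above; `enable_auto_doc_ref` was deprecated upstream by recommonmark, so it is removed outright rather than set to `False`):

```python
# conf.py (sketch): recommonmark AutoStructify configuration after this change
from recommonmark.transform import AutoStructify

def setup(app):
    app.add_config_value('recommonmark_config', {
        'auto_toc_tree_section': 'Contents',
        'enable_math': False,
        'enable_inline_math': False,
        'enable_eval_rst': True
        # 'enable_auto_doc_ref' removed: deprecated in recommonmark
    }, True)
    app.add_transform(AutoStructify)
```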


@@ -0,0 +1,6 @@
Inference Optimization: For OpenVINO Users
=============================================
* `How to run inference on an OpenVINO model <openvino_inference.html>`_
* `How to run asynchronous inference on an OpenVINO model <openvino_inference_async.html>`_
* `How to accelerate a PyTorch / TensorFlow inference pipeline on Intel GPUs through OpenVINO <accelerate_inference_openvino_gpu.html>`_


@@ -0,0 +1,18 @@
Inference Optimization: For PyTorch Users
=============================================
* `How to accelerate a PyTorch inference pipeline through ONNXRuntime <accelerate_pytorch_inference_onnx.html>`_
* `How to accelerate a PyTorch inference pipeline through OpenVINO <accelerate_pytorch_inference_openvino.html>`_
* `How to accelerate a PyTorch inference pipeline through JIT/IPEX <accelerate_pytorch_inference_jit_ipex.html>`_
* `How to accelerate a PyTorch inference pipeline through multiple instances <multi_instance_pytorch_inference.html>`_
* `How to quantize your PyTorch model for inference using Intel Neural Compressor <quantize_pytorch_inference_inc.html>`_
* `How to quantize your PyTorch model for inference using OpenVINO Post-training Optimization Tools <quantize_pytorch_inference_pot.html>`_
* |pytorch_inference_context_manager_link|_
* `How to save and load optimized IPEX model <pytorch_save_and_load_ipex.html>`_
* `How to save and load optimized JIT model <pytorch_save_and_load_jit.html>`_
* `How to save and load optimized ONNXRuntime model <pytorch_save_and_load_onnx.html>`_
* `How to save and load optimized OpenVINO model <pytorch_save_and_load_openvino.html>`_
* `How to find the accelerated method with minimal latency using InferenceOptimizer <inference_optimizer_optimize.html>`_
.. |pytorch_inference_context_manager_link| replace:: How to use context manager through ``get_context``
.. _pytorch_inference_context_manager_link: pytorch_context_manager.html
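
The `|...|_` indirection in the two link lines above is the stock reStructuredText workaround for a syntax limitation: inline literals like ``get_context`` cannot appear inside the plain `` `text <url>`_ `` link form, so a substitution supplies the link text and a separate hyperlink target supplies the URL. The general shape (names here are illustrative, not from the PR):

```rst
* |example_link|_

.. |example_link| replace:: How to use ``some_identifier`` in link text
.. _example_link: target_page.html
```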


@@ -0,0 +1,8 @@
Inference Optimization: For TensorFlow Users
=============================================
* `How to accelerate a TensorFlow inference pipeline through ONNXRuntime <accelerate_tensorflow_inference_onnx.html>`_
* `How to accelerate a TensorFlow inference pipeline through OpenVINO <accelerate_tensorflow_inference_openvino.html>`_
* `How to conduct BFloat16 Mixed Precision inference in a TensorFlow Keras application <tensorflow_inference_bf16.html>`_
* `How to save and load optimized ONNXRuntime model in TensorFlow <tensorflow_save_and_load_onnx.html>`_
* `How to save and load optimized OpenVINO model in TensorFlow <tensorflow_save_and_load_openvino.html>`_


@@ -0,0 +1,33 @@
Inference Optimization
=========================
Here you can find detailed guides on how to apply BigDL-Nano to optimize your inference workloads. Select your use case below for further navigation:

.. grid:: 1 2 2 2

    .. grid-item::

        .. button-link:: OpenVINO/index.html
            :color: primary
            :expand:
            :outline:

            I use **OpenVINO** toolkit.

    .. grid-item::

        .. button-link:: PyTorch/index.html
            :color: primary
            :expand:
            :outline:

            I am a **PyTorch** user.

    .. grid-item::

        .. button-link:: TensorFlow/index.html
            :color: primary
            :expand:
            :outline:

            I am a **TensorFlow** user.


@@ -0,0 +1,7 @@
Install
=========================
Here you can find detailed guides on how to install BigDL-Nano for different use cases:

* `How to install BigDL-Nano in Google Colab <install_in_colab.html>`_
* `How to install BigDL-Nano on Windows <windows_guide.html>`_


@@ -0,0 +1,4 @@
Preprocessing Optimization: For PyTorch Users
==============================================
* `How to accelerate a computer vision data processing pipeline <accelerate_pytorch_cv_data_pipeline.html>`_


@@ -0,0 +1,15 @@
Preprocessing Optimization
===========================
Here you can find detailed guides on how to apply BigDL-Nano to accelerate your data preprocessing pipeline. Select your use case below for further navigation:

.. grid:: 1 2 2 2

    .. grid-item::

        .. button-link:: PyTorch/index.html
            :color: primary
            :expand:
            :outline:

            I am a **PyTorch** user.


@@ -0,0 +1,4 @@
Training Optimization: General Tips
====================================
* `How to choose the number of processes for multi-instance training <choose_num_processes_training.html>`_


@@ -0,0 +1,14 @@
Training Optimization: For PyTorch Users
=========================================
* |convert_pytorch_training_torchnano|_
* |use_nano_decorator_pytorch_training|_
* `How to accelerate a PyTorch application on training workloads through Intel® Extension for PyTorch* <accelerate_pytorch_training_ipex.html>`_
* `How to accelerate a PyTorch application on training workloads through multiple instances <accelerate_pytorch_training_multi_instance.html>`_
* `How to use the channels last memory format in your PyTorch application for training <pytorch_training_channels_last.html>`_
* `How to conduct BFloat16 Mixed Precision training in your PyTorch application <accelerate_pytorch_training_bf16.html>`_
.. |use_nano_decorator_pytorch_training| replace:: How to accelerate your PyTorch training loop with ``@nano`` decorator
.. _use_nano_decorator_pytorch_training: use_nano_decorator_pytorch_training.html
.. |convert_pytorch_training_torchnano| replace:: How to convert your PyTorch training loop to use ``TorchNano`` for acceleration
.. _convert_pytorch_training_torchnano: convert_pytorch_training_torchnano.html


@@ -0,0 +1,7 @@
Training Optimization: For PyTorch Lightning Users
===================================================
* `How to accelerate a PyTorch Lightning application on training workloads through Intel® Extension for PyTorch* <accelerate_pytorch_lightning_training_ipex.html>`_
* `How to accelerate a PyTorch Lightning application on training workloads through multiple instances <accelerate_pytorch_lightning_training_multi_instance.html>`_
* `How to use the channels last memory format in your PyTorch Lightning application for training <pytorch_lightning_training_channels_last.html>`_
* `How to conduct BFloat16 Mixed Precision training in your PyTorch Lightning application <pytorch_lightning_training_bf16.html>`_


@@ -0,0 +1,9 @@
Training Optimization: For TensorFlow Users
============================================
* `How to accelerate a TensorFlow Keras application on training workloads through multiple instances <accelerate_tensorflow_training_multi_instance.html>`_
* |tensorflow_training_embedding_sparseadam_link|_
* `How to conduct BFloat16 Mixed Precision training in your TensorFlow application <tensorflow_training_bf16.html>`_
.. |tensorflow_training_embedding_sparseadam_link| replace:: How to optimize your model with a sparse ``Embedding`` layer and ``SparseAdam`` optimizer
.. _tensorflow_training_embedding_sparseadam_link: tensorflow_training_embedding_sparseadam.html


@@ -0,0 +1,42 @@
Training Optimization
=========================
Here you can find detailed guides on how to apply BigDL-Nano to optimize your training workloads. Select your use case below for further navigation:

.. grid:: 1 2 2 2

    .. grid-item::

        .. button-link:: PyTorchLightning/index.html
            :color: primary
            :expand:
            :outline:

            I am a **PyTorch Lightning** user.

    .. grid-item::

        .. button-link:: PyTorch/index.html
            :color: primary
            :expand:
            :outline:

            I am a **PyTorch** user.

    .. grid-item::

        .. button-link:: TensorFlow/index.html
            :color: primary
            :expand:
            :outline:

            I am a **TensorFlow** user.

    .. grid-item::

        .. button-link:: General/index.html
            :color: primary
            :expand:
            :outline:

            I want to know general optimization tips.


@@ -89,5 +89,5 @@ TensorFlow
Install
-------------------------
* `How to install BigDL-Nano in Google Colab <install_in_colab.html>`_
* `How to install BigDL-Nano on Windows <windows_guide.html>`_
* `How to install BigDL-Nano in Google Colab <Install/install_in_colab.html>`_
* `How to install BigDL-Nano on Windows <Install/windows_guide.html>`_


@@ -91,7 +91,7 @@ For Linux, Ubuntu (22.04/20.04/18.04) is recommended.
For Windows OS, users have to run `bigdl-nano-init` every time they open a new cmd terminal.
We recommend using Windows Subsystem for Linux 2 (WSL2) to run BigDL-Nano. Please refer to [Nano Windows install guide](../Howto/windows_guide.md) for instructions.
We recommend using Windows Subsystem for Linux 2 (WSL2) to run BigDL-Nano. Please refer to [Nano Windows install guide](../Howto/Install/windows_guide.md) for instructions.
### Install on MacOS
#### MacOS with Intel Chip