Nano How-to Guides
=========================

.. note::

    This page is still a work in progress. We are adding more guides.

In Nano How-to Guides, you can expect to find multiple task-oriented, bite-sized, and executable examples. These examples show you the various tasks that BigDL-Nano can help you accomplish smoothly.

Training Optimization
-------------------------

PyTorch Lightning
~~~~~~~~~~~~~~~~~~~~~~~~~

* `How to accelerate a PyTorch Lightning application on training workloads through Intel® Extension for PyTorch* <Training/PyTorchLightning/accelerate_pytorch_lightning_training_ipex.html>`_
* `How to accelerate a PyTorch Lightning application on training workloads through multiple instances <Training/PyTorchLightning/accelerate_pytorch_lightning_training_multi_instance.html>`_
* `How to use the channels last memory format in your PyTorch Lightning application for training <Training/PyTorchLightning/pytorch_lightning_training_channels_last.html>`_
* `How to conduct BFloat16 Mixed Precision training in your PyTorch Lightning application <Training/PyTorchLightning/pytorch_lightning_training_bf16.html>`_
* `How to accelerate a computer vision data processing pipeline <Training/PyTorchLightning/pytorch_lightning_cv_data_pipeline.html>`_
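
These options can also be combined in a single ``Trainer``. The snippet below is only a sketch of how the pieces fit together, assuming the ``use_ipex``, ``num_processes``, ``channels_last`` and ``precision`` arguments of ``bigdl.nano.pytorch.Trainer`` behave as described in the guides above; ``ToyCNN`` and the random dataset are made up for the example. See the individual guides for the exact, tested usage.

.. code-block:: python

    import torch
    from torch import nn
    from torch.utils.data import DataLoader, TensorDataset
    from pytorch_lightning import LightningModule

    from bigdl.nano.pytorch import Trainer

    # A deliberately tiny LightningModule, defined only so the example runs end to end.
    class ToyCNN(LightningModule):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 8, kernel_size=3),
                nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
                nn.Flatten(),
                nn.Linear(8, 2),
            )

        def training_step(self, batch, batch_idx):
            x, y = batch
            return nn.functional.cross_entropy(self.net(x), y)

        def configure_optimizers(self):
            return torch.optim.SGD(self.parameters(), lr=0.01)

    # Random images and labels, just to have something to feed the trainer.
    train_loader = DataLoader(
        TensorDataset(torch.randn(64, 3, 32, 32), torch.randint(0, 2, (64,))),
        batch_size=16,
    )

    # The Nano-specific keywords each correspond to one of the guides above;
    # they can be enabled independently or combined.
    trainer = Trainer(
        max_epochs=1,
        use_ipex=True,        # Intel Extension for PyTorch acceleration
        num_processes=2,      # multi-instance training
        channels_last=True,   # channels-last memory format
        precision='bf16',     # BFloat16 mixed precision
    )
    trainer.fit(ToyCNN(), train_loader)
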

.. toctree::
    :maxdepth: 1
    :hidden:

    Training/PyTorchLightning/accelerate_pytorch_lightning_training_ipex
    Training/PyTorchLightning/accelerate_pytorch_lightning_training_multi_instance
    Training/PyTorchLightning/pytorch_lightning_training_channels_last
    Training/PyTorchLightning/pytorch_lightning_training_bf16
    Training/PyTorchLightning/pytorch_lightning_cv_data_pipeline

TensorFlow
~~~~~~~~~~~~~~~~~~~~~~~~~

* `How to accelerate a TensorFlow Keras application on training workloads through multiple instances <Training/TensorFlow/accelerate_tensorflow_training_multi_instance.html>`_
* |tensorflow_training_embedding_sparseadam_link|_

.. |tensorflow_training_embedding_sparseadam_link| replace:: How to optimize your model with a sparse ``Embedding`` layer and ``SparseAdam`` optimizer
.. _tensorflow_training_embedding_sparseadam_link: Training/TensorFlow/tensorflow_training_embedding_sparseadam.html
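
The two TensorFlow guides can also be combined. The following is a rough sketch only, assuming the ``bigdl.nano.tf.keras`` wrappers, the sparse-gradient ``Embedding`` layer, the ``SparseAdam`` optimizer and the ``num_processes`` argument to ``fit`` work as described in the guides above; the toy model and random data are made up for the example. Refer to the guides themselves for the exact usage.

.. code-block:: python

    import numpy as np
    import tensorflow as tf

    from bigdl.nano.tf.keras import Sequential
    from bigdl.nano.tf.keras.layers import Embedding   # Embedding variant that keeps gradients sparse
    from bigdl.nano.tf.optimizers import SparseAdam    # Adam variant that consumes sparse gradients

    vocab_size, seq_len = 10000, 20

    # A tiny text classifier built with Nano's Keras wrappers.
    model = Sequential([
        Embedding(input_dim=vocab_size, output_dim=16, input_length=seq_len),
        tf.keras.layers.GlobalAveragePooling1D(),
        tf.keras.layers.Dense(1, activation='sigmoid'),
    ])
    model.compile(optimizer=SparseAdam(), loss='binary_crossentropy', metrics=['accuracy'])

    # Random token ids and labels, just to make the example executable.
    x = np.random.randint(0, vocab_size, size=(512, seq_len))
    y = np.random.randint(0, 2, size=(512, 1))

    # num_processes launches several training instances (see the multi-instance guide).
    model.fit(x, y, batch_size=32, epochs=1, num_processes=2)
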

.. toctree::
    :maxdepth: 1
    :hidden:

    Training/TensorFlow/accelerate_tensorflow_training_multi_instance
    Training/TensorFlow/tensorflow_training_embedding_sparseadam

General
~~~~~~~~~~~~~~~~~~~~~~~~~

* `How to choose the number of processes for multi-instance training <Training/General/choose_num_processes_training.html>`_
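
As a quick, illustrative starting point for the guide above, you can check how many physical cores are available before deciding on ``num_processes``. The use of ``psutil`` and the divisor of 4 are arbitrary choices for this sketch, not recommendations from the guide.

.. code-block:: python

    import psutil

    # Physical cores (excluding hyper-threads) are usually what multi-instance
    # training divides among its processes.
    physical_cores = psutil.cpu_count(logical=False)
    logical_cores = psutil.cpu_count(logical=True)
    print(f"physical cores: {physical_cores}, logical cores: {logical_cores}")

    # One illustrative heuristic: give each training process several physical
    # cores of its own; see the guide above for the actual recommendations.
    num_processes = max(1, physical_cores // 4)
    print(f"candidate num_processes: {num_processes}")
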

.. toctree::
    :maxdepth: 1
    :hidden:

    Training/General/choose_num_processes_training

Inference Optimization
-------------------------

PyTorch
~~~~~~~~~~~~~~~~~~~~~~~~~

* `How to accelerate a PyTorch inference pipeline through ONNXRuntime <Inference/PyTorch/accelerate_pytorch_inference_onnx.html>`_
* `How to accelerate a PyTorch inference pipeline through OpenVINO <Inference/PyTorch/accelerate_pytorch_inference_openvino.html>`_
* `How to quantize your PyTorch model for inference using Intel Neural Compressor <Inference/PyTorch/quantize_pytorch_inference_inc.html>`_
* `How to quantize your PyTorch model for inference using OpenVINO Post-training Optimization Tools <Inference/PyTorch/quantize_pytorch_inference_pot.html>`_
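
The acceleration guides above revolve around tracing a trained model into an accelerated runtime. The snippet below is a minimal sketch, assuming BigDL-Nano's ``InferenceOptimizer.trace`` API; the guides may use a slightly different entry point (the quantization guides use ``InferenceOptimizer.quantize`` in the same spirit), and the torchvision ``resnet18`` is just a convenient example model. Refer to the guides for the exact code.

.. code-block:: python

    import torch
    from torchvision.models import resnet18

    from bigdl.nano.pytorch import InferenceOptimizer

    model = resnet18(pretrained=True).eval()
    input_sample = torch.rand(1, 3, 224, 224)

    # Trace into an ONNXRuntime-accelerated model (see the ONNXRuntime guide);
    # accelerator="openvino" works analogously for the OpenVINO guide.
    ort_model = InferenceOptimizer.trace(
        model,
        accelerator="onnxruntime",
        input_sample=input_sample,
    )

    with torch.no_grad():
        prediction = ort_model(input_sample)
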

.. toctree::
    :maxdepth: 1
    :hidden:

    Inference/PyTorch/accelerate_pytorch_inference_onnx
    Inference/PyTorch/accelerate_pytorch_inference_openvino
    Inference/PyTorch/quantize_pytorch_inference_inc
    Inference/PyTorch/quantize_pytorch_inference_pot

Install
-------------------------

* `How to install BigDL-Nano in Google Colab <install_in_colab.html>`_

.. toctree::
    :maxdepth: 1
    :hidden:

    install_in_colab