* Rearrange the file structure of the PyTorch Inference docs and add titles for PyTorch-Lightning Training
* Add how-to guide: How to accelerate a PyTorch-Lightning application on training workloads through Intel® Extension for PyTorch*
* Add how-to guide: How to accelerate a PyTorch-Lightning application on training workloads through multiple instances
* Revise: remove '-' in 'PyTorch-Lightning' and make some other changes
* Add how-to guide: How to use the channels last memory format in your PyTorch Lightning application for training
* Add how-to guide: Use BFloat16 Mixed Precision for PyTorch Lightning Training
* Add how-to guide: How to accelerate a computer vision data processing pipeline
* Small revision: change comments in several code cells
* Disable other Nano tests temporarily
* Add GitHub Actions tests for Nano PyTorch Lightning training
* Re-enable other Nano tests
* Small revisions: fix typos and adjust explanation text
* Revise: update based on comments
* Create doc tree index for Nano How-to Guides
* Add how-to guide for PyTorch Inference using ONNXRuntime
* Add how-to guide for PyTorch Inference using OpenVINO
* Update how-to guide for PyTorch Inference using OpenVINO/ONNXRuntime
* Convert the current notebook to Markdown and revise the contents to be more focused
* Add how-to guide: Install BigDL-Nano in Google Colab (needs further updates)
* Revise wording in how-to guide for PyTorch Inference using OpenVINO/ONNXRuntime
* Add how-to guide: Quantize PyTorch Model for Inference using Intel Neural Compressor
* Add how-to guide: Quantize PyTorch Model for Inference using Post-training Optimization Tools
* Add API doc links and make small revisions
* Test: synchronization through marks in .py files
* Test: synchronization through a notebook with cells hidden from rendering in the docs
* Remove test commits for runnable example <-> guide synchronization
* Enable rendering notebooks from locations outside the Sphinx source root
* Update guide "How to accelerate a PyTorch inference pipeline through OpenVINO" to notebook under python folder
* Update guide "How to quantize your PyTorch model for inference using Intel Neural Compressor" to notebook under python folder
* Fix bug that markdown will be ignored inside html tags for nbconvert, and notebook revise
* Update guide 'How to quantize your PyTorch model for inference using Post-training Optimization Tools' to notebook under python folder
* Small updates to index and current guides
* Revise based on Junwei's comments
* Update how-to guide "How to install BigDL-Nano in Google Colab" and update the index page
* Small typo fix
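
For context on the training-focused guides listed above (Intel® Extension for PyTorch*, multi-instance training, channels-last memory format, BFloat16 mixed precision), the sketch below illustrates how such accelerations are typically enabled together from a PyTorch Lightning workflow through BigDL-Nano's `Trainer`. The argument names (`use_ipex`, `num_processes`, `channels_last`, `precision`) are assumptions inferred from the guide titles, not a verified API reference; check the Nano API documentation for the exact names in your installed version.

```python
# Minimal sketch, assuming bigdl.nano.pytorch.Trainer is a drop-in replacement
# for pytorch_lightning.Trainer and that the acceleration flags below exist as
# named; verify against the BigDL-Nano API docs before use.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from pytorch_lightning import LightningModule
from bigdl.nano.pytorch import Trainer


class TinyModel(LightningModule):
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(16, 2)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return nn.functional.cross_entropy(self.net(x), y)

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.01)


data = DataLoader(
    TensorDataset(torch.randn(64, 16), torch.randint(0, 2, (64,))), batch_size=8
)

trainer = Trainer(
    max_epochs=1,
    use_ipex=True,       # assumed flag: Intel® Extension for PyTorch* optimizations
    num_processes=2,     # assumed flag: multi-instance training on a single node
    channels_last=True,  # assumed flag: channels-last memory format (mainly useful for vision models)
    precision="bf16",    # assumed flag: BFloat16 mixed precision
)
trainer.fit(TinyModel(), data)
```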
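The inference-side guides (PyTorch inference through ONNXRuntime/OpenVINO, and INT8 quantization through Intel Neural Compressor or Post-training Optimization Tools) revolve around tracing and quantizing a trained model. The sketch below is again an assumption-based illustration: the `Trainer.trace` / `Trainer.quantize` entry points and their `accelerator`, `input_sample`, and `calib_dataloader` arguments are inferred from the guide titles rather than taken from the released API.

```python
# Hedged sketch of the inference acceleration and quantization flow; the method
# names and keyword arguments are assumptions based on the how-to guide titles.
import torch
from torch.utils.data import DataLoader, TensorDataset
from torchvision.models import resnet18
from bigdl.nano.pytorch import Trainer

model = resnet18(pretrained=False).eval()
x = torch.rand(8, 3, 224, 224)
calib_loader = DataLoader(TensorDataset(x, torch.randint(0, 1000, (8,))), batch_size=2)

# Trace the model to an accelerated inference backend.
ov_model = Trainer.trace(model, accelerator="openvino", input_sample=x[:1])
ort_model = Trainer.trace(model, accelerator="onnxruntime", input_sample=x[:1])

# INT8 post-training quantization: Intel Neural Compressor by default,
# OpenVINO Post-training Optimization Tools when accelerator="openvino"
# (assumed mapping based on the guides above).
q_model = Trainer.quantize(model, calib_dataloader=calib_loader)
q_ov_model = Trainer.quantize(model, accelerator="openvino", calib_dataloader=calib_loader)

pred = ov_model(x[:1])  # run an accelerated forward pass
```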