* Add basic doc structure for bf16 tf training how-to guide, and fix the incorrect order of tf inference guides in the toc
* Add how-to guide for tf bf16 training
* Add warning box for tf bf16 hardware limitations
* Add a print message to show the model's default policy after unpatching
* Small fixes
* Small github action fixes for tf bf16 training how-to guide
* Disable action test for tf bf16 training for now, due to the core dump problem on platforms without AVX512
* Updated based on comments
* Feat(docs): add how-to guide for TensorFlow inference using ONNXRuntime and OpenVINO
* fix bugs in index.rst
* revise according to PR comments
* revise minor parts according to PR comments
* fix bugs according to PR comments
* Add how to guide: How to convert your PyTorch code to use TorchNano for training acceleration
* Small nano how-to index format update for openvino inference
* Update based on comments
* Updated based on comments
* Add how-to guide: How to wrap a PyTorch training loop through @nano decorator
* Add reference to TorchNano guide in @nano guide
* Some small fixes and updates
* Small typo fix: bulit -> built
* Updates based on comments
* Remove validation dataloader based on comments
* Order change of two guides
* Update based on comments
* add key feature and how to guide for context manager
* update key feature for multi models
* update based on comment
* update
* update based on comments
* update
* update
* add how-to guides:
* accelerate with jit_ipex
* save and load jit, ipex, onnx, openvino
* add the five .nblink files listed above
* add index of save/load (sl) files
* clear all notebook outputs & fix the title bug
* remove extra blank indentation
* format the Jupyter notebooks with Prettier
* fix wording errors
* add blank line before unordered lists
* remove the normal inference section in accelerate using jit/ipex
* add note to example explaining why we should pass in the original model to get the optimized one in sl ipex
* fix: new pip install shell command & indentation improvements
* add nano notebook example for openvino ir
* add basic example for openvino model inference
* add notebook example for sync inference and async inference
* add notebook to documentation
* update explanation for async API
* try to fix code snippet
* fix code snippet
* simplify async API explanation
* adapt to the new theme
* Add basic guides structure of Training - TensorFlow
* Add how-to guides: How to accelerate a TensorFlow Keras application on training workloads through multiple instances
* Change import order and add pip install for tensorflow-datasets
* Disable other nano tests for now
* Add github action tests for how-to guides Tensorflow training
* Use jupyter nbconvert instead to test the TensorFlow training notebooks, to avoid errors
* Add how-to guide: How to optimize your model with a sparse Embedding layer and SparseAdam optimizer
* Enable other nano tests again
* Small Revision: fix typos
* Small Revision: refactor some sentences
* Revision: refactor contents based on comments
* Add How-to guides: How to choose the number of processes for multi-instance training
* Small Revision: fix typos and refactor some sentences
* Lengthen GitHub action timeout for TensorFlow: 600s -> 700s
* Rearrange file structure for PyTorch Inference for docs and add titles for PyTorch-Lightning Training
* Add How-to guide: How to accelerate a PyTorch-Lightning application on training workloads through Intel® Extension for PyTorch*
* Add how-to guide: How to accelerate a PyTorch-Lightning application on training workloads through multiple instances
* Revise: remove '-' in 'PyTorch-Lightning' and some other changes
* Add How-to guides: How to use the channels last memory format in your PyTorch Lightning application for training
* Add how-to guide: Use BFloat16 Mixed Precision for PyTorch Lightning Training
* Add How-to guide: How to accelerate a computer vision data processing pipeline
* Small Revision: change comments in several code cells
* Disable other nano tests temporarily
* Add github action tests for Nano Training PyTorch Lightning tests
* Enable other nano tests again
* Small revisions: typos and explanation texts changes
* Revise: update based on comments
* Create doc tree index for Nano How-to Guides
* Add How to guide for PyTorch Inference using ONNXRuntime
* Add How to guide for PyTorch Inference using OpenVINO
* Update How to guide for PyTorch Inference using OpenVINO/ONNXRuntime
* Change current notebook to md and revise contents to be more focused
* Add How-to Guide: Install BigDL-Nano in Google Colab (need further update)
* Revise words in How-to Guide for PyTorch Inference using OpenVINO/ONNXRuntime
* Add How-To Guide: Quantize PyTorch Model for Inference using Intel Neural Compressor
* Add How-To Guide: Quantize PyTorch Model for Inference using Post-training Quantization Tools
* Add API doc links and small revision
* Test: synchronization through marks in py files
* Test: synchronization through notebook with cells hidden from rendering in doc
* Remove test commits for runnable example <-> guides synchronization
* Enable rendering notebook from location out of sphinx source root
* Update guide "How to accelerate a PyTorch inference pipeline through OpenVINO" to notebook under python folder
* Update guide "How to quantize your PyTorch model for inference using Intel Neural Compressor" to notebook under python folder
* Fix bug where markdown is ignored inside HTML tags by nbconvert, and revise notebook
* Update guide 'How to quantize your PyTorch model for inference using Post-training Optimization Tools' to notebook under python folder
* Small updates to index and current guides
* Revision based on Junwei's comments
* Update how-to guides: How to install BigDL-Nano in Google Colab, and update index page
* Small typo fix