* Small fixes to the (un)patch_tensorflow API doc, and show its import path in the API doc as the recommended one
* Add API doc for the nano_bf16 decorator
* Move the API doc for bigdl.nano.tf.keras.InferenceOptimizer out of bigdl.nano.tf.keras to make it clearer
* Fix Python style
* Fix path in Nano PyTorch API docs
* Add API doc for bigdl.nano.pytorch.patching.patch_encryption
* Add a note box to the bigdl.nano.pytorch.patching.patch_encryption API doc
* Fix Python style again
* Fix path in Nano HPO API doc and other small fixes
* feat(docs): add load/save of ONNX and OpenVINO models for TensorFlow
* fix issues found after previewing
* fix insertion order issues in toc.yml
* change link title for TensorFlow
* Restyle blockquote elements on the web pages
* Add a generalized how-to section for preprocessing, including data processing acceleration for PyTorch
* Small fix
* Update based on comments and small typo fixes
* Small fixes
* Add basic doc structure for the tf bf16 training how-to guide, and fix the incorrect order of tf inference guides in the TOC
* Add how-to guide for tf bf16 training (see the sketch after this list)
* Add a warning box about tf bf16 hardware limitations
* Add a print message to show the model's default dtype policy after unpatching
* Small fixes
* Small GitHub Actions fixes for the tf bf16 training how-to guide
* Disable the action test for tf bf16 training for now, due to the core dump problem on platforms without AVX512
* Updated based on comments
* feat(docs): add how-to guide for TensorFlow inference with ONNXRuntime and OpenVINO (see the sketch after this list)
* fix bugs in index.rst
* revise according to PR comments
* revise minor parts according to PR comments
* fix bugs according to PR comments
* update how-to guide for the optimizer
* update model export
* fix typo
* update based on comments
* fix bug in get_best_model when no validation data is provided
* update unit tests
* update
* update
* fix 600s
* fix
* Add more key features regarding TorchNano and @nano for PyTorch training
* Small fixes
* Remove the Overview title
* Add auto_lr in related notes
* Update based on comments
* Add how-to guide: How to convert your PyTorch code to use TorchNano for training acceleration (see the sketch after this list)
* Small Nano how-to index format update for OpenVINO inference
* Update based on comments
* Updated based on comments
* Add how-to guide: How to wrap a PyTorch training loop with the @nano decorator (see the sketch after this list)
* Add a reference to the TorchNano guide in the @nano guide
* Some small fixes and updates
* Small typo fix: bulit -> built
* Updates based on comments
* Remove validation dataloader based on comments
* Change the order of two guides
* Update based on comments
* Update installation
* update
* update runtime acceleration
* update link in rst
* add bf16 quantization and optimize()
* update based on comment
* update
* update based on comment
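
A minimal sketch of the (un)patch_tensorflow workflow covered by the patching and tf bf16 training items above. The `bigdl.nano.tf` import path and the `precision` argument that switches the global Keras dtype policy to mixed bfloat16 are assumptions drawn from the commit messages, not a confirmed API; check the generated API doc.

```python
# Hypothetical usage sketch; the import path and the `precision` keyword are
# assumptions based on the changelog items above, not a confirmed signature.
from tensorflow import keras
from bigdl.nano.tf import patch_tensorflow, unpatch_tensorflow  # assumed import path

# Patch TensorFlow/Keras so Nano's optimized classes are used and (assumed)
# the global dtype policy becomes mixed bfloat16.
patch_tensorflow(precision='mixed_bfloat16')

model = keras.Sequential([
    keras.layers.Dense(64, activation='relu', input_shape=(10,)),
    keras.layers.Dense(1),
])
model.compile(optimizer='adam', loss='mse')
# ... train as usual; as the warning box in the guide notes, bf16 only brings
# speedups on hardware with the required instruction support (e.g. AVX512-BF16).

# Restore stock TensorFlow/Keras; the guide prints the default dtype policy
# afterwards so users can confirm the rollback.
unpatch_tensorflow()
print(keras.mixed_precision.global_policy())
```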
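A sketch of the TensorFlow ONNXRuntime/OpenVINO export-and-reload flow described in the load/save and inference how-to items. The `input_spec` keyword and the `model=` argument to `load` are assumptions; the paths and the ResNet50 example are placeholders.

```python
# Sketch only: keyword names marked below are assumptions, not a confirmed API.
import tensorflow as tf
from tensorflow import keras
from bigdl.nano.tf.keras import InferenceOptimizer

model = keras.applications.ResNet50(weights=None)

# Trace the Keras model into an ONNXRuntime-accelerated model.
ort_model = InferenceOptimizer.trace(
    model,
    accelerator="onnxruntime",
    input_spec=tf.TensorSpec(shape=(None, 224, 224, 3)),  # `input_spec` assumed
)

# Save and later reload the accelerated model (paths are placeholders).
InferenceOptimizer.save(ort_model, "./resnet50_onnx")
loaded = InferenceOptimizer.load("./resnet50_onnx", model=model)  # `model=` assumed

# The same pattern applies with accelerator="openvino".
x = tf.random.normal((1, 224, 224, 3))
preds = loaded(x)  # assumed to be callable like a regular Keras model
```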
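A sketch of converting a plain PyTorch training loop to TorchNano, as covered by the "convert your PyTorch code to use TorchNano" guide. The constructor argument `num_processes` is illustrative; the two key changes shown in comments follow the guide's general pattern.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from bigdl.nano.pytorch import TorchNano

class MyNano(TorchNano):
    def train(self):
        # Build model / optimizer / dataloader inside `train`.
        model = nn.Linear(32, 2)
        optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
        dataset = TensorDataset(torch.randn(256, 32), torch.randint(0, 2, (256,)))
        loader = DataLoader(dataset, batch_size=32)

        # Key change 1: let TorchNano wrap the model, optimizer and dataloader.
        model, optimizer, loader = self.setup(model, optimizer, loader)
        loss_fn = nn.CrossEntropyLoss()

        model.train()
        for x, y in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            # Key change 2: replace loss.backward() with self.backward(loss).
            self.backward(loss)
            optimizer.step()

# `num_processes=2` is illustrative; other Nano options may also apply.
MyNano(num_processes=2).train()
```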
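A sketch of the @nano-decorated training loop from the wrapping how-to guide. The import path, the decorator's keyword arguments, and the rule that the decorated function must receive the model, optimizer and dataloader as arguments (so the loop body can stay plain PyTorch) are assumptions to verify against the published guide and its TorchNano reference.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from bigdl.nano.pytorch import nano  # assumed import path

@nano(num_processes=2)  # illustrative arguments
def train_loop(model, optimizer, loader, loss_fn, num_epochs=1):
    # Assumption: the decorator wraps the model/optimizer/dataloader arguments
    # before the function runs, so the loop body stays unchanged PyTorch code.
    for _ in range(num_epochs):
        model.train()
        for x, y in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            optimizer.step()

if __name__ == "__main__":
    model = nn.Linear(32, 2)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    dataset = TensorDataset(torch.randn(256, 32), torch.randint(0, 2, (256,)))
    train_loop(model, optimizer, DataLoader(dataset, batch_size=32), nn.CrossEntropyLoss())
```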