* Restyle blockquote elements in web
* Add a generalized how-to section for preprocessing, including data processing acceleration for PyTorch
* Small fix
* Update based on comments and small typo fixes
* Small fixes
* Add basic doc structure for bf16 tf training how-to guide, and fix the incorrect order of the tf inference guides in the ToC
* Add how-to guide for tf bf16 training
* Add warning box for tf bf16 hardware limitations
* Add a print message to show the model's default policy after unpatching
* Small fixes
* Small GitHub Actions fixes for tf bf16 training how-to guide
* Disable the action test for tf bf16 training for now, due to core dumps on platforms without AVX512
* Updated based on comments
* Feat(docs): add how-to guide for TensorFlow inference with ONNXRuntime and OpenVINO
* fix bugs in index.rst
* revise according to PR comments
* revise minor parts according to PR comments
* fix bugs according to PR comments
* update how-to guide for optimizer
* update export model
* update typo
* update based on comments
* fix bug in get_best_model when no validation data is provided
* update ut
* update
* update
* fix 600s
* fix
* Add more key features regarding TorchNano and @nano for PyTorch training
* Small fixes
* Remove the Overview title
* Add auto_lr in related notes
* Update based on comments
* Add how-to guide: How to convert your PyTorch code to use TorchNano for training acceleration
* Small nano how-to index format update for openvino inference
* Update based on comments
* Updated based on comments
* Add how-to guide: How to wrap a PyTorch training loop through @nano decorator
* Add reference to TorchNano guide in @nano guide
* Some small fixes and updates
* Small typo fix: bulit -> built
* Updates based on comments
* Remove validation dataloader based on comments
* Order change of two guides
* Update based on comments
* update installation
* update
* update runtime acceleration
* update link in rst
* add bf16 quantization and optimize()
* update based on comment
* update
* update based on comment
* add key feature and how-to guide for context manager
* update key feature for multi models
* update based on comment
* update
* update based on comments
* update
* update
* add how-to guides: accelerate with jit_ipex; save and load jit, ipex, onnx, openvino
* add the corresponding five .nblink files
* add index of sl files
* clear all notebook outputs & fix the title bug
* remove extra indentation
* format the Jupyter notebooks with Prettier
* fix misspelled words
* add blank line before unordered list
* remove the normal inference example in accelerate using jit/ipex
* add note to example explaining why we should pass in the original model to get the optimized one in sl ipex
* fix: new pip install shell command & improve indentation