[Nano] Add how-to guides of load/save API for TensorFlow inference (#7180)

* feat(docs): add load/save onnx and openvino model for tensorflow

* fix bugs after previewing

* fix order issues of insertion for toc.yml

* change link title for tensorflow
Henry Ma 2023-01-10 20:15:49 +08:00 committed by GitHub
parent d950992b91
commit 2858a1b5bf
4 changed files with 10 additions and 0 deletions


@@ -132,6 +132,8 @@ subtrees:
- file: doc/Nano/Howto/Inference/PyTorch/inference_optimizer_optimize
- file: doc/Nano/Howto/Inference/TensorFlow/accelerate_tensorflow_inference_onnx
- file: doc/Nano/Howto/Inference/TensorFlow/accelerate_tensorflow_inference_openvino
- file: doc/Nano/Howto/Inference/TensorFlow/tensorflow_save_and_load_onnx
- file: doc/Nano/Howto/Inference/TensorFlow/tensorflow_save_and_load_openvino
- file: doc/Nano/Howto/install_in_colab
- file: doc/Nano/Howto/windows_guide
- file: doc/Nano/Overview/known_issues


@@ -0,0 +1,3 @@
{
"path": "../../../../../../../../python/nano/tutorial/notebook/inference/tensorflow/tensorflow_save_and_load_onnx.ipynb"
}


@@ -0,0 +1,3 @@
{
"path": "../../../../../../../../python/nano/tutorial/notebook/inference/tensorflow/tensorflow_save_and_load_openvino.ipynb"
}


@@ -82,6 +82,8 @@ TensorFlow
~~~~~~~~~~~~~~~~~~~~~~~~~
* `How to accelerate a TensorFlow inference pipeline through ONNXRuntime <Inference/TensorFlow/accelerate_tensorflow_inference_onnx.html>`_
* `How to accelerate a TensorFlow inference pipeline through OpenVINO <Inference/TensorFlow/accelerate_tensorflow_inference_openvino.html>`_
* `How to save and load optimized ONNXRuntime model in TensorFlow <Inference/TensorFlow/tensorflow_save_and_load_onnx.html>`_
* `How to save and load optimized OpenVINO model in TensorFlow <Inference/TensorFlow/tensorflow_save_and_load_openvino.html>`_

Install
-------------------------