diff --git a/docs/readthedocs/source/doc/Nano/Howto/Inference/PyTorch/inference_optimizer_optimize.nblink b/docs/readthedocs/source/doc/Nano/Howto/Inference/PyTorch/inference_optimizer_optimize.nblink
new file mode 100644
index 00000000..46ff598e
--- /dev/null
+++ b/docs/readthedocs/source/doc/Nano/Howto/Inference/PyTorch/inference_optimizer_optimize.nblink
@@ -0,0 +1,3 @@
+{
+    "path": "../../../../../../../../python/nano/tutorial/notebook/inference/pytorch/inference_optimizer_optimize.ipynb"
+}
\ No newline at end of file
diff --git a/docs/readthedocs/source/doc/Nano/Howto/index.rst b/docs/readthedocs/source/doc/Nano/Howto/index.rst
index bd4026cb..78feda99 100644
--- a/docs/readthedocs/source/doc/Nano/Howto/index.rst
+++ b/docs/readthedocs/source/doc/Nano/Howto/index.rst
@@ -62,6 +62,7 @@ PyTorch
 * `How to accelerate a PyTorch inference pipeline through OpenVINO `_
 * `How to quantize your PyTorch model for inference using Intel Neural Compressor `_
 * `How to quantize your PyTorch model for inference using OpenVINO Post-training Optimization Tools `_
+* `How to find the accelerated method with minimal latency using InferenceOptimizer `_
 
 .. toctree::
     :maxdepth: 1
@@ -71,6 +72,7 @@ PyTorch
     Inference/PyTorch/accelerate_pytorch_inference_openvino
     Inference/PyTorch/quantize_pytorch_inference_inc
     Inference/PyTorch/quantize_pytorch_inference_pot
+    Inference/PyTorch/inference_optimizer_optimize
 
 Install
 -------------------------
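The tutorial this diff links into the navigation centers on picking the accelerated inference variant with minimal latency. As a rough stdlib-only illustration of that idea (a toy sketch, not the BigDL-Nano `InferenceOptimizer` API, which additionally handles tracing, quantization backends, and accuracy checks; all names below are hypothetical):

```python
import time
from typing import Callable, Dict


def find_min_latency(variants: Dict[str, Callable[[], object]],
                     repeats: int = 20) -> str:
    """Time each inference variant and return the name of the fastest one.

    Toy stand-in for latency-based method selection; not the real
    InferenceOptimizer.optimize implementation.
    """
    latencies = {}
    for name, fn in variants.items():
        fn()  # warm-up run, excluded from timing
        start = time.perf_counter()
        for _ in range(repeats):
            fn()
        latencies[name] = (time.perf_counter() - start) / repeats
    return min(latencies, key=latencies.get)


# Hypothetical "variants" standing in for e.g. original / ONNX / OpenVINO models.
variants = {
    "baseline": lambda: sum(i * i for i in range(10_000)),
    "fast": lambda: sum(i * i for i in range(1_000)),
}
print(find_min_latency(variants))
```

The real API benchmarks actual model objects rather than callables, but the selection principle (measure every candidate, keep the one with the lowest mean latency) is the same.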