[Nano] Add how-to-guide for pytorch async pipeline (#8146)

* add how-to-guide for pytorch async pipeline

* revise introduction

* resolve image issues
Authored by Pingchuan Ma (Henry) on 2023-05-06 22:15:42 +08:00; committed by GitHub.
parent e178692c2c
commit 30367f5eb1
4 changed files with 7 additions and 1 deletion


@@ -170,6 +170,7 @@ subtrees:
- file: doc/Nano/Howto/Inference/PyTorch/pytorch_save_and_load_openvino
- file: doc/Nano/Howto/Inference/PyTorch/multi_instance_pytorch_inference
- file: doc/Nano/Howto/Inference/PyTorch/accelerate_pytorch_inference_gpu
+- file: doc/Nano/Howto/Inference/PyTorch/accelerate_pytorch_inference_async_pipeline
- file: doc/Nano/Howto/Inference/TensorFlow/index
title: "TensorFlow"
subtrees:


@@ -0,0 +1,3 @@
+{
+"path": "../../../../../../../../python/nano/tutorial/notebook/inference/pytorch/accelerate_pytorch_inference_async_pipeline.ipynb"
+}


@@ -14,3 +14,4 @@ Inference Optimization: For PyTorch Users
* `How to save and load optimized IPEX model <pytorch_save_and_load_ipex.html>`_
* `How to accelerate a PyTorch inference pipeline through multiple instances <multi_instance_pytorch_inference.html>`_
* `How to accelerate a PyTorch inference pipeline using Intel ARC series dGPU <accelerate_pytorch_inference_gpu.html>`_
+* `How to accelerate PyTorch inference using async multi-stage pipeline <accelerate_pytorch_inference_async_pipeline.html>`_


@@ -77,6 +77,7 @@ PyTorch
* `How to save and load optimized IPEX model <Inference/PyTorch/pytorch_save_and_load_ipex.html>`_
* `How to accelerate a PyTorch inference pipeline through multiple instances <Inference/PyTorch/multi_instance_pytorch_inference.html>`_
* `How to accelerate a PyTorch inference pipeline using Intel ARC series dGPU <Inference/PyTorch/accelerate_pytorch_inference_gpu.html>`_
+* `How to accelerate PyTorch inference using async multi-stage pipeline <Inference/PyTorch/accelerate_pytorch_inference_async_pipeline.html>`_
TensorFlow
~~~~~~~~~~~~~~~~~~~~~~~~~
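The guide linked by this commit covers asynchronous multi-stage inference pipelines. As a rough, library-agnostic sketch of the underlying idea (plain Python threads and queues; the stage functions are dummy placeholders and this is not the bigdl-nano API), a pipeline where preprocessing and model execution run concurrently and overlap across inputs might look like:

```python
# Illustrative sketch of an async multi-stage inference pipeline.
# Each stage runs in its own worker thread; FIFO queues connect the
# stages, so stage 1 can work on the next input while stage 2 is
# still busy with the previous one.
import queue
import threading

def preprocess(x):
    # stage 1: stand-in for input preprocessing (dummy arithmetic)
    return x * 2

def infer(x):
    # stage 2: stand-in for a model forward pass (dummy arithmetic)
    return x + 1

def run_pipeline(inputs):
    q1, q2 = queue.Queue(), queue.Queue()
    results = []

    def stage1():
        for x in inputs:
            q1.put(preprocess(x))
        q1.put(None)  # sentinel: no more items

    def stage2():
        while (x := q1.get()) is not None:
            q2.put(infer(x))
        q2.put(None)

    threads = [threading.Thread(target=stage1), threading.Thread(target=stage2)]
    for t in threads:
        t.start()
    # Drain the final queue on the main thread until the sentinel arrives.
    while (y := q2.get()) is not None:
        results.append(y)
    for t in threads:
        t.join()
    return results

print(run_pipeline([1, 2, 3]))  # -> [3, 5, 7]
```

Because each stage has a single worker and the queues are FIFO, output order matches input order while the stages still execute concurrently.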