[Nano] Add how-to-guide for pytorch async pipeline (#8146)
* add how-to-guide for pytorch async pipeline
* revise introduction
* resolve image issues
This commit is contained in:
parent e178692c2c
commit 30367f5eb1

4 changed files with 7 additions and 1 deletion
@@ -170,6 +170,7 @@ subtrees:
           - file: doc/Nano/Howto/Inference/PyTorch/pytorch_save_and_load_openvino
           - file: doc/Nano/Howto/Inference/PyTorch/multi_instance_pytorch_inference
           - file: doc/Nano/Howto/Inference/PyTorch/accelerate_pytorch_inference_gpu
+          - file: doc/Nano/Howto/Inference/PyTorch/accelerate_pytorch_inference_async_pipeline
       - file: doc/Nano/Howto/Inference/TensorFlow/index
         title: "TensorFlow"
         subtrees:
@@ -0,0 +1,3 @@
+{
+    "path": "../../../../../../../../python/nano/tutorial/notebook/inference/pytorch/accelerate_pytorch_inference_async_pipeline.ipynb"
+}
@@ -13,4 +13,5 @@ Inference Optimization: For PyTorch Users
 * `How to save and load optimized JIT model <pytorch_save_and_load_jit.html>`_
 * `How to save and load optimized IPEX model <pytorch_save_and_load_ipex.html>`_
 * `How to accelerate a PyTorch inference pipeline through multiple instances <multi_instance_pytorch_inference.html>`_
 * `How to accelerate a PyTorch inference pipeline using Intel ARC series dGPU <accelerate_pytorch_inference_gpu.html>`_
+* `How to accelerate PyTorch inference using async multi-stage pipeline <accelerate_pytorch_inference_async_pipeline.html>`_
@@ -77,6 +77,7 @@ PyTorch
 * `How to save and load optimized IPEX model <Inference/PyTorch/pytorch_save_and_load_ipex.html>`_
 * `How to accelerate a PyTorch inference pipeline through multiple instances <Inference/PyTorch/multi_instance_pytorch_inference.html>`_
 * `How to accelerate a PyTorch inference pipeline using Intel ARC series dGPU <Inference/PyTorch/accelerate_pytorch_inference_gpu.html>`_
+* `How to accelerate PyTorch inference using async multi-stage pipeline <Inference/PyTorch/accelerate_pytorch_inference_async_pipeline.html>`_

 TensorFlow
 ~~~~~~~~~~~~~~~~~~~~~~~~~
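The linked notebook documents Nano's own async-pipeline API; as a generic, library-agnostic illustration of the async multi-stage idea — each stage of an inference pipeline runs in its own worker and starts on the next sample while downstream stages are still busy, so stage latencies overlap — here is a minimal sketch using only the Python standard library. All names here (`run_pipeline`, the toy `preprocess`/`infer` stages) are illustrative, not Nano's API:

```python
import queue
import threading


def run_pipeline(items, stages):
    """Run `stages` (a list of functions) as an async multi-stage pipeline.

    Each stage runs in its own worker thread and hands results to the next
    stage through a FIFO queue, so stage k can start on item i+1 while
    stage k+1 is still processing item i -- that overlap is the speedup.
    """
    qs = [queue.Queue() for _ in range(len(stages) + 1)]
    _STOP = object()  # sentinel telling each worker to shut down

    def worker(fn, q_in, q_out):
        while True:
            item = q_in.get()
            if item is _STOP:
                q_out.put(_STOP)  # propagate shutdown downstream
                return
            q_out.put(fn(item))

    threads = [
        threading.Thread(target=worker, args=(fn, qs[i], qs[i + 1]))
        for i, fn in enumerate(stages)
    ]
    for t in threads:
        t.start()

    # Feed all inputs, then the stop sentinel.
    for item in items:
        qs[0].put(item)
    qs[0].put(_STOP)

    # Drain the final queue; order is preserved because each stage has
    # exactly one worker reading from a FIFO queue.
    results = []
    while True:
        out = qs[-1].get()
        if out is _STOP:
            break
        results.append(out)
    for t in threads:
        t.join()
    return results


# Two toy stages standing in for "preprocess" and "model forward".
preprocess = lambda x: x * 2
infer = lambda x: x + 1
print(run_pipeline(range(5), [preprocess, infer]))  # [1, 3, 5, 7, 9]
```

In a real PyTorch pipeline the stages would typically be data loading/preprocessing and the model's forward pass; keeping one worker per stage preserves output order without any extra bookkeeping.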