From 30367f5eb1d4f148e90074dcaa2717a720b987f1 Mon Sep 17 00:00:00 2001
From: "Pingchuan Ma (Henry)" <58333343+HensonMa@users.noreply.github.com>
Date: Sat, 6 May 2023 22:15:42 +0800
Subject: [PATCH] [Nano] Add how-to-guide for pytorch async pipeline (#8146)

* add how-to-guide for pytorch async pipeline

* revise introduction

* resolve image issues
---
 docs/readthedocs/source/_toc.yml                                    | 1 +
 .../PyTorch/accelerate_pytorch_inference_async_pipeline.nblink      | 3 +++
 .../source/doc/Nano/Howto/Inference/PyTorch/index.rst               | 3 ++-
 docs/readthedocs/source/doc/Nano/Howto/index.rst                    | 1 +
 4 files changed, 7 insertions(+), 1 deletion(-)
 create mode 100644 docs/readthedocs/source/doc/Nano/Howto/Inference/PyTorch/accelerate_pytorch_inference_async_pipeline.nblink

diff --git a/docs/readthedocs/source/_toc.yml b/docs/readthedocs/source/_toc.yml
index d4763c59..268afb4f 100644
--- a/docs/readthedocs/source/_toc.yml
+++ b/docs/readthedocs/source/_toc.yml
@@ -170,6 +170,7 @@ subtrees:
           - file: doc/Nano/Howto/Inference/PyTorch/pytorch_save_and_load_openvino
           - file: doc/Nano/Howto/Inference/PyTorch/multi_instance_pytorch_inference
           - file: doc/Nano/Howto/Inference/PyTorch/accelerate_pytorch_inference_gpu
+          - file: doc/Nano/Howto/Inference/PyTorch/accelerate_pytorch_inference_async_pipeline
       - file: doc/Nano/Howto/Inference/TensorFlow/index
         title: "TensorFlow"
         subtrees:
diff --git a/docs/readthedocs/source/doc/Nano/Howto/Inference/PyTorch/accelerate_pytorch_inference_async_pipeline.nblink b/docs/readthedocs/source/doc/Nano/Howto/Inference/PyTorch/accelerate_pytorch_inference_async_pipeline.nblink
new file mode 100644
index 00000000..4dd6ac54
--- /dev/null
+++ b/docs/readthedocs/source/doc/Nano/Howto/Inference/PyTorch/accelerate_pytorch_inference_async_pipeline.nblink
@@ -0,0 +1,3 @@
+{
+    "path": "../../../../../../../../python/nano/tutorial/notebook/inference/pytorch/accelerate_pytorch_inference_async_pipeline.ipynb"
+}
\ No newline at end of file
diff --git a/docs/readthedocs/source/doc/Nano/Howto/Inference/PyTorch/index.rst b/docs/readthedocs/source/doc/Nano/Howto/Inference/PyTorch/index.rst
index fbe4362e..3de72aa1 100644
--- a/docs/readthedocs/source/doc/Nano/Howto/Inference/PyTorch/index.rst
+++ b/docs/readthedocs/source/doc/Nano/Howto/Inference/PyTorch/index.rst
@@ -13,4 +13,5 @@ Inference Optimization: For PyTorch Users
 * `How to save and load optimized JIT model `_
 * `How to save and load optimized IPEX model `_
 * `How to accelerate a PyTorch inference pipeline through multiple instances `_
-* `How to accelerate a PyTorch inference pipeline using Intel ARC series dGPU `_
\ No newline at end of file
+* `How to accelerate a PyTorch inference pipeline using Intel ARC series dGPU `_
+* `How to accelerate PyTorch inference using async multi-stage pipeline `_
\ No newline at end of file
diff --git a/docs/readthedocs/source/doc/Nano/Howto/index.rst b/docs/readthedocs/source/doc/Nano/Howto/index.rst
index 8b83e951..a33f128e 100644
--- a/docs/readthedocs/source/doc/Nano/Howto/index.rst
+++ b/docs/readthedocs/source/doc/Nano/Howto/index.rst
@@ -77,6 +77,7 @@ PyTorch
 * `How to save and load optimized IPEX model `_
 * `How to accelerate a PyTorch inference pipeline through multiple instances `_
 * `How to accelerate a PyTorch inference pipeline using Intel ARC series dGPU `_
+* `How to accelerate PyTorch inference using async multi-stage pipeline `_

 TensorFlow
 ~~~~~~~~~~~~~~~~~~~~~~~~~
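
Note: the notebook this patch links (accelerate_pytorch_inference_async_pipeline.ipynb) is not part of the diff above, so its exact BigDL-Nano API is not shown here. For orientation only, the following is a minimal, generic sketch of the idea behind an asynchronous multi-stage inference pipeline, written with the Python standard library (threading + queue) and a toy torch model; the model, queue sizes, and worker functions are all illustrative assumptions, not the Nano interface documented in the new how-to guide.

    # Generic sketch: two pipeline stages (preprocessing and inference) run in
    # separate threads and hand work over through a queue, so the stages
    # overlap instead of executing strictly one after the other.
    # Assumption: a toy torch model stands in for whatever model the real
    # how-to notebook uses.
    import queue
    import threading

    import torch

    model = torch.nn.Sequential(
        torch.nn.Linear(32, 64), torch.nn.ReLU(), torch.nn.Linear(64, 10)
    )
    model.eval()

    preprocess_q: "queue.Queue" = queue.Queue(maxsize=8)
    result_q: "queue.Queue" = queue.Queue()

    def preprocess_worker(raw_inputs):
        # Stage 1: turn raw samples into model-ready tensors.
        for raw in raw_inputs:
            preprocess_q.put(torch.as_tensor(raw, dtype=torch.float32))
        preprocess_q.put(None)  # sentinel: no more work

    def inference_worker():
        # Stage 2: consume preprocessed tensors and run the model on them.
        while True:
            tensor = preprocess_q.get()
            if tensor is None:
                break
            with torch.no_grad():
                result_q.put(model(tensor.unsqueeze(0)))

    if __name__ == "__main__":
        raw_samples = [torch.randn(32).numpy() for _ in range(4)]
        t1 = threading.Thread(target=preprocess_worker, args=(raw_samples,))
        t2 = threading.Thread(target=inference_worker)
        t1.start(); t2.start()
        t1.join(); t2.join()
        print(f"collected {result_q.qsize()} predictions")

The real guide added by this PR presumably wires such stages up through BigDL-Nano itself; refer to the linked notebook for the actual API and usage.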