From c6eccbfcc28b4907fe56cfad1dd27a51d9d0dd81 Mon Sep 17 00:00:00 2001
From: "Pingchuan Ma (Henry)" <58333343+HensonMa@users.noreply.github.com>
Date: Wed, 12 Apr 2023 19:18:16 +0800
Subject: [PATCH] [Nano] add pt dgpu inference how-to-guide (#8026)

* docs for arc dgpu how-to-guide

* minor adjustment + system info

* minor adjustment for appearance

* fix bugs

* add system info

* fix syntax errors

* adjust docs according to comments

* final adjustment

* delete gpu workflow testing
---
 docs/readthedocs/source/_toc.yml                              | 1 +
 .../Inference/PyTorch/accelerate_pytorch_inference_gpu.nblink | 3 +++
 .../source/doc/Nano/Howto/Inference/PyTorch/index.rst         | 3 ++-
 docs/readthedocs/source/doc/Nano/Howto/index.rst              | 1 +
 4 files changed, 7 insertions(+), 1 deletion(-)
 create mode 100644 docs/readthedocs/source/doc/Nano/Howto/Inference/PyTorch/accelerate_pytorch_inference_gpu.nblink

diff --git a/docs/readthedocs/source/_toc.yml b/docs/readthedocs/source/_toc.yml
index 764c6353..d4763c59 100644
--- a/docs/readthedocs/source/_toc.yml
+++ b/docs/readthedocs/source/_toc.yml
@@ -169,6 +169,7 @@ subtrees:
           - file: doc/Nano/Howto/Inference/PyTorch/pytorch_save_and_load_onnx
           - file: doc/Nano/Howto/Inference/PyTorch/pytorch_save_and_load_openvino
           - file: doc/Nano/Howto/Inference/PyTorch/multi_instance_pytorch_inference
+          - file: doc/Nano/Howto/Inference/PyTorch/accelerate_pytorch_inference_gpu
       - file: doc/Nano/Howto/Inference/TensorFlow/index
         title: "TensorFlow"
         subtrees:
diff --git a/docs/readthedocs/source/doc/Nano/Howto/Inference/PyTorch/accelerate_pytorch_inference_gpu.nblink b/docs/readthedocs/source/doc/Nano/Howto/Inference/PyTorch/accelerate_pytorch_inference_gpu.nblink
new file mode 100644
index 00000000..9c152362
--- /dev/null
+++ b/docs/readthedocs/source/doc/Nano/Howto/Inference/PyTorch/accelerate_pytorch_inference_gpu.nblink
@@ -0,0 +1,3 @@
+{
+    "path": "../../../../../../../../python/nano/tutorial/notebook/inference/pytorch/accelerate_pytorch_inference_gpu.ipynb"
+}
\ No newline at end of file
diff --git a/docs/readthedocs/source/doc/Nano/Howto/Inference/PyTorch/index.rst b/docs/readthedocs/source/doc/Nano/Howto/Inference/PyTorch/index.rst
index 58c5b661..fbe4362e 100644
--- a/docs/readthedocs/source/doc/Nano/Howto/Inference/PyTorch/index.rst
+++ b/docs/readthedocs/source/doc/Nano/Howto/Inference/PyTorch/index.rst
@@ -12,4 +12,5 @@ Inference Optimization: For PyTorch Users
 * `How to save and load optimized OpenVINO model `_
 * `How to save and load optimized JIT model `_
 * `How to save and load optimized IPEX model `_
-* `How to accelerate a PyTorch inference pipeline through multiple instances `_
\ No newline at end of file
+* `How to accelerate a PyTorch inference pipeline through multiple instances `_
+* `How to accelerate a PyTorch inference pipeline using Intel ARC series dGPU `_
\ No newline at end of file
diff --git a/docs/readthedocs/source/doc/Nano/Howto/index.rst b/docs/readthedocs/source/doc/Nano/Howto/index.rst
index 448fe181..8b83e951 100644
--- a/docs/readthedocs/source/doc/Nano/Howto/index.rst
+++ b/docs/readthedocs/source/doc/Nano/Howto/index.rst
@@ -76,6 +76,7 @@ PyTorch
 * `How to save and load optimized JIT model `_
 * `How to save and load optimized IPEX model `_
 * `How to accelerate a PyTorch inference pipeline through multiple instances `_
+* `How to accelerate a PyTorch inference pipeline using Intel ARC series dGPU `_
 
 TensorFlow
 ~~~~~~~~~~~~~~~~~~~~~~~~~
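
Note for reviewers: the notebook this patch links covers running a PyTorch inference pipeline on an Intel ARC series dGPU. The patch itself only touches the docs toc, so the sketch below is not taken from the linked notebook; it is an illustrative, device-agnostic outline of the pattern such a guide typically shows — move the model to the Intel GPU ("xpu") device when available, otherwise fall back to CPU. The `hasattr` guard is an assumption to keep it runnable on builds of PyTorch without XPU support.

```python
import torch

# Illustrative sketch only (not from the patch): pick the Intel GPU ("xpu")
# device if this PyTorch build exposes one and a device is present,
# otherwise fall back to CPU so the snippet still runs everywhere.
device = "xpu" if hasattr(torch, "xpu") and torch.xpu.is_available() else "cpu"

# A trivial stand-in model; a real guide would use e.g. a ResNet.
model = torch.nn.Linear(4, 2).to(device).eval()

with torch.no_grad():
    out = model(torch.randn(1, 4, device=device))

print(out.shape)  # a (1, 2) output tensor on the chosen device
```

The linked how-to guide presumably drives this through BigDL-Nano's `InferenceOptimizer` API rather than raw `.to(device)` calls; the exact function names and parameters used there should be checked against the notebook itself.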