[Nano] add pt dgpu inference how-to-guide (#8026)
* docs for arc dgpu how-to-guide
* minor adjustment + system info
* minor adjustment for appearance
* fix bugs
* add system info
* fix syntax errors
* adjust docs according to comments
* final adjustment
* delete gpu workflow testing
This commit is contained in:
parent
2daaa6f7de
commit
c6eccbfcc2
4 changed files with 7 additions and 1 deletion
@@ -169,6 +169,7 @@ subtrees:
- file: doc/Nano/Howto/Inference/PyTorch/pytorch_save_and_load_onnx
- file: doc/Nano/Howto/Inference/PyTorch/pytorch_save_and_load_openvino
- file: doc/Nano/Howto/Inference/PyTorch/multi_instance_pytorch_inference
- file: doc/Nano/Howto/Inference/PyTorch/accelerate_pytorch_inference_gpu
- file: doc/Nano/Howto/Inference/TensorFlow/index
title: "TensorFlow"
subtrees:
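The hunk above adds the new how-to page to what looks like a sphinx-external-toc site map (`_toc.yml`-style YAML). As a hedged sketch only, a minimal file in that format looks like the following; the `root` value and nesting depth here are assumptions for illustration, not the repo's actual file:

```yaml
# Hypothetical sphinx-external-toc site map fragment.
root: doc/Nano/index
subtrees:
  - entries:
      - file: doc/Nano/Howto/Inference/PyTorch/index
        title: "PyTorch"
        subtrees:
          - entries:
              # The commit appends one entry like this to an existing list:
              - file: doc/Nano/Howto/Inference/PyTorch/accelerate_pytorch_inference_gpu
```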
@@ -0,0 +1,3 @@
{
"path": "../../../../../../../../python/nano/tutorial/notebook/inference/pytorch/accelerate_pytorch_inference_gpu.ipynb"
}
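The three JSON lines above are the entire new file. The shape matches the `.nblink` format used by the nbsphinx-link Sphinx extension, which pulls a notebook from outside the docs source tree into the rendered documentation. A generic sketch of such a file (paths here are placeholders, and the optional `extra-media` key is an assumption about the extension's supported fields):

```json
{
  "path": "../relative/path/to/notebook.ipynb",
  "extra-media": ["../relative/path/to/images"]
}
```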
@@ -12,4 +12,5 @@ Inference Optimization: For PyTorch Users
* `How to save and load optimized OpenVINO model <pytorch_save_and_load_openvino.html>`_
* `How to save and load optimized JIT model <pytorch_save_and_load_jit.html>`_
* `How to save and load optimized IPEX model <pytorch_save_and_load_ipex.html>`_
* `How to accelerate a PyTorch inference pipeline through multiple instances <multi_instance_pytorch_inference.html>`_
* `How to accelerate a PyTorch inference pipeline through multiple instances <multi_instance_pytorch_inference.html>`_
* `How to accelerate a PyTorch inference pipeline using Intel ARC series dGPU <accelerate_pytorch_inference_gpu.html>`_
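The new guide linked above covers running PyTorch inference on an Intel Arc dGPU. The guide's own code is not shown in this diff; as a hedged illustration of the general idea (not BigDL-Nano's actual API), Intel Extension for PyTorch exposes such dGPUs as the `"xpu"` device, with a CPU fallback when no dGPU is present:

```python
import torch

# Sketch only: moves a toy model to Intel's "xpu" device when
# intel_extension_for_pytorch is installed and a dGPU is available,
# otherwise falls back to CPU.
try:
    import intel_extension_for_pytorch as ipex  # registers the "xpu" device
    device = "xpu" if torch.xpu.is_available() else "cpu"
except ImportError:
    ipex = None
    device = "cpu"

model = torch.nn.Linear(4, 2).eval().to(device)
if ipex is not None:
    # Apply Intel-specific graph/kernel optimizations to the eval-mode model.
    model = ipex.optimize(model)

with torch.inference_mode():
    out = model(torch.randn(1, 4, device=device))
print(out.shape)  # torch.Size([1, 2])
```

The actual how-to in this commit is notebook-based (`accelerate_pytorch_inference_gpu.ipynb`) and may use a different API surface.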
@@ -76,6 +76,7 @@ PyTorch
* `How to save and load optimized JIT model <Inference/PyTorch/pytorch_save_and_load_jit.html>`_
* `How to save and load optimized IPEX model <Inference/PyTorch/pytorch_save_and_load_ipex.html>`_
* `How to accelerate a PyTorch inference pipeline through multiple instances <Inference/PyTorch/multi_instance_pytorch_inference.html>`_
* `How to accelerate a PyTorch inference pipeline using Intel ARC series dGPU <Inference/PyTorch/accelerate_pytorch_inference_gpu.html>`_
TensorFlow
~~~~~~~~~~~~~~~~~~~~~~~~~