[Nano] add how-to guides: save and load jit, ipex, onnx, openvino (#6659)

* add how-to guides:
  * accelerate with JIT/IPEX
  * save and load JIT, IPEX, ONNX, OpenVINO
* add the five corresponding .nblink files

* add index entries for the save/load files

* clear all notebook outputs & fix the title bug

* remove extra blank indentation

* format the Jupyter notebooks with Prettier

* fix misspelled words

* add blank line before unordered list

* remove the plain (non-accelerated) inference part from "accelerate using JIT/IPEX"
* add a note to the save/load IPEX example explaining why the original model must be passed in to get the optimized one back (see the sketch below)

* fix: new pip install shell command & improve indentation
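For context on the IPEX note above, here is a minimal sketch of the save/load flow that guide describes, assuming the `bigdl.nano.pytorch.InferenceOptimizer` API; the ResNet-18 model and paths are illustrative, not taken from the notebooks:

```python
import torch
from torchvision.models import resnet18

from bigdl.nano.pytorch import InferenceOptimizer

# Illustrative model; the actual notebook may use a different one
model = resnet18(pretrained=True)
model.eval()

# Apply IPEX optimization (no accelerator backend, IPEX only)
ipex_model = InferenceOptimizer.trace(model, use_ipex=True)

# Saving an IPEX-optimized model persists only its state dict,
# not a standalone serialized graph
InferenceOptimizer.save(ipex_model, "./optimized_model_ipex")

# That is why the original model must be passed in at load time:
# Nano restores the weights into it and re-applies the optimization
loaded_model = InferenceOptimizer.load("./optimized_model_ipex", model=model)

with torch.no_grad():
    output = loaded_model(torch.rand(1, 3, 224, 224))
```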
WangJun 2022-11-25 15:47:15 +08:00 committed by GitHub
parent cd04f7dbdc
commit bf5ccae4ef
7 changed files with 25 additions and 0 deletions


@@ -114,8 +114,13 @@ subtrees:
 - file: doc/Nano/Howto/Training/General/choose_num_processes_training
 - file: doc/Nano/Howto/Inference/PyTorch/accelerate_pytorch_inference_onnx
 - file: doc/Nano/Howto/Inference/PyTorch/accelerate_pytorch_inference_openvino
+- file: doc/Nano/Howto/Inference/PyTorch/accelerate_pytorch_inference_jit_ipex
 - file: doc/Nano/Howto/Inference/PyTorch/quantize_pytorch_inference_inc
 - file: doc/Nano/Howto/Inference/PyTorch/quantize_pytorch_inference_pot
+- file: doc/Nano/Howto/Inference/PyTorch/pytorch_save_and_load_ipex
+- file: doc/Nano/Howto/Inference/PyTorch/pytorch_save_and_load_jit
+- file: doc/Nano/Howto/Inference/PyTorch/pytorch_save_and_load_onnx
+- file: doc/Nano/Howto/Inference/PyTorch/pytorch_save_and_load_openvino
 - file: doc/Nano/Howto/Inference/PyTorch/inference_optimizer_optimize
 - file: doc/Nano/Howto/install_in_colab
 - file: doc/Nano/Howto/windows_guide


@@ -0,0 +1,3 @@
+{
+    "path": "../../../../../../../../python/nano/tutorial/notebook/inference/pytorch/accelerate_pytorch_inference_jit_ipex.ipynb"
+}
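
The .nblink above wires the new JIT/IPEX acceleration notebook into the docs. As a rough sketch of what that guide covers (again assuming the `InferenceOptimizer` API; model and input shape are illustrative), JIT and IPEX can be combined in a single `trace` call:

```python
import torch
from torchvision.models import resnet18

from bigdl.nano.pytorch import InferenceOptimizer

model = resnet18(pretrained=True)
model.eval()

# accelerator="jit" compiles the model with TorchScript; use_ipex=True
# additionally applies IPEX optimizations. An input_sample is required
# so the model can be traced.
jit_ipex_model = InferenceOptimizer.trace(
    model,
    accelerator="jit",
    use_ipex=True,
    input_sample=torch.rand(1, 3, 224, 224),
)

with torch.no_grad():
    output = jit_ipex_model(torch.rand(1, 3, 224, 224))
```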


@@ -0,0 +1,3 @@
+{
+    "path": "../../../../../../../../python/nano/tutorial/notebook/inference/pytorch/pytorch_save_and_load_ipex.ipynb"
+}


@@ -0,0 +1,3 @@
+{
+    "path": "../../../../../../../../python/nano/tutorial/notebook/inference/pytorch/pytorch_save_and_load_jit.ipynb"
+}


@@ -0,0 +1,3 @@
+{
+    "path": "../../../../../../../../python/nano/tutorial/notebook/inference/pytorch/pytorch_save_and_load_onnx.ipynb"
+}


@@ -0,0 +1,3 @@
+{
+    "path": "../../../../../../../../python/nano/tutorial/notebook/inference/pytorch/pytorch_save_and_load_openvino.ipynb"
+}


@@ -49,8 +49,13 @@ PyTorch
 * `How to accelerate a PyTorch inference pipeline through ONNXRuntime <Inference/PyTorch/accelerate_pytorch_inference_onnx.html>`_
 * `How to accelerate a PyTorch inference pipeline through OpenVINO <Inference/PyTorch/accelerate_pytorch_inference_openvino.html>`_
+* `How to accelerate a PyTorch inference pipeline through JIT/IPEX <Inference/PyTorch/accelerate_pytorch_inference_jit_ipex.html>`_
 * `How to quantize your PyTorch model for inference using Intel Neural Compressor <Inference/PyTorch/quantize_pytorch_inference_inc.html>`_
 * `How to quantize your PyTorch model for inference using OpenVINO Post-training Optimization Tools <Inference/PyTorch/quantize_pytorch_inference_pot.html>`_
+* `How to save and load optimized IPEX model <Inference/PyTorch/pytorch_save_and_load_ipex.html>`_
+* `How to save and load optimized JIT model <Inference/PyTorch/pytorch_save_and_load_jit.html>`_
+* `How to save and load optimized ONNXRuntime model <Inference/PyTorch/pytorch_save_and_load_onnx.html>`_
+* `How to save and load optimized OpenVINO model <Inference/PyTorch/pytorch_save_and_load_openvino.html>`_
 * `How to find accelerated method with minimal latency using InferenceOptimizer <Inference/PyTorch/inference_optimizer_optimize.html>`_
 Install
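
In contrast to the IPEX case sketched earlier, the ONNXRuntime and OpenVINO save/load guides export a self-contained artifact, so no original model is needed at load time. A hedged sketch, again assuming the `InferenceOptimizer` API with an illustrative model and paths:

```python
import torch
from torchvision.models import resnet18

from bigdl.nano.pytorch import InferenceOptimizer

model = resnet18(pretrained=True)
model.eval()

# Export through OpenVINO; accelerator="onnxruntime" works the same way
ov_model = InferenceOptimizer.trace(
    model,
    accelerator="openvino",
    input_sample=torch.rand(1, 3, 224, 224),
)

# The saved OpenVINO IR (or ONNX file) is self-contained ...
InferenceOptimizer.save(ov_model, "./optimized_model_ov")

# ... so, unlike IPEX, no original model argument is required here
loaded_model = InferenceOptimizer.load("./optimized_model_ov")

with torch.no_grad():
    output = loaded_model(torch.rand(1, 3, 224, 224))
```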