# Nano Tutorial
- [**BigDL-Nano PyTorch Training Quickstart**](./pytorch_train_quickstart.html)

> [View source on GitHub][Nano_pytorch_training]

In this guide we will describe how to scale out PyTorch training with BigDL-Nano.
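As a quick taste of what the tutorial covers, here is a minimal sketch. It assumes Nano's `Trainer` is a drop-in replacement for `pytorch_lightning.Trainer` and that the Nano-specific `num_processes` argument enables multi-process CPU training; the toy model and data are illustrative only, and the linked notebook is the authoritative version:

```python
import torch
from torch import nn
import pytorch_lightning as pl
from torch.utils.data import DataLoader, TensorDataset
from bigdl.nano.pytorch import Trainer  # drop-in replacement for pl.Trainer

class LitModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))

    def training_step(self, batch, batch_idx):
        x, y = batch
        return nn.functional.cross_entropy(self.net(x), y)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)

# toy data so the sketch is self-contained
dataset = TensorDataset(torch.randn(256, 1, 28, 28), torch.randint(0, 10, (256,)))
train_loader = DataLoader(dataset, batch_size=32)

# num_processes is the (assumed) Nano-specific knob for multi-process CPU training
trainer = Trainer(max_epochs=1, num_processes=2)
trainer.fit(LitModel(), train_loader)
```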
---------------------------
- [**BigDL-Nano PyTorch ONNXRuntime Acceleration Quickstart**](./pytorch_onnxruntime.html)

> [View source on GitHub][Nano_pytorch_onnxruntime]

In this guide we will describe how to apply ONNXRuntime acceleration to an inference pipeline with the APIs delivered by BigDL-Nano.
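A minimal sketch of the idea, assuming the `Trainer.trace` entry point with `accelerator="onnxruntime"` (the exact signature may vary across Nano versions; the linked notebook is authoritative):

```python
import torch
from torchvision.models import resnet18
from bigdl.nano.pytorch import Trainer

model = resnet18().eval()
x = torch.rand(1, 3, 224, 224)

# Trace the model into an ONNXRuntime-backed version for faster CPU inference
ort_model = Trainer.trace(model, accelerator="onnxruntime", input_sample=x)

with torch.no_grad():
    y = ort_model(x)
```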
---------------------------
- [**BigDL-Nano PyTorch OpenVINO Acceleration Quickstart**](./pytorch_openvino.html)

> [View source on GitHub][Nano_pytorch_openvino]

In this guide we will describe how to apply OpenVINO acceleration to an inference pipeline with the APIs delivered by BigDL-Nano.
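The OpenVINO path looks almost identical in this sketch; only the (assumed) `accelerator` argument changes, again subject to the exact API shown in the linked notebook:

```python
import torch
from torchvision.models import resnet18
from bigdl.nano.pytorch import Trainer

model = resnet18().eval()
x = torch.rand(1, 3, 224, 224)

# Same entry point as the ONNXRuntime case, but targeting the OpenVINO backend
ov_model = Trainer.trace(model, accelerator="openvino", input_sample=x)

with torch.no_grad():
    y = ov_model(x)
```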
---------------------------
- [**BigDL-Nano PyTorch Quantization with INC Quickstart**](./pytorch_quantization_inc.html)

> [View source on GitHub][Nano_pytorch_Quantization_inc]

In this guide we will describe how to obtain a quantized model with Intel Neural Compressor (INC) using the APIs delivered by BigDL-Nano.
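A minimal sketch of post-training quantization with the INC backend, assuming a `Trainer.quantize` entry point; the calibration-dataloader parameter name (written here as `calib_dataloader`) is an assumption that may differ between Nano versions:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from torchvision.models import resnet18
from bigdl.nano.pytorch import Trainer

model = resnet18().eval()

# small calibration set; the real tutorial uses a slice of the training data
calib_set = TensorDataset(torch.rand(32, 3, 224, 224), torch.randint(0, 1000, (32,)))
calib_loader = DataLoader(calib_set, batch_size=8)

# INT8 post-training quantization; INC is the default backend when no accelerator is given
q_model = Trainer.quantize(model, calib_dataloader=calib_loader)

with torch.no_grad():
    y = q_model(torch.rand(1, 3, 224, 224))
```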
---------------------------
- [**BigDL-Nano PyTorch Quantization with ONNXRuntime accelerator Quickstart**](./pytorch_quantization_inc_onnx.html)

> [View source on GitHub][Nano_pytorch_quantization_inc_onnx]

In this guide we will describe how to obtain a quantized model that runs inference on the ONNXRuntime engine with the APIs delivered by BigDL-Nano.
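A minimal sketch of the combined flow, assuming the same `Trainer.quantize` entry point accepts `accelerator="onnxruntime"` so that the quantized model runs on the ONNXRuntime engine (argument names here are assumptions; see the linked notebook):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from torchvision.models import resnet18
from bigdl.nano.pytorch import Trainer

model = resnet18().eval()
calib_set = TensorDataset(torch.rand(32, 3, 224, 224), torch.randint(0, 1000, (32,)))
calib_loader = DataLoader(calib_set, batch_size=8)

# Quantize with INC and hand the result to the ONNXRuntime engine for inference
q_ort_model = Trainer.quantize(model, accelerator="onnxruntime", calib_dataloader=calib_loader)

with torch.no_grad():
    y = q_ort_model(torch.rand(1, 3, 224, 224))
```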
---------------------------
- [**BigDL-Nano PyTorch Quantization with POT Quickstart**](./pytorch_quantization_openvino.html)

> [View source on GitHub][Nano_pytorch_quantization_openvino]

In this guide we will describe how to obtain a quantized model with the OpenVINO Post-training Optimization Tool (POT) using the APIs delivered by BigDL-Nano.
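A minimal sketch, assuming `Trainer.quantize` with `accelerator="openvino"` dispatches to OpenVINO's Post-training Optimization Tool (parameter names are assumptions; the linked notebook is authoritative):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from torchvision.models import resnet18
from bigdl.nano.pytorch import Trainer

model = resnet18().eval()
calib_set = TensorDataset(torch.rand(32, 3, 224, 224), torch.randint(0, 1000, (32,)))
calib_loader = DataLoader(calib_set, batch_size=8)

# POT-based INT8 quantization behind the (assumed) OpenVINO accelerator flag
q_ov_model = Trainer.quantize(model, accelerator="openvino", calib_dataloader=calib_loader)

with torch.no_grad():
    y = q_ov_model(torch.rand(1, 3, 224, 224))
```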
[Nano_pytorch_training]: <https://github.com/intel-analytics/BigDL/blob/main/python/nano/notebooks/pytorch/tutorial/pytorch_train.ipynb>
[Nano_pytorch_onnxruntime]: <https://github.com/intel-analytics/BigDL/blob/main/python/nano/notebooks/pytorch/tutorial/pytorch_inference_onnx.ipynb>
[Nano_pytorch_openvino]: <https://github.com/intel-analytics/BigDL/blob/main/python/nano/notebooks/pytorch/tutorial/pytorch_inference_openvino.ipynb>
[Nano_pytorch_Quantization_inc]: <https://github.com/intel-analytics/BigDL/blob/main/python/nano/notebooks/pytorch/tutorial/pytorch_quantization_inc.ipynb>
[Nano_pytorch_quantization_inc_onnx]: <https://github.com/intel-analytics/BigDL/blob/main/python/nano/notebooks/pytorch/tutorial/pytorch_quantization_inc.ipynb>
[Nano_pytorch_quantization_openvino]: <https://github.com/intel-analytics/BigDL/blob/main/python/nano/notebooks/pytorch/tutorial/pytorch_quantization_openvino.ipynb>