Nano How-to Guides
=========================

.. note::

    This page is still a work in progress. We are adding more guides.

In Nano How-to Guides, you can expect to find multiple task-oriented, bite-sized, and executable examples. These examples show how BigDL-Nano can help you accomplish various tasks smoothly.

PyTorch Inference
-------------------------

* `How to accelerate a PyTorch inference pipeline through ONNXRuntime <accelerate_pytorch_inference_onnx.html>`_
* `How to accelerate a PyTorch inference pipeline through OpenVINO <accelerate_pytorch_inference_openvino.html>`_
* `How to quantize your PyTorch model for inference using Intel Neural Compressor <quantize_pytorch_inference_inc.html>`_
* `How to quantize your PyTorch model for inference using OpenVINO Post-training Optimization Tools <quantize_pytorch_inference_pot.html>`_
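
As a quick taste of what these guides cover, below is a minimal sketch of both workflows built on BigDL-Nano's ``Trainer`` utilities. The model, input shape, and calibration data are illustrative placeholders, and the exact signatures and options (e.g. ``input_sample``, ``calib_dataloader``) may differ between versions, so treat the guides above and the API documentation as the source of truth:

.. code-block:: python

    import torch
    from torch.utils.data import DataLoader, TensorDataset
    from torchvision.models import resnet18
    from bigdl.nano.pytorch import Trainer

    model = resnet18()  # placeholder; use your own trained nn.Module
    model.eval()
    x = torch.rand(2, 3, 224, 224)

    # Accelerate inference: trace the FP32 model into an ONNXRuntime-backed
    # one; swapping in accelerator="openvino" picks the OpenVINO backend
    ort_model = Trainer.trace(model, accelerator="onnxruntime", input_sample=x)

    # Post-training quantization with dummy calibration data: Intel Neural
    # Compressor is the default backend; accelerator="openvino" switches to
    # OpenVINO's Post-training Optimization Tools
    calib_set = TensorDataset(torch.rand(32, 3, 224, 224),
                              torch.randint(0, 1000, (32,)))
    q_model = Trainer.quantize(model,
                               calib_dataloader=DataLoader(calib_set, batch_size=8))

    # The returned models act as drop-in replacements for inference
    with torch.no_grad():
        y = ort_model(x)
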
.. toctree::
    :maxdepth: 1
    :hidden:

    accelerate_pytorch_inference_onnx
    accelerate_pytorch_inference_openvino
    quantize_pytorch_inference_inc
    quantize_pytorch_inference_pot

Install
-------------------------

* `How to install BigDL-Nano in Google Colab <install_in_colab.html>`_
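
For reference, the installation itself is typically a single ``pip`` command run in a Colab cell (prefixed with ``!``); the ``[pytorch]`` extra below follows the convention in BigDL-Nano's installation docs, and the guide covers the environment setup and runtime-restart caveats:

.. code-block:: bash

    pip install bigdl-nano[pytorch]
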
.. toctree::
    :maxdepth: 1
    :hidden:

    install_in_colab