# Nano Tutorial
- [**BigDL-Nano PyTorch Training Quickstart**](./pytorch_train_quickstart.html)
> ![](../../../../image/GitHub-Mark-32px.png)[View source on GitHub][Nano_pytorch_training]
In this guide we will describe how to scale out PyTorch training programs using BigDL-Nano.
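
  As a hedged preview only: Nano's training quickstart centers on a drop-in `Trainer`. The `model` and `train_loader` objects below, and the `use_ipex` / `num_processes` arguments, are illustrative assumptions and may differ across Nano versions.

  ```python
  from bigdl.nano.pytorch import Trainer  # drop-in replacement for pytorch_lightning.Trainer

  # `model` is an existing LightningModule and `train_loader` a PyTorch DataLoader (assumed to exist)
  trainer = Trainer(max_epochs=5,
                    use_ipex=True,      # IPEX-based optimizations (argument name assumed)
                    num_processes=2)    # multi-instance training for scale-out (argument name assumed)
  trainer.fit(model, train_loader)
  ```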
---------------------------
- [**BigDL-Nano PyTorch ONNXRuntime Acceleration Quickstart**](./pytorch_onnxruntime.html)
> ![](../../../../image/GitHub-Mark-32px.png)[View source on GitHub][Nano_pytorch_onnxruntime]
In this guide we will describe how to apply ONNXRuntime acceleration to an inference pipeline with the APIs delivered by BigDL-Nano.
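
  A rough sketch of the flow (exact arguments may differ by Nano version): a trained model is traced into an ONNXRuntime-backed model. `model` and the input shape below are placeholders.

  ```python
  import torch
  from bigdl.nano.pytorch import Trainer

  # `model` is a trained torch.nn.Module (assumed); the input shape is illustrative
  ort_model = Trainer.trace(model,
                            accelerator="onnxruntime",
                            input_sample=torch.rand(1, 3, 224, 224))
  with torch.no_grad():
      y_hat = ort_model(torch.rand(2, 3, 224, 224))  # inference now runs through ONNXRuntime
  ```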
---------------------------
- [**BigDL-Nano PyTorch OpenVINO Acceleration Quickstart**](./pytorch_openvino.html)
> ![](../../../../image/GitHub-Mark-32px.png)[View source on GitHub][Nano_pytorch_openvino]
In this guide we will describe how to apply OpenVINO acceleration to an inference pipeline with the APIs delivered by BigDL-Nano.
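
  A rough sketch, under the same assumptions as the ONNXRuntime entry above, with the accelerator switched to OpenVINO:

  ```python
  import torch
  from bigdl.nano.pytorch import Trainer

  # `model` is a trained torch.nn.Module (assumed); the input shape is illustrative
  ov_model = Trainer.trace(model,
                           accelerator="openvino",
                           input_sample=torch.rand(1, 3, 224, 224))
  with torch.no_grad():
      y_hat = ov_model(torch.rand(2, 3, 224, 224))  # inference now runs through OpenVINO
  ```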
---------------------------
- [**BigDL-Nano PyTorch Quantization with INC Quickstart**](./pytorch_quantization_inc.html)
> ![](../../../../image/GitHub-Mark-32px.png)[View source on GitHub][Nano_pytorch_Quantization_inc]
In this guide we will describe how to obtain a quantized model with Intel Neural Compressor (INC) using the APIs delivered by BigDL-Nano.
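
  A minimal sketch of post-training INT8 quantization via INC; `model`, `calib_loader`, and the `calib_dataloader` argument name are assumptions.

  ```python
  from bigdl.nano.pytorch import Trainer

  # `model` is a trained torch.nn.Module, `calib_loader` a DataLoader used for calibration (assumed)
  q_model = Trainer.quantize(model,
                             precision="int8",
                             calib_dataloader=calib_loader)  # INC backend is used when no accelerator is set
  ```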
---------------------------
- [**BigDL-Nano PyTorch Quantization with ONNXRuntime accelerator Quickstart**](./pytorch_quantization_inc_onnx.html)
> ![](../../../../image/GitHub-Mark-32px.png)[View source on GitHub][Nano_pytorch_quantization_inc_onnx]
In this guide we will describe how to obtain a quantized model that runs inference in the ONNXRuntime engine, using the APIs delivered by BigDL-Nano.
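
  A hedged sketch of the same quantization flow, with the quantized model executed by ONNXRuntime; object names and argument names are assumptions.

  ```python
  import torch
  from bigdl.nano.pytorch import Trainer

  q_model = Trainer.quantize(model,
                             precision="int8",
                             accelerator="onnxruntime",      # quantized model executes in ONNXRuntime
                             calib_dataloader=calib_loader)
  y_hat = q_model(torch.rand(2, 3, 224, 224))
  ```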
---------------------------
- [**BigDL-Nano PyTorch Quantization with POT Quickstart**](./pytorch_quantization_openvino.html)
> ![](../../../../image/GitHub-Mark-32px.png)[View source on GitHub][Nano_pytorch_quantization_openvino]
In this guide we will describe how to obtain a quantized model with OpenVINO Post-training Optimization Tool (POT) using the APIs delivered by BigDL-Nano.
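
  A hedged sketch of POT-based quantization; as above, the object names and argument names are assumptions.

  ```python
  from bigdl.nano.pytorch import Trainer

  q_model = Trainer.quantize(model,
                             accelerator="openvino",         # post-training quantization via OpenVINO POT
                             calib_dataloader=calib_loader)
  ```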
---------------------------
- [**BigDL-Nano Hyperparameter Tuning (TensorFlow Sequential/Functional API) Quickstart**](../Tutorials/seq_and_func.html)
> ![](../../../../image/colab_logo_32px.png)[Run in Google Colab][Nano_hpo_tf_seq_func_colab]  ![](../../../../image/GitHub-Mark-32px.png)[View source on GitHub][Nano_hpo_tf_seq_func]
In this guide we will describe how to use Nano's built-in HPO utilities to do hyperparameter tuning for models defined with the TensorFlow Sequential or Functional API.
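
  A rough sketch of the Sequential/Functional HPO flow; the module paths, layer imports, and `search()` parameters below are assumptions, so treat the linked notebook as authoritative.

  ```python
  import bigdl.nano.automl.hpo.space as space
  from bigdl.nano.tf.keras import Sequential          # HPO-enabled Keras wrappers (import path assumed)
  from bigdl.nano.tf.keras.layers import Dense        # import path assumed

  model = Sequential()
  model.add(Dense(units=space.Categorical(8, 16), activation="relu"))  # search over the layer width
  model.add(Dense(units=10, activation="softmax"))
  model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

  # `x_train` / `y_train` are assumed to exist; run the search, then fit with the best configuration
  model.search(n_trials=2, target_metric="accuracy", direction="maximize",
               x=x_train, y=y_train, validation_split=0.2)
  model.fit(x_train, y_train, epochs=2)
  ```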
---------------------------
- [**BigDL-Nano Hyperparameter Tuning (TensorFlow Subclassing Model) Quickstart**](../Tutorials/custom.html)
> ![](../../../../image/colab_logo_32px.png)[Run in Google Colab][Nano_hpo_tf_subclassing_colab]  ![](../../../../image/GitHub-Mark-32px.png)[View source on GitHub][Nano_hpo_tf_subclassing]
In this guide we will describe how to use Nano's built-in HPO utilities to do hyperparameter tuning for models defined by subclassing `tf.keras.Model`.
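
  A rough sketch of the subclassing variant; the `@hpo.tfmodel()` decorator usage and `search()` parameters are assumptions, so see the linked notebook for the exact API.

  ```python
  import tensorflow as tf
  import bigdl.nano.automl.hpo as hpo
  import bigdl.nano.automl.hpo.space as space

  @hpo.tfmodel()                                # make the subclassed model searchable (decorator name assumed)
  class MyModel(tf.keras.Model):
      def __init__(self, hidden_units=16):
          super().__init__()
          self.dense1 = tf.keras.layers.Dense(hidden_units, activation="relu")
          self.dense2 = tf.keras.layers.Dense(10, activation="softmax")

      def call(self, inputs):
          return self.dense2(self.dense1(inputs))

  # place a search space on a constructor argument; `x_train` / `y_train` are assumed to exist
  model = MyModel(hidden_units=space.Categorical(16, 32))
  model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
  model.search(n_trials=2, target_metric="accuracy", direction="maximize",
               x=x_train, y=y_train, validation_split=0.2)
  model.fit(x_train, y_train, epochs=2)
  ```
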
[Nano_pytorch_training]: <https://github.com/intel-analytics/BigDL/blob/main/python/nano/notebooks/pytorch/tutorial/pytorch_train.ipynb>
[Nano_pytorch_onnxruntime]: <https://github.com/intel-analytics/BigDL/blob/main/python/nano/notebooks/pytorch/tutorial/pytorch_inference_onnx.ipynb>
[Nano_pytorch_openvino]: <https://github.com/intel-analytics/BigDL/blob/main/python/nano/notebooks/pytorch/tutorial/pytorch_inference_openvino.ipynb>
[Nano_pytorch_Quantization_inc]: <https://github.com/intel-analytics/BigDL/blob/main/python/nano/notebooks/pytorch/tutorial/pytorch_quantization_inc.ipynb>
[Nano_pytorch_quantization_inc_onnx]: <https://github.com/intel-analytics/BigDL/blob/main/python/nano/notebooks/pytorch/tutorial/pytorch_quantization_inc.ipynb>
[Nano_pytorch_quantization_openvino]: <https://github.com/intel-analytics/BigDL/blob/main/python/nano/notebooks/pytorch/tutorial/pytorch_quantization_openvino.ipynb>
[Nano_hpo_tf_seq_func]: <https://github.com/intel-analytics/BigDL/blob/main/python/nano/notebooks/hpo/seq_and_func.ipynb>
[Nano_hpo_tf_seq_func_colab]: <https://colab.research.google.com/github/intel-analytics/BigDL/blob/main/python/nano/notebooks/hpo/seq_and_func.ipynb>
[Nano_hpo_tf_subclassing]: <https://github.com/intel-analytics/BigDL/blob/main/python/nano/notebooks/hpo/custom.ipynb>
[Nano_hpo_tf_subclassing_colab]: <https://colab.research.google.com/github/intel-analytics/BigDL/blob/main/python/nano/notebooks/hpo/custom.ipynb>