# Nano Tutorial

- [**BigDL-Nano PyTorch Trainer Quickstart**](./pytorch_train_quickstart.html)

> [View source on GitHub][Nano_pytorch_training]

In this guide we will describe how to scale out PyTorch programs using the Nano Trainer.
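Below is a minimal, untested sketch of that workflow: `bigdl.nano.pytorch.Trainer` is used as a drop-in replacement for `pytorch_lightning.Trainer`, while the toy LightningModule and random data are illustrative only.
```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl
from bigdl.nano.pytorch import Trainer  # drop-in for pytorch_lightning.Trainer

class ToyModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(16, 2)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return nn.functional.cross_entropy(self.layer(x), y)

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.01)

loader = DataLoader(TensorDataset(torch.randn(256, 16),
                                  torch.randint(0, 2, (256,))),
                    batch_size=32)

# num_processes scales training out over multiple processes on one machine;
# use_ipex enables IPEX optimizations (both covered in the quickstart)
trainer = Trainer(max_epochs=1, num_processes=2, use_ipex=True)
trainer.fit(ToyModel(), loader)
```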
---------------------------
- [**BigDL-Nano PyTorch TorchNano Quickstart**](./pytorch_nano.html)

> [View source on GitHub][Nano_pytorch_nano]

In this guide we'll describe how to use BigDL-Nano to easily accelerate a custom training loop with very few code changes.
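The sketch below illustrates the `TorchNano` pattern with a toy model and random data (both illustrative only): subclass `TorchNano`, move the loop into `train()`, wrap objects with `self.setup()`, and call `self.backward(loss)`.
```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from bigdl.nano.pytorch import TorchNano

class MyNano(TorchNano):
    def train(self):
        # toy model, optimizer and data; a real loop would use your own objects
        model = nn.Linear(16, 2)
        optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
        loader = DataLoader(TensorDataset(torch.randn(256, 16),
                                          torch.randint(0, 2, (256,))),
                            batch_size=32)
        # setup() applies Nano's optimizations and handles placement
        model, optimizer, loader = self.setup(model, optimizer, loader)
        model.train()
        for x, y in loader:
            optimizer.zero_grad()
            loss = nn.functional.cross_entropy(model(x), y)
            self.backward(loss)  # instead of loss.backward()
            optimizer.step()

# the same acceleration knobs as the Trainer, e.g. multiple processes
MyNano(num_processes=2).train()
```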
---------------------------
- [**BigDL-Nano TensorFlow Training Quickstart**](./tensorflow_train_quickstart.html)

> [View source on GitHub][Nano_tensorflow_training]

In this guide we will describe how to accelerate the training of TensorFlow Keras applications with BigDL-Nano.
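A minimal, untested sketch of this drop-in usage is shown below; the toy model and data are illustrative, and the multi-instance argument passed to `fit()` is an assumption based on the linked quickstart.
```python
import numpy as np
import tensorflow as tf
from bigdl.nano.tf.keras import Sequential  # drop-in for tf.keras.Sequential

model = Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(16,)),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

x = np.random.rand(256, 16).astype("float32")
y = np.random.randint(0, 2, size=(256,))

# num_processes (assumed here) enables multi-instance training; see the guide
model.fit(x, y, batch_size=32, epochs=1, num_processes=2)
```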
---------------------------
- [**BigDL-Nano PyTorch ONNXRuntime Acceleration Quickstart**](./pytorch_onnxruntime.html)

> [View source on GitHub][Nano_pytorch_onnxruntime]

In this guide we will describe how to apply ONNXRuntime acceleration to an inference pipeline with the APIs delivered by BigDL-Nano.
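A minimal, untested sketch of this flow is shown below, assuming `trace(..., accelerator="onnxruntime")` on the Nano Trainer; the toy model and input are illustrative only.
```python
import torch
from torch import nn
from bigdl.nano.pytorch import Trainer

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
model.eval()

trainer = Trainer(max_epochs=1)
# input_sample tells the exporter the expected input shape/dtype
ort_model = trainer.trace(model, accelerator="onnxruntime",
                          input_sample=torch.randn(1, 16))

with torch.no_grad():
    preds = ort_model(torch.randn(8, 16))  # inference runs on ONNXRuntime
```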
---------------------------
- [**BigDL-Nano PyTorch OpenVINO Acceleration Quickstart**](./pytorch_openvino.html)

> [View source on GitHub][Nano_pytorch_openvino]

In this guide we will describe how to apply OpenVINO acceleration to an inference pipeline with the APIs delivered by BigDL-Nano.
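A minimal, untested sketch is shown below, assuming `trace(..., accelerator="openvino")` on the Nano Trainer; the toy model and input are illustrative only.
```python
import torch
from torch import nn
from bigdl.nano.pytorch import Trainer

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
model.eval()

trainer = Trainer(max_epochs=1)
ov_model = trainer.trace(model, accelerator="openvino",
                         input_sample=torch.randn(1, 16))

with torch.no_grad():
    preds = ov_model(torch.randn(8, 16))  # inference runs on OpenVINO
```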
---------------------------
- [**BigDL-Nano PyTorch Quantization with INC Quickstart**](./pytorch_quantization_inc.html)

> [View source on GitHub][Nano_pytorch_Quantization_inc]

In this guide we will describe how to obtain a quantized model with the APIs delivered by BigDL-Nano.
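A minimal, untested sketch of post-training quantization through Intel Neural Compressor (INC) is shown below; the toy model and calibration data are illustrative only.
```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from bigdl.nano.pytorch import Trainer

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
model.eval()

calib_loader = DataLoader(TensorDataset(torch.randn(256, 16),
                                        torch.randint(0, 2, (256,))),
                          batch_size=32)

trainer = Trainer(max_epochs=1)
# with no accelerator specified, quantization goes through INC and
# returns an int8 model calibrated on calib_dataloader
q_model = trainer.quantize(model, calib_dataloader=calib_loader)
preds = q_model(torch.randn(8, 16))
```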
---------------------------
- [**BigDL-Nano PyTorch Quantization with ONNXRuntime accelerator Quickstart**](./pytorch_quantization_inc_onnx.html)

> [View source on GitHub][Nano_pytorch_quantization_inc_onnx]

In this guide we will describe how to obtain a quantized model that runs inference on the ONNXRuntime engine, using the APIs delivered by BigDL-Nano.
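A minimal, untested sketch is shown below; compared with the plain INC flow, the only change assumed here is passing `accelerator="onnxruntime"` to `quantize()`, and the toy model and data are illustrative only.
```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from bigdl.nano.pytorch import Trainer

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
model.eval()

calib_loader = DataLoader(TensorDataset(torch.randn(256, 16),
                                        torch.randint(0, 2, (256,))),
                          batch_size=32)

trainer = Trainer(max_epochs=1)
# accelerator="onnxruntime" makes the quantized model run on the ORT engine
q_model = trainer.quantize(model, accelerator="onnxruntime",
                           calib_dataloader=calib_loader)
preds = q_model(torch.randn(8, 16))
```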
---------------------------
- [**BigDL-Nano PyTorch Quantization with POT Quickstart**](./pytorch_quantization_openvino.html)

> [View source on GitHub][Nano_pytorch_quantization_openvino]

In this guide we will describe how to obtain a quantized model with the OpenVINO Post-training Optimization Tool (POT) using the APIs delivered by BigDL-Nano.
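A minimal, untested sketch is shown below, assuming `accelerator="openvino"` selects the POT backend; the toy model and data are illustrative only.
```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from bigdl.nano.pytorch import Trainer

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
model.eval()

calib_loader = DataLoader(TensorDataset(torch.randn(256, 16),
                                        torch.randint(0, 2, (256,))),
                          batch_size=32)

trainer = Trainer(max_epochs=1)
# accelerator="openvino" is assumed to route quantization through OpenVINO POT
q_model = trainer.quantize(model, accelerator="openvino",
                           calib_dataloader=calib_loader)
preds = q_model(torch.randn(8, 16))
```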
---------------------------
- [**BigDL-Nano TensorFlow Quantization with INC Quickstart**](./tensorflow_quantization_quickstart.html)

> [View source on GitHub][Nano_tensorflow_quantization_inc]

In this guide we will demonstrate how to apply post-training quantization to a Keras model with BigDL-Nano.
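A minimal, untested sketch is shown below; the `quantize()` call and its `calib_dataset` argument follow the linked quickstart, and the toy model and data are illustrative only.
```python
import numpy as np
import tensorflow as tf
from bigdl.nano.tf.keras import Sequential

model = Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(16,)),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

x = np.random.rand(256, 16).astype("float32")
y = np.random.randint(0, 2, size=(256,))
model.fit(x, y, batch_size=32, epochs=1)

# quantize() (assumed per the quickstart) runs INC post-training quantization
# over a calibration tf.data.Dataset and returns an int8 model
calib_ds = tf.data.Dataset.from_tensor_slices((x, y)).batch(32)
q_model = model.quantize(calib_dataset=calib_ds)
```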
---------------------------
- [**BigDL-Nano TensorFlow SparseEmbedding and SparseAdam**](./tensorflow_embedding.html)

> [View source on GitHub][Nano_tensorflow_embedding]

In this guide we will demonstrate how to use SparseEmbedding and SparseAdam to obtain stronger performance with sparse gradients.
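A minimal, untested sketch of the pattern is shown below; the import paths for the sparse embedding layer and the `SparseAdam` optimizer are assumptions here, so follow the linked notebook for the authoritative API.
```python
import numpy as np
import tensorflow as tf
from bigdl.nano.tf.keras import Sequential
from bigdl.nano.tf.keras.layers import Embedding  # assumed import path
from bigdl.nano.tf.optimizers import SparseAdam   # assumed import path

model = Sequential([
    Embedding(input_dim=10000, output_dim=16, input_length=20),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
# SparseAdam only updates the embedding rows seen in each batch,
# avoiding a dense gradient over the whole embedding table
model.compile(optimizer=SparseAdam(), loss="binary_crossentropy")

x = np.random.randint(0, 10000, size=(512, 20))
y = np.random.randint(0, 2, size=(512,))
model.fit(x, y, batch_size=64, epochs=1)
```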
-------------------------
- [**BigDL-Nano Hyperparameter Tuning (TensorFlow Sequential/Functional API) Quickstart**](../Tutorials/seq_and_func.html)

> [Run in Google Colab][Nano_hpo_tf_seq_func_colab] [View source on GitHub][Nano_hpo_tf_seq_func]

In this guide we will describe how to use Nano's built-in HPO utils to do hyperparameter tuning for models defined using the TensorFlow Sequential or Functional API.
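A minimal, untested sketch of the flow is shown below; the module paths, the `enable_hpo_tf()` call, and the `search()`/`fit()` arguments are assumptions based on the linked notebook, which remains the authoritative reference.
```python
import numpy as np
import tensorflow as tf
import bigdl.nano.automl as nano_automl
import bigdl.nano.automl.hpo.space as space       # search-space definitions (assumed path)
from bigdl.nano.tf.keras import Sequential        # assumed import path

nano_automl.hpo_config.enable_hpo_tf()            # turn on HPO for tf.keras (assumed call)

# hyperparameters are declared as search spaces instead of fixed values
model = Sequential([
    tf.keras.layers.Dense(units=space.Categorical(16, 32, 64),
                          activation="relu", input_shape=(16,)),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

x = np.random.rand(256, 16).astype("float32")
y = np.random.randint(0, 2, size=(256,))

# search() runs the trials, then fit() trains with the best configuration found
model.search(n_trials=2, target_metric="accuracy", direction="maximize",
             x=x, y=y, validation_data=(x, y), batch_size=32, epochs=1)
model.fit(x, y, batch_size=32, epochs=1)
```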
---------------------------
- [**BigDL-Nano Hyperparameter Tuning (TensorFlow Subclassing Model) Quickstart**](../Tutorials/custom.html)

> [Run in Google Colab][Nano_hpo_tf_subclassing_colab] [View source on GitHub][Nano_hpo_tf_subclassing]

In this guide we will describe how to use Nano's built-in HPO utils to do hyperparameter tuning for models defined by subclassing tf.keras.Model.

[Nano_pytorch_training]: <https://github.com/intel-analytics/BigDL/blob/main/python/nano/notebooks/pytorch/tutorial/pytorch_train.ipynb>
[Nano_pytorch_nano]: <https://github.com/intel-analytics/BigDL/blob/main/python/nano/notebooks/pytorch/tutorial/pytorch_nano.ipynb>
[Nano_tensorflow_training]: <https://github.com/intel-analytics/BigDL/blob/main/python/nano/notebooks/tensorflow/tutorial/tensorflow_fit.ipynb>
[Nano_pytorch_onnxruntime]: <https://github.com/intel-analytics/BigDL/blob/main/python/nano/notebooks/pytorch/tutorial/pytorch_inference_onnx.ipynb>
[Nano_pytorch_openvino]: <https://github.com/intel-analytics/BigDL/blob/main/python/nano/notebooks/pytorch/tutorial/pytorch_inference_openvino.ipynb>
[Nano_pytorch_Quantization_inc]: <https://github.com/intel-analytics/BigDL/blob/main/python/nano/notebooks/pytorch/tutorial/pytorch_quantization_inc.ipynb>
[Nano_pytorch_quantization_inc_onnx]: <https://github.com/intel-analytics/BigDL/blob/main/python/nano/notebooks/pytorch/tutorial/pytorch_quantization_inc.ipynb>
[Nano_pytorch_quantization_openvino]: <https://github.com/intel-analytics/BigDL/blob/main/python/nano/notebooks/pytorch/tutorial/pytorch_quantization_openvino.ipynb>
[Nano_tensorflow_quantization_inc]: <https://github.com/intel-analytics/BigDL/blob/main/python/nano/notebooks/tensorflow/tutorial/tensorflow_quantization.ipynb>
[Nano_tensorflow_embedding]: <https://github.com/intel-analytics/BigDL/blob/main/python/nano/notebooks/tensorflow/tutorial/tensorflow_embedding.ipynb>
[Nano_hpo_tf_seq_func]: <https://github.com/intel-analytics/BigDL/blob/main/python/nano/notebooks/hpo/seq_and_func.ipynb>
[Nano_hpo_tf_seq_func_colab]: <https://colab.research.google.com/github/intel-analytics/BigDL/blob/main/python/nano/notebooks/hpo/seq_and_func.ipynb>
[Nano_hpo_tf_subclassing]: <https://github.com/intel-analytics/BigDL/blob/main/python/nano/notebooks/hpo/custom.ipynb>
[Nano_hpo_tf_subclassing_colab]: <https://colab.research.google.com/github/intel-analytics/BigDL/blob/main/python/nano/notebooks/hpo/custom.ipynb>