Add Nano doc structure and Nano overview (#4504)

* Add Nano doc structure and Nano overview

* add sections

* add unset environment variable instructions

* restructure

* address comments
Yang Wang 2022-05-09 17:12:44 +08:00 committed by GitHub
parent 24cf032a91
commit fd1da60251
3 changed files with 137 additions and 0 deletions

@@ -0,0 +1,92 @@
# Nano User Guide
## **1. Overview**
BigDL Nano is a Python package that transparently accelerates PyTorch and TensorFlow applications on Intel hardware. It provides a unified and easy-to-use API for several optimization techniques and tools, so that users need to apply only a few lines of code changes to make their PyTorch or TensorFlow code run faster.
---
## **2. Install**
BigDL-Nano can be installed using pip, and we recommend installing BigDL-Nano in a conda environment.
For PyTorch users, you can install bigdl-nano along with some dependencies specific to PyTorch using the following commands.
```bash
conda create -n env
conda activate env
pip install bigdl-nano[pytorch]
```
For TensorFlow users, you can install bigdl-nano along with some dependencies specific to TensorFlow using the following commands.
```bash
conda create -n env
conda activate env
pip install bigdl-nano[tensorflow]
```
After installing bigdl-nano, you can run the following command to set up a few environment variables.
```bash
source bigdl-nano-init
```
The `bigdl-nano-init` script will export a few environment variables according to your hardware to maximize performance.
In a conda environment, this script will also be added to `$CONDA_PREFIX/etc/conda/activate.d/`, so that it runs automatically whenever you activate the environment.
In a pure pip environment, you need to run `source bigdl-nano-init` every time you open a new shell to get optimal performance, and run `source bigdl-nano-unset-env` if you want to unset these environment variables.
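As a sanity check, you can inspect a few of the exported variables after sourcing the script. This is a minimal sketch: the exact set of variables depends on your hardware, and names such as `OMP_NUM_THREADS` and `LD_PRELOAD` below are typical examples rather than a guaranteed list.
```bash
# Inspect a few variables that bigdl-nano-init typically exports;
# the exact set depends on your hardware, these names are illustrative.
source bigdl-nano-init
echo "OMP_NUM_THREADS=$OMP_NUM_THREADS"
echo "LD_PRELOAD=$LD_PRELOAD"

# Undo the changes in the current shell when you are done.
source bigdl-nano-unset-env
```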
---
## **3. Get Started**
### **3.1 PyTorch**
BigDL-Nano supports both PyTorch and PyTorch Lightning models, and most optimizations require only changing a few "import" lines in your code and adding a few flags.
BigDL-Nano uses an extended version of the PyTorch Lightning trainer to integrate our optimizations.
For example, if you are using a LightningModule, you can use the following code to enable intel-extension-for-pytorch and multi-instance training.
```python
from bigdl.nano.pytorch import Trainer

net = create_lightning_model()
train_loader = create_training_loader()
# use_ipex=True enables intel-extension-for-pytorch;
# num_processes=4 enables multi-instance training
trainer = Trainer(max_epochs=1, use_ipex=True, num_processes=4)
trainer.fit(net, train_loader)
```
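If you have a plain (non-Lightning) PyTorch model, you can wrap it into a LightningModule before fitting. Below is a minimal sketch which assumes a `Trainer.compile` helper that combines a model, loss, and optimizer; check the Nano API reference for the exact signature, and note that `create_pytorch_model` and `create_training_loader` are placeholders.
```python
# A minimal sketch for plain PyTorch models, assuming a `Trainer.compile`
# helper that wraps a model, loss, and optimizer into a LightningModule.
import torch

from bigdl.nano.pytorch import Trainer

net = create_pytorch_model()             # a plain torch.nn.Module (placeholder)
train_loader = create_training_loader()  # placeholder data loader

lightning_module = Trainer.compile(
    net,
    loss=torch.nn.CrossEntropyLoss(),
    optimizer=torch.optim.Adam(net.parameters()),
)
trainer = Trainer(max_epochs=1, use_ipex=True, num_processes=4)
trainer.fit(lightning_module, train_loader)
```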
For more details on BigDL-Nano's PyTorch usage, please refer to the [PyTorch](../QuickStart/pytorch.md) page.
### **3.2 TensorFlow**
BigDL-Nano supports the `tensorflow.keras` API, and most optimizations require only changing a few "import" lines in your code and adding a few flags.
BigDL-Nano uses an extended version of `tf.keras.Model` or `tf.keras.Sequential` to integrate our optimizations.
For example, you can run multi-instance training using the following code:
```python
import tensorflow as tf
from bigdl.nano.tf.keras import Sequential

mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

model = Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
# num_processes=4 enables multi-instance training
model.fit(x_train, y_train, epochs=5, num_processes=4)
```
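The same pattern works with the functional API. Below is a minimal sketch, assuming `bigdl.nano.tf.keras.Model` mirrors `tf.keras.Model` while accepting Nano's extra `fit()` arguments; check the API reference for the exact behavior.
```python
# A minimal functional-API sketch, assuming bigdl.nano.tf.keras.Model
# mirrors tf.keras.Model while adding Nano's extra fit() arguments.
import tensorflow as tf
from bigdl.nano.tf.keras import Model

(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train / 255.0

inputs = tf.keras.Input(shape=(28, 28))
x = tf.keras.layers.Flatten()(inputs)
x = tf.keras.layers.Dense(128, activation='relu')(x)
outputs = tf.keras.layers.Dense(10, activation='softmax')(x)

model = Model(inputs=inputs, outputs=outputs)
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
# num_processes enables multi-instance training, as in the Sequential example
model.fit(x_train, y_train, epochs=5, num_processes=4)
```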
For more details on BigDL-Nano's TensorFlow usage, please refer to the [TensorFlow](../QuickStart/tensorflow.md) page.

@@ -0,0 +1,27 @@
# BigDL-Nano PyTorch Overview
BigDL-Nano can be used to accelerate PyTorch or PyTorch-Lightning applications for both training and inference workloads. The optimizations in BigDL-Nano are delivered through an extended version of the PyTorch-Lightning `Trainer`. These optimizations are either enabled by default, or can be easily turned on by setting a parameter or calling a method.
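For instance, here is a minimal sketch of both styles: the `use_ipex` and `num_processes` flags appear in the user guide above, while `trainer.quantize(...)` is an assumed name for the quantization entry point covered below, so check the API reference before relying on it.
```python
from bigdl.nano.pytorch import Trainer

# Enabled by setting a parameter when constructing the Trainer:
trainer = Trainer(max_epochs=1, use_ipex=True, num_processes=4)

# Enabled by calling a method (assumed signature, for illustration only):
# quantized_model = trainer.quantize(model, calib_dataloader=calib_loader)
```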
## PyTorch Training
### Best Known Configurations
### BigDL-Nano PyTorch Trainer
#### Intel® Extension for PyTorch
#### Multi-instance Training
### Optimized Data Pipeline
### Optimizers
### Notebooks
## PyTorch Inference
### Runtime Acceleration
### Quantization
### Notebooks

@@ -0,0 +1,18 @@
# Nano TensorFlow Overview
## TensorFlow Training
### Runtime Acceleration
intel-tensorflow, intel-openmp
### Optimized Layers
embedding
### Optimizers
SparseAdam
### Multi-Instance Training
## TensorFlow Inference
### Quantization