Nano: Update Nano PyTorch quick start doc (#4897)

* add BigDL-Nano PyTorch Quickstart

* Update BigDL-Nano PyTorch Quickstart

* Update BigDL-Nano PyTorch Quickstart

* Add Nano inference quickstart
- add inference onnx quickstart
- add inference openvino quickstart
- add quantization inc quickstart
- add quantization inc onnx quickstart
- add quantization openvino quickstart

* update quickstart

* update quickstart

* Add Nano quantization quickstart

* update nano docs

* Update docs

* Update Nano OpenVINO tutorial

* Update

* Update index.md

* Resize Images

* Resize Images

* Update

* Update Nano docs

* Update nano documents

* Update doc & notebook

* clear output of cells

* add unit tests for tutorial notebooks

* fix errors in yaml

* fix error in notebook

* reduce quantization time

* Update yaml

* Add unit test for tutorial

* Add tests for tutorial

* fix shell
This commit is contained in:
Mingzhi Hu 2022-07-04 10:10:21 +08:00 committed by GitHub
parent a6cab83afd
commit 798345123f
9 changed files with 620 additions and 0 deletions


@ -0,0 +1,53 @@
# Nano Tutorial
- [**BigDL-Nano PyTorch Training Quickstart**](./pytorch_train_quickstart.html)
> ![](../../../../image/GitHub-Mark-32px.png)[View source on GitHub][Nano_pytorch_training]
In this guide we will describe how to scale out PyTorch programs using Nano
---------------------------
- [**BigDL-Nano PyTorch ONNXRuntime Acceleration Quickstart**](./pytorch_onnxruntime.html)
> ![](../../../../image/GitHub-Mark-32px.png)[View source on GitHub][Nano_pytorch_onnxruntime]
In this guide we will describe how to apply ONNXRuntime acceleration to an inference pipeline with the APIs delivered by BigDL-Nano
---------------------------
- [**BigDL-Nano PyTorch OpenVINO Acceleration Quickstart**](./pytorch_openvino.html)
> ![](../../../../image/GitHub-Mark-32px.png)[View source on GitHub][Nano_pytorch_openvino]
In this guide we will describe how to apply OpenVINO acceleration to an inference pipeline with the APIs delivered by BigDL-Nano
---------------------------
- [**BigDL-Nano PyTorch Quantization with INC Quickstart**](./pytorch_quantization_inc.html)
> ![](../../../../image/GitHub-Mark-32px.png)[View source on GitHub][Nano_pytorch_Quantization_inc]
In this guide we will describe how to obtain a quantized model with the APIs delivered by BigDL-Nano
---------------------------
- [**BigDL-Nano PyTorch Quantization with ONNXRuntime accelerator Quickstart**](./pytorch_quantization_inc_onnx.html)
> ![](../../../../image/GitHub-Mark-32px.png)[View source on GitHub][Nano_pytorch_quantization_inc_onnx]
In this guide we will describe how to obtain a quantized model running inference in the ONNXRuntime engine with the APIs delivered by BigDL-Nano
---------------------------
- [**BigDL-Nano PyTorch Quantization with POT Quickstart**](./pytorch_quantization_openvino.html)
> ![](../../../../image/GitHub-Mark-32px.png)[View source on GitHub][Nano_pytorch_quantization_openvino]
In this guide we will describe how to obtain a quantized model with the APIs delivered by BigDL-Nano
[Nano_pytorch_training]: <https://github.com/intel-analytics/BigDL/blob/main/python/nano/notebooks/pytorch/tutorial/pytorch_train.ipynb>
[Nano_pytorch_onnxruntime]: <https://github.com/intel-analytics/BigDL/blob/main/python/nano/notebooks/pytorch/tutorial/pytorch_inference_onnx.ipynb>
[Nano_pytorch_openvino]: <https://github.com/intel-analytics/BigDL/blob/main/python/nano/notebooks/pytorch/tutorial/pytorch_inference_openvino.ipynb>
[Nano_pytorch_Quantization_inc]: <https://github.com/intel-analytics/BigDL/blob/main/python/nano/notebooks/pytorch/tutorial/pytorch_quantization_inc.ipynb>
[Nano_pytorch_quantization_inc_onnx]: <https://github.com/intel-analytics/BigDL/blob/main/python/nano/notebooks/pytorch/tutorial/pytorch_quantization_inc.ipynb>
[Nano_pytorch_quantization_openvino]: <https://github.com/intel-analytics/BigDL/blob/main/python/nano/notebooks/pytorch/tutorial/pytorch_quantization_openvino.ipynb>


@ -0,0 +1,89 @@
# BigDL-Nano PyTorch ONNXRuntime Acceleration Quickstart
**In this guide we will describe how to apply ONNXRuntime acceleration to an inference pipeline with the APIs delivered by BigDL-Nano in 4 simple steps**
### **Step 0: Prepare Environment**
We recommend using [conda](https://docs.conda.io/projects/conda/en/latest/user-guide/install/) to prepare the environment. Please refer to the [install guide](../../UserGuide/python.md) for more details.
```bash
conda create -n py37 python==3.7.10 setuptools==58.0.4
conda activate py37
# nightly built version
pip install --pre --upgrade bigdl-nano[pytorch]
# set env variables for your conda environment
source bigdl-nano-init
```
Before you start with the ONNXRuntime accelerator, install the following ONNX packages to set up your environment for ONNXRuntime acceleration.
```bash
pip install onnx onnxruntime
```
### **Step 1: Load the data**
```python
import torch
from torchvision.io import read_image
from torchvision import transforms
from torchvision.datasets import OxfordIIITPet
from torch.utils.data.dataloader import DataLoader
train_transform = transforms.Compose([transforms.Resize(256),
                                      transforms.RandomCrop(224),
                                      transforms.RandomHorizontalFlip(),
                                      transforms.ColorJitter(brightness=.5, hue=.3),
                                      transforms.ToTensor(),
                                      transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])
val_transform = transforms.Compose([transforms.Resize([224, 224]), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])
# Apply data augmentation to the train_dataset
train_dataset = OxfordIIITPet(root=".", transform=train_transform)
val_dataset = OxfordIIITPet(root=".", transform=val_transform)
# obtain training indices that will be used for validation
indices = torch.randperm(len(train_dataset))
val_size = len(train_dataset) // 4
train_dataset = torch.utils.data.Subset(train_dataset, indices[:-val_size])
val_dataset = torch.utils.data.Subset(val_dataset, indices[-val_size:])
# prepare data loaders
train_dataloader = DataLoader(train_dataset, batch_size=32)
```
### **Step 2: Prepare the Model**
```python
import torch
from torchvision.models import resnet18
from bigdl.nano.pytorch import Trainer
model_ft = resnet18(pretrained=True)
num_ftrs = model_ft.fc.in_features
# Here the size of each output sample is set to 37.
model_ft.fc = torch.nn.Linear(num_ftrs, 37)
loss_ft = torch.nn.CrossEntropyLoss()
optimizer_ft = torch.optim.SGD(model_ft.parameters(), lr=0.01, momentum=0.9, weight_decay=5e-4)
# Compile our model with loss function, optimizer.
model = Trainer.compile(model_ft, loss_ft, optimizer_ft)
trainer = Trainer(max_epochs=5)
trainer.fit(model, train_dataloader=train_dataloader)
# Inference/Prediction
x = torch.stack([val_dataset[0][0], val_dataset[1][0]])
model_ft.eval()
y_hat = model_ft(x)
y_hat.argmax(dim=1)
```
### **Step 3: Apply ONNXRuntime Acceleration**
When you're ready, simply append the following code to enable ONNXRuntime acceleration.
```python
# Trace your model as an ONNXRuntime model.
# The argument `input_sample` is not required in the following cases:
#   - you have run `trainer.fit` before tracing
#   - the model has `example_input_array` set
#   - the model is a LightningModule with any dataloader attached
from bigdl.nano.pytorch import Trainer
ort_model = Trainer.trace(model_ft, accelerator="onnxruntime", input_sample=torch.rand(1, 3, 224, 224))
# The usage is almost the same with any PyTorch module
y_hat = ort_model(x)
y_hat.argmax(dim=1)
```
- Note: `ort_model` is no longer trainable, so you cannot use it like `trainer.fit(ort_model, dataloader)`.
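
To get a rough idea of the speedup, you can time a forward pass of the original module against the traced one. The snippet below is a minimal sketch that only reuses objects defined above (`model_ft`, `ort_model` and `x`); absolute numbers will of course vary with your hardware.
```python
import time
import torch

def avg_latency(module, sample, n_iter=20):
    # one warm-up pass, then average the latency over n_iter forward passes
    module(sample)
    start = time.perf_counter()
    for _ in range(n_iter):
        module(sample)
    return (time.perf_counter() - start) / n_iter

with torch.no_grad():
    pytorch_ms = avg_latency(model_ft, x) * 1000
    onnx_ms = avg_latency(ort_model, x) * 1000
print(f"PyTorch: {pytorch_ms:.1f} ms/batch, ONNXRuntime: {onnx_ms:.1f} ms/batch")
```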


@ -0,0 +1,89 @@
# BigDL-Nano PyTorch OpenVINO Acceleration Quickstart
**In this guide we will describe how to apply OpenVINO acceleration to an inference pipeline with the APIs delivered by BigDL-Nano in 4 simple steps**
### **Step 0: Prepare Environment**
We recommend using [conda](https://docs.conda.io/projects/conda/en/latest/user-guide/install/) to prepare the environment. Please refer to the [install guide](../../UserGuide/python.md) for more details.
```bash
conda create -n py37 python==3.7.10 setuptools==58.0.4
conda activate py37
# nightly built version
pip install --pre --upgrade bigdl-nano[pytorch]
# set env variables for your conda environment
source bigdl-nano-init
```
To use OpenVINO acceleration, you have to install the OpenVINO toolkit:
```bash
pip install openvino-dev
```
### **Step 1: Load the data**
```python
import torch
from torchvision.io import read_image
from torchvision import transforms
from torchvision.datasets import OxfordIIITPet
from torch.utils.data.dataloader import DataLoader
train_transform = transforms.Compose([transforms.Resize(256),
                                      transforms.RandomCrop(224),
                                      transforms.RandomHorizontalFlip(),
                                      transforms.ColorJitter(brightness=.5, hue=.3),
                                      transforms.ToTensor(),
                                      transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])
val_transform = transforms.Compose([transforms.Resize([224, 224]), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])
# Apply data augmentation to the train_dataset
# (quantization using POT currently expects a tensor as the label)
train_dataset = OxfordIIITPet(root=".",
                              transform=train_transform,
                              target_transform=transforms.Lambda(lambda label: torch.tensor(label, dtype=torch.long)))
val_dataset = OxfordIIITPet(root=".", transform=val_transform)
# obtain training indices that will be used for validation
indices = torch.randperm(len(train_dataset))
val_size = len(train_dataset) // 4
train_dataset = torch.utils.data.Subset(train_dataset, indices[:-val_size])
val_dataset = torch.utils.data.Subset(val_dataset, indices[-val_size:])
# prepare data loaders
train_dataloader = DataLoader(train_dataset, batch_size=32)
```
### **Step 2: Prepare the Model**
```python
import torch
from torchvision.models import resnet18
from bigdl.nano.pytorch import Trainer
model_ft = resnet18(pretrained=True)
num_ftrs = model_ft.fc.in_features
# Here the size of each output sample is set to 37.
model_ft.fc = torch.nn.Linear(num_ftrs, 37)
loss_ft = torch.nn.CrossEntropyLoss()
optimizer_ft = torch.optim.SGD(model_ft.parameters(), lr=0.01, momentum=0.9, weight_decay=5e-4)
# Compile our model with loss function, optimizer.
model = Trainer.compile(model_ft, loss_ft, optimizer_ft)
trainer = Trainer(max_epochs=5)
trainer.fit(model, train_dataloader=train_dataloader)
# Inference/Prediction
x = torch.stack([val_dataset[0][0], val_dataset[1][0]])
model_ft.eval()
y_hat = model_ft(x)
y_hat.argmax(dim=1)
```
### **Step 3: Apply OpenVINO Acceleration**
When you're ready, simply append the following code to enable OpenVINO acceleration.
```python
# Trace your model as an OpenVINO model.
# The argument `input_sample` is not required in the following cases:
#   - you have run `trainer.fit` before tracing
#   - the model has `example_input_array` set
from bigdl.nano.pytorch import Trainer
ov_model = Trainer.trace(model_ft, accelerator="openvino", input_sample=torch.rand(1, 3, 224, 224))
# The usage is almost the same with any PyTorch module
y_hat = ov_model(x)
y_hat.argmax(dim=1)
```
- Note: `ov_model` is no longer trainable, so you cannot use it like `trainer.fit(ov_model, dataloader)`.
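
Because the traced model behaves like a regular PyTorch module for inference, you can also run it over a whole `DataLoader`. The sketch below only reuses `ov_model` and `val_dataset` from above, and assumes the traced model accepts arbitrary batch sizes (the two-sample batch above suggests the batch dimension is dynamic).
```python
import torch
from torch.utils.data.dataloader import DataLoader

val_dataloader = DataLoader(val_dataset, batch_size=32)
predictions = []
for images, _ in val_dataloader:
    # each forward pass returns logits for one batch of validation images
    predictions.append(ov_model(images).argmax(dim=1))
predictions = torch.cat(predictions)
print(predictions.shape)
```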


@ -0,0 +1,88 @@
# BigDL-Nano PyTorch Quantization with INC Quickstart
**In this guide we will describe how to obtain a quantized model with the APIs delivered by BigDL-Nano in 4 simple steps**
### **Step 0: Prepare Environment**
We recommend using [conda](https://docs.conda.io/projects/conda/en/latest/user-guide/install/) to prepare the environment. Please refer to the [install guide](../../UserGuide/python.md) for more details.
```bash
conda create -n py37 python==3.7.10 setuptools==58.0.4
conda activate py37
# nightly built version
pip install --pre --upgrade bigdl-nano[pytorch]
# set env variables for your conda environment
source bigdl-nano-init
```
By default, Intel Neural Compressor is not installed with BigDL-Nano, so if you decide to use it as your quantization backend, you'll need to install it first:
```bash
pip install neural-compressor==1.11
```
### **Step 1: Load the data**
```python
import torch
from torchvision.io import read_image
from torchvision import transforms
from torchvision.datasets import OxfordIIITPet
from torch.utils.data.dataloader import DataLoader
train_transform = transforms.Compose([transforms.Resize(256),
                                      transforms.RandomCrop(224),
                                      transforms.RandomHorizontalFlip(),
                                      transforms.ColorJitter(brightness=.5, hue=.3),
                                      transforms.ToTensor(),
                                      transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])
val_transform = transforms.Compose([transforms.Resize([224, 224]), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])
# Apply data augmentation to the train_dataset
train_dataset = OxfordIIITPet(root=".", transform=train_transform)
val_dataset = OxfordIIITPet(root=".", transform=val_transform)
# obtain training indices that will be used for validation
indices = torch.randperm(len(train_dataset))
val_size = len(train_dataset) // 4
train_dataset = torch.utils.data.Subset(train_dataset, indices[:-val_size])
val_dataset = torch.utils.data.Subset(val_dataset, indices[-val_size:])
# prepare data loaders
train_dataloader = DataLoader(train_dataset, batch_size=32)
```
### **Step 2: Prepare the Model**
```python
import torch
from torchvision.models import resnet18
from bigdl.nano.pytorch import Trainer
from torchmetrics import Accuracy
model_ft = resnet18(pretrained=True)
num_ftrs = model_ft.fc.in_features
# Here the size of each output sample is set to 37.
model_ft.fc = torch.nn.Linear(num_ftrs, 37)
loss_ft = torch.nn.CrossEntropyLoss()
optimizer_ft = torch.optim.SGD(model_ft.parameters(), lr=0.01, momentum=0.9, weight_decay=5e-4)
# Compile our model with loss function, optimizer.
model = Trainer.compile(model_ft, loss_ft, optimizer_ft, metrics=[Accuracy])
trainer = Trainer(max_epochs=5)
trainer.fit(model, train_dataloader=train_dataloader)
# Inference/Prediction
x = torch.stack([val_dataset[0][0], val_dataset[1][0]])
model_ft.eval()
y_hat = model_ft(x)
y_hat.argmax(dim=1)
```
### **Step 3: Quantization using Intel Neural Compressor**
Quantization is widely used to compress models to a lower precision, which not only reduces the model size but also accelerates inference. BigDL-Nano provides the `Trainer.quantize()` API for users to quickly obtain a quantized model with accuracy control by specifying a few arguments.
Without an extra accelerator, `Trainer.quantize()` returns a PyTorch module with the desired precision and accuracy. You can add quantization as below:
```python
from torchmetrics.functional import accuracy
q_model = trainer.quantize(model, calib_dataloader=train_dataloader, metric=accuracy)
# run simple prediction
y_hat = q_model(x)
y_hat.argmax(dim=1)
```
This is the most basic usage: it quantizes the model with the default settings (INT8 precision) and does not search the tuning space to control the accuracy drop.
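If you want to bound the accuracy drop rather than rely on the defaults, `Trainer.quantize()` also accepts tuning-related arguments. The sketch below is illustrative only: the argument names `accuracy_criterion` and `max_trials` are our reading of the Nano/INC API and should be verified against the API reference for your version.
```python
from torchmetrics.functional import accuracy

# Assumed arguments: allow at most a 1% relative accuracy drop and
# try at most 10 tuning configurations before giving up.
q_model_tuned = trainer.quantize(model,
                                 calib_dataloader=train_dataloader,
                                 metric=accuracy,
                                 accuracy_criterion={'relative': 0.01, 'higher_is_better': True},
                                 max_trials=10)
y_hat = q_model_tuned(x)
y_hat.argmax(dim=1)
```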


@ -0,0 +1,86 @@
# BigDL-Nano PyTorch Quantization with ONNXRuntime accelerator Quickstart
**In this guide we will describe how to obtain a quantized model running inference in the ONNXRuntime engine with the APIs delivered by BigDL-Nano in 4 simple steps**
### **Step 0: Prepare Environment**
We recommend using [conda](https://docs.conda.io/projects/conda/en/latest/user-guide/install/) to prepare the environment. Please refer to the [install guide](../../UserGuide/python.md) for more details.
```bash
conda create -n py37 python==3.7.10 setuptools==58.0.4
conda activate py37
# nightly built version
pip install --pre --upgrade bigdl-nano[pytorch]
# set env variables for your conda environment
source bigdl-nano-init
```
To quantize the model using ONNXRuntime as the backend, you need to install Intel Neural Compressor, onnxruntime-extensions (a dependency of INC), and the ONNX packages below:
```bash
pip install neural-compressor==1.11
pip install onnx onnxruntime onnxruntime-extensions
```
### **Step 1: Load the data**
```python
import torch
from torchvision.io import read_image
from torchvision import transforms
from torchvision.datasets import OxfordIIITPet
from torch.utils.data.dataloader import DataLoader
train_transform = transforms.Compose([transforms.Resize(256),
                                      transforms.RandomCrop(224),
                                      transforms.RandomHorizontalFlip(),
                                      transforms.ColorJitter(brightness=.5, hue=.3),
                                      transforms.ToTensor(),
                                      transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])
val_transform = transforms.Compose([transforms.Resize([224, 224]), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])
# Apply data augmentation to the train_dataset
train_dataset = OxfordIIITPet(root=".", transform=train_transform)
val_dataset = OxfordIIITPet(root=".", transform=val_transform)
# obtain training indices that will be used for validation
indices = torch.randperm(len(train_dataset))
val_size = len(train_dataset) // 4
train_dataset = torch.utils.data.Subset(train_dataset, indices[:-val_size])
val_dataset = torch.utils.data.Subset(val_dataset, indices[-val_size:])
train_dataloader = DataLoader(train_dataset, batch_size=32)
```
### **Step 2: Prepare your Model**
```python
import torch
from torchvision.models import resnet18
from bigdl.nano.pytorch import Trainer
from torchmetrics import Accuracy
model_ft = resnet18(pretrained=True)
num_ftrs = model_ft.fc.in_features
# Here the size of each output sample is set to 37.
model_ft.fc = torch.nn.Linear(num_ftrs, 37)
loss_ft = torch.nn.CrossEntropyLoss()
optimizer_ft = torch.optim.SGD(model_ft.parameters(), lr=0.01, momentum=0.9, weight_decay=5e-4)
# Compile our model with loss function, optimizer.
model = Trainer.compile(model_ft, loss_ft, optimizer_ft, metrics=[Accuracy])
trainer = Trainer(max_epochs=5)
trainer.fit(model, train_dataloader=train_dataloader)
# Inference/Prediction
x = torch.stack([val_dataset[0][0], val_dataset[1][0]])
model_ft.eval()
y_hat = model_ft(x)
y_hat.argmax(dim=1)
```
### **Step 3: Quantization with ONNXRuntime accelerator**
With the ONNXRuntime accelerator, `Trainer.quantize()` will return a model with compressed precision that runs inference in the ONNXRuntime engine. You can add quantization as below:
```python
from torchmetrics.functional import accuracy
ort_q_model = trainer.quantize(model, accelerator='onnxruntime', calib_dataloader=train_dataloader, metric=accuracy)
# run simple prediction
y_hat = ort_q_model(x)
y_hat.argmax(dim=1)
```
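To sanity-check that quantization has not cost too much accuracy, you can compare the quantized model against the original FP32 model on the validation split. A minimal sketch using only plain PyTorch and the objects defined above (`model_ft`, `ort_q_model`, `val_dataset`):
```python
import torch
from torch.utils.data.dataloader import DataLoader

val_dataloader = DataLoader(val_dataset, batch_size=32)
correct_fp32 = correct_int8 = total = 0
model_ft.eval()
with torch.no_grad():
    for images, labels in val_dataloader:
        # count top-1 hits for the FP32 model and the quantized ONNXRuntime model
        correct_fp32 += (model_ft(images).argmax(dim=1) == labels).sum().item()
        correct_int8 += (ort_q_model(images).argmax(dim=1) == labels).sum().item()
        total += labels.size(0)
print(f"FP32 accuracy: {correct_fp32 / total:.4f}, INT8 accuracy: {correct_int8 / total:.4f}")
```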


@ -0,0 +1,85 @@
# BigDL-Nano PyTorch Quantization with POT Quickstart
**In this guide we will describe how to obtain a quantized model with the APIs delivered by BigDL-Nano in 4 simple steps**
### **Step 0: Prepare Environment**
We recommend using [conda](https://docs.conda.io/projects/conda/en/latest/user-guide/install/) to prepare the environment. Please refer to the [install guide](../../UserGuide/python.md) for more details.
```bash
conda create -n py37 python==3.7.10 setuptools==58.0.4
conda activate py37
# nightly built version
pip install --pre --upgrade bigdl-nano[pytorch]
# set env variables for your conda environment
source bigdl-nano-init
```
POT (Post-training Optimization Tool) is provided by the OpenVINO toolkit. To use POT, you need to install OpenVINO:
```bash
pip install openvino-dev
```
### **Step 1: Load the data**
```python
import torch
from torchvision.io import read_image
from torchvision import transforms
from torchvision.datasets import OxfordIIITPet
from torch.utils.data.dataloader import DataLoader
train_transform = transforms.Compose([transforms.Resize(256),
                                      transforms.RandomCrop(224),
                                      transforms.RandomHorizontalFlip(),
                                      transforms.ColorJitter(brightness=.5, hue=.3),
                                      transforms.ToTensor(),
                                      transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])
val_transform = transforms.Compose([transforms.Resize([224, 224]), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])
# Apply data augmentation to the train_dataset
# (quantization using POT expects a tensor as the label)
train_dataset = OxfordIIITPet(root=".",
                              transform=train_transform,
                              target_transform=transforms.Lambda(lambda label: torch.tensor(label, dtype=torch.long)))
val_dataset = OxfordIIITPet(root=".", transform=val_transform)
# obtain training indices that will be used for validation
indices = torch.randperm(len(train_dataset))
val_size = len(train_dataset) // 4
train_dataset = torch.utils.data.Subset(train_dataset, indices[:-val_size])
val_dataset = torch.utils.data.Subset(val_dataset, indices[-val_size:])
# prepare data loaders
train_dataloader = DataLoader(train_dataset, batch_size=32)
```
### **Step 2: Prepare the Model**
```python
import torch
from torchvision.models import resnet18
from bigdl.nano.pytorch import Trainer
model_ft = resnet18(pretrained=True)
num_ftrs = model_ft.fc.in_features
# Here the size of each output sample is set to 37.
model_ft.fc = torch.nn.Linear(num_ftrs, 37)
loss_ft = torch.nn.CrossEntropyLoss()
optimizer_ft = torch.optim.SGD(model_ft.parameters(), lr=0.01, momentum=0.9, weight_decay=5e-4)
# Compile our model with loss function, optimizer.
model = Trainer.compile(model_ft, loss_ft, optimizer_ft)
trainer = Trainer(max_epochs=5)
trainer.fit(model, train_dataloader=train_dataloader)
# Inference/Prediction
x = torch.stack([val_dataset[0][0], val_dataset[1][0]])
model_ft.eval()
y_hat = model_ft(x)
y_hat.argmax(dim=1)
```
### **Step 3: Quantization using Post-training Optimization Tools**
Setting `accelerator='openvino'` means using OpenVINO POT for quantization. The quantization can be added as below:
```python
ov_q_model = trainer.quantize(model, accelerator="openvino", calib_dataloader=train_dataloader)
# run simple prediction with the quantized model
batch = torch.stack([val_dataset[0][0], val_dataset[1][0]])
ov_q_model(batch)
```


@ -0,0 +1,129 @@
# BigDL-Nano PyTorch Training Quickstart
**In this guide we will describe how to scale out PyTorch programs using Nano in 5 simple steps**
### **Step 0: Prepare Environment**
We recommend using [conda](https://docs.conda.io/projects/conda/en/latest/user-guide/install/) to prepare the environment. Please refer to the [install guide](../../UserGuide/python.md) for more details.
```bash
conda create -n py37 python==3.7.10 setuptools==58.0.4
conda activate py37
# nightly built version
pip install --pre --upgrade bigdl-nano[pytorch]
# set env variables for your conda environment
source bigdl-nano-init
pip install lightning-bolts
```
### **Step 1: Import BigDL-Nano**
The PyTorch Trainer (`bigdl.nano.pytorch.Trainer`) is the place where we integrate most optimizations. It extends PyTorch Lightning's Trainer and has a few more parameters and methods specific to BigDL-Nano. The Trainer can be directly used to train a `LightningModule`.
```python
from bigdl.nano.pytorch import Trainer
```
Computer vision tasks often need a data processing pipeline that sometimes constitutes a non-trivial part of the whole training pipeline. Leveraging OpenCV and libjpeg-turbo, BigDL-Nano can accelerate computer vision data pipelines by providing a drop-in replacement for torchvision's `datasets` and `transforms`.
```python
from bigdl.nano.pytorch.vision import transforms
```
### **Step 2: Load the Data**
You can define the datamodule using a standard [LightningDataModule](https://pytorch-lightning.readthedocs.io/en/latest/data/datamodule.html):
```python
import os
from pl_bolts.datamodules import CIFAR10DataModule

# the batch size is reused later when computing steps_per_epoch for the LR scheduler
BATCH_SIZE = 64
train_transforms = transforms.Compose(
    [
        transforms.RandomCrop(32, 4),
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor()
    ]
)
cifar10_dm = CIFAR10DataModule(
    data_dir=os.environ.get('DATA_PATH', '.'),
    batch_size=BATCH_SIZE,
    train_transforms=train_transforms
)
```
### **Step 3: Define the Model**
You may define your model, loss and optimizer in the same way as in any standard PyTorch Lightning program.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision
from torch.optim.lr_scheduler import OneCycleLR
from pytorch_lightning import LightningModule

def create_model():
    model = torchvision.models.resnet18(pretrained=False, num_classes=10)
    model.conv1 = nn.Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    model.maxpool = nn.Identity()
    return model

class LitResnet(LightningModule):
    def __init__(self, learning_rate=0.05, num_processes=1):
        super().__init__()
        self.save_hyperparameters()
        self.model = create_model()

    def forward(self, x):
        out = self.model(x)
        return F.log_softmax(out, dim=1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        logits = self(x)
        loss = F.nll_loss(logits, y)
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        optimizer = torch.optim.SGD(
            self.parameters(),
            lr=self.hparams.learning_rate,
            momentum=0.9,
            weight_decay=5e-4,
        )
        steps_per_epoch = 45000 // BATCH_SIZE // self.hparams.num_processes
        scheduler_dict = {
            "scheduler": OneCycleLR(
                optimizer,
                0.1,
                epochs=self.trainer.max_epochs,
                steps_per_epoch=steps_per_epoch,
            ),
            "interval": "step",
        }
        return {"optimizer": optimizer, "lr_scheduler": scheduler_dict}
```
For regular PyTorch modules, we also provide a `compile` method that takes in a PyTorch module, an optimizer, and other PyTorch objects and "compiles" them into a `LightningModule`, as shown in the sketch below. You can find more information [here](https://bigdl.readthedocs.io/en/latest/doc/PythonAPI/Nano/pytorch.html#bigdl-nano-pytorch).
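As a minimal sketch of that workflow (using a plain torchvision ResNet purely as an example, mirroring the `Trainer.compile` calls in the inference quickstarts):
```python
import torch
from torchvision.models import resnet18
from bigdl.nano.pytorch import Trainer

# a plain PyTorch module, loss function and optimizer ...
net = resnet18(pretrained=False, num_classes=10)
loss = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(net.parameters(), lr=0.01, momentum=0.9)

# ... "compiled" into a LightningModule that the Nano Trainer can fit directly
lit_model = Trainer.compile(net, loss, optimizer)
```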
### **Step 4: Fit with Nano PyTorch Trainer**
```python
model = LitResnet(learning_rate=0.05)
single_trainer = Trainer(max_epochs=30)
single_trainer.fit(model, datamodule=cifar10_dm)
```
At this stage, you may already experience some speedup thanks to the optimized environment variables set by `source bigdl-nano-init`. In addition, you can enable further optimizations delivered by BigDL-Nano by setting a parameter or calling a method to accelerate PyTorch or PyTorch Lightning applications on training workloads.
#### Increase the number of processes in distributed training to accelerate training.
```python
model = LitResnet(learning_rate=0.1, num_processes=4)
single_trainer = Trainer(max_epochs=30, num_processes=4)
single_trainer.fit(model, datamodule=cifar10_dm)
```
- Note: Here we use the linear scaling rule to improve the performance of the model in distributed training. You can find more useful tricks for distributed training in the [paper](https://arxiv.org/abs/1706.02677) published by Facebook AI Research (FAIR).<br>
- Note: If you're using a step-related `lr_scheduler`, its `steps_per_epoch` value needs to be adjusted accordingly (see the short sketch after the figure), or the learning rate may not change as expected. The change in learning rate is shown in the following figure, where the blue line is the expected change and the red one is the case where `steps_per_epoch` remains unchanged.
![](../Image/learning_rate.png)
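The `steps_per_epoch` adjustment in the note above is plain arithmetic, and `configure_optimizers` already accounts for it through `self.hparams.num_processes`; a short sketch of the numbers involved, reusing the constants from the code above:
```python
# With N processes each process sees roughly 1/N of the training data per epoch,
# so the per-process scheduler steps shrink accordingly.
BATCH_SIZE = 64
for num_processes in (1, 4):
    steps_per_epoch = 45000 // BATCH_SIZE // num_processes
    print(f"{num_processes} process(es): steps_per_epoch = {steps_per_epoch}")
# 1 process(es): steps_per_epoch = 703
# 4 process(es): steps_per_epoch = 175
```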
#### Intel Extension for PyTorch (a.k.a. IPEX) extends PyTorch with optimizations for an extra performance boost on Intel hardware. BigDL-Nano integrates IPEX through the Trainer. Users can turn on IPEX by setting `use_ipex=True`.
```python
model = LitResnet(learning_rate=0.1, num_processes=4)
single_trainer = Trainer(max_epochs=30, num_processes=4, use_ipex=True)
single_trainer.fit(model, datamodule=cifar10_dm)
```
Get more information about the optimizations from [here](https://bigdl.readthedocs.io/en/latest/doc/PythonAPI/Nano/pytorch.html#bigdl-nano-pytorch)
You can find the detailed result of training from [here](https://github.com/intel-analytics/BigDL/blob/main/python/nano/notebooks/pytorch/tutorial/pytorch_train.ipynb)


@ -52,6 +52,7 @@ BigDL Documentation
doc/Nano/QuickStart/tensorflow_inference.md
doc/Nano/QuickStart/hpo.rst
doc/Nano/Overview/known_issues.md
doc/Nano/QuickStart/index.md
.. toctree::
:maxdepth: 1