From 6a8cdd71de41f35d04d78ab041ae7a6a96b42955 Mon Sep 17 00:00:00 2001
From: Yishuo Wang
Date: Mon, 24 Oct 2022 14:04:02 +0800
Subject: [PATCH] fix torch_nano document link error and small change (#6257)

---
 .../source/doc/Nano/Overview/pytorch_train.md         |  2 --
 docs/readthedocs/source/doc/Nano/QuickStart/index.md  |  2 +-
 .../source/doc/Nano/QuickStart/pytorch_nano.md        | 10 +++++-----
 3 files changed, 6 insertions(+), 8 deletions(-)

diff --git a/docs/readthedocs/source/doc/Nano/Overview/pytorch_train.md b/docs/readthedocs/source/doc/Nano/Overview/pytorch_train.md
index 1a13e7bf..4dc61658 100644
--- a/docs/readthedocs/source/doc/Nano/Overview/pytorch_train.md
+++ b/docs/readthedocs/source/doc/Nano/Overview/pytorch_train.md
@@ -74,8 +74,6 @@ class MyNano(TorchNano) :
 MyNano().train(...)
 ```
 
-- note: see [this tutorial](./pytorch_nano.html) for details about our `TorchNano`.
-
 Our `TorchNano` also integrates IPEX and distributed training optimizations. For example,
 
 ```python
diff --git a/docs/readthedocs/source/doc/Nano/QuickStart/index.md b/docs/readthedocs/source/doc/Nano/QuickStart/index.md
index 110dbd92..d3cd9e0f 100644
--- a/docs/readthedocs/source/doc/Nano/QuickStart/index.md
+++ b/docs/readthedocs/source/doc/Nano/QuickStart/index.md
@@ -11,7 +11,7 @@
 
 > ![](../../../../image/GitHub-Mark-32px.png)[View source on GitHub][Nano_pytorch_nano]
 
-In this guide we'll describe how to use BigDL-Nano to accelerate custom training loop easily with very few changes
+In this guide we will describe how to use BigDL-Nano to accelerate custom training loop easily with very few changes
 
 ---------------------------
 
diff --git a/docs/readthedocs/source/doc/Nano/QuickStart/pytorch_nano.md b/docs/readthedocs/source/doc/Nano/QuickStart/pytorch_nano.md
index d32c1a28..2c4feded 100644
--- a/docs/readthedocs/source/doc/Nano/QuickStart/pytorch_nano.md
+++ b/docs/readthedocs/source/doc/Nano/QuickStart/pytorch_nano.md
@@ -2,7 +2,7 @@
 
 **In this guide we'll demonstrate how to use BigDL-Nano to accelerate custom train loop easily with very few changes.**
 
-### **Step 0: Prepare Environment**
+### Step 0: Prepare Environment
 
 We recommend using [conda](https://docs.conda.io/projects/conda/en/latest/user-guide/install/) to prepare the environment. Please refer to the [install guide](../../UserGuide/python.md) for more details.
 
@@ -15,7 +15,7 @@
 pip install --pre --upgrade bigdl-nano[pytorch]
 source bigdl-nano-init
 ```
 
-### **Step 1: Load the Data**
+### Step 1: Load the Data
 
 Import Cifar10 dataset from torch_vision and modify the train transform. You could access [CIFAR10](https://www.cs.toronto.edu/~kriz/cifar.html) for a view of the whole dataset.
 
@@ -49,7 +49,7 @@
     return train_loader
 ```
 
-### **Step 2: Define the Model**
+### Step 2: Define the Model
 
 You may define your model in the same way as the standard PyTorch models.
 
@@ -70,7 +70,7 @@ class ResNet18(nn.Module):
         return self.model(x)
 ```
 
-### **Step 3: Define Train Loop**
+### Step 3: Define Train Loop
 
 Suppose the custom train loop is as follows:
 
@@ -149,7 +149,7 @@ class MyNano(TorchNano):
         print(f'avg_loss: {total_loss / num}')
 ```
 
-### **Step 4: Run with Nano TorchNano**
+### Step 4: Run with Nano TorchNano
 
 ```python
 MyNano().train()
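
For context, the `TorchNano` workflow that the touched quickstart pages describe follows roughly the pattern below. This is a minimal sketch based on the snippets visible in the hunks above, assuming the `bigdl.nano.pytorch.TorchNano` API from the patched docs (`self.setup(...)` and `self.backward(...)`); the toy model, optimizer, and data are placeholders and are not part of this patch.

```python
import torch
from torch import nn
from bigdl.nano.pytorch import TorchNano


class MyNano(TorchNano):
    def train(self):
        # Toy model, optimizer, and data, only to make the loop self-contained.
        model = nn.Linear(32, 2)
        optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
        loss_func = nn.CrossEntropyLoss()
        dataset = torch.utils.data.TensorDataset(
            torch.randn(256, 32), torch.randint(0, 2, (256,)))
        loader = torch.utils.data.DataLoader(dataset, batch_size=64)

        # self.setup wraps the model, optimizer, and dataloader so Nano can
        # apply its IPEX and distributed-training optimizations transparently.
        model, optimizer, loader = self.setup(model, optimizer, loader)

        model.train()
        for x, y in loader:
            optimizer.zero_grad()
            loss = loss_func(model(x), y)
            # Use self.backward(loss) instead of loss.backward().
            self.backward(loss)
            optimizer.step()


if __name__ == '__main__':
    # Plain run, matching the `MyNano().train()` shown in the docs; IPEX or
    # multi-process variants would pass constructor arguments instead, e.g.
    # MyNano(use_ipex=True, num_processes=2).train()
    MyNano().train()
```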