# PyTorch Training
BigDL-Nano can be used to accelerate PyTorch or PyTorch-Lightning applications on training workloads. The optimizations in BigDL-Nano are delivered through an extended version of PyTorch-Lightning Trainer. These optimizations are either enabled by default or can be easily turned on by setting a parameter or calling a method.
Below we briefly describe the major features BigDL-Nano provides for PyTorch training. You can find complete examples here.
## Best Known Configurations
When you run source bigdl-nano-init, BigDL-Nano will export a few environment variables, such as OMP_NUM_THREADS and KMP_AFFINITY, according to your current hardware. Empirically, these environment variables work best for most PyTorch applications. After setting these environment variables, you can just run your applications as usual (python app.py) and no additional changes are required.
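If you want to double-check that the script has taken effect, one illustrative way (not part of BigDL-Nano itself) is to print the variables from Python:

```python
import os

# After running `source bigdl-nano-init`, these variables should be set;
# the exact values depend on your hardware.
for var in ("OMP_NUM_THREADS", "KMP_AFFINITY"):
    print(var, "=", os.environ.get(var))
```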
## BigDL-Nano PyTorch Trainer
The PyTorch Trainer (bigdl.nano.pytorch.Trainer) is the place where we integrate most optimizations. It extends PyTorch Lightning's Trainer and has a few more parameters and methods specific to BigDL-Nano. The Trainer can be directly used to train a LightningModule.
For example,
```python
from pytorch_lightning import LightningModule
from bigdl.nano.pytorch import Trainer

class MyModule(LightningModule):
    # your LightningModule definition (training_step, configure_optimizers, etc.)
    ...

lightning_module = MyModule()
trainer = Trainer(max_epochs=10)
trainer.fit(lightning_module, train_loader)
```
For regular PyTorch modules, we also provide a "compile" method that takes in a PyTorch module, an optimizer, and other PyTorch objects and "compiles" them into a LightningModule.
For example,
```python
from bigdl.nano.pytorch import Trainer

# pytorch_module, loss, optimizer, scheduler and train_loader are assumed
# to be defined beforehand
lightning_module = Trainer.compile(pytorch_module, loss, optimizer, scheduler)
trainer = Trainer(max_epochs=10)
trainer.fit(lightning_module, train_loader)
```
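For a fully self-contained sketch, here is the same pattern with a toy model and standard PyTorch objects filled in (the model and hyperparameters are illustrative, not from the original example):

```python
import torch
from torch import nn
from bigdl.nano.pytorch import Trainer

# toy model and standard PyTorch training objects (illustrative only)
pytorch_module = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 2))
loss = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(pytorch_module.parameters(), lr=0.01)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=0.9)

# "compile" them into a LightningModule that the Trainer can fit directly
lightning_module = Trainer.compile(pytorch_module, loss, optimizer, scheduler)
trainer = Trainer(max_epochs=10)
```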
## Intel® Extension for PyTorch
Intel Extension for PyTorch (a.k.a. IPEX) extends PyTorch with optimizations for an extra performance boost on Intel hardware. BigDL-Nano integrates IPEX through the Trainer. Users can turn on IPEX by setting use_ipex=True.
```python
from bigdl.nano.pytorch import Trainer

trainer = Trainer(max_epochs=10, use_ipex=True)
```
## Multi-instance Training
When training on a server with dozens of CPU cores, it is often beneficial to run multiple training instances in a data-parallel fashion to make full use of the CPU cores. However, using PyTorch's DDP API is somewhat cumbersome and error-prone, and, if not configured correctly, it can even slow the training down.
BigDL-Nano makes it very easy to conduct multi-instance training. You can just set the num_processes parameter in the Trainer constructor and BigDL-Nano will launch the specific number of processes to perform data-parallel training. Each process will be automatically pinned to a different subset of CPU cores to avoid conflict and maximize training throughput.
```python
from bigdl.nano.pytorch import Trainer

trainer = Trainer(max_epochs=10, num_processes=4)
```
Note that the effective batch size in multi-instance training is the batch_size of your dataloader multiplied by num_processes, so the number of iterations per epoch is reduced by a factor of num_processes. A common practice to compensate is to gradually scale the learning rate up to num_processes times its original value. You can find more details of this trick in this paper published by Facebook.
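As a rough illustration of that trick (the numbers here are made up), the base learning rate can be scaled linearly with the number of processes:

```python
import torch
from torch import nn

model = nn.Linear(32, 2)  # placeholder model, for illustration only
num_processes = 4
base_lr = 0.01  # learning rate tuned for single-process training

# Linear scaling rule: the effective batch size grows num_processes-fold,
# so scale the learning rate accordingly (ideally ramped up gradually over
# a few warm-up epochs rather than applied all at once).
optimizer = torch.optim.SGD(model.parameters(), lr=base_lr * num_processes)
```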
## BigDL-Nano PyTorch TorchNano
The TorchNano (bigdl.nano.pytorch.TorchNano) class is what we use to accelerate raw PyTorch code. With it, you only need to make a few changes to accelerate a custom training loop. For example,
```python
from bigdl.nano.pytorch import TorchNano

class MyNano(TorchNano):
    def train(self, ...):
        # copy your training loop here and make a few changes

MyNano().train(...)
```
> Note: see this tutorial for details about our TorchNano.
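As a rough sketch of what those few changes look like (assuming TorchNano exposes a Lightning-Lite-style self.setup/self.backward API; see the tutorial above for the authoritative steps):

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from bigdl.nano.pytorch import TorchNano

class MyNano(TorchNano):
    def train(self):
        # toy model, optimizer and data (illustrative only)
        model = nn.Linear(32, 2)
        optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
        dataset = TensorDataset(torch.randn(64, 32), torch.randint(0, 2, (64,)))
        loader = DataLoader(dataset, batch_size=8)
        loss_fn = nn.CrossEntropyLoss()

        # change 1: let TorchNano wrap the model, optimizer and dataloader
        model, optimizer, loader = self.setup(model, optimizer, loader)
        for data, target in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(data), target)
            # change 2: call self.backward(loss) instead of loss.backward()
            self.backward(loss)
            optimizer.step()

MyNano().train()
```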
Our TorchNano also integrates IPEX and distributed training optimizations. For example,
```python
from bigdl.nano.pytorch import TorchNano

class MyNano(TorchNano):
    def train(self, ...):
        # define your training loop here

# enable IPEX optimization
MyNano(use_ipex=True).train(...)

# enable IPEX and distributed training, using the subprocess strategy
MyNano(use_ipex=True, num_processes=2, strategy="subprocess").train(...)
```
## Optimized Data Pipeline
Computer vision tasks often need a data processing pipeline, which can constitute a non-trivial part of the whole training pipeline. Leveraging OpenCV and libjpeg-turbo, BigDL-Nano can accelerate computer vision data pipelines by providing a drop-in replacement of torchvision's datasets and transforms.
```python
from torch.utils.data import DataLoader
from bigdl.nano.pytorch.vision.datasets import ImageFolder
from bigdl.nano.pytorch.vision import transforms

data_transform = transforms.Compose([
    transforms.Resize(256),
    transforms.ColorJitter(),
    transforms.RandomCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.Resize(128),
    transforms.ToTensor()
])

# train_path, batch_size, module and trainer are assumed to be defined beforehand
train_set = ImageFolder(train_path, data_transform)
train_loader = DataLoader(train_set, batch_size=batch_size, shuffle=True)
trainer.fit(module, train_loader)
```
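Since these are drop-in replacements, adapting an existing torchvision pipeline typically only requires swapping the import lines, e.g.:

```python
# before: plain torchvision
# from torchvision.datasets import ImageFolder
# from torchvision import transforms

# after: the BigDL-Nano accelerated equivalents; the rest of the code is unchanged
from bigdl.nano.pytorch.vision.datasets import ImageFolder
from bigdl.nano.pytorch.vision import transforms
```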