Nano: Rename strategy parameter of TorchNano to distributed_backend (#6126)

Yishuo Wang 2022-10-20 09:16:55 +08:00 committed by GitHub
parent d36e7c4ff3
commit 7b28b3fd20
2 changed files with 5 additions and 5 deletions


@@ -88,8 +88,8 @@ class MyNano(TorchNano):
# enable IPEX optimization
MyNano(use_ipex=True).train(...)
-# enable IPEX and distributed training, using 'subprocess' strategy
-MyNano(use_ipex=True, num_processes=2, strategy="subprocess").train(...)
+# enable IPEX and distributed training, using 'subprocess' backend
+MyNano(use_ipex=True, num_processes=2, distributed_backend="subprocess").train(...)
```
### Optimized Data Pipeline

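For context (not part of this commit), the `MyNano` class referenced above follows BigDL-Nano's `TorchNano` pattern: subclass `TorchNano`, override `train`, wrap the model, optimizer, and dataloader with `self.setup(...)`, and call `self.backward(loss)` instead of `loss.backward()`. A minimal sketch; the model, data, and hyperparameters below are placeholders:

```python
import torch
from torch import nn
from bigdl.nano.pytorch import TorchNano

class MyNano(TorchNano):
    def train(self):
        # placeholder model, optimizer and data; replace with your own
        model = nn.Linear(32, 2)
        optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
        dataset = torch.utils.data.TensorDataset(
            torch.randn(256, 32), torch.randint(0, 2, (256,)))
        loader = torch.utils.data.DataLoader(dataset, batch_size=64)

        # let TorchNano wrap everything according to the chosen
        # distributed backend / IPEX settings
        model, optimizer, loader = self.setup(model, optimizer, loader)

        loss_func = nn.CrossEntropyLoss()
        model.train()
        for data, target in loader:
            optimizer.zero_grad()
            loss = loss_func(model(data), target)
            self.backward(loss)  # use self.backward instead of loss.backward()
            optimizer.step()
```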

@@ -160,15 +160,15 @@ At this stage, you may already experience some speedup due to the optimized envi
#### Increase the number of processes in distributed training to accelerate training.
```python
-MyNano(num_processes=2, strategy="subprocess").train()
+MyNano(num_processes=2, distributed_backend="subprocess").train()
```
-- Note: BigDL-Nano now supports 'spawn', 'subprocess' and 'ray' strategies for distributed training, but only the 'subprocess' strategy can be used in an interactive environment.
+- Note: BigDL-Nano now supports 'spawn', 'subprocess' and 'ray' backends for distributed training, but only the 'subprocess' backend can be used in an interactive environment.
#### Intel Extension for PyTorch (a.k.a. [IPEX](https://github.com/intel/intel-extension-for-pytorch))
IPEX extends PyTorch with optimizations on Intel hardware. BigDL-Nano also integrates IPEX into `TorchNano`; you can turn on IPEX optimization by setting `use_ipex=True`.
```python
-MyNano(use_ipex=True, num_processes=2, strategy="subprocess").train()
+MyNano(use_ipex=True, num_processes=2, distributed_backend="subprocess").train()
```
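As a quick reference for the renamed parameter, the three backends from the note above might be selected as sketched below. This is illustrative only: whether 'spawn' works depends on your environment, and 'ray' assumes Ray is installed and configured.

```python
# 'subprocess': per the note above, the only backend usable in an
# interactive environment (e.g. a notebook)
MyNano(num_processes=2, distributed_backend="subprocess").train()

# 'spawn': uses Python multiprocessing's spawn method; not for interactive use
MyNano(num_processes=2, distributed_backend="spawn").train()

# 'ray': distributes training over Ray (requires Ray to be installed)
MyNano(num_processes=2, distributed_backend="ray").train()
```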