From 145216bfc1ead82d2b05e97f126edd1c52fdcb01 Mon Sep 17 00:00:00 2001
From: Mingzhi Hu <49382651+y199387@users.noreply.github.com>
Date: Fri, 24 Jun 2022 16:38:48 +0800
Subject: [PATCH] Update pytorch to 1.11 (#4854)

* update setup and action
* fix yml
* update yml
* test yml
* update docs
* update notebook requirements
* test notebooks unit test
* reset yml
* delete comments
* fix yml
* fix errors in setup.py
* fix setup.py
* specify the version of IPEX
---
 .../source/doc/Nano/QuickStart/pytorch_train.md | 7 -------
 1 file changed, 7 deletions(-)

diff --git a/docs/readthedocs/source/doc/Nano/QuickStart/pytorch_train.md b/docs/readthedocs/source/doc/Nano/QuickStart/pytorch_train.md
index b9a1968e..874dd4a4 100644
--- a/docs/readthedocs/source/doc/Nano/QuickStart/pytorch_train.md
+++ b/docs/readthedocs/source/doc/Nano/QuickStart/pytorch_train.md
@@ -47,13 +47,6 @@
 from bigdl.nano.pytorch import Trainer
 trainer = Trainer(max_epoch=10, use_ipex=True)
 ```
-Note: BigDL-Nano does not install IPEX by default. You can install IPEX using the following command:
-
-```bash
-python -m pip install torch_ipex==1.9.0 -f https://software.intel.com/ipex-whl-stable
-python -m pip install torchvision==0.10.0+cpu -f https://download.pytorch.org/whl/torch_stable.html
-```
-
 #### Multi-instance Training
 
 When training on a server with dozens of CPU cores, it is often beneficial to use multiple training instances in a data-parallel fashion to make full use of the CPU cores. However, using PyTorch's DDP API is a little cumbersome and error-prone, and if not configured correctly, it will make the training even slow.
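
The documentation context above shows `Trainer(max_epoch=10, use_ipex=True)` and introduces multi-instance training. As a rough, illustrative sketch only (not part of the patch itself): the snippet below combines IPEX acceleration with multi-instance training through BigDL-Nano's `Trainer`. The `num_processes` argument and the `max_epochs` spelling are assumptions drawn from the BigDL-Nano and PyTorch Lightning APIs, not from this diff.

```python
# Illustrative sketch only -- not part of the patch above.
# Assumes bigdl-nano is installed; `num_processes` and `max_epochs` are
# assumptions based on the BigDL-Nano / PyTorch Lightning Trainer APIs.
from bigdl.nano.pytorch import Trainer

# IPEX-accelerated, single-instance training (as in the docs context above).
trainer = Trainer(max_epochs=10, use_ipex=True)

# Multi-instance (data-parallel) training on a many-core CPU server;
# Nano spawns the worker processes, so no manual DDP setup is needed.
multi_trainer = Trainer(max_epochs=10, use_ipex=True, num_processes=4)

# multi_trainer.fit(model, train_dataloader)  # model/dataloader defined elsewhere
```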