diff --git a/docs/readthedocs/source/_toc.yml b/docs/readthedocs/source/_toc.yml
index d81ec6ca..d7565bed 100644
--- a/docs/readthedocs/source/_toc.yml
+++ b/docs/readthedocs/source/_toc.yml
@@ -79,6 +79,7 @@ subtrees:
       - entries:
         - file: doc/Nano/Overview/pytorch_train
         - file: doc/Nano/Overview/pytorch_inference
+        - file: doc/Nano/Overview/pytorch_cuda_patch
         - file: doc/Nano/Overview/tensorflow_train
         - file: doc/Nano/Overview/tensorflow_inference
         - file: doc/Nano/Overview/hpo
diff --git a/docs/readthedocs/source/doc/Nano/Overview/index.rst b/docs/readthedocs/source/doc/Nano/Overview/index.rst
index 14cdd4c0..e4d8a29c 100644
--- a/docs/readthedocs/source/doc/Nano/Overview/index.rst
+++ b/docs/readthedocs/source/doc/Nano/Overview/index.rst
@@ -3,6 +3,7 @@ Nano Key Features
 * `PyTorch Training `_
 * `PyTorch Inference `_
+* `PyTorch CUDA patch `_
 * `Tensorflow Training `_
 * `Tensorflow Inference `_
-* `AutoML `_
\ No newline at end of file
+* `AutoML `_
diff --git a/docs/readthedocs/source/doc/Nano/Overview/pytorch_cuda_patch.md b/docs/readthedocs/source/doc/Nano/Overview/pytorch_cuda_patch.md
new file mode 100644
index 00000000..b20d3c86
--- /dev/null
+++ b/docs/readthedocs/source/doc/Nano/Overview/pytorch_cuda_patch.md
@@ -0,0 +1,31 @@
+# PyTorch CUDA Patch
+
+BigDL-Nano also provides a CUDA patch (`bigdl.nano.pytorch.patching.patch_cuda`) to help you run CUDA code without a GPU. The patch replaces CUDA operations with equivalent CPU operations, so after applying it you can run CUDA code on your CPU without changing any code.
+
+```eval_rst
+.. tip::
+    There is also ``bigdl.nano.pytorch.patching.unpatch_cuda`` to revert the patch.
+```
+
+You can use it as follows:
+```python
+import torch
+import torchvision  # torch and torchvision are needed to build the example model and inputs
+
+from bigdl.nano.pytorch.patching import patch_cuda, unpatch_cuda
+patch_cuda()
+
+# Then you can run CUDA code directly, even without a GPU
+model = torchvision.models.resnet50(pretrained=True).cuda()
+inputs = torch.rand((1, 3, 128, 128)).cuda()
+with torch.no_grad():
+    outputs = model(inputs)
+
+unpatch_cuda()
+```
+
+```eval_rst
+.. note::
+    - You should apply this patch at the beginning of your code, because it only affects code executed after it is called.
+    - This CUDA patch is incompatible with JIT; applying it will disable JIT automatically.
+```
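
Beyond the documentation added by this patch, here is a minimal usage sketch (an illustration, not part of the diff above; it assumes only `patch_cuda`/`unpatch_cuda` as documented in the new page, plus the standard `torch.cuda.is_available()` check) of applying the patch conditionally, so the same CUDA-style script runs on machines with and without a GPU:

```python
import torch

from bigdl.nano.pytorch.patching import patch_cuda, unpatch_cuda

# Decide once, before any CUDA calls, whether the patch is needed;
# the patch only affects code executed after it is called.
need_patch = not torch.cuda.is_available()
if need_patch:
    patch_cuda()

# The rest of the script can keep its .cuda() calls unchanged.
x = torch.rand((1, 3, 128, 128)).cuda()
y = (x * 2.0).mean()
print(y.item())

if need_patch:
    unpatch_cuda()
```

Storing the decision in `need_patch` avoids relying on how `torch.cuda.is_available()` behaves once the patch is active.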