Nano How-to Guides
=========================

.. note::
   This page is still a work in progress. We are adding more guides.

In Nano How-to Guides, you can expect to find multiple task-oriented, bite-sized, and executable examples. These examples show you the various tasks that BigDL-Nano can help you accomplish smoothly. A few minimal, illustrative API sketches are also included at the end of this page.

Training Optimization
-------------------------

PyTorch Lightning
~~~~~~~~~~~~~~~~~~~~~~~~~

* `How to accelerate a PyTorch Lightning application on training workloads through Intel® Extension for PyTorch* `_
* `How to accelerate a PyTorch Lightning application on training workloads through multiple instances `_
* `How to use the channels last memory format in your PyTorch Lightning application for training `_
* `How to conduct BFloat16 Mixed Precision training in your PyTorch Lightning application `_
* `How to accelerate a computer vision data processing pipeline `_

TensorFlow
~~~~~~~~~~~~~~~~~~~~~~~~~

* `How to accelerate a TensorFlow Keras application on training workloads through multiple instances `_
* |tensorflow_training_embedding_sparseadam_link|_

.. |tensorflow_training_embedding_sparseadam_link| replace:: How to optimize your model with a sparse ``Embedding`` layer and ``SparseAdam`` optimizer
.. _tensorflow_training_embedding_sparseadam_link: Training/TensorFlow/tensorflow_training_embedding_sparseadam.html

General
~~~~~~~~~~~~~~~~~~~~~~~~~

* `How to choose the number of processes for multi-instance training `_

Inference Optimization
-------------------------

OpenVINO
~~~~~~~~~~~~~~~~~~~~~~~~~

* `How to run inference on OpenVINO model `_
* `How to run asynchronous inference on OpenVINO model `_

.. toctree::
    :maxdepth: 1
    :hidden:

    Inference/OpenVINO/openvino_inference
    Inference/OpenVINO/openvino_inference_async

PyTorch
~~~~~~~~~~~~~~~~~~~~~~~~~

* `How to accelerate a PyTorch inference pipeline through ONNXRuntime `_
* `How to accelerate a PyTorch inference pipeline through OpenVINO `_
* `How to accelerate a PyTorch inference pipeline through JIT/IPEX `_
* `How to accelerate a PyTorch inference pipeline through multiple instances `_
* `How to quantize your PyTorch model for inference using Intel Neural Compressor `_
* `How to quantize your PyTorch model for inference using OpenVINO Post-training Optimization Tools `_
* `How to save and load optimized IPEX model `_
* `How to save and load optimized JIT model `_
* `How to save and load optimized ONNXRuntime model `_
* `How to save and load optimized OpenVINO model `_
* `How to find accelerated method with minimal latency using InferenceOptimizer `_

Install
-------------------------

* `How to install BigDL-Nano in Google Colab `_
* `How to install BigDL-Nano on Windows `_
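
As a quick taste of the PyTorch Lightning training guides above, here is a minimal sketch of ``bigdl.nano.pytorch.Trainer``. The ``use_ipex``, ``num_processes``, and ``precision`` keyword arguments correspond to the linked guides (``channels_last=True`` is covered similarly); the toy model and random data are hypothetical placeholders, so refer to the individual guides for tested, end-to-end versions.

.. code-block:: python

    import torch
    from torch import nn
    from torch.utils.data import DataLoader, TensorDataset
    from pytorch_lightning import LightningModule

    # Nano's Trainer is a drop-in replacement for pytorch_lightning.Trainer.
    from bigdl.nano.pytorch import Trainer

    # A tiny placeholder LightningModule, just to make the sketch runnable.
    class LitModel(LightningModule):
        def __init__(self):
            super().__init__()
            self.layer = nn.Linear(32, 2)

        def training_step(self, batch, batch_idx):
            x, y = batch
            return nn.functional.cross_entropy(self.layer(x), y)

        def configure_optimizers(self):
            return torch.optim.Adam(self.parameters())

    train_loader = DataLoader(
        TensorDataset(torch.randn(256, 32), torch.randint(0, 2, (256,))),
        batch_size=32,
    )

    # Each keyword maps to one of the training guides listed above.
    trainer = Trainer(
        max_epochs=1,
        use_ipex=True,     # Intel® Extension for PyTorch* acceleration
        num_processes=2,   # multi-instance training
        precision="bf16",  # BFloat16 mixed precision
    )
    trainer.fit(LitModel(), train_loader)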
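For the PyTorch inference guides, a minimal sketch of ``bigdl.nano.pytorch.InferenceOptimizer`` might look as follows. The ResNet model and random tensors are placeholders, and the exact argument names (for example ``calib_data``) may differ across BigDL-Nano versions; the linked guides are the authoritative reference.

.. code-block:: python

    import torch
    from torch.utils.data import DataLoader, TensorDataset
    from torchvision.models import resnet18
    from bigdl.nano.pytorch import InferenceOptimizer

    model = resnet18(pretrained=True).eval()
    sample = torch.rand(2, 3, 224, 224)

    # Trace the model to an accelerated backend
    # ("onnxruntime", "openvino", or "jit").
    acc_model = InferenceOptimizer.trace(
        model, accelerator="onnxruntime", input_sample=sample
    )
    with InferenceOptimizer.get_context(acc_model):
        output = acc_model(sample)

    # Post-training INT8 quantization (Intel Neural Compressor by default);
    # the calibration data here is random and purely illustrative.
    calib_loader = DataLoader(
        TensorDataset(torch.rand(32, 3, 224, 224),
                      torch.zeros(32, dtype=torch.long)),
        batch_size=8,
    )
    q_model = InferenceOptimizer.quantize(model, calib_data=calib_loader)

    # Save the optimized model, and load it back later for inference.
    InferenceOptimizer.save(acc_model, "./optimized_model")
    loaded_model = InferenceOptimizer.load("./optimized_model")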
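Finally, for the TensorFlow training guides, here is a sketch of multi-instance training with Nano's Keras ``Sequential``, assuming ``fit`` accepts a ``num_processes`` argument as the guide above describes; the toy model and random data are placeholders.

.. code-block:: python

    import tensorflow as tf

    # Nano's Sequential/Model are drop-in replacements for their
    # tf.keras counterparts.
    from bigdl.nano.tf.keras import Sequential

    model = Sequential([
        tf.keras.layers.Dense(2, activation="softmax", input_shape=(32,)),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

    x = tf.random.normal((256, 32))
    y = tf.random.uniform((256,), maxval=2, dtype=tf.int32)

    # num_processes launches multiple training instances on one machine.
    model.fit(x, y, batch_size=32, epochs=1, num_processes=2)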