Nano: add OS support table (#7429)
parent 8911c69302
commit f4700422e1

3 changed files with 62 additions and 3 deletions
@@ -186,6 +186,8 @@ subtrees:
               title: "Tips and Known Issues"
             - file: doc/Nano/Overview/troubshooting
               title: "Troubleshooting Guide"
+            - file: doc/Nano/Overview/support
+              title: "OS Support"
             - file: doc/PythonAPI/Nano/index
               title: "API Reference"
@@ -85,15 +85,15 @@ We support a wide range of PyTorch and Tensorflow. We only care the MAJOR.MINOR
 There are some specific notes to be aware of when installing `bigdl-nano`.

 ### Install on Linux
-For Linux, Ubuntu (22.04/20.04/18.04) is recommended.
+For Linux, Ubuntu (22.04/20.04) is recommended.

-### Install on Windows
+### Install on Windows (experimental support)

 For Windows OS, users need to run `bigdl-nano-init` every time they open a new cmd terminal.

 We recommend using Windows Subsystem for Linux 2 (WSL2) to run BigDL-Nano. Please refer to the [Nano Windows install guide](../Howto/Install/windows_guide.md) for instructions.

-### Install on MacOS
+### Install on MacOS (experimental support)
 #### MacOS with Intel Chip
 Same usage as Linux, though some functions now rely on lower-version dependencies.
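The Windows note above reflects that `bigdl-nano-init` only configures environment variables for the current shell, so it must be re-run in each new terminal. As a rough illustration of that kind of setup script (not Nano's actual script; the variable values and the jemalloc path below are illustrative assumptions):

```shell
# Hypothetical sketch of an environment-setup script in the style of
# `bigdl-nano-init`; variable values and paths are assumptions, not
# Nano's actual output.

# Use all online cores for OpenMP (nproc on Linux, getconf as fallback).
CORES="$(nproc 2>/dev/null || getconf _NPROCESSORS_ONLN)"
export OMP_NUM_THREADS="$CORES"

# Shorten OpenMP spin-wait time and bind threads to cores (illustrative values).
export KMP_BLOCKTIME=1
export KMP_AFFINITY="granularity=fine,compact,1,0"

# Preload an alternative allocator such as jemalloc if it is installed
# (hypothetical library path).
JEMALLOC_LIB="/usr/lib/x86_64-linux-gnu/libjemalloc.so"
if [ -f "$JEMALLOC_LIB" ]; then
    export LD_PRELOAD="$JEMALLOC_LIB${LD_PRELOAD:+ $LD_PRELOAD}"
fi

echo "OMP_NUM_THREADS=$OMP_NUM_THREADS"
```

Because these are plain shell exports, they vanish when the terminal closes, which is why the script has to be re-run per session (on Windows cmd the same idea applies with `set` instead of `export`).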
							
								
								
									
docs/readthedocs/source/doc/Nano/Overview/support.md (new file, 57 additions)
# BigDL-Nano Features

| Feature               | Meaning                                                                       |
| --------------------- | ----------------------------------------------------------------------------- |
| **Intel-openmp**      | Use the Intel OpenMP library to improve performance of multithreaded programs |
| **Jemalloc**          | Use jemalloc as the memory allocator                                          |
| **Tcmalloc**          | Use tcmalloc as the memory allocator                                          |
| **Neural-Compressor** | Neural-Compressor int8 quantization                                           |
| **OpenVINO**          | OpenVINO fp32/bf16/fp16/int8 acceleration on CPU/GPU/VPU                      |
| **ONNXRuntime**       | ONNXRuntime fp32/int8 acceleration                                            |
| **CUDA patch**        | Run CUDA code even without a GPU                                              |
| **JIT**               | PyTorch JIT optimization                                                      |
| **Channel last**      | Channels-last memory format                                                   |
| **BF16**              | BFloat16 mixed-precision training and inference                               |
| **IPEX**              | Intel-extension-for-pytorch optimization                                      |
| **Multi-instance**    | Multi-process training and inference                                          |
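BFloat16 keeps the sign and 8-bit exponent of an IEEE-754 float32 but only the top 7 mantissa bits, which is why the support tables tie it to hardware instruction-set support. A pure-Python sketch of that truncation (an illustration of the format, not BigDL-Nano code; real hardware typically rounds to nearest rather than truncating):

```python
import struct

def bf16_round_trip(x: float) -> float:
    """Truncate a float32 to bfloat16 (keep its top 16 bits) and expand back.

    Illustrates the precision loss implied by bf16 storage; this sketch
    truncates, while real implementations usually round to nearest.
    """
    # Reinterpret the float32 bit pattern as an unsigned 32-bit integer.
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    # Keep sign + exponent + 7 mantissa bits; zero the low 16 mantissa bits.
    bf16_bits = bits & 0xFFFF0000
    return struct.unpack(">f", struct.pack(">I", bf16_bits))[0]

print(bf16_round_trip(1.0))         # exactly representable: 1.0
print(bf16_round_trip(3.14159265))  # low-order mantissa bits lost: 3.140625
```

The exponent range is unchanged from float32, which is what makes bf16 attractive for mixed-precision training compared with fp16.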
## Common Feature Support (Can be used in both PyTorch and TensorFlow)

| Feature               | Ubuntu (20.04/22.04) | CentOS7 | MacOS (Intel chip) | MacOS (M-series chip) | Windows |
| --------------------- | -------------------- | ------- | ------------------ | --------------------- | ------- |
| **Intel-openmp**      | ✅                    | ✅       | ✅                  | ②                     | ✅       |
| **Jemalloc**          | ✅                    | ✅       | ✅                  | ❌                     | ❌       |
| **Tcmalloc**          | ✅                    | ❌       | ❌                  | ❌                     | ❌       |
| **Neural-Compressor** | ✅                    | ✅       | ❌                  | ❌                     | ?       |
| **OpenVINO**          | ✅                    | ①       | ❌                  | ❌                     | ?       |
| **ONNXRuntime**       | ✅                    | ①       | ✅                  | ❌                     | ?       |
## PyTorch Feature Support

| Feature            | Ubuntu (20.04/22.04) | CentOS7 | MacOS (Intel chip) | MacOS (M-series chip) | Windows |
| ------------------ | -------------------- | ------- | ------------------ | --------------------- | ------- |
| **CUDA patch**     | ✅                    | ✅       | ✅                  | ?                     | ✅       |
| **JIT**            | ✅                    | ✅       | ✅                  | ?                     | ✅       |
| **Channel last**   | ✅                    | ✅       | ✅                  | ?                     | ✅       |
| **BF16**           | ✅                    | ✅       | ⭕                  | ⭕                     | ?       |
| **IPEX**           | ✅                    | ✅       | ❌                  | ❌                     | ❌       |
| **Multi-instance** | ✅                    | ✅       | ②                  | ②                     | ?       |
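"Multi-instance" in these tables means spreading training or inference across several OS processes. A framework-agnostic sketch using only the Python standard library (the function names are illustrative, not Nano's actual API; a real worker would load the model once per process and run framework inference):

```python
from multiprocessing import Pool

def predict_chunk(chunk):
    """Stand-in for per-process inference: here it just doubles each sample."""
    return [x * 2 for x in chunk]

def multi_instance_predict(samples, num_instances=2):
    # Deal the samples round-robin across instances.
    chunks = [samples[i::num_instances] for i in range(num_instances)]
    # Run each chunk in its own worker process.
    with Pool(processes=num_instances) as pool:
        results = pool.map(predict_chunk, chunks)
    # Re-interleave the per-chunk results to restore the original order.
    out = [None] * len(samples)
    for i, chunk_result in enumerate(results):
        out[i::num_instances] = chunk_result
    return out

if __name__ == "__main__":
    print(multi_instance_predict([1, 2, 3, 4, 5]))  # [2, 4, 6, 8, 10]
```

Because each instance is a separate process, this pattern sidesteps the GIL but pays a model-loading and memory cost per process, which is one reason the tables track its support per OS.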
## TensorFlow Feature Support

| Feature            | Ubuntu (20.04/22.04) | CentOS7 | MacOS (Intel chip) | MacOS (M-series chip) | Windows |
| ------------------ | -------------------- | ------- | ------------------ | --------------------- | ------- |
| **BF16**           | ✅                    | ✅       | ⭕                  | ⭕                     | ?       |
| **Multi-instance** | ③                    | ③       | ②③                 | ②③                    | ?       |
## Symbol Meaning

| Symbol | Meaning                                                                                                        |
| ------ | -------------------------------------------------------------------------------------------------------------- |
| ✅      | Supported                                                                                                      |
| ❌      | Not supported                                                                                                  |
| ⭕      | No Mac machine (Intel or M-series chip) supports the bf16 instruction set, so this feature provides no benefit |
| ①      | This feature is only supported when used together with jemalloc                                                |
| ②      | This feature is supported but without any performance guarantee                                                |
| ③      | Only multi-instance training is supported for now                                                              |
| ?      | Not tested                                                                                                     |