Nano: update nano known issue (#7063)
This commit is contained in:

parent 0a62df22aa
commit 3b6c56b505

1 changed file with 9 additions and 1 deletion
					@ -48,10 +48,18 @@ spec:

## TensorFlow Issues

### ValueError: Calling `Model.xxx` in graph mode is not supported when the `Model` instance was constructed with eager mode enabled.

Nano keras only supports running in eager mode. If you are using graph mode, make sure not to import anything from `bigdl.nano.tf`.
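As a quick check (a minimal sketch using only plain TensorFlow, not from the original doc), you can verify which mode is active before importing anything from `bigdl.nano.tf`:

```python
import tensorflow as tf

# Eager execution is the default in TF 2.x; Nano keras requires it.
# If this prints False, graph mode has been enabled somewhere
# (e.g. via tf.compat.v1.disable_eager_execution()), and importing
# from bigdl.nano.tf will not work as expected.
print(tf.executing_eagerly())
```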

### Nano keras multi-instance training currently does not support TensorFlow `dataset.from_generators`, `numpy_function`, `py_function`

Nano keras multi-instance training serializes the TensorFlow dataset object into a `graph.pb` file, which does not work with `dataset.from_generators`, `dataset.numpy_function`, or `dataset.py_function` due to limitations in TensorFlow.
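One possible workaround (a sketch, not from the original doc): when the generator's data fits in memory, materialize it first and build a tensor-backed dataset, which serializes cleanly, unlike `from_generator`:

```python
import tensorflow as tf

def gen():
    # A hypothetical generator; a dataset built from it via
    # Dataset.from_generator cannot be serialized to graph.pb.
    for i in range(3):
        yield i

# Workaround sketch: materialize the data, then build the dataset
# from tensors so multi-instance training can serialize it.
data = list(gen())
ds = tf.data.Dataset.from_tensor_slices(data)
print([int(x) for x in ds])  # [0, 1, 2]
```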

### RuntimeError: A keras.Model for quantization must include Input layers.

You may meet this error when running quantization: INC quantization does not support models without an `Input` layer. In this case, use OpenVINO or ONNXRuntime instead, i.e. `InferenceOptimizer.quantize(model, accelerator="openvino", ...)` or `InferenceOptimizer.quantize(model, accelerator="onnxruntime", ...)`.

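For illustration (a minimal sketch, not from the original doc), a model built with the Keras functional API starts from an explicit `Input` layer, which is the shape of model INC quantization accepts:

```python
from tensorflow import keras

# Built without an explicit Input layer -- the kind of model that
# triggers the RuntimeError above under INC quantization.
seq_model = keras.Sequential([keras.layers.Dense(4)])

# Functional-API model with an explicit Input layer.
inputs = keras.Input(shape=(8,))
outputs = keras.layers.Dense(4)(inputs)
model = keras.Model(inputs, outputs)
print(model.output_shape)  # (None, 4)
```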
### RuntimeError: Inter op parallelism cannot be modified after initialization

If you meet this error when importing `bigdl.nano.tf`, you may have already run some TensorFlow code that sets the inter/intra op parallelism, such as `tfds.load`. You can work around this issue by importing `bigdl.nano.tf` before running any other TensorFlow code. See https://github.com/tensorflow/tensorflow/issues/57812 for more information.

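The underlying TensorFlow behavior can be reproduced without Nano (a sketch using only public TF APIs; the Nano import-order claim is from the note above):

```python
import tensorflow as tf

# Running any op initializes TensorFlow's runtime and fixes its
# thread pools.
_ = tf.constant(1) + tf.constant(1)

# After initialization, changing the inter-op thread count raises
# the RuntimeError quoted above -- which is why bigdl.nano.tf,
# which configures these settings on import, must come first.
try:
    tf.config.threading.set_inter_op_parallelism_threads(2)
except RuntimeError as e:
    print(e)
```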
					@ -60,4 +68,4 @@ If you meet this error when import `bigdl.nano.tf`, it could be that you have al
### protobuf version error
			||||||

Now `pip install ray[default]==1.11.0` installs `google-api-core>=2.10.0`, which depends on `protobuf>=3.20.1`. However, Nano depends on `protobuf==3.19.4`, so you will meet this error if you install `ray` after `bigdl-nano`. The solution is to run `pip install google-api-core==2.8.2` before installing `ray`.
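Putting the note above together, a working install order might look like this (a sketch; the package versions are the ones named above):

```shell
# Pin google-api-core before installing ray, so pip does not
# upgrade protobuf past the 3.19.4 that bigdl-nano requires.
pip install bigdl-nano
pip install google-api-core==2.8.2
pip install "ray[default]==1.11.0"
```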