From 3b6c56b505b945ef3eed06bddd5f2afdf84d94e5 Mon Sep 17 00:00:00 2001
From: Yishuo Wang
Date: Thu, 22 Dec 2022 09:53:05 +0800
Subject: [PATCH] Nano: update nano known issue (#7063)

---
 .../source/doc/Nano/Overview/known_issues.md | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/docs/readthedocs/source/doc/Nano/Overview/known_issues.md b/docs/readthedocs/source/doc/Nano/Overview/known_issues.md
index f8807945..c0828b1e 100644
--- a/docs/readthedocs/source/doc/Nano/Overview/known_issues.md
+++ b/docs/readthedocs/source/doc/Nano/Overview/known_issues.md
@@ -48,10 +48,18 @@ spec:
 
 ## TensorFlow Issues
 
+### ValueError: Calling `Model.xxx` in graph mode is not supported when the `Model` instance was constructed with eager mode enabled.
+
+Nano keras only supports running in eager mode. If you are using graph mode, please make sure not to import anything from `bigdl.nano.tf`.
+
 ### Nano keras multi-instance training currently does not suport tensorflow dataset.from_generators, numpy_function, py_function
 
 Nano keras multi-instance training will serialize TensorFlow dataset object into a `graph.pb` file, which does not work with `dataset.from_generators`, `dataset.numpy_function`, `dataset.py_function` due to limitations in TensorFlow.
 
+### RuntimeError: A keras.Model for quantization must include Input layers.
+
+You may meet this error when running quantization: INC quantization does not support a model without an `Input` layer. In this case, you can use OpenVINO or ONNXRuntime instead, i.e. `InferenceOptimizer.quantize(model, accelerator="openvino", ...)` or `InferenceOptimizer.quantize(model, accelerator="onnxruntime", ...)`.
+
 ### RuntimeError: Inter op parallelism cannot be modified after initialization
 
 If you meet this error when import `bigdl.nano.tf`, it could be that you have already run some TensorFlow code that set the inter/intra op parallelism, such as `tfds.load`. You can try to workaround this issue by trying to import `bigdl.nano.tf` first before running TensorFlow code. See https://github.com/tensorflow/tensorflow/issues/57812 for more information.
@@ -60,4 +68,4 @@ If you meet this error when import `bigdl.nano.tf`, it could be that you have al
 
 ### protobuf version error
 
-Now `pip install ray[default]==1.11.0` will install `google-api-core==2.10.0`, which depends on `protobuf>=3.20.1`. However, nano depends on `protobuf==3.19.4`, so if we install `ray` after installing `bigdl-nano`, pip will reinstall `protobuf==4.21.5`, which causes error.
+Now `pip install ray[default]==1.11.0` will install `google-api-core>=2.10.0`, which depends on `protobuf>=3.20.1`. However, nano depends on `protobuf==3.19.4`, so you will meet this error if you install `ray` after `bigdl-nano`. The solution is to run `pip install google-api-core==2.8.2` before installing `ray`.