From 62694b420ed0deddca1c6179fe5fd31bb62a2ce5 Mon Sep 17 00:00:00 2001
From: Yishuo Wang
Date: Thu, 17 Nov 2022 15:34:45 +0800
Subject: [PATCH] Nano: fix onnx quantization document issue (#6662)

---
 .../source/doc/Nano/QuickStart/pytorch_quantization_inc_onnx.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/readthedocs/source/doc/Nano/QuickStart/pytorch_quantization_inc_onnx.md b/docs/readthedocs/source/doc/Nano/QuickStart/pytorch_quantization_inc_onnx.md
index 1df32e84..d6fb3767 100644
--- a/docs/readthedocs/source/doc/Nano/QuickStart/pytorch_quantization_inc_onnx.md
+++ b/docs/readthedocs/source/doc/Nano/QuickStart/pytorch_quantization_inc_onnx.md
@@ -16,7 +16,7 @@ source bigdl-nano-init
 To quantize model using ONNXRuntime as backend, it is required to install Intel Neural Compressor, onnxruntime-extensions as a dependency of INC and some onnx packages as below
 ```python
-pip install neural-compress==1.11
+pip install neural-compressor==1.11
 pip install onnx onnxruntime onnxruntime-extensions
 ```

 ### Step 1: Load the data
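For context on the fix: the hunk corrects the PyPI package name from the typo `neural-compress` to `neural-compressor`. Below is a minimal sketch of how a reader might check locally whether the dependency is importable before running the quantization tutorial. It assumes the package's import name is `neural_compressor` (the import name is not stated in this patch itself); the helper function name is ours, for illustration only.

```python
import importlib.util


def is_installed(module_name: str) -> bool:
    """Return True if a module with the given import name can be located."""
    return importlib.util.find_spec(module_name) is not None


# Assumed import name for the `neural-compressor` PyPI package;
# not confirmed by the patch above.
if is_installed("neural_compressor"):
    print("Intel Neural Compressor is available")
else:
    print("Missing dependency; try: pip install neural-compressor==1.11")
```

Checking `find_spec` rather than wrapping `import` in try/except avoids actually executing the package's import-time code just to probe for its presence.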