Nano: fix onnx quantization document issue (#6662)

Yishuo Wang 2022-11-17 15:34:45 +08:00 committed by GitHub
parent 8d6c43dd09
commit 62694b420e


@@ -16,7 +16,7 @@ source bigdl-nano-init
To quantize a model using ONNXRuntime as the backend, you need to install Intel Neural Compressor, onnxruntime-extensions (a dependency of INC), and the ONNX packages as below
```python
-pip install neural-compress==1.11
+pip install neural-compressor==1.11
pip install onnx onnxruntime onnxruntime-extensions
```
### Step 1: Load the data
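Once these dependencies are installed, quantization with the ONNXRuntime backend is typically driven through BigDL-Nano's PyTorch `Trainer.quantize`. The sketch below is a minimal, hedged example only: the model, the calibration dataloader, and the argument names (`accelerator`, `calib_dataloader`) are assumptions based on the Nano PyTorch quantization API of this period and may differ in your installed version.

```python
# Minimal sketch (assumed API): post-training quantization with the
# ONNXRuntime backend via BigDL-Nano. The model and calibration data are
# placeholders; argument names may differ across Nano versions.
import torch
from torch.utils.data import DataLoader, TensorDataset
from bigdl.nano.pytorch import Trainer

# Placeholder model and calibration dataset
model = torch.nn.Linear(10, 2)
calib_data = TensorDataset(torch.randn(64, 10), torch.randint(0, 2, (64,)))
calib_loader = DataLoader(calib_data, batch_size=16)

trainer = Trainer()
# Quantize with Intel Neural Compressor using ONNXRuntime as the backend
# (assumed signature)
q_model = trainer.quantize(model,
                           accelerator="onnxruntime",
                           calib_dataloader=calib_loader)

# Run inference with the quantized model
with torch.no_grad():
    output = q_model(torch.randn(1, 10))
```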