[PPML] Readthedoc fix path and branch (#4431)

* scala/docker-graphene to python/docker-graphene
* Branch 2.0 to main
Qiyuan Gong 2022-04-18 19:41:47 +08:00 committed by GitHub
parent 658a4286f2
commit 0e0fbaf3d6
2 changed files with 7 additions and 7 deletions


@@ -93,7 +93,7 @@ docker pull intelanalytics/bigdl-ppml-trusted-big-data-ml-scala-graphene:2.1.0-S
 Alternatively, you can build Docker image from Dockerfile (this will take some time):
 ```bash
-cd trusted-big-data-ml/scala/docker-graphene
+cd trusted-big-data-ml/python/docker-graphene
 ./build-docker-image.sh
 ```
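For context on what this hunk fixes, here is a minimal sketch of the corrected build workflow after the path change, assuming a fresh clone of the repository checked out on main; it only combines commands that already appear in the surrounding documentation.

```bash
# Clone BigDL (main branch) and build the trusted big data ML image
# from the corrected python/docker-graphene directory.
git clone https://github.com/intel-analytics/BigDL.git
cd BigDL/ppml/trusted-big-data-ml/python/docker-graphene
./build-docker-image.sh
```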
@@ -101,11 +101,11 @@ cd trusted-big-data-ml/scala/docker-graphene
 ##### 2.2.2.1 Start PPML Container
-Enter `BigDL/ppml/trusted-big-data-ml/scala/docker-graphene` dir.
+Enter `BigDL/ppml/trusted-big-data-ml/python/docker-graphene` dir.
 1. Copy `keys` and `password`
 ```bash
-cd trusted-big-data-ml/scala/docker-graphene
+cd trusted-big-data-ml/python/docker-graphene
 # copy keys and password into the current directory
 cp -r ../.././../scripts/keys/ .
 cp -r ../.././../scripts/password/ .
@@ -124,8 +124,8 @@ Enter `BigDL/ppml/trusted-big-data-ml/scala/docker-graphene` dir.
 ./init.sh
 ```
 **ENCLAVE_KEY_PATH** means the absolute path to the "enclave-key.pem", according to the above commands, the path would be like "BigDL/ppml/scripts/enclave-key.pem". <br>
-**DATA_PATH** means the absolute path to the data(like mnist) that would use later in the spark program. According to the above commands, the path would be like "BigDL/ppml/trusted-big-data-ml/scala/docker-graphene/mnist" <br>
-**KEYS_PATH** means the absolute path to the keys you just created and copied to. According to the above commands, the path would be like "BigDL/ppml/trusted-big-data-ml/scala/docker-graphene/keys" <br>
+**DATA_PATH** means the absolute path to the data(like mnist) that would use later in the spark program. According to the above commands, the path would be like "BigDL/ppml/trusted-big-data-ml/python/docker-graphene/mnist" <br>
+**KEYS_PATH** means the absolute path to the keys you just created and copied to. According to the above commands, the path would be like "BigDL/ppml/trusted-big-data-ml/python/docker-graphene/keys" <br>
 **LOCAL_IP** means your local IP address. <br>
 ##### 2.2.2.2 Run Your Spark Program with BigDL PPML on SGX
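For readers applying this hunk, a minimal sketch of concrete values for the four variables described above; the `/path/to/BigDL` prefix and the IP address are placeholders, and the real values depend on where the repository is checked out.

```bash
# Placeholder values only; substitute the actual checkout location and host IP.
export ENCLAVE_KEY_PATH=/path/to/BigDL/ppml/scripts/enclave-key.pem
export DATA_PATH=/path/to/BigDL/ppml/trusted-big-data-ml/python/docker-graphene/mnist
export KEYS_PATH=/path/to/BigDL/ppml/trusted-big-data-ml/python/docker-graphene/keys
export LOCAL_IP=192.168.0.112
```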
@@ -282,7 +282,7 @@ Enter `BigDL/ppml/trusted-big-data-ml/python/docker-graphene` directory.
 1. Copy `keys` and `password` to the current directory
 ```bash
-cd ppml/trusted-big-data-ml/scala/docker-graphene
+cd ppml/trusted-big-data-ml/python/docker-graphene
 # copy keys and password into the current directory
 cp -r ../keys .
 cp -r ../password .


@@ -56,7 +56,7 @@ Please ensure SGX is properly enabled, and SGX driver is installed. If not, plea
 If run in container, please modify `KEYS_PATH` to `keys/` you generated in last step in `deploy_fl_container.sh`. This dir will mount to container's `/ppml/trusted-big-data-ml/work/keys`, then modify the `privateKeyFilePath` and `certChainFilePath` in `ppml-conf.yaml` with container's absolute path. If not in container, just modify the `privateKeyFilePath` and `certChainFilePath` in `ppml-conf.yaml` with your local path. If you don't want to build tls channel with certificate, just delete the `privateKeyFilePath` and `certChainFilePath` in `ppml-conf.yaml`.
-3. Prepare dataset for FL training. For demo purposes, we have added a public dataset in [BigDL PPML Demo data](https://github.com/intel-analytics/BigDL/tree/branch-2.0/scala/ppml/demo/data). Please download these data into your local machine. Then modify `DATA_PATH` to `./data` with absolute path in your machine and your local ip in `deploy_fl_container.sh`. The `./data` path will mount to container's `/ppml/trusted-big-data-ml/work/data`, so if you don't run in container, you need to modify the data path in `runH_VflClient1_2.sh`.
+3. Prepare dataset for FL training. For demo purposes, we have added a public dataset in [BigDL PPML Demo data](https://github.com/intel-analytics/BigDL/tree/main/scala/ppml/demo/data). Please download these data into your local machine. Then modify `DATA_PATH` to `./data` with absolute path in your machine and your local ip in `deploy_fl_container.sh`. The `./data` path will mount to container's `/ppml/trusted-big-data-ml/work/data`, so if you don't run in container, you need to modify the data path in `runH_VflClient1_2.sh`.
 ### Prepare Docker Image
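As a companion to the `ppml-conf.yaml` instructions quoted above, a minimal sketch of how the two TLS fields might be filled in when running inside the container; the mount point comes from the text above, while the key and certificate file names under `keys/` are assumptions and should match the files generated earlier.

```bash
# Sketch only: server.pem / server.crt are assumed names for the generated key material.
cat >> ppml-conf.yaml <<'EOF'
privateKeyFilePath: /ppml/trusted-big-data-ml/work/keys/server.pem
certChainFilePath: /ppml/trusted-big-data-ml/work/keys/server.crt
EOF
```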