[PPML] Remove XGBoost from PPML guide
parent 3a19ebbfbf
commit 528ff064f5
1 changed file with 2 additions and 98 deletions
@@ -579,103 +579,7 @@ The result should look something like this:

> 2021-06-18 01:46:20 INFO DistriOptimizer$:180 - [Epoch 2 60032/60000][Iteration 938][Wall Clock 845.747782s] Top1Accuracy is Accuracy(correct: 9696, count: 10000, accuracy: 0.9696)

##### 2.3.2.3.7 Run Trusted Spark XGBoost Regressor

This example shows how to run a trusted Spark XGBoost Regressor.

First, make sure that `Boston_Housing.csv` is in the `work/data` directory, or at the path referenced in `start-spark-local-xgboost-regressor-sgx.sh`. In the script, replace the value of `RABIT_TRACKER_IP` with your own IP address.
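
If you prefer not to edit the script by hand, the address can be substituted from the shell before launching. The snippet below is only a sketch: it assumes the script assigns `RABIT_TRACKER_IP` on a single `export RABIT_TRACKER_IP=...` line, which may not match your copy, so check the script first.

```bash
# Pick this host's first IPv4 address (adjust if the machine has several interfaces).
MY_IP=$(hostname -I | awk '{print $1}')

# Rewrite the RABIT_TRACKER_IP assignment in place.
# Assumption: the script contains a single line of the form `export RABIT_TRACKER_IP=...`.
sed -i "s|^export RABIT_TRACKER_IP=.*|export RABIT_TRACKER_IP=${MY_IP}|" \
    work/start-scripts/start-spark-local-xgboost-regressor-sgx.sh
```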

Run the script to start the trusted Spark XGBoost Regressor; it can take some time for the final results to appear:

```bash
bash work/start-scripts/start-spark-local-xgboost-regressor-sgx.sh
```
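
While the script runs, you can follow its progress from another terminal. A possible way, assuming the same container name and log file used in the check below:

```bash
# Stream the regressor log as the SGX job writes it (Ctrl+C stops following, not the job).
sudo docker exec -it spark-local \
    tail -f /ppml/trusted-big-data-ml/test-bigdl-xgboost-regressor-sgx.log
```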

Open another terminal and check the log:

```bash
sudo docker exec -it spark-local cat /ppml/trusted-big-data-ml/test-bigdl-xgboost-regressor-sgx.log | egrep "prediction" -A19
```

The result should look something like this:

> | features|label| prediction|
> +--------------------+-----+------------------+
> |[41.5292,0.0,18.1...| 8.5| 8.51994514465332|
> |[67.9208,0.0,18.1...| 5.0| 5.720333099365234|
> |[20.7162,0.0,18.1...| 11.9|10.601168632507324|
> |[11.9511,0.0,18.1...| 27.9| 26.19390106201172|
> |[7.40389,0.0,18.1...| 17.2|16.112293243408203|
> |[14.4383,0.0,18.1...| 27.5|25.952226638793945|
> |[51.1358,0.0,18.1...| 15.0| 14.67484188079834|
> |[14.0507,0.0,18.1...| 17.2|16.112293243408203|
> |[18.811,0.0,18.1,...| 17.9| 17.42863655090332|
> |[28.6558,0.0,18.1...| 16.3| 16.0191593170166|
> |[45.7461,0.0,18.1...| 7.0| 5.300708770751953|
> |[18.0846,0.0,18.1...| 7.2| 6.346951007843018|
> |[10.8342,0.0,18.1...| 7.5| 6.571983814239502|
> |[25.9406,0.0,18.1...| 10.4|10.235769271850586|
> |[73.5341,0.0,18.1...| 8.8| 8.460335731506348|
> |[11.8123,0.0,18.1...| 8.4| 9.193297386169434|
> |[11.0874,0.0,18.1...| 16.7|16.174896240234375|
> |[7.02259,0.0,18.1...| 14.2| 13.38729190826416|

##### 2.3.2.3.8 Run Trusted Spark XGBoost Classifier

This example shows how to run a trusted Spark XGBoost Classifier.

Before running the example, download the sample [pima-indians-diabetes](https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.data.csv) dataset. After downloading, make sure that `pima-indians-diabetes.data.csv` is in the `work/data` directory, or at the path referenced in `start-spark-local-xgboost-classifier-sgx.sh`. In the script, replace `path_of_pima_indians_diabetes_csv` with your path to `pima-indians-diabetes.data.csv`, and replace the value of `RABIT_TRACKER_IP` with your own IP address.
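
For reference, one way to fetch the dataset into the expected location; this is a minimal sketch that assumes the `work/data` layout described above:

```bash
# Download the sample dataset into the directory the script expects.
mkdir -p work/data
wget -O work/data/pima-indians-diabetes.data.csv \
    https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.data.csv
```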

Run the script to start the trusted Spark XGBoost Classifier; it can take some time for the final results to appear:

```bash
bash start-spark-local-xgboost-classifier-sgx.sh
```

Open another terminal and check the log:

```bash
sudo docker exec -it spark-local cat /ppml/trusted-big-data-ml/test-xgboost-classifier-sgx.log | egrep "prediction" -A7
```

The result should look something like this:

> | f1| f2| f3| f4| f5| f6| f7| f8|label| rawPrediction| probability|prediction|
> +----+-----+----+----+-----+----+-----+----+-----+--------------------+--------------------+----------+
> |11.0|138.0|74.0|26.0|144.0|36.1|0.557|50.0| 1.0|[-0.8209581375122...|[0.17904186248779...| 1.0|
> | 3.0|106.0|72.0| 0.0| 0.0|25.8|0.207|27.0| 0.0|[-0.0427864193916...|[0.95721358060836...| 0.0|
> | 6.0|117.0|96.0| 0.0| 0.0|28.7|0.157|30.0| 0.0|[-0.2336160838603...|[0.76638391613960...| 0.0|
> | 2.0| 68.0|62.0|13.0| 15.0|20.1|0.257|23.0| 0.0|[-0.0315906107425...|[0.96840938925743...| 0.0|
> | 9.0|112.0|82.0|24.0| 0.0|28.2|1.282|50.0| 1.0|[-0.7087597250938...|[0.29124027490615...| 1.0|
> | 0.0|119.0| 0.0| 0.0| 0.0|32.4|0.141|24.0| 1.0|[-0.4473398327827...|[0.55266016721725...| 0.0|

##### 2.3.2.3.9 Run Trusted Spark Orca Data
##### 2.3.2.3.7 Run Trusted Spark Orca Data

This example shows how to run trusted Spark Orca Data.

@@ -745,7 +649,7 @@ The result should contain the content look like this:

> Stopping orca context

##### 2.3.2.3.10 Run Trusted Spark Orca Learn Tensorflow Basic Text Classification
##### 2.3.2.3.8 Run Trusted Spark Orca Learn Tensorflow Basic Text Classification

This example shows how to run trusted Spark Orca Learn TensorFlow basic text classification.