From b886ee86a7c8f9ea179fbc9d0769532529df3fc6 Mon Sep 17 00:00:00 2001
From: dding3
Date: Thu, 10 Mar 2022 09:45:28 -0800
Subject: [PATCH] add python getting started and add python freeze support
 (#4162)

---
 .../DLlib/Overview/python-getting-started.md  | 216 ++++++++++++++++++
 ...ng-started.md => scala-getting-started.md} |  84 ++++---
 2 files changed, 268 insertions(+), 32 deletions(-)
 create mode 100644 docs/readthedocs/source/doc/DLlib/Overview/python-getting-started.md
 rename docs/readthedocs/source/doc/DLlib/Overview/{getting-started.md => scala-getting-started.md} (80%)

diff --git a/docs/readthedocs/source/doc/DLlib/Overview/python-getting-started.md b/docs/readthedocs/source/doc/DLlib/Overview/python-getting-started.md
new file mode 100644
index 00000000..c03a990c
--- /dev/null
+++ b/docs/readthedocs/source/doc/DLlib/Overview/python-getting-started.md
@@ -0,0 +1,216 @@
# Python DLlib Getting Started Guide

## 1. Code initialization
`nncontext` is the main entry point for provisioning the DLlib program on the underlying cluster (such as a K8s or Hadoop cluster), or just on a single laptop.

It is recommended to initialize `nncontext` at the beginning of your program:
```
from bigdl.dllib.nncontext import *
sc = init_nncontext()
```
For more information about `nncontext`, please refer to [nncontext](https://bigdl.readthedocs.io/en/latest/doc/DLlib/Overview/dllib.html#nn-context).

## 3. Distributed Data Loading

#### Using Spark Dataframe APIs
DLlib supports Spark DataFrames as the input to distributed training, and as the input/output of distributed inference. Consequently, the user can easily process large-scale datasets using Apache Spark, and directly apply AI models on the distributed (and possibly in-memory) DataFrames without data conversion or serialization.

We create a `SQLContext` so we can use the Spark API to load and process the data:
```
spark = SQLContext(sc)
```

1. We can use the Spark API to load the data into a Spark DataFrame, e.g. read a csv file into a Spark DataFrame:
```
path = "pima-indians-diabetes.data.csv"
# Name the columns as in the Scala guide so they can be referenced later.
df = spark.read.csv(path, inferSchema=True) \
    .toDF("num_times_pregrant", "plasma_glucose", "blood_pressure", "skin_fold_thickness", "2-hour_insulin", "body_mass_index", "diabetes_pedigree_function", "age", "class")
```

If the feature column for the model is a Spark ML Vector, please assemble the related columns into a Vector and pass it to the model, e.g.:
```
from pyspark.ml.feature import VectorAssembler
from pyspark.sql.functions import col, lit
from pyspark.sql.types import DoubleType

vecAssembler = VectorAssembler(outputCol="features")
vecAssembler.setInputCols(["num_times_pregrant", "plasma_glucose", "blood_pressure", "skin_fold_thickness", "2-hour_insulin", "body_mass_index", "diabetes_pedigree_function", "age"])
assemble_df = vecAssembler.transform(df)
assemble_df = assemble_df.withColumn("label", col("class").cast(DoubleType()) + lit(1))
```

2. If the training data is images, we can use the DLlib API to load the images into a Spark DataFrame, e.g.:
```
# NNImageReader is assumed to live in bigdl.dllib.nnframes in recent BigDL releases.
from bigdl.dllib.nnframes import NNImageReader

imgPath = "cats_dogs/"
imageDF = NNImageReader.readImages(imgPath, sc)
```

It will load the images and generate the feature tensors automatically. We also need to generate the labels ourselves, e.g.:
```
# getName and getLabel are user-defined UDFs: getName extracts the file name
# from the image origin, and getLabel maps the name to a numeric label
# (similar to the createLabel UDF in the Scala getting started guide).
labelDF = imageDF.withColumn("name", getName(col("image"))) \
    .withColumn("label", getLabel(col('name')))
```

Then split the Spark DataFrame into a training part and a validation part:
```
(trainingDF, validationDF) = labelDF.randomSplit([0.9, 0.1])
```

## 4. Model Definition

#### Using Keras-like APIs

To define a model, you can use the [Keras Style API](https://bigdl.readthedocs.io/en/latest/doc/DLlib/Overview/keras-api.html).
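The `Input`, `Dense`, and `Model` classes used below come from DLlib's Keras-style API; a minimal import sketch is shown here (the exact module layout is an assumption and may differ slightly across BigDL versions):
```
# Assumed import locations for DLlib's Keras-style API;
# check your BigDL version if these paths do not resolve.
from bigdl.dllib.keras.layers import *
from bigdl.dllib.keras.models import *
```
A simple multilayer perceptron for the 8-feature diabetes data can then be defined as: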
+``` +x1 = Input(shape=[8]) +dense1 = Dense(12, activation="relu")(x1) +dense2 = Dense(8, activation="relu")(dense1) +dense3 = Dense(2)(dense2) +dmodel = Model(input=x1, output=dense3) +``` + +After creating the model, you will have to decide which loss function to use in training. + +Now you can use `compile` function of the model to set the loss function, optimization method. +``` +dmodel.compile(optimizer = "adam", loss = "sparse_categorical_crossentropy") +``` + +Now the model is built and ready to train. + +## 5. Distributed Model Training +Now you can use 'fit' begin the training, please set the label columns. Model Evaluation can be performed periodically during a training. +1. If the dataframe is generated using Spark apis, you also need set the feature columns. eg. +``` +model.fit(df, feature_cols=["features"], label_cols=["label"], batch_size=4, nb_epoch=1) +``` +Note: Above model accepts single input(column `features`) and single output(column `label`). + +If your model accepts multiple inputs(eg. column `f1`, `f2`, `f3`), please set the features as below: +``` +model.fit(df, feature_cols=["f1", "f2"], label_cols=["label"], batch_size=4, nb_epoch=1) +``` + +Similarly, if the model accepts multiple outputs(eg. column `label1`, `label2`), please set the label columns as below: +``` +model.fit(df, feature_cols=["features"], label_cols=["l1", "l2"], batch_size=4, nb_epoch=1) +``` + +2. If the dataframe is generated using DLLib `NNImageReader`, we don't need set `feature_cols`, we can set `transform` to config how to process the images before training. Eg. +``` +from bigdl.dllib.feature.image import transforms +transformers = transforms.Compose([ImageResize(50, 50), ImageMirror()]) +model.fit(image_df, label_cols=["label"], batch_size=1, nb_epoch=1, transform=transformers) +``` +For more details about how to use DLLib keras api to train image data, you may want to refer [ImageClassification](https://github.com/intel-analytics/BigDL/tree/main/python/dllib/examples/keras/image_classification.py) + +## 6. Model saving and loading +When training is finished, you may need to save the final model for later use. + +BigDL allows you to save your BigDL model on local filesystem, HDFS, or Amazon s3. +- **save** +``` +modelPath = "/tmp/demo/keras.model" +dmodel.saveModel(modelPath) +``` + +- **load** +``` +loadModel = Model.loadModel(modelPath) +preDF = loadModel.predict(df, feature_cols=["features"], prediction_col="predict") +``` + +You may want to refer [Save/Load](https://bigdl.readthedocs.io/en/latest/doc/DLlib/Overview/keras-api.html#save) + +## 7. Distributed evaluation and inference +After training finishes, you can then use the trained model for prediction or evaluation. + +- **inference** +1. For dataframe generated by Spark API, please set `feature_cols` and `prediction_col` +``` +dmodel.predict(df, feature_cols=["features"], prediction_col="predict") +``` +2. For dataframe generated by `NNImageReader`, please set `prediction_col` and you can set `transform` if needed +``` +model.predict(df, prediction_col="predict", transform=transformers) +``` + +- **evaluation** +Similary for dataframe generated by Spark API, the code is as below: +``` +dmodel.evaluate(df, batch_size=4, feature_cols=["features"], label_cols=["label"]) +``` + +For dataframe generated by `NNImageReader`: +``` +model.evaluate(image_df, batch_size=1, label_cols=["label"], transform=transformers) +``` + +## 8. Checkpointing and resuming training +You can configure periodically taking snapshots of the model. 
+``` +cpPath = "/tmp/demo/cp" +dmodel.set_checkpoint(cpPath) +``` +You can also set ```over_write``` to ```true``` to enable overwriting any existing snapshot files + +After training stops, you can resume from any saved point. Choose one of the model snapshots to resume (saved in checkpoint path, details see Checkpointing). Use Models.loadModel to load the model snapshot into an model object. +``` +loadModel = Model.loadModel(path) +``` + +## 9. Monitor your training + +- **Tensorboard** + +BigDL provides a convenient way to monitor/visualize your training progress. It writes the statistics collected during training/validation. Saved summary can be viewed via TensorBoard. + +In order to take effect, it needs to be called before fit. +``` +dmodel.set_tensorboard("./", "dllib_demo") +``` +For more details, please refer [visulization](visualization.md) + +## 10. Transfer learning and finetuning + +- **freeze and trainable** +BigDL DLLib supports exclude some layers of model from training. +``` +dmodel.freeze(layer_names) +``` +Layers that match the given names will be freezed. If a layer is freezed, its parameters(weight/bias, if exists) are not changed in training process. + +BigDL DLLib also support unFreeze operations. The parameters for the layers that match the given names will be trained(updated) in training process +``` +dmodel.unFreeze(layer_names) +``` +For more information, you may refer [freeze](freeze.md) + +## 11. Hyperparameter tuning +- **optimizer** + +DLLib supports a list of optimization methods. +For more details, please refer [optimization](optim-Methods.md) + +- **learning rate scheduler** + +DLLib supports a list of learning rate scheduler. +For more details, please refer [lr_scheduler](learningrate-Scheduler.md) + +- **batch size** + +DLLib supports set batch size during training and prediction. We can adjust the batch size to tune the model's accuracy. + +- **regularizer** + +DLLib supports a list of regularizers. +For more details, please refer [regularizer](regularizers.md) + +- **clipping** + +DLLib supports gradient clipping operations. +For more details, please refer [gradient_clip](clipping.md) + +## 12. Running program +``` +python you_app_code.py +``` diff --git a/docs/readthedocs/source/doc/DLlib/Overview/getting-started.md b/docs/readthedocs/source/doc/DLlib/Overview/scala-getting-started.md similarity index 80% rename from docs/readthedocs/source/doc/DLlib/Overview/getting-started.md rename to docs/readthedocs/source/doc/DLlib/Overview/scala-getting-started.md index 1aea760c..00c40350 100644 --- a/docs/readthedocs/source/doc/DLlib/Overview/getting-started.md +++ b/docs/readthedocs/source/doc/DLlib/Overview/scala-getting-started.md @@ -60,24 +60,46 @@ the input/output of the distributed inference. Consequently, the user can easily process large-scale dataset using Apache Spark, and directly apply AI models on the distributed (and possibly in-memory) Dataframes without data conversion or serialization -We used [Pima Indians onset of diabetes](https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.data.csv) as dataset for the demo. It's a standard machine learning dataset from the UCI Machine Learning repository. It describes patient medical record data for Pima Indians and whether they had an onset of diabetes within five years. 
## 12. Running program
```
python your_app_code.py
```

diff --git a/docs/readthedocs/source/doc/DLlib/Overview/getting-started.md b/docs/readthedocs/source/doc/DLlib/Overview/scala-getting-started.md
similarity index 80%
rename from docs/readthedocs/source/doc/DLlib/Overview/getting-started.md
rename to docs/readthedocs/source/doc/DLlib/Overview/scala-getting-started.md
index 1aea760c..00c40350 100644
--- a/docs/readthedocs/source/doc/DLlib/Overview/getting-started.md
+++ b/docs/readthedocs/source/doc/DLlib/Overview/scala-getting-started.md
@@ -60,24 +60,46 @@ the input/output of the distributed inference. Consequently, the user can easily
 process large-scale dataset using Apache Spark, and directly apply AI models on
 the distributed (and possibly in-memory) Dataframes without data conversion or serialization
 
-We used [Pima Indians onset of diabetes](https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.data.csv) as dataset for the demo. It's a standard machine learning dataset from the UCI Machine Learning repository. It describes patient medical record data for Pima Indians and whether they had an onset of diabetes within five years.
-The dataset can be download with:
-```
-wget https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.data.csv
-```
-
 We create Spark session so we can use Spark API to load and process the data
 ```
 val spark = new SQLContext(sc)
 ```
 
-Load the data into Spark DataFrame
+1. We can use the Spark API to load the data into a Spark DataFrame, e.g. read a csv file into a Spark DataFrame:
 ```
 val path = "pima-indians-diabetes.data.csv"
 val df = spark.read.options(Map("inferSchema"->"true","delimiter"->",")).csv(path)
   .toDF("num_times_pregrant", "plasma_glucose", "blood_pressure", "skin_fold_thickness", "2-hour_insulin", "body_mass_index", "diabetes_pedigree_function", "age", "class")
 ```
 
+If the feature column for the model is a Spark ML Vector, please assemble the related columns into a Vector and pass it to the model, e.g.:
+```
+val assembler = new VectorAssembler()
+  .setInputCols(Array("num_times_pregrant", "plasma_glucose", "blood_pressure", "skin_fold_thickness", "2-hour_insulin", "body_mass_index", "diabetes_pedigree_function", "age"))
+  .setOutputCol("features")
+val assembleredDF = assembler.transform(df)
+val df2 = assembleredDF.withColumn("label", col("class").cast(DoubleType) + lit(1))
+```
+
+2. If the training data is images, we can use the DLlib API to load the images into a Spark DataFrame, e.g.:
+```
+val createLabel = udf { row: Row =>
+  if (new Path(row.getString(0)).getName.contains("cat")) 1 else 2
+}
+val imagePath = "cats_dogs/"
+val imgDF = NNImageReader.readImages(imagePath, sc)
+```
+
+It will load the images and generate the feature tensors automatically. We also need to generate the labels ourselves, e.g.:
+```
+val df = imgDF.withColumn("label", createLabel(col("image")))
+```
+
+Then split the Spark DataFrame into a training part and a validation part:
+```
+val Array(trainDF, valDF) = df.randomSplit(Array(0.8, 0.2))
+```
+
 ## 4. Model Definition
 
 #### Using Keras-like APIs
@@ -95,35 +117,25 @@ After creating the model, you will have to decide which loss function to use in
 
 Now you can use `compile` function of the model to set the loss function, optimization method.
 ```
-dmodel.compile(optimizer = new Adam(),
-    loss = ClassNLLCriterion())
+dmodel.compile(optimizer = new Adam(), loss = ClassNLLCriterion())
 ```
 
 Now the model is built and ready to train.
 
 ## 5. Distributed Model Training
-Now you can use 'fit' begin the training, please set the feature columns and label columns. Model Evaluation can be performed periodically during a training.
-If the model accepts single input(eg. column `feature1`) and single output(eg. column `label`), please set the feature columns of the model as :
+Now you can use `fit` to begin the training; please set the label columns. Model evaluation can be performed periodically during training.
+1. If the dataframe is generated using Spark APIs, you also need to set the feature columns, e.g.:
 ```
-model.fit(x=dataframe, batchSize=4, nbEpoch = 2,
-    featureCols = Array("feature1"), labelCols = Array("label"))
-```
-
-If the feature column for the model is a Spark ML Vector. Please assemble related columns into a Vector and pass it to the model. eg.
-```
-val assembler = new VectorAssembler()
-  .setInputCols(Array("num_times_pregrant", "plasma_glucose", "blood_pressure", "skin_fold_thickness", "2-hour_insulin", "body_mass_index", "diabetes_pedigree_function", "age"))
-  .setOutputCol("features")
-val assembleredDF = assembler.transform(df)
-val df2 = assembleredDF.withColumn("label",col("class").cast(DoubleType) + lit(1))
+model.fit(x=trainDF, batchSize=4, nbEpoch = 2,
+    featureCols = Array("feature1"), labelCols = Array("label"), valX=valDF)
 ```
+Note: the above model accepts a single input (column `feature1`) and a single output (column `label`).
 
 If your model accepts multiple inputs(eg. column `f1`, `f2`, `f3`), please set the features as below:
 ```
 model.fit(x=dataframe, batchSize=4, nbEpoch = 2,
     featureCols = Array("f1", "f2", "f3"), labelCols = Array("label"))
 ```
-If one of the inputs is a Spark ML Vector, please assemble it before pass the data to the model.
 
 Similarly, if the model accepts multiple outputs(eg. column `label1`, `label2`), please set the label columns as below:
 ```
@@ -131,17 +143,14 @@ model.fit(x=dataframe, batchSize=4, nbEpoch = 2,
     featureCols = Array("f1", "f2", "f3"), labelCols = Array("label1", "label2"))
 ```
 
-Then split it into traing part and validation part
+2. If the dataframe is generated using the DLlib `NNImageReader`, we don't need to set `featureCols`; instead we can set `transform` to configure how to process the images before training, e.g.:
 ```
-val Array(trainDF, valDF) = df2.randomSplit(Array(0.8, 0.2))
-```
-
-The model is ready to train.
-```
-dmodel.fit(x=trainDF, batchSize=4, nbEpoch = 2,
-    featureCols = Array("features"), labelCols = Array("label"), valX = valDF
-)
+val transformers = transforms.Compose(Array(ImageResize(50, 50),
+  ImageMirror()))
+model.fit(x=dataframe, batchSize=4, nbEpoch = 2,
+    labelCols = Array("label"), transform = transformers)
 ```
+For more details about how to use the DLlib Keras API to train image data, you may want to refer to [ImageClassification](https://github.com/intel-analytics/BigDL/blob/main/scala/dllib/src/main/scala/com/intel/analytics/bigdl/dllib/example/keras/ImageClassification.scala)
 
 ## 6. Model saving and loading
 When training is finished, you may need to save the final model for later use.
@@ -166,16 +175,27 @@ You may want to refer [Save/Load](https://bigdl.readthedocs.io/en/latest/doc/DLl
 After training finishes, you can then use the trained model for prediction or evaluation.
 
 - **inference**
+1. For a dataframe generated by the Spark API, please set `featureCols`:
 ```
 dmodel.predict(trainDF, featureCols = Array("features"), predictionCol = "predict")
 ```
+2. For a dataframe generated by `NNImageReader`, there is no need to set `featureCols`, and you can set `transform` if needed:
+```
+model.predict(imgDF, predictionCol = "predict", transform = transformers)
+```
 
 - **evaluation**
+Similarly, for a dataframe generated by the Spark API, the code is as below:
 ```
 dmodel.evaluate(trainDF, batchSize = 4, featureCols = Array("features"), labelCols = Array("label"))
 ```
 
+For a dataframe generated by `NNImageReader`:
+```
+model.evaluate(imgDF, batchSize = 1, labelCols = Array("label"), transform = transformers)
+```
+
 ## 8. Checkpointing and resuming training
 You can configure periodically taking snapshots of the model.
 ```