# DLlib Python Getting Started Guide
## 1. Code initialization
`nncontext` is the main entry point for provisioning the DLlib program on the underlying cluster (such as a K8s or Hadoop cluster), or just on a single laptop.

It is recommended to initialize `nncontext` at the beginning of your program:

```
from bigdl.dllib.nncontext import *

sc = init_nncontext()
```

For more information about `nncontext`, please refer to [nncontext](../Overview/dllib.md#initialize-nn-context).
## 2. Distributed data loading
#### Using Spark DataFrame APIs

DLlib supports Spark DataFrames as the input to the distributed training, and as the input/output of the distributed inference. Consequently, the user can easily process large-scale datasets using Apache Spark, and directly apply AI models on the distributed (and possibly in-memory) DataFrames without data conversion or serialization.

We first create an SQLContext so that we can use the Spark API to load and process the data:

```
from pyspark.sql import SQLContext

spark = SQLContext(sc)
```
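Note: on recent Spark versions you can equivalently obtain a session via the standard `SparkSession.builder.getOrCreate()`; the examples below only rely on ordinary DataFrame operations.
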
1. We can use the Spark API to load the data into a Spark DataFrame, e.g. to read a CSV file into a Spark DataFrame:

```
path = "pima-indians-diabetes.data.csv"

df = spark.read.csv(path)
```
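If your CSV file has a header row, or you want Spark to infer the column types, you can pass the standard reader options, e.g. `spark.read.csv(path, header=True, inferSchema=True)`.
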
If the model expects its feature column to be a Spark ML Vector, please assemble the related columns into a Vector and pass it to the model, e.g.:

```
from pyspark.ml.feature import VectorAssembler
from pyspark.sql.functions import col, lit
from pyspark.sql.types import DoubleType

vecAssembler = VectorAssembler(outputCol="features")
vecAssembler.setInputCols(["num_times_pregnant", "plasma_glucose", "blood_pressure", "skin_fold_thickness", "2-hour_insulin", "body_mass_index", "diabetes_pedigree_function", "age"])
assemble_df = vecAssembler.transform(df)
assemble_df = assemble_df.withColumn("label", col("class").cast(DoubleType()) + lit(1))
```
2. If the training data are images, we can use the DLlib API to load images into a Spark DataFrame, e.g.:

```
# NNImageReader is part of DLlib's NNFrames support
from bigdl.dllib.nnframes import NNImageReader

imgPath = "cats_dogs/"

imageDF = NNImageReader.readImages(imgPath, sc)
```
It will load the images and generate the feature tensors automatically. We also need to generate the labels ourselves, e.g.:

```
labelDF = imageDF.withColumn("name", getName(col("image"))) \
                 .withColumn("label", getLabel(col("name")))
```
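`getName` and `getLabel` above are user-defined functions rather than part of DLlib. A minimal sketch of what they might look like for a cats/dogs dataset (assuming the image struct's first field is its origin URI, and that labels are encoded as 1.0/2.0):

```
from pyspark.sql.functions import udf
from pyspark.sql.types import StringType, DoubleType

# Hypothetical UDFs: extract the file name from the image's origin URI,
# then map the name prefix to a numeric label (cat -> 1.0, dog -> 2.0).
getName = udf(lambda row: row[0].split("/")[-1], StringType())
getLabel = udf(lambda name: 1.0 if name.startswith("cat") else 2.0, DoubleType())
```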
Then split the Spark DataFrame into a training part and a validation part:

```
(trainingDF, validationDF) = labelDF.randomSplit([0.9, 0.1])
```
## 3. Model definition
#### Using Keras-like APIs
To define a model, you can use the [Keras Style API](../Overview/keras-api.md):

```
# Keras-like layers and models live in DLlib's keras package
from bigdl.dllib.keras.layers import Input, Dense
from bigdl.dllib.keras.models import Model

x1 = Input(shape=[8])
dense1 = Dense(12, activation="relu")(x1)
dense2 = Dense(8, activation="relu")(dense1)
dense3 = Dense(2)(dense2)

dmodel = Model(input=x1, output=dense3)
```
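Equivalently, for a plain stack of layers like this one, you could use the Sequential API; a minimal sketch, assuming `Sequential` is exported from the same keras-style models module:

```
from bigdl.dllib.keras.models import Sequential

# The same three-layer network, built sequentially.
smodel = Sequential()
smodel.add(Dense(12, activation="relu", input_shape=(8,)))
smodel.add(Dense(8, activation="relu"))
smodel.add(Dense(2))
```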
After creating the model, you will have to decide which loss function to use in training.

Now you can use the model's `compile` function to set the loss function and the optimization method:

```
dmodel.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```
Now the model is built and ready to train.
## 4. Distributed model training

Now you can use `fit` to begin the training; please remember to set the label columns. Model evaluation can be performed periodically during training.

1. If the dataframe is generated using the Spark APIs, you also need to set the feature columns, e.g.:
```
dmodel.fit(df, feature_cols=["features"], label_cols=["label"], batch_size=4, nb_epoch=1)
```

Note: the above model accepts a single input (column `features`) and a single output (column `label`).

If your model accepts multiple inputs (e.g. columns `f1` and `f2`), please set the feature columns as below:
```
model.fit(df, feature_cols=["f1", "f2"], label_cols=["label"], batch_size=4, nb_epoch=1)
```
Similarly, if the model accepts multiple outputs (e.g. columns `l1` and `l2`), please set the label columns as below:
```
model.fit(df, feature_cols=["features"], label_cols=["l1", "l2"], batch_size=4, nb_epoch=1)
```
2. If the dataframe is generated using the DLlib `NNImageReader`, we don't need to set `feature_cols`; instead, we can set `transform` to configure how the images are processed before training, e.g.:
```
from bigdl.dllib.feature.image import transforms
# ImageResize and ImageMirror are image preprocessors from the same image feature package

transformers = transforms.Compose([ImageResize(50, 50), ImageMirror()])

model.fit(image_df, label_cols=["label"], batch_size=1, nb_epoch=1, transform=transformers)
```

For more details about how to use the DLlib Keras API to train image data, you may refer to [ImageClassification](https://github.com/intel-analytics/BigDL/tree/main/python/dllib/examples/keras/image_classification.py).
## 5. Model saving and loading
When training is finished, you may need to save the final model for later use.

BigDL allows you to save your BigDL model on the local filesystem, HDFS, or Amazon S3.

- **save**

```
modelPath = "/tmp/demo/keras.model"

dmodel.saveModel(modelPath)
```
- **load**

```
loadModel = Model.loadModel(modelPath)

preDF = loadModel.predict(df, feature_cols=["features"], prediction_col="predict")
```
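`predict` returns a new DataFrame with the prediction column appended, so the results can be inspected with ordinary Spark operations, e.g.:

```
# Peek at a few predictions using the standard Spark DataFrame API.
preDF.select("predict").show(5)
```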
You may want to refer to [Save/Load](../Overview/keras-api.md#save).
## 6. Distributed evaluation and inference
After training finishes, you can then use the trained model for prediction or evaluation.
- **inference**

1. For a dataframe generated by the Spark API, please set `feature_cols` and `prediction_col`:
```
dmodel.predict(df, feature_cols=["features"], prediction_col="predict")
```
2. For a dataframe generated by `NNImageReader`, please set `prediction_col`, and you can set `transform` if needed:

```
model.predict(image_df, prediction_col="predict", transform=transformers)
```
- **evaluation**

Similarly, for a dataframe generated by the Spark API, the code is as below:
```
dmodel.evaluate(df, batch_size=4, feature_cols=["features"], label_cols=["label"])
```
For a dataframe generated by `NNImageReader`:
```
model.evaluate(image_df, batch_size=1, label_cols=["label"], transform=transformers)
```
## 7. Checkpointing and resuming training
You can configure the training to periodically take snapshots of the model:

```
cpPath = "/tmp/demo/cp"

dmodel.set_checkpoint(cpPath)
```
You can also set `over_write` to `True` to enable overwriting any existing snapshot files, as in the sketch below.

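A minimal sketch, assuming `over_write` is accepted as a keyword argument of `set_checkpoint`:

```
# Overwrite any existing snapshot files in the checkpoint path.
dmodel.set_checkpoint(cpPath, over_write=True)
```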
After training stops, you can resume from any saved point: choose one of the model snapshots (saved in the checkpoint path; see Checkpointing above for details) and use `Model.loadModel` to load the snapshot into a model object:

```
loadModel = Model.loadModel(path)
```
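To actually continue training from the snapshot, you can compile the loaded model and call `fit` again; a minimal sketch reusing the settings from the earlier sections:

```
# Recompile the loaded snapshot, then resume training on the same data.
loadModel.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
loadModel.fit(df, feature_cols=["features"], label_cols=["label"], batch_size=4, nb_epoch=1)
```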
## 8. Monitor your training
- **TensorBoard**

BigDL provides a convenient way to monitor/visualize your training progress. It writes out the statistics collected during training/validation, and the saved summaries can be viewed via TensorBoard.

To take effect, `set_tensorboard` needs to be called before `fit`:

```
dmodel.set_tensorboard("./", "dllib_demo")
```
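Once some summaries have been written, you can typically view them by pointing TensorBoard at the log location, e.g. `tensorboard --logdir ./dllib_demo` (assuming summaries are stored under `<log_dir>/<app_name>`).
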
For more details, please refer to [visualization](../Overview/visualization.md).
## 9. Transfer learning and fine-tuning

- **freeze and trainable**

BigDL DLlib supports excluding some layers of the model from training:

```
dmodel.freeze(layer_names)
```
Layers that match the given names will be frozen. If a layer is frozen, its parameters (weight/bias, if they exist) are not updated during training.

BigDL DLlib also supports the unFreeze operation: the parameters of the layers that match the given names will be trained (updated) again during training.

```
dmodel.unFreeze(layer_names)
```
For more information, you may refer to [freeze](../../PythonAPI/DLlib/freeze.md).

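For example, with the model defined in section 3 (the layer names below are hypothetical; substitute the actual names of your own layers):

```
# Hypothetical layer names: freeze two layers, then make one trainable again.
dmodel.freeze(["dense1", "dense2"])
dmodel.unFreeze(["dense2"])
```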
## 10. Hyperparameter tuning
- **optimizer**

DLlib supports a list of optimization methods.

For more details, please refer to [optimization](../../PythonAPI/DLlib/optim-Methods.md). A combined usage sketch follows this list.
- **learning rate scheduler**

DLlib supports a list of learning rate schedulers.

For more details, please refer to [lr_scheduler](../../PythonAPI/DLlib/learningrate-Scheduler.md).
- **batch size**

DLlib supports setting the batch size during training and prediction. You can adjust the batch size to tune the model's accuracy.
- **regularizer**

DLlib supports a list of regularizers.

For more details, please refer to [regularizer](../../PythonAPI/DLlib/regularizers.md).
- **clipping**

DLlib supports gradient clipping operations.

For more details, please refer to [gradient_clip](../../PythonAPI/DLlib/clipping.md).

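Putting these knobs together, a sketch of tuning the optimizer, learning rate, and batch size; it assumes `Adam` is importable from the DLlib optimizer module with a `learningrate` argument, which you should verify against the API docs linked above:

```
# Assumed API: the Adam optimization method with an explicit learning rate.
from bigdl.dllib.optim.optimizer import Adam

optim = Adam(learningrate=1e-3)

dmodel.compile(optimizer=optim, loss="sparse_categorical_crossentropy")
# The batch size is simply a `fit` argument, as in the earlier sections.
dmodel.fit(df, feature_cols=["features"], label_cols=["label"], batch_size=8, nb_epoch=2)
```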
## 11. Running the program

You can run your program as a normal Python script:

```
python your_app_code.py
```