* add preTopology to boost RNN performance
* require that only one Cell be added
* revise the check logic: modules.length == 1 || modules.length == 2
* add Javadoc for preTopology in Cell and Recurrent
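For context, a minimal sketch of the single-Cell usage the commits above enforce. This assumes the BigDL Scala API of this period; `preTopology` is a field of `Cell` as the commits describe, and the hoisting behavior noted in the comment is the stated intent, not a verified implementation detail.

```scala
import com.intel.analytics.bigdl.nn.{LSTM, Recurrent, Sequential}
import com.intel.analytics.bigdl.numeric.NumericFloat
import com.intel.analytics.bigdl.tensor.Tensor

// A Recurrent container holds exactly one Cell; the cell's preTopology
// (e.g. the input-to-hidden projection) can then be hoisted out of the
// per-time-step loop and applied to the whole sequence at once.
val model = Sequential()
  .add(Recurrent()
    .add(LSTM(inputSize = 10, hiddenSize = 20)))

val input = Tensor(4, 5, 10).rand() // batch x time x feature
val output = model.forward(input)
```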
* Allow layer regularizer to be set after model definition
* add regularizer support for CAdd and CMul
* meet code review
* add Python set API; add regularizer to Python CMul and CAdd
* add test for setBRegularizer
* add parameters to the create method, add doc
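A hedged sketch of what the regularizer-in-create-method change enables; the `wRegularizer`/`bRegularizer` parameter names follow BigDL's `Linear` factory, introduced by the commits in this log:

```scala
import com.intel.analytics.bigdl.nn.Linear
import com.intel.analytics.bigdl.numeric.NumericFloat
import com.intel.analytics.bigdl.optim.{L1Regularizer, L2Regularizer}

// Layerwise regularization: independent penalties for the weight and the
// bias, passed directly to the layer's create method.
val fc = Linear(
  inputSize = 128,
  outputSize = 10,
  wRegularizer = L2Regularizer(1e-4),
  bRegularizer = L1Regularizer(1e-4))
```

The `setBRegularizer` test above covers the complementary path: attaching a regularizer after the model is defined.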
* combination of 5 commits:
  * add Regularizer to PythonBigDL.scala and optimizer.py
  * update regularizer in layer.py
  * some fixes
  * further fixes
  * adjust according to the test
* Resolve conflicts; fix RNN; pass compilation; fix the mismatch in the parameter list
* fix style
* nn refactor
* fix code style issue
* change back the layers
* code refactor
* change tests to automated tests
* add more tests for model save
* remove some useless unit tests
* add more save tests
* add more writer tests
* RNN test case automation
* refine save tests
* refine save unit tests
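The save tests above boil down to a round trip like the following sketch. The path is a placeholder, and `save`, `Module.load`, and `almostEqual` are assumed from BigDL's Scala API of this period:

```scala
import com.intel.analytics.bigdl.nn.{Linear, Module, Sequential}
import com.intel.analytics.bigdl.numeric.NumericFloat
import com.intel.analytics.bigdl.tensor.Tensor

// Write the module, read it back, and check both copies agree on one input.
val model = Sequential().add(Linear(4, 2))
val path = "/tmp/linear.bigdl" // placeholder path
model.save(path, overWrite = true)

val loaded = Module.load[Float](path)
val input = Tensor(1, 4).rand()
val expected = model.forward(input).toTensor[Float]
val actual = loaded.forward(input).toTensor[Float]
assert(expected.almostEqual(actual, 1e-6))
```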
* remove NHWC
* meet code review
* use MulConst in MulTF
* add a flatten node for TF 1.1
* fix code style and failed unit test
* move TF model layers to another package
* add a Python example showing how to define a model in TensorFlow and run it in BigDL
* move the TF example Python code to the example folder
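The commit above adds the Python example; a Scala counterpart (kept in Scala for consistency with the other sketches here) would look roughly like this. `Module.loadTF`, the pb path, and the node names are assumptions:

```scala
import com.intel.analytics.bigdl.nn.Module
import com.intel.analytics.bigdl.tensor.Tensor

// Load a frozen TensorFlow GraphDef and run it as a BigDL module.
val model = Module.loadTF[Float]("/tmp/model.pb", Seq("input"), Seq("output"))
val result = model.forward(Tensor[Float](1, 28, 28, 1).rand())
```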
* Add backward test in LeNet and AlexNet (#19)
* add unit test of lenet backward
* add some print statements
* add backward test in lenet and alexnet
* separate testModel into forward and backward methods
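The forward/backward split in the tests above amounts to exercising the two passes separately; a generic sketch, assuming BigDL's module API, with a toy model standing in for LeNet/AlexNet:

```scala
import com.intel.analytics.bigdl.nn.{Linear, ReLU, Sequential}
import com.intel.analytics.bigdl.numeric.NumericFloat
import com.intel.analytics.bigdl.tensor.Tensor

// Forward and backward exercised separately, as in the split testModel.
val model = Sequential().add(Linear(10, 4)).add(ReLU())
val input = Tensor(2, 10).rand()

val output = model.forward(input).toTensor[Float]
val gradOutput = Tensor(2, 4).rand()
val gradInput = model.backward(input, gradOutput)
```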
* use set instead of copy for output and gradInput; define a private method
* revise set method
* add condition before set method
* grammar revision
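The set-versus-copy commits above rely on the Torch-style distinction sketched here, assuming BigDL's `Tensor` API: `set` repoints a tensor at another tensor's storage, while `copy` fills a separately allocated buffer.

```scala
import com.intel.analytics.bigdl.tensor.Tensor

val input = Tensor[Float](2, 3).rand()

// `set` shares storage: no allocation, and writes to input are visible here.
val shared = Tensor[Float]().set(input)

// `copy` duplicates: its own buffer, unaffected by later writes to input.
val copied = Tensor[Float]().resizeAs(input).copy(input)

input.fill(0f)
// shared now reads all zeros; copied still holds the original values.
```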
* set cellAppendStartIdx to cells.length
* GRU: use CAddTable(false) as the last layer
* share memory in CAddTable for LSTM, GRU, SimpleRNN; share memory for Reverse layer
* revise Javadoc
* revert GRU
* set the default inPlace value to false in the Reverse layer
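A hedged sketch of the memory sharing these commits toggle; the `inplace` flag on `CAddTable` follows BigDL's constructor, but treat the exact behavior noted in the comment as an assumption:

```scala
import com.intel.analytics.bigdl.nn.CAddTable
import com.intel.analytics.bigdl.numeric.NumericFloat
import com.intel.analytics.bigdl.tensor.Tensor
import com.intel.analytics.bigdl.utils.T

// With inplace = true the sum is written into the first input's storage;
// with inplace = false (as reverted for GRU's last layer) the inputs are
// left untouched and a fresh output tensor is allocated.
val add = CAddTable(inplace = true)
val a = Tensor(2, 2).rand()
val b = Tensor(2, 2).rand()
val sum = add.forward(T(a, b)) // sum shares storage with a
```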
* make Linear layerwise regularization work
* add regularizer option to layers with weights; fix a bug
* new version
* include accRegularization in accGradParameters
* meet code review
* add layerwise regularization performance tests
* fix to meet code review
* make Regularizer.isRegularized private and optimize L1 regularization implementation
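A plain-Scala sketch (deliberately not the BigDL Tensor API) of the core idea behind the L1 change: fold the subgradient lambda * sign(W) into gradient accumulation rather than materializing a separate penalty tensor.

```scala
// gradW += lambda * sign(W), accumulated in place during accGradParameters.
def accL1Regularization(lambda: Float,
                        weight: Array[Float],
                        gradWeight: Array[Float]): Unit = {
  if (lambda != 0f) {
    var i = 0
    while (i < weight.length) {
      gradWeight(i) += lambda * math.signum(weight(i))
      i += 1
    }
  }
}
```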
* fix failed Jenkins tests
* change the L1 implementation
* improve layerwise regularization performance test
* delete layerwise regularization performance test