update orca automl doc with bigdl (#3297)

* update orca automl doc

* typo
Yu Shan 2021-10-27 14:25:29 +08:00 committed by GitHub
parent 0d0cff81fd
commit cda4102745


@@ -68,12 +68,12 @@ Note that the `optimizer` argument in Pytorch `AutoEstimator` constructor could
#### **2.4 Create and Fit Pytorch AutoEstimator**
Users can create a Pytorch `AutoEstimator` as below.
```python
-from zoo.orca.automl.auto_estimator import AutoEstimator
+from bigdl.orca.automl.auto_estimator import AutoEstimator
auto_est = AutoEstimator.from_torch(model_creator=model_creator,
                                    optimizer=optim_creator,
                                    loss=nn.NLLLoss(),
-                                   logs_dir="/tmp/zoo_automl_logs",
+                                   logs_dir="/tmp/orca_automl_logs",
                                    resources_per_trial={"cpu": 2},
                                    name="lenet_mnist")
```
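Not part of this diff, but for orientation: a minimal sketch of how the created `AutoEstimator` might then be fitted, assuming the `fit` arguments (`data`, `epochs`, `metric`, `search_space`, `n_sampling`) behave as in the Orca AutoML API; the search-space keys and data names are illustrative.
```python
from bigdl.orca.automl import hp

# Illustrative search space; keys must match what model_creator and
# optim_creator read from their `config` argument.
search_space = {
    "lr": hp.choice([0.001, 0.003, 0.01]),
    "batch_size": hp.choice([32, 64]),
}

auto_est.fit(data=(x_train, y_train),          # e.g. numpy ndarrays
             validation_data=(x_val, y_val),
             epochs=2,
             metric="accuracy",
             search_space=search_space,
             n_sampling=4)                     # number of trials to sample
```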
@@ -91,7 +91,7 @@ Finally, user can get the best learned model and the best hyper-parameters for f
best_model = auto_est.get_best_model() # a `torch.nn.Module` object
best_config = auto_est.get_best_config() # a dictionary of hyper-parameter names and values.
```
-View the related [Python API doc](https://analytics-zoo.readthedocs.io/en/latest/doc/PythonAPI/AutoML/automl.html#orca-automl-auto-estimator) for more details.
+View the related [Python API doc](https://bigdl.readthedocs.io/en/latest/doc/PythonAPI/AutoML/automl.html#orca-automl-auto-estimator) for more details.
### **3. TensorFlow/Keras AutoEstimator**
Users can create an `AutoEstimator` for TensorFlow Keras from a `tf.keras` model (using a *Model Creator Function*). For example:
@@ -107,7 +107,7 @@ def model_creator(config):
    return model

auto_est = AutoEstimator.from_keras(model_creator=model_creator,
-                                   logs_dir="/tmp/zoo_automl_logs",
+                                   logs_dir="/tmp/orca_automl_logs",
                                    resources_per_trial={"cpu": 2},
                                    name="auto_keras")
```
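For reference (the diff only shows the tail of the *Model Creator Function*), a minimal self-contained sketch of what such a creator might look like, assuming hyper-parameters are drawn from the sampled `config` dict; the layer sizes and config keys are illustrative.
```python
import tensorflow as tf

def model_creator(config):
    # Hyper-parameters are read from the sampled `config` dict
    # (key names here are illustrative).
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(config["hidden_size"], activation="relu"),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(config["lr"]),
                  loss="mse",
                  metrics=["mse"])
    return model
```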
@@ -128,20 +128,20 @@ Finally, user can get the best learned model and the best hyper-parameters for f
best_model = auto_est.get_best_model() # a `tf.keras` model object
best_config = auto_est.get_best_config() # a dictionary of hyper-parameter names and values.
```
-View the related [Python API doc](https://analytics-zoo.readthedocs.io/en/latest/doc/PythonAPI/AutoML/automl.html#orca-automl-auto-estimator) for more details.
+View the related [Python API doc](https://bigdl.readthedocs.io/en/latest/doc/PythonAPI/AutoML/automl.html#orca-automl-auto-estimator) for more details.
### **4. Search Space and Search Algorithms**
For Hyper-parameter Optimization, users should define the search space of hyper-parameter values for neural network training, as well as how to search through the chosen space.
#### **4.1 Basic Search Algorithms**
-For basic search algorithms like **Grid Search** and **Random Search**, we provide several sampling functions with `automl.hp`. See [API doc](https://analytics-zoo.readthedocs.io/en/latest/doc/PythonAPI/AutoML/automl.html#orca-automl-hp) for more details.
+For basic search algorithms like **Grid Search** and **Random Search**, we provide several sampling functions with `automl.hp`. See [API doc](https://bigdl.readthedocs.io/en/latest/doc/PythonAPI/AutoML/automl.html#orca-automl-hp) for more details.
`AutoEstimator` requires a dictionary for the `search_space` argument in `fit`.
In the dictionary, the keys are the hyper-parameter names, and the values specify how to sample the search spaces for the hyper-parameters.
```python
-from zoo.orca.automl import hp
+from bigdl.orca.automl import hp
search_space = {
    "fc1_hidden_size": hp.grid_search([500, 600]),
@@ -163,7 +163,7 @@ pip install bayesian-optimization
Then pass the search algorithm name to `search_alg` in `AutoEstimator.fit`.
```python
-from zoo.orca.automl import hp
+from bigdl.orca.automl import hp
search_space = {
    "width": hp.uniform(0, 20),
@@ -178,7 +178,7 @@ auto_estimator.fit(
    search_alg="bayesopt",
)
```
-See [API Doc](https://analytics-zoo.readthedocs.io/en/latest/doc/PythonAPI/AutoML/automl.html#orca-automl-auto-estimator) for more details.
+See [API Doc](https://bigdl.readthedocs.io/en/latest/doc/PythonAPI/AutoML/automl.html#orca-automl-auto-estimator) for more details.
### **5. Scheduler**
*Scheduler* can stop/pause/tweak the hyper-parameters of running trials, making the hyper-parameter tuning process much more efficient.
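A minimal sketch of enabling a scheduler, assuming `AutoEstimator.fit` accepts `scheduler` and `scheduler_params` arguments as in the Orca AutoML API; the scheduler name and parameter values are illustrative.
```python
auto_estimator.fit(
    data=train_data,                 # illustrative data handle
    epochs=10,
    metric="mse",
    search_space=search_space,
    # Assumed API: a scheduler is selected by name, with its options
    # passed through scheduler_params (values here are illustrative).
    scheduler="AsyncHyperBand",
    scheduler_params=dict(max_t=50, grace_period=1, reduction_factor=3),
)
```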