From b622d600487caa7d2ab686bdae7037b76465ac13 Mon Sep 17 00:00:00 2001
From: Yu Shan
Date: Thu, 28 Oct 2021 14:45:16 +0800
Subject: [PATCH] Add installation guide in Orca AutoML Doc (#3304)

* add install guide in orca.automl doc

* typo

* update

* add sklearn and tensorboard
---
 .../doc/Orca/Overview/distributed-tuning.md | 21 ++++++++++++++++++-
 1 file changed, 20 insertions(+), 1 deletion(-)

diff --git a/docs/readthedocs/source/doc/Orca/Overview/distributed-tuning.md b/docs/readthedocs/source/doc/Orca/Overview/distributed-tuning.md
index d7869ab6..173f61f0 100644
--- a/docs/readthedocs/source/doc/Orca/Overview/distributed-tuning.md
+++ b/docs/readthedocs/source/doc/Orca/Overview/distributed-tuning.md
@@ -4,6 +4,25 @@
 
 **Orca `AutoEstimator` provides similar APIs as Orca `Estimator` for distributed hyper-parameter tuning.**
 
+### **Install**
+We recommend using [conda](https://docs.conda.io/projects/conda/en/latest/user-guide/install/) to prepare the Python environment.
+```bash
+conda create -n bigdl-orca-automl python=3.7  # "bigdl-orca-automl" is the conda environment name; you can use any name you like.
+conda activate bigdl-orca-automl
+pip install bigdl-orca[automl]
+```
+You can install the latest nightly build of BigDL Orca as follows (`--pre` lets `pip` pick up pre-release versions):
+```bash
+pip install --pre --upgrade bigdl-orca[automl]
+```
+_Note that with the extra key `[automl]`, `pip` will automatically install the additional dependencies for distributed hyper-parameter tuning,
+including `ray[tune]==1.2.0`, `psutil`, `aiohttp==3.7.0`, `aioredis==1.1.0`, `setproctitle`, `hiredis==1.1.0`, `async-timeout==3.0.1`, `scikit-learn`, `tensorboard` and `xgboost`._
+
+To use [PyTorch AutoEstimator](#pytorch-autoestimator), you need to install PyTorch with `pip install torch==1.8.1`.
+
+To use [TensorFlow/Keras AutoEstimator](#tensorflow-keras-autoestimator), you need to install TensorFlow with `pip install tensorflow==1.15.0`.
+
+
 ### **1. AutoEstimator**
 
 To perform distributed hyper-parameter tuning, user can first create an Orca `AutoEstimator` from standard TensorFlow Keras or PyTorch model, and then call `AutoEstimator.fit`.
@@ -180,7 +199,7 @@ auto_estimator.fit(
 ```
 See [API Doc](https://bigdl.readthedocs.io/en/latest/doc/PythonAPI/AutoML/automl.html#orca-automl-auto-estimator) for more details.
 
-### **4. Scheduler**
+### **5. Scheduler**
 *Scheduler* can stop/pause/tweak the hyper-parameters of running trials, making the hyper-parameter tuning process much efficient.
 
 We support all *Schedulers* in [Ray Tune](https://docs.ray.io/en/master/index.html). See [Ray Tune Schedulers](https://docs.ray.io/en/master/tune/api_docs/schedulers.html#schedulers-ref) for more details.
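A note for reviewers, not part of the patch itself: after running the `pip install bigdl-orca[automl]` step the patch adds, a quick sanity check can confirm that the extra dependencies listed in the note actually resolved. The sketch below is illustrative only; the module names come from the dependency list in the added section (`scikit-learn` imports as `sklearn`), and the helper name `check_automl_deps` is made up for this example.

```python
# Hypothetical sanity-check sketch: verify that the dependencies pulled in
# by the [automl] extra are importable in the current environment.
import importlib


def check_automl_deps(modules):
    """Map each module name to 'ok' (importable) or 'missing'."""
    status = {}
    for name in modules:
        try:
            importlib.import_module(name)
            status[name] = "ok"
        except ImportError:
            status[name] = "missing"
    return status


# Module names taken from the dependency note in the patched doc.
print(check_automl_deps(["ray", "psutil", "sklearn", "tensorboard", "xgboost"]))
```

Any `missing` entry in the output points at a dependency that the extra failed to install in that environment.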