Orca: Document polishing (#6382)

* fix: delete redundant quick examples.

* feat: add How-to Guides with use cases.

* fix: _toc.yml

* fix: fix typo.

* fix: fix typo and file location.

* fix: add quickstarts to _toc.yml
Cengguang Zhang, 2022-11-04 15:02:37 +08:00, committed by GitHub
parent 29e9c18c70 · commit 916fdecd27
6 changed files with 23 additions and 9 deletions


@@ -39,15 +39,22 @@ subtrees:
         - file: doc/Orca/Overview/distributed-tuning
         - file: doc/Orca/Overview/ray
   - file: doc/Orca/QuickStart/index
-    title: "Quick Examples"
+    title: "Quickstarts"
     subtrees:
       - entries:
-          - file: doc/UseCase/spark-dataframe
-          - file: doc/UseCase/xshards-pandas
-          - file: doc/Orca/QuickStart/ray-quickstart
-          - file: doc/Orca/QuickStart/orca-pytorch-distributed-quickstart
-          - file: doc/Orca/QuickStart/orca-autoestimator-pytorch-quickstart
-          - file: doc/Orca/QuickStart/orca-autoxgboost-quickstart
+          - file: doc/Orca/Quickstart/orca-tf-quickstart
+          - file: doc/Orca/Quickstart/orca-tf2keras-quickstart
+          - file: doc/Orca/Quickstart/orca-keras-quickstart
+          - file: doc/Orca/Quickstart/orca-pytorch-quickstart
+          - file: doc/Orca/Quickstart/ray-quickstart
+  - file: doc/Orca/Howto/index
+    title: "How-to Guides"
+    subtrees:
+      - entries:
+          - file: doc/Orca/Howto/spark-dataframe
+          - file: doc/Orca/Howto/xshards-pandas
+          - file: doc/Orca/Howto/orca-autoestimator-pytorch-quickstart
+          - file: doc/Orca/Howto/orca-autoxgboost-quickstart
   - file: doc/Orca/Tutorial/index
     title: "Tutorials"
     subtrees:


@@ -0,0 +1,7 @@
+Orca How-to Guides
+=========================
+
+* `Use Spark DataFrames for Deep Learning <spark-dataframe.html>`__
+* `Use Distributed Pandas for Deep Learning <xshards-pandas.html>`__
+* `Enable AutoML for PyTorch <orca-autoestimator-pytorch-quickstart.html>`__
+* `Use AutoXGBoost to auto-tune XGBoost parameters <orca-autoxgboost-quickstart.html>`__


@@ -6,7 +6,7 @@
 ---
-**In this guide we will describe how to use Apache Spark Dataframes to scale-out data processing for distribtued deep learning.**
+**In this guide we will describe how to use Apache Spark Dataframes to scale-out data processing for distributed deep learning.**
 The dataset used in this guide is [movielens-1M](https://grouplens.org/datasets/movielens/1m/), which contains 1 million ratings of 5 levels from 6000 users on 4000 movies. We will read the data into Spark Dataframe and directly use the Spark Dataframe as the input to the distributed training.
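For context on the data this guide loads: the movielens-1M `ratings.dat` file stores one rating per line with `::`-delimited fields (`UserID::MovieID::Rating::Timestamp`). A minimal stdlib-only sketch of parsing that record format (the sample lines below are illustrative rows, and in the guide itself the file would instead be read into a Spark DataFrame):

```python
# Parse movielens-1M style rating lines (UserID::MovieID::Rating::Timestamp).
# The sample rows below are illustrative, not guaranteed to match the dataset.
sample = [
    "1::1193::5::978300760",
    "1::661::3::978302109",
]

def parse_rating(line):
    # Each field is an integer; ratings range over 5 levels (1-5).
    user, movie, rating, ts = line.split("::")
    return {"user": int(user), "movie": int(movie),
            "rating": int(rating), "timestamp": int(ts)}

records = [parse_rating(line) for line in sample]
print(records[0]["rating"])  # 5
```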


@@ -6,7 +6,7 @@
 ---
-**In this guide we will describe how to use [XShards](../Orca/Overview/data-parallel-processing.md) to scale-out Pandas data processing for distribtued deep learning.**
+**In this guide we will describe how to use [XShards](../Orca/Overview/data-parallel-processing.md) to scale-out Pandas data processing for distributed deep learning.**
 ### 1. Read input data into XShards of Pandas DataFrame
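The idea behind XShards, as the guide above describes it, is data-parallel processing: the dataset is split into partitions and the same function is applied to each partition independently. A library-free conceptual sketch of that pattern, using plain Python lists in place of Pandas DataFrame partitions (the class below is purely illustrative and is not the Orca XShards API):

```python
# Conceptual sketch of sharded, data-parallel processing.
# "Partitions" here are plain Python lists standing in for Pandas
# DataFrames; this toy class is illustrative, NOT Orca's XShards API.
class ToyShards:
    def __init__(self, partitions):
        self.partitions = partitions  # one data chunk per worker

    def transform_shard(self, func):
        # Apply func to every partition independently (data-parallel step).
        return ToyShards([func(p) for p in self.partitions])

    def collect(self):
        # Gather all partitions back into a single list on the driver.
        return [x for part in self.partitions for x in part]

shards = ToyShards([[1, 2], [3, 4]])
doubled = shards.transform_shard(lambda part: [x * 2 for x in part])
print(doubled.collect())  # [2, 4, 6, 8]
```

In real Orca code the shards would hold Pandas DataFrames read from files, and the transformation function would do per-partition preprocessing before distributed training.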