Docs: Bigdl known issues docs (#5204)
* docs: add dynamic allocation to FAQ/known issues in docs
* style: reformat the code
* docs: add known_issues to _toc.yml
* docs: remove known-issues.md
* fix: delete unnecessary issues
* fix: refine known_issue format and add barrier mode configuration
* fix: reformat known_issues.md
* fix: update known_issues.md
* fix: modify known_issues.md
* fix: amend the wording
parent 928b016d88
commit 50c180520e
2 changed files with 42 additions and 1 deletion
_toc.yml

@@ -18,6 +18,7 @@ subtrees:
 - file: doc/UserGuide/k8s
 - file: doc/UserGuide/databricks
 - file: doc/UserGuide/develop
+- file: doc/UserGuide/known_issues
 - caption: Nano
   entries:
@@ -99,4 +100,4 @@ subtrees:
   entries:
 - file: doc/Application/presentations
 - file: doc/Application/blogs
-- file: doc/Application/powered-by
+- file: doc/Application/powered-by
40 docs/readthedocs/source/doc/UserGuide/known_issues.md Normal file

@@ -0,0 +1,40 @@
# BigDL Known Issues
## Spark Dynamic Allocation
By design, BigDL does not support Spark Dynamic Allocation mode and needs to allocate fixed resources for deep learning model training. Thus, if your environment has already been configured with Spark Dynamic Allocation, or requires that Spark Dynamic Allocation be used, you may encounter the following error:
> **requirement failed: Engine.init: spark.dynamicAllocation.maxExecutors and spark.dynamicAllocation.minExecutors must be identical in dynamic allocation for BigDL**
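If you are unsure whether Dynamic Allocation is already configured in your environment, you can check the active Spark configuration first. The snippet below is a minimal sketch using the standard PySpark API and is not part of the original workaround:

```python
# Sketch: check whether Dynamic Allocation is already enabled in this
# environment; spark.dynamicAllocation.enabled defaults to "false" when unset.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
print(spark.conf.get("spark.dynamicAllocation.enabled", "false"))
print(spark.conf.get("spark.dynamicAllocation.minExecutors", "<not set>"))
print(spark.conf.get("spark.dynamicAllocation.maxExecutors", "<not set>"))
```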
Here we provide some workarounds for running BigDL under Spark Dynamic Allocation mode.
For `spark-submit` cluster mode, the first solution is to disable Spark Dynamic Allocation in `SparkConf` when you submit your application, as follows:
```bash
spark-submit --conf spark.dynamicAllocation.enabled=false
```
Otherwise, if you cannot change this configuration due to your cluster settings, you can instead set `spark.dynamicAllocation.minExecutors` equal to `spark.dynamicAllocation.maxExecutors`, as follows:
```bash
spark-submit --conf spark.dynamicAllocation.enabled=true \
             --conf spark.dynamicAllocation.minExecutors=2 \
             --conf spark.dynamicAllocation.maxExecutors=2
```
For other cluster modes, such as `yarn` and `k8s`, our program will create the `SparkContext` for you, and Spark Dynamic Allocation is disabled by default. Thus, you generally would not encounter this problem.
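For example, a minimal sketch of letting `init_orca_context` create the `SparkContext` on YARN; the `cluster_mode` and resource values below are illustrative and should match your own environment:

```python
# Sketch: on yarn/k8s cluster modes, init_orca_context creates the
# SparkContext itself, with Spark Dynamic Allocation disabled by default.
from bigdl.orca import init_orca_context, stop_orca_context

sc = init_orca_context(cluster_mode="yarn-client", cores=4, memory="4g", num_nodes=2)

# ... run your BigDL application here ...

stop_orca_context()
```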
If you are using Spark Dynamic Allocation, you have to disable barrier execution mode at the very beginning of your application as follows:
```python
from bigdl.orca import OrcaContext

# Disable barrier execution mode at the very beginning of your application.
OrcaContext.barrier_mode = False
```
For Spark Dynamic Allocation mode, it is also recommended that you manually set `num_ray_nodes` and `ray_node_cpu_cores` equal to `spark.dynamicAllocation.minExecutors` and `spark.executor.cores` respectively. You can specify `num_ray_nodes` and `ray_node_cpu_cores` in `init_orca_context` as follows:
```python
from bigdl.orca import init_orca_context

# num_ray_nodes and ray_node_cpu_cores should match
# spark.dynamicAllocation.minExecutors and spark.executor.cores respectively.
init_orca_context(..., num_ray_nodes=2, ray_node_cpu_cores=4)
```
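Putting these pieces together, below is a minimal sketch (not part of the original doc) of an application launched via `spark-submit` under Spark Dynamic Allocation. The `cluster_mode="spark-submit"` value assumes the script is submitted with `spark-submit`, and the executor counts are illustrative and should mirror your `spark.dynamicAllocation.minExecutors` and `spark.executor.cores` settings:

```python
from bigdl.orca import OrcaContext, init_orca_context, stop_orca_context

# Disable barrier execution mode at the very beginning of the application.
OrcaContext.barrier_mode = False

# num_ray_nodes / ray_node_cpu_cores mirror minExecutors / executor cores.
sc = init_orca_context(cluster_mode="spark-submit",
                       num_ray_nodes=2, ray_node_cpu_cores=4)

# ... run your BigDL application here ...

stop_orca_context()
```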