# Orca Known Issues

## Estimator Issues
### UnknownError: Could not start gRPC server
This error occurs while running Orca TF2 Estimator with the spark backend, which may be because the previous PySpark TensorFlow job was not cleaned up completely. You can retry later, or set the Spark configuration `spark.python.worker.reuse=false` in your application.
If you are using `init_orca_context(cluster_mode="yarn-client")`:

```python
conf = {"spark.python.worker.reuse": "false"}
init_orca_context(cluster_mode="yarn-client", conf=conf)
```

If you are using `init_orca_context(cluster_mode="spark-submit")`:

```bash
spark-submit --conf spark.python.worker.reuse=false
```
### RuntimeError: Inter op parallelism cannot be modified after initialization
This error occurs if you build your TensorFlow model on the driver rather than on the workers. You should instead build the complete model inside `model_creator`, which runs on each worker node. You can refer to the following examples:
**Wrong Example**

```python
model = ...

def model_creator(config):
    model.compile(...)
    return model

estimator = Estimator.from_keras(model_creator=model_creator, ...)
...
```
**Correct Example**

```python
def model_creator(config):
    model = ...
    model.compile(...)
    return model

estimator = Estimator.from_keras(model_creator=model_creator, ...)
...
```
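For reference, below is a minimal runnable sketch of a complete `model_creator`. The layer sizes, loss, learning rate, and `backend` argument are illustrative placeholders rather than values from the original example:

```python
import tensorflow as tf
from bigdl.orca.learn.tf2 import Estimator

def model_creator(config):
    # Build AND compile the whole model inside the creator, so it is
    # constructed on each worker node instead of on the driver.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(10,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(
        optimizer=tf.keras.optimizers.Adam(config.get("lr", 1e-3)),
        loss="mse",
    )
    return model

estimator = Estimator.from_keras(model_creator=model_creator,
                                 config={"lr": 1e-3},
                                 backend="spark")
```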
## OrcaContext Issues
### Exception: Failed to read dashbord log: [Errno 2] No such file or directory: '/tmp/ray/.../dashboard.log'
This error occurs when initializing an Orca context with `init_ray_on_spark=True`. We have not located the root cause of this problem, but it might be caused by an atypical Python environment.
You can follow the steps below to work around it:

1. If you only need to use functions in Ray (e.g. `bigdl.orca.learn` with `backend="ray"`, `bigdl.orca.automl` for PyTorch/TensorFlow models, or `bigdl.chronos.autots` for time series models' auto-tuning), you may use Ray directly, as shown in the sketch after this list:

   1. Start a Ray cluster by `ray start --head`. If you already have a Ray cluster started, please jump directly to step 2.
   2. Initialize an Orca context with `runtime="ray"` and `init_ray_on_spark=False`; please refer to the detailed information here.
   3. If you are using `bigdl.orca.automl` or `bigdl.chronos.autots` on a single node, please set:

      ```python
      ray_ctx = OrcaContext.get_ray_context()
      ray_ctx.is_local = True
      ```

2. If you really need to use Ray on Spark, please install bigdl-orca under a conda environment. For detailed information, please refer here.
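Putting the steps of option 1 together, here is a minimal sketch of the Ray-first workaround, assuming a Ray cluster has already been started with `ray start --head` (exact keyword arguments may vary by BigDL version):

```python
from bigdl.orca import init_orca_context, OrcaContext

# Step 2: attach to the existing Ray cluster instead of launching Ray on Spark
init_orca_context(runtime="ray", init_ray_on_spark=False)

# Step 3 (only for bigdl.orca.automl / bigdl.chronos.autots on a single node):
ray_ctx = OrcaContext.get_ray_context()
ray_ctx.is_local = True
```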
## Other Issues
### OSError: Unable to load libhdfs: ./libhdfs.so: cannot open shared object file: No such file or directory
This error occurs because PyArrow fails to locate `libhdfs.so` in the default path `$HADOOP_HOME/lib/native` when you run with YARN on Cloudera.

To solve this issue, you need to set the path of `libhdfs.so` in Cloudera in the environment variable `ARROW_LIBHDFS_DIR` on the Spark driver and executors with the following steps:
1. Run `locate libhdfs.so` on the client node to find `libhdfs.so`.

2. `export ARROW_LIBHDFS_DIR=/opt/cloudera/parcels/CDH-5.15.2-1.cdh5.15.2.p0.3/lib64` (replace with the result of `locate libhdfs.so` in your environment).

3. If you are using `init_orca_context(cluster_mode="yarn-client")`:

   ```python
   conf = {"spark.executorEnv.ARROW_LIBHDFS_DIR": "/opt/cloudera/parcels/CDH-5.15.2-1.cdh5.15.2.p0.3/lib64"}
   init_orca_context(cluster_mode="yarn-client", conf=conf)
   ```

   If you are using `init_orca_context(cluster_mode="spark-submit")`:

   ```bash
   # For yarn-client mode
   spark-submit --conf spark.executorEnv.ARROW_LIBHDFS_DIR=/opt/cloudera/parcels/CDH-5.15.2-1.cdh5.15.2.p0.3/lib64

   # For yarn-cluster mode
   spark-submit --conf spark.executorEnv.ARROW_LIBHDFS_DIR=/opt/cloudera/parcels/CDH-5.15.2-1.cdh5.15.2.p0.3/lib64 \
       --conf spark.yarn.appMasterEnv.ARROW_LIBHDFS_DIR=/opt/cloudera/parcels/CDH-5.15.2-1.cdh5.15.2.p0.3/lib64
   ```
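Optionally, as a quick driver-side sanity check (assuming `pyarrow` is installed; this verifies only the driver, and the executors still need the Spark configurations above):

```python
import os

# Must be set before PyArrow first tries to load libhdfs;
# replace the path with your own `locate libhdfs.so` result.
os.environ["ARROW_LIBHDFS_DIR"] = "/opt/cloudera/parcels/CDH-5.15.2-1.cdh5.15.2.p0.3/lib64"

from pyarrow import fs

# Raises the same OSError if libhdfs.so still cannot be loaded.
hdfs = fs.HadoopFileSystem(host="default")
```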
## Spark Dynamic Allocation
By design, BigDL does not support Spark Dynamic Allocation mode, and needs to allocate fixed resources for deep learning model training. Thus, if your environment has already configured Spark Dynamic Allocation, or stipulates that Spark Dynamic Allocation must be used, you may encounter the following error:
```
requirement failed: Engine.init: spark.dynamicAllocation.maxExecutors and spark.dynamicAllocation.minExecutors must be identical in dynamic allocation for BigDL
```
Here we provide a workaround for running BigDL under Spark Dynamic Allocation mode.
For spark-submit cluster mode, the first solution is to disable the Spark Dynamic Allocation mode in SparkConf when you submit your application as follows:
```bash
spark-submit --conf spark.dynamicAllocation.enabled=false
```
Otherwise, if you cannot set this configuration due to your cluster settings, you can set `spark.dynamicAllocation.minExecutors` to be equal to `spark.dynamicAllocation.maxExecutors` as follows:
```bash
spark-submit --conf spark.dynamicAllocation.enabled=true \
             --conf spark.dynamicAllocation.minExecutors=2 \
             --conf spark.dynamicAllocation.maxExecutors=2
```
For other cluster modes, such as yarn and k8s, our program will initialize the SparkContext for you, and Spark Dynamic Allocation mode is disabled by default. Thus, generally you wouldn't encounter such a problem.
If you are using Spark Dynamic Allocation, you have to disable barrier execution mode at the very beginning of your application as follows:
```python
from bigdl.orca import OrcaContext

OrcaContext.barrier_mode = False
```
For Spark Dynamic Allocation mode, it is also recommended to manually set `num_ray_nodes` and `ray_node_cpu_cores` equal to `spark.dynamicAllocation.minExecutors` and `spark.executor.cores` respectively. You can specify `num_ray_nodes` and `ray_node_cpu_cores` in `init_orca_context` as follows:
```python
init_orca_context(..., num_ray_nodes=2, ray_node_cpu_cores=4)
```
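For example, if your cluster sets `spark.dynamicAllocation.minExecutors=2` and `spark.executor.cores=4`, the settings above combine as in the sketch below (the values and `cluster_mode` are illustrative and should mirror your own cluster configuration):

```python
from bigdl.orca import init_orca_context, OrcaContext

# Disable barrier execution mode before creating the context
OrcaContext.barrier_mode = False

# Mirror spark.dynamicAllocation.minExecutors and spark.executor.cores
init_orca_context(cluster_mode="spark-submit",
                  num_ray_nodes=2,
                  ray_node_cpu_cores=4)
```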