From 16b2ef49c6e70e1e2d84345ef4711e7a40e0f07b Mon Sep 17 00:00:00 2001
From: "Wang, Jian4" <61138589+hzjane@users.noreply.github.com>
Date: Mon, 25 Mar 2024 10:06:02 +0800
Subject: [PATCH] Update document by heyang (#30)
---
README.md | 607 ++-----------
.../source/doc/Application/blogs.md | 49 -
.../source/doc/Application/index.rst | 2 -
.../source/doc/Application/powered-by.md | 93 --
.../source/doc/Application/presentations.md | 99 --
.../Chronos/Howto/docker_guide_single_node.md | 139 ---
.../Howto/how_to_choose_forecasting_alg.md | 48 -
.../Howto/how_to_create_forecaster.nblink | 3 -
.../Howto/how_to_evaluate_a_forecaster.nblink | 3 -
..._processing_pipeline_to_torchscript.nblink | 3 -
.../Howto/how_to_export_onnx_files.nblink | 3 -
.../Howto/how_to_export_openvino_files.nblink | 3 -
.../how_to_export_torchscript_files.nblink | 3 -
..._confidence_interval_for_prediction.nblink | 3 -
.../Howto/how_to_optimize_a_forecaster.nblink | 3 -
.../Howto/how_to_preprocess_my_data.nblink | 3 -
...cess_data_in_production_environment.nblink | 3 -
.../how_to_save_and_load_forecaster.nblink | 3 -
...e_of_forecaster_through_ONNXRuntime.nblink | 3 -
...ence_of_forecaster_through_OpenVINO.nblink | 3 -
...how_to_train_forecaster_on_one_node.nblink | 3 -
.../Howto/how_to_tune_forecaster_model.nblink | 3 -
.../Howto/how_to_use_benchmark_tool.md | 174 ----
.../Howto/how_to_use_built-in_datasets.nblink | 3 -
...e_forecaster_to_predict_future_data.nblink | 3 -
.../source/doc/Chronos/Howto/index.rst | 52 --
.../source/doc/Chronos/Howto/windows_guide.md | 91 --
.../doc/Chronos/Image/aiops-workflow.png | Bin 48659 -> 0 bytes
.../doc/Chronos/Image/anomaly_detection.svg | 1 -
.../doc/Chronos/Image/automl_hparams.png | Bin 158097 -> 0 bytes
.../doc/Chronos/Image/automl_monitor.png | Bin 181588 -> 0 bytes
.../doc/Chronos/Image/automl_scalars.png | Bin 249945 -> 0 bytes
.../source/doc/Chronos/Image/forecast-RR.png | Bin 49999 -> 0 bytes
.../source/doc/Chronos/Image/forecast-TS.png | Bin 24661 -> 0 bytes
.../source/doc/Chronos/Image/forecasting.svg | 1 -
.../source/doc/Chronos/Image/simulation.svg | 1 -
.../source/doc/Chronos/Overview/aiops.md | 87 --
.../doc/Chronos/Overview/anomaly_detection.md | 34 -
.../Chronos/Overview/chronos_known_issue.md | 71 --
.../data_processing_feature_engineering.md | 276 ------
.../source/doc/Chronos/Overview/deep_dive.rst | 10 -
.../doc/Chronos/Overview/forecasting.md | 287 ------
.../source/doc/Chronos/Overview/install.md | 151 ---
.../doc/Chronos/Overview/quick-tour.rst | 289 ------
.../source/doc/Chronos/Overview/simulation.md | 18 -
.../source/doc/Chronos/Overview/speed_up.md | 143 ---
.../Overview/useful_functionalities.md | 33 -
.../doc/Chronos/Overview/visualization.md | 49 -
.../QuickStart/chronos-anomaly-detector.md | 50 -
.../chronos-autotsest-quickstart.md | 119 ---
...chronos-tsdataset-forecaster-quickstart.md | 92 --
.../source/doc/Chronos/QuickStart/index.md | 372 --------
docs/readthedocs/source/doc/Chronos/index.rst | 89 --
.../doc/DLlib/Image/tensorboard-histo1.png | Bin 173592 -> 0 bytes
.../doc/DLlib/Image/tensorboard-histo2.png | Bin 146351 -> 0 bytes
.../doc/DLlib/Image/tensorboard-scalar.png | Bin 87638 -> 0 bytes
.../source/doc/DLlib/Overview/dllib.md | 139 ---
.../source/doc/DLlib/Overview/index.rst | 6 -
.../source/doc/DLlib/Overview/install.md | 41 -
.../source/doc/DLlib/Overview/keras-api.md | 187 ----
.../source/doc/DLlib/Overview/nnframes.md | 441 ---------
.../doc/DLlib/Overview/visualization.md | 40 -
.../doc/DLlib/QuickStart/dllib-quickstart.md | 70 --
.../source/doc/DLlib/QuickStart/index.md | 9 -
.../QuickStart/python-getting-started.md | 218 -----
.../DLlib/QuickStart/scala-getting-started.md | 303 -------
docs/readthedocs/source/doc/DLlib/index.rst | 62 --
.../source/doc/Friesian/examples.md | 70 --
.../readthedocs/source/doc/Friesian/index.rst | 66 --
.../readthedocs/source/doc/Friesian/intro.rst | 17 -
.../source/doc/Friesian/serving.md | 600 ------------
.../source/doc/GetStarted/index.rst | 6 -
.../source/doc/GetStarted/install.rst | 2 -
.../source/doc/GetStarted/paper.md | 28 -
.../source/doc/GetStarted/usecase.rst | 2 -
.../source/doc/GetStarted/videos.md | 0
.../Inference/Self_Speculative_Decoding.md | 6 +-
.../source/doc/LLM/Overview/FAQ/faq.md | 34 +-
.../doc/LLM/Overview/KeyFeatures/cli.md | 2 +-
.../doc/LLM/Overview/KeyFeatures/finetune.md | 8 +-
.../LLM/Overview/KeyFeatures/gpu_supports.rst | 2 +-
.../KeyFeatures/hugging_face_format.md | 6 +-
.../doc/LLM/Overview/KeyFeatures/index.rst | 4 +-
.../Overview/KeyFeatures/inference_on_gpu.md | 14 +-
.../LLM/Overview/KeyFeatures/langchain_api.md | 14 +-
.../LLM/Overview/KeyFeatures/native_format.md | 4 +-
.../Overview/KeyFeatures/optimize_model.md | 12 +-
.../KeyFeatures/transformers_style_api.rst | 2 +-
.../source/doc/LLM/Overview/examples.rst | 6 +-
.../source/doc/LLM/Overview/examples_cpu.md | 64 +-
.../source/doc/LLM/Overview/examples_gpu.md | 66 +-
.../source/doc/LLM/Overview/install.rst | 4 +-
.../source/doc/LLM/Overview/install_cpu.md | 16 +-
.../source/doc/LLM/Overview/install_gpu.md | 66 +-
.../source/doc/LLM/Overview/known_issues.md | 2 +-
.../source/doc/LLM/Overview/llm.md | 14 +-
.../LLM/Quickstart/benchmark_quickstart.md | 16 +-
.../doc/LLM/Quickstart/docker_windows_gpu.md | 16 +-
.../source/doc/LLM/Quickstart/index.rst | 12 +-
.../doc/LLM/Quickstart/install_linux_gpu.md | 20 +-
.../doc/LLM/Quickstart/install_windows_gpu.md | 32 +-
.../LLM/Quickstart/llama_cpp_quickstart.md | 28 +-
.../doc/LLM/Quickstart/webui_quickstart.md | 18 +-
docs/readthedocs/source/doc/LLM/index.rst | 18 +-
.../accelerate_inference_openvino_gpu.nblink | 3 -
.../Nano/Howto/Inference/OpenVINO/index.rst | 6 -
.../OpenVINO/openvino_inference.nblink | 3 -
.../OpenVINO/openvino_inference_async.nblink | 3 -
...te_pytorch_inference_async_pipeline.nblink | 3 -
.../accelerate_pytorch_inference_gpu.nblink | 3 -
...celerate_pytorch_inference_jit_ipex.nblink | 3 -
.../accelerate_pytorch_inference_onnx.nblink | 3 -
...celerate_pytorch_inference_openvino.nblink | 3 -
.../Nano/Howto/Inference/PyTorch/index.rst | 17 -
.../inference_optimizer_optimize.nblink | 3 -
.../multi_instance_pytorch_inference.nblink | 3 -
.../PyTorch/pytorch_context_manager.nblink | 3 -
.../PyTorch/pytorch_save_and_load_ipex.nblink | 3 -
.../PyTorch/pytorch_save_and_load_jit.nblink | 3 -
.../PyTorch/pytorch_save_and_load_onnx.nblink | 3 -
.../pytorch_save_and_load_openvino.nblink | 3 -
.../quantize_pytorch_inference_inc.nblink | 3 -
.../quantize_pytorch_inference_pot.nblink | 3 -
...ccelerate_tensorflow_inference_onnx.nblink | 3 -
...erate_tensorflow_inference_openvino.nblink | 3 -
.../Nano/Howto/Inference/TensorFlow/index.rst | 8 -
.../tensorflow_inference_bf16.nblink | 3 -
.../tensorflow_save_and_load_onnx.nblink | 3 -
.../tensorflow_save_and_load_openvino.nblink | 3 -
.../source/doc/Nano/Howto/Inference/index.rst | 33 -
.../source/doc/Nano/Howto/Install/index.rst | 7 -
.../Nano/Howto/Install/install_in_colab.md | 84 --
.../doc/Nano/Howto/Install/windows_guide.md | 37 -
...accelerate_pytorch_cv_data_pipeline.nblink | 3 -
.../Howto/Preprocessing/PyTorch/index.rst | 4 -
.../doc/Nano/Howto/Preprocessing/index.rst | 15 -
.../General/choose_num_processes_training.md | 42 -
.../doc/Nano/Howto/Training/General/index.rst | 4 -
.../accelerate_pytorch_training_bf16.nblink | 3 -
.../accelerate_pytorch_training_ipex.nblink | 3 -
...ate_pytorch_training_multi_instance.nblink | 3 -
.../convert_pytorch_training_torchnano.nblink | 3 -
.../doc/Nano/Howto/Training/PyTorch/index.rst | 14 -
.../pytorch_training_channels_last.nblink | 3 -
...use_nano_decorator_pytorch_training.nblink | 3 -
...ate_pytorch_lightning_training_ipex.nblink | 3 -
...h_lightning_training_multi_instance.nblink | 3 -
.../Howto/Training/PyTorchLightning/index.rst | 7 -
.../pytorch_lightning_training_bf16.nblink | 3 -
...ch_lightning_training_channels_last.nblink | 3 -
..._tensorflow_training_multi_instance.nblink | 3 -
.../Nano/Howto/Training/TensorFlow/index.rst | 10 -
...flow_custom_training_multi_instance.nblink | 3 -
.../tensorflow_training_bf16.nblink | 3 -
...rflow_training_embedding_sparseadam.nblink | 3 -
.../source/doc/Nano/Howto/Training/index.rst | 42 -
.../source/doc/Nano/Howto/index.rst | 93 --
.../source/doc/Nano/Image/learning_rate.png | Bin 155542 -> 0 bytes
.../source/doc/Nano/Overview/hpo.rst | 708 ---------------
.../source/doc/Nano/Overview/index.rst | 9 -
.../source/doc/Nano/Overview/install.md | 105 ---
.../source/doc/Nano/Overview/known_issues.md | 71 --
.../source/doc/Nano/Overview/nano.md | 70 --
.../doc/Nano/Overview/pytorch_cuda_patch.md | 29 -
.../doc/Nano/Overview/pytorch_inference.md | 443 ---------
.../source/doc/Nano/Overview/pytorch_train.md | 315 -------
.../source/doc/Nano/Overview/support.md | 60 --
.../doc/Nano/Overview/tensorflow_inference.md | 215 -----
.../doc/Nano/Overview/tensorflow_train.md | 90 --
.../source/doc/Nano/Overview/troubshooting.md | 77 --
.../source/doc/Nano/Overview/userguide.rst | 0
.../source/doc/Nano/QuickStart/index.md | 115 ---
.../doc/Nano/QuickStart/pytorch_nano.md | 174 ----
.../Nano/QuickStart/pytorch_onnxruntime.md | 92 --
.../doc/Nano/QuickStart/pytorch_openvino.md | 89 --
.../QuickStart/pytorch_quantization_inc.md | 89 --
.../pytorch_quantization_inc_onnx.md | 87 --
.../pytorch_quantization_openvino.md | 85 --
.../QuickStart/pytorch_train_quickstart.md | 129 ---
.../Nano/QuickStart/tensorflow_embedding.md | 130 ---
.../tensorflow_quantization_quickstart.md | 89 --
.../QuickStart/tensorflow_train_quickstart.md | 130 ---
.../source/doc/Nano/Tutorials/custom.nblink | 3 -
.../doc/Nano/Tutorials/seq_and_func.nblink | 3 -
docs/readthedocs/source/doc/Nano/index.rst | 63 --
.../Howto/autoestimator-pytorch-quickstart.md | 161 ----
.../doc/Orca/Howto/autoxgboost-quickstart.md | 82 --
.../source/doc/Orca/Howto/index.rst | 26 -
.../Orca/Howto/pytorch-quickstart-bigdl.md | 149 ---
.../doc/Orca/Howto/pytorch-quickstart-ray.md | 147 ---
.../doc/Orca/Howto/pytorch-quickstart.md | 148 ---
.../source/doc/Orca/Howto/ray-quickstart.md | 129 ---
.../source/doc/Orca/Howto/spark-dataframe.md | 111 ---
.../source/doc/Orca/Howto/tf1-quickstart.md | 120 ---
.../doc/Orca/Howto/tf1keras-quickstart.md | 111 ---
.../doc/Orca/Howto/tf2keras-quickstart.md | 147 ---
.../source/doc/Orca/Howto/xshards-pandas.md | 121 ---
.../Orca/Overview/data-parallel-processing.md | 143 ---
.../distributed-training-inference.md | 346 -------
.../doc/Orca/Overview/distributed-tuning.md | 213 -----
.../source/doc/Orca/Overview/index.rst | 8 -
.../source/doc/Orca/Overview/install.md | 145 ---
.../source/doc/Orca/Overview/known_issues.md | 226 -----
.../source/doc/Orca/Overview/orca-context.md | 82 --
.../source/doc/Orca/Overview/orca.md | 104 ---
.../source/doc/Orca/Overview/ray.md | 142 ---
.../source/doc/Orca/Tutorial/index.rst | 7 -
.../source/doc/Orca/Tutorial/k8s.md | 707 ---------------
.../source/doc/Orca/Tutorial/yarn.md | 402 --------
docs/readthedocs/source/doc/Orca/index.rst | 64 --
.../source/doc/PPML/Dev/python_test.md | 61 --
.../source/doc/PPML/Dev/scala_test.md | 0
.../doc/PPML/Overview/ali_ecs_occlum_cn.md | 535 -----------
.../doc/PPML/Overview/attestation_basic.md | 97 --
.../source/doc/PPML/Overview/azure_ppml.md | 543 -----------
.../doc/PPML/Overview/azure_ppml_occlum.md | 149 ---
.../source/doc/PPML/Overview/devguide.md | 537 -----------
.../source/doc/PPML/Overview/examples.rst | 14 -
.../source/doc/PPML/Overview/install.md | 85 --
.../source/doc/PPML/Overview/intro.md | 35 -
.../source/doc/PPML/Overview/misc.rst | 16 -
.../source/doc/PPML/Overview/ppml.md | 826 -----------------
.../source/doc/PPML/Overview/quicktour.md | 92 --
.../PPML/Overview/secure_lightgbm_on_spark.md | 123 ---
.../trusted_big_data_analytics_and_ml.md | 30 -
.../source/doc/PPML/Overview/trusted_fl.md | 149 ---
..._intel_sgx_device_plugin_for_kubernetes.md | 27 -
.../QuickStart/deploy_ppml_in_production.md | 91 --
.../source/doc/PPML/QuickStart/end-to-end.md | 175 ----
.../doc/PPML/QuickStart/install_sgx_driver.md | 115 ---
.../PPML/QuickStart/secure_your_services.md | 62 --
.../QuickStart/tpc-ds_with_sparksql_on_k8s.md | 221 -----
.../QuickStart/tpc-h_with_sparksql_on_k8s.md | 205 -----
.../trusted-serving-on-k8s-guide.md | 153 ----
.../source/doc/PPML/VFL/overview.md | 23 -
.../source/doc/PPML/VFL/user_guide.md | 29 -
.../readthedocs/source/doc/PPML/VFL/vfl_he.md | 16 -
.../doc/PPML/images/fl_architecture.png | Bin 177303 -> 0 bytes
.../source/doc/PPML/images/fl_ckks.PNG | Bin 140415 -> 0 bytes
.../source/doc/PPML/images/occlum_maa.png | Bin 145552 -> 0 bytes
.../doc/PPML/images/ppml_azure_latest.png | Bin 372806 -> 0 bytes
.../doc/PPML/images/ppml_azure_workflow.png | Bin 280491 -> 0 bytes
.../doc/PPML/images/ppml_build_deploy.png | Bin 74654 -> 0 bytes
.../source/doc/PPML/images/ppml_dev_basic.png | Bin 90852 -> 0 bytes
.../source/doc/PPML/images/ppml_scope.png | Bin 76338 -> 0 bytes
.../doc/PPML/images/ppml_sgx_enclave.png | Bin 73828 -> 0 bytes
.../source/doc/PPML/images/ppml_test_dev.png | Bin 46375 -> 0 bytes
.../doc/PPML/images/spark_sgx_azure.png | Bin 361116 -> 0 bytes
.../doc/PPML/images/spark_sgx_occlum.png | Bin 355239 -> 0 bytes
docs/readthedocs/source/doc/PPML/index.rst | 71 --
.../source/doc/PythonAPI/LLM/index.rst | 2 +-
.../source/doc/PythonAPI/LLM/langchain.rst | 22 +-
.../source/doc/PythonAPI/LLM/optimize.rst | 4 +-
.../source/doc/PythonAPI/LLM/transformers.rst | 4 +-
.../cluster-serving-http-example.ipynb | 857 ------------------
.../source/doc/Serving/Example/example.md | 124 ---
.../keras-to-cluster-serving-example.ipynb | 719 ---------------
.../Example/l08c08_forecasting_with_lstm.py | 75 --
..._nlp_constructing_text_generation_model.py | 75 --
.../tf1-to-cluster-serving-example.ipynb | 571 ------------
.../doc/Serving/Example/transfer_learning.py | 40 -
.../doc/Serving/FAQ/contribute-guide.md | 118 ---
.../readthedocs/source/doc/Serving/FAQ/faq.md | 53 --
.../Overview/cluster_serving_overview.jpg | Bin 106923 -> 0 bytes
.../Overview/cluster_serving_steps.jpg | Bin 62426 -> 0 bytes
.../source/doc/Serving/Overview/serving.md | 49 -
.../ProgrammingGuide/serving-inference.md | 185 ----
.../ProgrammingGuide/serving-installation.md | 154 ----
.../Serving/ProgrammingGuide/serving-start.md | 87 --
.../Serving/QuickStart/serving-quickstart.md | 50 -
docs/readthedocs/source/doc/Serving/index.rst | 66 --
.../source/doc/UseCase/tensorboard.md | 0
.../readthedocs/source/doc/UserGuide/colab.md | 61 --
.../source/doc/UserGuide/contributor.rst | 5 -
.../source/doc/UserGuide/databricks.md | 175 ----
.../source/doc/UserGuide/develop.md | 173 ----
.../source/doc/UserGuide/docker.md | 141 ---
.../source/doc/UserGuide/documentation.md | 642 -------------
.../source/doc/UserGuide/hadoop.md | 202 -----
.../doc/UserGuide/images/Databricks5.PNG | Bin 38560 -> 0 bytes
.../source/doc/UserGuide/images/apply-all.png | Bin 198500 -> 0 bytes
.../source/doc/UserGuide/images/cluster.png | Bin 108258 -> 0 bytes
.../UserGuide/images/config-init-script.png | Bin 79630 -> 0 bytes
.../doc/UserGuide/images/copy-script-path.png | Bin 89068 -> 0 bytes
.../doc/UserGuide/images/create-cluster.png | Bin 112949 -> 0 bytes
.../doc/UserGuide/images/db-gloo-socket.png | Bin 112535 -> 0 bytes
.../source/doc/UserGuide/images/dbfs.png | Bin 130045 -> 0 bytes
.../source/doc/UserGuide/images/dllib-jar.png | Bin 134480 -> 0 bytes
.../source/doc/UserGuide/images/dllib-whl.png | Bin 161834 -> 0 bytes
.../UserGuide/images/init-orca-context.png | Bin 83138 -> 0 bytes
.../doc/UserGuide/images/install-zip.png | Bin 88588 -> 0 bytes
.../source/doc/UserGuide/images/notebook1.jpg | Bin 97717 -> 0 bytes
.../source/doc/UserGuide/images/notebook2.jpg | Bin 51440 -> 0 bytes
.../source/doc/UserGuide/images/notebook3.jpg | Bin 105392 -> 0 bytes
.../source/doc/UserGuide/images/notebook4.jpg | Bin 114990 -> 0 bytes
.../source/doc/UserGuide/images/notebook5.jpg | Bin 87998 -> 0 bytes
.../source/doc/UserGuide/images/orca-jar.png | Bin 148785 -> 0 bytes
.../source/doc/UserGuide/images/orca-whl.png | Bin 182355 -> 0 bytes
.../doc/UserGuide/images/spark-config.png | Bin 86848 -> 0 bytes
.../doc/UserGuide/images/spark-context.png | Bin 69706 -> 0 bytes
.../source/doc/UserGuide/images/token.png | Bin 80278 -> 0 bytes
.../UserGuide/images/upload-init-script.png | Bin 52147 -> 0 bytes
.../source/doc/UserGuide/images/url.png | Bin 70550 -> 0 bytes
.../doc/UserGuide/images/verify-dbfs.png | Bin 7144 -> 0 bytes
.../source/doc/UserGuide/index.rst | 60 --
docs/readthedocs/source/doc/UserGuide/k8s.md | 346 -------
.../source/doc/UserGuide/known_issues.md | 40 -
.../source/doc/UserGuide/notebooks.md | 72 --
.../source/doc/UserGuide/python.md | 172 ----
.../readthedocs/source/doc/UserGuide/scala.md | 199 ----
docs/readthedocs/source/doc/UserGuide/win.md | 111 ---
docs/readthedocs/source/index.rst | 159 +---
python/llm/README.md | 58 +-
python/llm/dev/benchmark/README.md | 2 +-
.../benchmark/all-in-one/run-deepspeed-spr.sh | 2 +-
.../llm/dev/benchmark/all-in-one/run-hbm.sh | 2 +-
.../llm/dev/benchmark/all-in-one/run-spr.sh | 2 +-
python/llm/dev/benchmark/harness/README.md | 10 +-
python/llm/dev/benchmark/whisper/README.md | 4 +-
python/llm/dev/release.sh | 10 +-
python/llm/dev/release_default_linux.sh | 4 +-
python/llm/dev/release_default_windows.sh | 4 +-
.../CPU/Applications/autogen/README.md | 16 +-
.../CPU/Applications/hf-agent/README.md | 14 +-
.../CPU/Applications/streaming-llm/README.md | 8 +-
.../example/CPU/Deepspeed-AutoTP/README.md | 14 +-
.../example/CPU/Deepspeed-AutoTP/install.sh | 4 +-
.../llm/example/CPU/Deepspeed-AutoTP/run.sh | 2 +-
.../Advanced-Quantizations/AWQ/README.md | 14 +-
.../Advanced-Quantizations/GGUF/README.md | 14 +-
.../Advanced-Quantizations/GPTQ/README.md | 14 +-
.../Model/README.md | 12 +-
.../Model/aquila/README.md | 18 +-
.../Model/aquila2/README.md | 18 +-
.../Model/baichuan/README.md | 14 +-
.../Model/baichuan2/README.md | 14 +-
.../Model/bluelm/README.md | 14 +-
.../Model/chatglm/README.md | 18 +-
.../Model/chatglm2/README.md | 24 +-
.../Model/chatglm3/README.md | 24 +-
.../Model/codellama/README.md | 14 +-
.../Model/codeshell/README.md | 18 +-
.../Model/deciLM-7b/README.md | 14 +-
.../Model/deepseek-moe/README.md | 18 +-
.../Model/deepseek/README.md | 14 +-
.../Model/distil-whisper/README.md | 16 +-
.../Model/dolly_v1/README.md | 14 +-
.../Model/dolly_v2/README.md | 14 +-
.../Model/falcon/README.md | 16 +-
.../Model/flan-t5/README.md | 16 +-
.../Model/fuyu/README.md | 16 +-
.../Model/gemma/README.md | 12 +-
.../Model/internlm-xcomposer/README.md | 16 +-
.../Model/internlm/README.md | 14 +-
.../Model/internlm2/README.md | 14 +-
.../Model/llama2/README.md | 14 +-
.../Model/mistral/README.md | 16 +-
.../Model/mixtral/README.md | 10 +-
.../Model/moss/README.md | 14 +-
.../Model/mpt/README.md | 14 +-
.../Model/phi-1_5/README.md | 18 +-
.../Model/phi-2/README.md | 18 +-
.../Model/phixtral/README.md | 18 +-
.../Model/phoenix/README.md | 14 +-
.../Model/qwen-vl/README.md | 14 +-
.../Model/qwen/README.md | 16 +-
.../Model/qwen1.5/README.md | 14 +-
.../Model/redpajama/README.md | 14 +-
.../Model/replit/README.md | 14 +-
.../Model/skywork/README.md | 14 +-
.../Model/solar/README.md | 14 +-
.../Model/starcoder/README.md | 14 +-
.../Model/vicuna/README.md | 14 +-
.../Model/whisper/readme.md | 24 +-
.../Model/wizardcoder-python/README.md | 14 +-
.../Model/yi/README.md | 16 +-
.../Model/yuan2/README.md | 16 +-
.../Model/ziya/README.md | 18 +-
.../More-Data-Types/README.md | 6 +-
.../CPU/HF-Transformers-AutoModels/README.md | 4 +-
.../Save-Load/README.md | 6 +-
python/llm/example/CPU/LangChain/README.md | 10 +-
.../CPU/LangChain/README_nativeint4.md | 10 +-
python/llm/example/CPU/LlamaIndex/README.md | 4 +-
.../example/CPU/ModelScope-Models/README.md | 14 +-
.../llm/example/CPU/Native-Models/README.md | 58 +-
.../CPU/PyTorch-Models/Model/README.md | 12 +-
.../PyTorch-Models/Model/aquila2/README.md | 14 +-
.../CPU/PyTorch-Models/Model/bark/README.md | 14 +-
.../CPU/PyTorch-Models/Model/bert/README.md | 14 +-
.../CPU/PyTorch-Models/Model/bluelm/README.md | 14 +-
.../PyTorch-Models/Model/chatglm/README.md | 14 +-
.../PyTorch-Models/Model/chatglm3/README.md | 14 +-
.../PyTorch-Models/Model/codellama/README.md | 14 +-
.../PyTorch-Models/Model/codeshell/README.md | 14 +-
.../PyTorch-Models/Model/deciLM-7b/README.md | 14 +-
.../Model/deepseek-moe/README.md | 14 +-
.../PyTorch-Models/Model/deepseek/README.md | 14 +-
.../Model/distil-whisper/README.md | 16 +-
.../PyTorch-Models/Model/flan-t5/README.md | 16 +-
.../CPU/PyTorch-Models/Model/fuyu/README.md | 16 +-
.../Model/internlm-xcomposer/README.md | 16 +-
.../PyTorch-Models/Model/internlm2/README.md | 14 +-
.../CPU/PyTorch-Models/Model/llama2/README.md | 14 +-
.../CPU/PyTorch-Models/Model/llava/README.md | 16 +-
.../CPU/PyTorch-Models/Model/mamba/README.md | 14 +-
.../PyTorch-Models/Model/meta-llama/README.md | 12 +-
.../PyTorch-Models/Model/mistral/README.md | 14 +-
.../PyTorch-Models/Model/mixtral/README.md | 10 +-
.../Model/openai-whisper/readme.md | 14 +-
.../PyTorch-Models/Model/phi-1_5/README.md | 14 +-
.../CPU/PyTorch-Models/Model/phi-2/README.md | 14 +-
.../PyTorch-Models/Model/phixtral/README.md | 14 +-
.../PyTorch-Models/Model/qwen-vl/README.md | 14 +-
.../PyTorch-Models/Model/qwen1.5/README.md | 14 +-
.../PyTorch-Models/Model/skywork/README.md | 14 +-
.../CPU/PyTorch-Models/Model/solar/README.md | 14 +-
.../Model/wizardcoder-python/README.md | 14 +-
.../CPU/PyTorch-Models/Model/yi/README.md | 16 +-
.../CPU/PyTorch-Models/Model/yuan2/README.md | 14 +-
.../CPU/PyTorch-Models/Model/ziya/README.md | 14 +-
.../PyTorch-Models/More-Data-Types/README.md | 8 +-
.../llm/example/CPU/PyTorch-Models/README.md | 4 +-
.../CPU/PyTorch-Models/Save-Load/README.md | 8 +-
.../example/CPU/QLoRA-FineTuning/README.md | 18 +-
.../QLoRA-FineTuning/alpaca-qlora/README.md | 18 +-
.../finetune_one_node_two_sockets.sh | 4 +-
python/llm/example/CPU/README.md | 24 +-
.../CPU/Speculative-Decoding/README.md | 8 +-
.../Speculative-Decoding/baichuan2/README.md | 12 +-
.../Speculative-Decoding/chatglm3/README.md | 10 +-
.../CPU/Speculative-Decoding/llama2/README.md | 12 +-
.../Speculative-Decoding/mistral/README.md | 12 +-
.../CPU/Speculative-Decoding/qwen/README.md | 8 +-
.../Speculative-Decoding/starcoder/README.md | 10 +-
.../CPU/Speculative-Decoding/vicuna/README.md | 12 +-
.../CPU/Speculative-Decoding/ziya/README.md | 10 +-
python/llm/example/CPU/vLLM-Serving/README.md | 20 +-
.../GPU/Applications/autogen/README.md | 16 +-
.../GPU/Applications/streaming-llm/README.md | 8 +-
.../example/GPU/Deepspeed-AutoTP/README.md | 12 +-
.../Advanced-Quantizations/AWQ/README.md | 10 +-
.../Advanced-Quantizations/GGUF-IQ2/README.md | 8 +-
.../Advanced-Quantizations/GGUF/README.md | 10 +-
.../Advanced-Quantizations/GPTQ/README.md | 10 +-
.../Model/README.md | 4 +-
.../Model/aquila/README.md | 12 +-
.../Model/aquila2/README.md | 12 +-
.../Model/baichuan/README.md | 10 +-
.../Model/baichuan2/README.md | 10 +-
.../Model/bluelm/README.md | 10 +-
.../Model/chatglm2/README.md | 16 +-
.../Model/chatglm3/README.md | 16 +-
.../Model/chinese-llama2/README.md | 10 +-
.../Model/codellama/readme.md | 10 +-
.../Model/deciLM-7b/README.md | 12 +-
.../Model/deepseek/README.md | 10 +-
.../Model/distil-whisper/README.md | 12 +-
.../Model/dolly-v1/README.md | 12 +-
.../Model/dolly-v2/README.md | 10 +-
.../Model/falcon/README.md | 12 +-
.../Model/flan-t5/README.md | 12 +-
.../Model/gemma/README.md | 14 +-
.../Model/gpt-j/readme.md | 10 +-
.../Model/internlm/README.md | 10 +-
.../Model/internlm2/README.md | 10 +-
.../Model/llama2/README.md | 10 +-
.../Model/mistral/README.md | 12 +-
.../Model/mixtral/README.md | 12 +-
.../Model/mpt/README.md | 10 +-
.../Model/phi-1_5/README.md | 10 +-
.../Model/phi-2/README.md | 10 +-
.../Model/phixtral/README.md | 10 +-
.../Model/qwen-vl/README.md | 12 +-
.../Model/qwen/README.md | 10 +-
.../Model/qwen1.5/README.md | 10 +-
.../Model/redpajama/README.md | 12 +-
.../Model/replit/README.md | 12 +-
.../Model/rwkv4/README.md | 10 +-
.../Model/rwkv5/README.md | 10 +-
.../Model/solar/README.md | 10 +-
.../Model/starcoder/readme.md | 10 +-
.../Model/vicuna/README.md | 12 +-
.../Model/voiceassistant/README.md | 24 +-
.../Model/whisper/readme.md | 10 +-
.../Model/yi/README.md | 12 +-
.../Model/yuan2/README.md | 10 +-
.../More-Data-Types/README.md | 6 +-
.../GPU/HF-Transformers-AutoModels/README.md | 4 +-
.../Save-Load/README.md | 10 +-
.../example/GPU/LLM-Finetuning/DPO/README.md | 8 +-
.../GPU/LLM-Finetuning/HF-PEFT/README.md | 6 +-
.../example/GPU/LLM-Finetuning/LoRA/README.md | 12 +-
.../lora_finetune_llama2_7b_arc_1_card.sh | 2 +-
...lora_finetune_llama2_7b_pvc_1110_4_card.sh | 2 +-
...lora_finetune_llama2_7b_pvc_1550_1_tile.sh | 2 +-
...lora_finetune_llama2_7b_pvc_1550_4_card.sh | 2 +-
.../GPU/LLM-Finetuning/QA-LoRA/README.md | 12 +-
.../qalora_finetune_llama2_7b_arc_1_card.sh | 2 +-
.../qalora_finetune_llama2_7b_arc_2_card.sh | 2 +-
...lora_finetune_llama2_7b_pvc_1550_1_card.sh | 2 +-
...lora_finetune_llama2_7b_pvc_1550_1_tile.sh | 2 +-
.../GPU/LLM-Finetuning/QLoRA/README.md | 6 +-
.../QLoRA/alpaca-qlora/README.md | 14 +-
...ora_finetune_llama2_13b_pvc_1550_1_card.sh | 2 +-
...ora_finetune_llama2_13b_pvc_1550_1_tile.sh | 2 +-
...ora_finetune_llama2_13b_pvc_1550_4_card.sh | 2 +-
...ora_finetune_llama2_70b_pvc_1550_1_card.sh | 4 +-
...ora_finetune_llama2_70b_pvc_1550_4_card.sh | 4 +-
.../qlora_finetune_llama2_7b_arc_1_card.sh | 2 +-
.../qlora_finetune_llama2_7b_arc_2_card.sh | 2 +-
...lora_finetune_llama2_7b_flex_170_1_card.sh | 2 +-
...lora_finetune_llama2_7b_flex_170_3_card.sh | 2 +-
...lora_finetune_llama2_7b_pvc_1100_1_card.sh | 2 +-
...lora_finetune_llama2_7b_pvc_1100_4_card.sh | 2 +-
...lora_finetune_llama2_7b_pvc_1550_1_card.sh | 2 +-
...lora_finetune_llama2_7b_pvc_1550_4_card.sh | 2 +-
.../QLoRA/simple-example/README.md | 8 +-
.../QLoRA/trl-example/README.md | 8 +-
.../llm/example/GPU/LLM-Finetuning/README.md | 4 +-
.../GPU/LLM-Finetuning/ReLora/README.md | 12 +-
.../relora_finetune_llama2_7b_arc_1_card.sh | 2 +-
.../relora_finetune_llama2_7b_arc_2_card.sh | 2 +-
...lora_finetune_llama2_7b_pvc_1550_1_card.sh | 2 +-
...lora_finetune_llama2_7b_pvc_1550_4_card.sh | 2 +-
.../LangChain/transformer_int4_gpu/README.md | 8 +-
python/llm/example/GPU/LlamaIndex/README.md | 4 +-
.../example/GPU/ModelScope-Models/README.md | 10 +-
.../GPU/ModelScope-Models/Save-Load/README.md | 10 +-
.../GPU/Pipeline-Parallel-Inference/README.md | 10 +-
.../GPU/PyTorch-Models/Model/README.md | 4 +-
.../PyTorch-Models/Model/aquila2/README.md | 12 +-
.../PyTorch-Models/Model/baichuan/README.md | 12 +-
.../PyTorch-Models/Model/baichuan2/README.md | 12 +-
.../GPU/PyTorch-Models/Model/bark/README.md | 18 +-
.../GPU/PyTorch-Models/Model/bluelm/README.md | 12 +-
.../PyTorch-Models/Model/chatglm2/README.md | 20 +-
.../PyTorch-Models/Model/chatglm3/README.md | 20 +-
.../PyTorch-Models/Model/codellama/README.md | 12 +-
.../PyTorch-Models/Model/deciLM-7b/README.md | 12 +-
.../PyTorch-Models/Model/deepseek/README.md | 12 +-
.../Model/distil-whisper/README.md | 12 +-
.../PyTorch-Models/Model/dolly-v1/README.md | 12 +-
.../PyTorch-Models/Model/dolly-v2/README.md | 12 +-
.../PyTorch-Models/Model/flan-t5/README.md | 12 +-
.../PyTorch-Models/Model/internlm2/README.md | 10 +-
.../GPU/PyTorch-Models/Model/llama2/README.md | 12 +-
.../GPU/PyTorch-Models/Model/llava/README.md | 12 +-
.../GPU/PyTorch-Models/Model/mamba/README.md | 10 +-
.../PyTorch-Models/Model/mistral/README.md | 12 +-
.../PyTorch-Models/Model/mixtral/README.md | 12 +-
.../PyTorch-Models/Model/phi-1_5/README.md | 12 +-
.../GPU/PyTorch-Models/Model/phi-2/README.md | 12 +-
.../PyTorch-Models/Model/phixtral/README.md | 12 +-
.../PyTorch-Models/Model/qwen-vl/README.md | 12 +-
.../PyTorch-Models/Model/qwen1.5/README.md | 10 +-
.../GPU/PyTorch-Models/Model/replit/README.md | 12 +-
.../GPU/PyTorch-Models/Model/solar/README.md | 12 +-
.../PyTorch-Models/Model/speech-t5/README.md | 12 +-
.../PyTorch-Models/Model/starcoder/README.md | 12 +-
.../GPU/PyTorch-Models/Model/yi/README.md | 12 +-
.../GPU/PyTorch-Models/Model/yuan2/README.md | 12 +-
.../PyTorch-Models/More-Data-Types/README.md | 8 +-
.../llm/example/GPU/PyTorch-Models/README.md | 4 +-
.../GPU/PyTorch-Models/Save-Load/README.md | 8 +-
python/llm/example/GPU/README.md | 22 +-
.../GPU/Speculative-Decoding/README.md | 6 +-
.../Speculative-Decoding/baichuan2/README.md | 8 +-
.../Speculative-Decoding/chatglm3/README.md | 8 +-
.../GPU/Speculative-Decoding/gpt-j/README.md | 8 +-
.../GPU/Speculative-Decoding/llama2/README.md | 8 +-
.../Speculative-Decoding/mistral/README.md | 8 +-
.../GPU/Speculative-Decoding/qwen/README.md | 8 +-
python/llm/example/GPU/vLLM-Serving/README.md | 18 +-
python/llm/portable-zip/README-ui.md | 6 +-
python/llm/portable-zip/README.md | 6 +-
python/llm/portable-zip/setup.md | 6 +-
python/llm/scripts/README.md | 8 +-
.../src/ipex_llm/serving/fastchat/README.md | 64 +-
579 files changed, 1940 insertions(+), 25873 deletions(-)
delete mode 100644 docs/readthedocs/source/doc/Application/blogs.md
delete mode 100644 docs/readthedocs/source/doc/Application/index.rst
delete mode 100644 docs/readthedocs/source/doc/Application/powered-by.md
delete mode 100644 docs/readthedocs/source/doc/Application/presentations.md
delete mode 100644 docs/readthedocs/source/doc/Chronos/Howto/docker_guide_single_node.md
delete mode 100644 docs/readthedocs/source/doc/Chronos/Howto/how_to_choose_forecasting_alg.md
delete mode 100644 docs/readthedocs/source/doc/Chronos/Howto/how_to_create_forecaster.nblink
delete mode 100644 docs/readthedocs/source/doc/Chronos/Howto/how_to_evaluate_a_forecaster.nblink
delete mode 100644 docs/readthedocs/source/doc/Chronos/Howto/how_to_export_data_processing_pipeline_to_torchscript.nblink
delete mode 100644 docs/readthedocs/source/doc/Chronos/Howto/how_to_export_onnx_files.nblink
delete mode 100644 docs/readthedocs/source/doc/Chronos/Howto/how_to_export_openvino_files.nblink
delete mode 100644 docs/readthedocs/source/doc/Chronos/Howto/how_to_export_torchscript_files.nblink
delete mode 100644 docs/readthedocs/source/doc/Chronos/Howto/how_to_generate_confidence_interval_for_prediction.nblink
delete mode 100644 docs/readthedocs/source/doc/Chronos/Howto/how_to_optimize_a_forecaster.nblink
delete mode 100644 docs/readthedocs/source/doc/Chronos/Howto/how_to_preprocess_my_data.nblink
delete mode 100644 docs/readthedocs/source/doc/Chronos/Howto/how_to_process_data_in_production_environment.nblink
delete mode 100644 docs/readthedocs/source/doc/Chronos/Howto/how_to_save_and_load_forecaster.nblink
delete mode 100644 docs/readthedocs/source/doc/Chronos/Howto/how_to_speedup_inference_of_forecaster_through_ONNXRuntime.nblink
delete mode 100644 docs/readthedocs/source/doc/Chronos/Howto/how_to_speedup_inference_of_forecaster_through_OpenVINO.nblink
delete mode 100644 docs/readthedocs/source/doc/Chronos/Howto/how_to_train_forecaster_on_one_node.nblink
delete mode 100644 docs/readthedocs/source/doc/Chronos/Howto/how_to_tune_forecaster_model.nblink
delete mode 100644 docs/readthedocs/source/doc/Chronos/Howto/how_to_use_benchmark_tool.md
delete mode 100755 docs/readthedocs/source/doc/Chronos/Howto/how_to_use_built-in_datasets.nblink
delete mode 100644 docs/readthedocs/source/doc/Chronos/Howto/how_to_use_forecaster_to_predict_future_data.nblink
delete mode 100644 docs/readthedocs/source/doc/Chronos/Howto/index.rst
delete mode 100644 docs/readthedocs/source/doc/Chronos/Howto/windows_guide.md
delete mode 100644 docs/readthedocs/source/doc/Chronos/Image/aiops-workflow.png
delete mode 100644 docs/readthedocs/source/doc/Chronos/Image/anomaly_detection.svg
delete mode 100644 docs/readthedocs/source/doc/Chronos/Image/automl_hparams.png
delete mode 100644 docs/readthedocs/source/doc/Chronos/Image/automl_monitor.png
delete mode 100644 docs/readthedocs/source/doc/Chronos/Image/automl_scalars.png
delete mode 100644 docs/readthedocs/source/doc/Chronos/Image/forecast-RR.png
delete mode 100644 docs/readthedocs/source/doc/Chronos/Image/forecast-TS.png
delete mode 100644 docs/readthedocs/source/doc/Chronos/Image/forecasting.svg
delete mode 100644 docs/readthedocs/source/doc/Chronos/Image/simulation.svg
delete mode 100644 docs/readthedocs/source/doc/Chronos/Overview/aiops.md
delete mode 100644 docs/readthedocs/source/doc/Chronos/Overview/anomaly_detection.md
delete mode 100644 docs/readthedocs/source/doc/Chronos/Overview/chronos_known_issue.md
delete mode 100644 docs/readthedocs/source/doc/Chronos/Overview/data_processing_feature_engineering.md
delete mode 100644 docs/readthedocs/source/doc/Chronos/Overview/deep_dive.rst
delete mode 100644 docs/readthedocs/source/doc/Chronos/Overview/forecasting.md
delete mode 100644 docs/readthedocs/source/doc/Chronos/Overview/install.md
delete mode 100644 docs/readthedocs/source/doc/Chronos/Overview/quick-tour.rst
delete mode 100644 docs/readthedocs/source/doc/Chronos/Overview/simulation.md
delete mode 100644 docs/readthedocs/source/doc/Chronos/Overview/speed_up.md
delete mode 100644 docs/readthedocs/source/doc/Chronos/Overview/useful_functionalities.md
delete mode 100644 docs/readthedocs/source/doc/Chronos/Overview/visualization.md
delete mode 100644 docs/readthedocs/source/doc/Chronos/QuickStart/chronos-anomaly-detector.md
delete mode 100644 docs/readthedocs/source/doc/Chronos/QuickStart/chronos-autotsest-quickstart.md
delete mode 100644 docs/readthedocs/source/doc/Chronos/QuickStart/chronos-tsdataset-forecaster-quickstart.md
delete mode 100644 docs/readthedocs/source/doc/Chronos/QuickStart/index.md
delete mode 100644 docs/readthedocs/source/doc/Chronos/index.rst
delete mode 100644 docs/readthedocs/source/doc/DLlib/Image/tensorboard-histo1.png
delete mode 100644 docs/readthedocs/source/doc/DLlib/Image/tensorboard-histo2.png
delete mode 100644 docs/readthedocs/source/doc/DLlib/Image/tensorboard-scalar.png
delete mode 100644 docs/readthedocs/source/doc/DLlib/Overview/dllib.md
delete mode 100644 docs/readthedocs/source/doc/DLlib/Overview/index.rst
delete mode 100644 docs/readthedocs/source/doc/DLlib/Overview/install.md
delete mode 100644 docs/readthedocs/source/doc/DLlib/Overview/keras-api.md
delete mode 100644 docs/readthedocs/source/doc/DLlib/Overview/nnframes.md
delete mode 100644 docs/readthedocs/source/doc/DLlib/Overview/visualization.md
delete mode 100644 docs/readthedocs/source/doc/DLlib/QuickStart/dllib-quickstart.md
delete mode 100644 docs/readthedocs/source/doc/DLlib/QuickStart/index.md
delete mode 100644 docs/readthedocs/source/doc/DLlib/QuickStart/python-getting-started.md
delete mode 100644 docs/readthedocs/source/doc/DLlib/QuickStart/scala-getting-started.md
delete mode 100644 docs/readthedocs/source/doc/DLlib/index.rst
delete mode 100644 docs/readthedocs/source/doc/Friesian/examples.md
delete mode 100644 docs/readthedocs/source/doc/Friesian/index.rst
delete mode 100644 docs/readthedocs/source/doc/Friesian/intro.rst
delete mode 100644 docs/readthedocs/source/doc/Friesian/serving.md
delete mode 100644 docs/readthedocs/source/doc/GetStarted/index.rst
delete mode 100644 docs/readthedocs/source/doc/GetStarted/install.rst
delete mode 100644 docs/readthedocs/source/doc/GetStarted/paper.md
delete mode 100644 docs/readthedocs/source/doc/GetStarted/usecase.rst
delete mode 100644 docs/readthedocs/source/doc/GetStarted/videos.md
delete mode 100644 docs/readthedocs/source/doc/Nano/Howto/Inference/OpenVINO/accelerate_inference_openvino_gpu.nblink
delete mode 100644 docs/readthedocs/source/doc/Nano/Howto/Inference/OpenVINO/index.rst
delete mode 100644 docs/readthedocs/source/doc/Nano/Howto/Inference/OpenVINO/openvino_inference.nblink
delete mode 100644 docs/readthedocs/source/doc/Nano/Howto/Inference/OpenVINO/openvino_inference_async.nblink
delete mode 100644 docs/readthedocs/source/doc/Nano/Howto/Inference/PyTorch/accelerate_pytorch_inference_async_pipeline.nblink
delete mode 100644 docs/readthedocs/source/doc/Nano/Howto/Inference/PyTorch/accelerate_pytorch_inference_gpu.nblink
delete mode 100644 docs/readthedocs/source/doc/Nano/Howto/Inference/PyTorch/accelerate_pytorch_inference_jit_ipex.nblink
delete mode 100644 docs/readthedocs/source/doc/Nano/Howto/Inference/PyTorch/accelerate_pytorch_inference_onnx.nblink
delete mode 100644 docs/readthedocs/source/doc/Nano/Howto/Inference/PyTorch/accelerate_pytorch_inference_openvino.nblink
delete mode 100644 docs/readthedocs/source/doc/Nano/Howto/Inference/PyTorch/index.rst
delete mode 100644 docs/readthedocs/source/doc/Nano/Howto/Inference/PyTorch/inference_optimizer_optimize.nblink
delete mode 100644 docs/readthedocs/source/doc/Nano/Howto/Inference/PyTorch/multi_instance_pytorch_inference.nblink
delete mode 100644 docs/readthedocs/source/doc/Nano/Howto/Inference/PyTorch/pytorch_context_manager.nblink
delete mode 100644 docs/readthedocs/source/doc/Nano/Howto/Inference/PyTorch/pytorch_save_and_load_ipex.nblink
delete mode 100644 docs/readthedocs/source/doc/Nano/Howto/Inference/PyTorch/pytorch_save_and_load_jit.nblink
delete mode 100644 docs/readthedocs/source/doc/Nano/Howto/Inference/PyTorch/pytorch_save_and_load_onnx.nblink
delete mode 100644 docs/readthedocs/source/doc/Nano/Howto/Inference/PyTorch/pytorch_save_and_load_openvino.nblink
delete mode 100644 docs/readthedocs/source/doc/Nano/Howto/Inference/PyTorch/quantize_pytorch_inference_inc.nblink
delete mode 100644 docs/readthedocs/source/doc/Nano/Howto/Inference/PyTorch/quantize_pytorch_inference_pot.nblink
delete mode 100644 docs/readthedocs/source/doc/Nano/Howto/Inference/TensorFlow/accelerate_tensorflow_inference_onnx.nblink
delete mode 100644 docs/readthedocs/source/doc/Nano/Howto/Inference/TensorFlow/accelerate_tensorflow_inference_openvino.nblink
delete mode 100644 docs/readthedocs/source/doc/Nano/Howto/Inference/TensorFlow/index.rst
delete mode 100644 docs/readthedocs/source/doc/Nano/Howto/Inference/TensorFlow/tensorflow_inference_bf16.nblink
delete mode 100644 docs/readthedocs/source/doc/Nano/Howto/Inference/TensorFlow/tensorflow_save_and_load_onnx.nblink
delete mode 100644 docs/readthedocs/source/doc/Nano/Howto/Inference/TensorFlow/tensorflow_save_and_load_openvino.nblink
delete mode 100644 docs/readthedocs/source/doc/Nano/Howto/Inference/index.rst
delete mode 100644 docs/readthedocs/source/doc/Nano/Howto/Install/index.rst
delete mode 100644 docs/readthedocs/source/doc/Nano/Howto/Install/install_in_colab.md
delete mode 100644 docs/readthedocs/source/doc/Nano/Howto/Install/windows_guide.md
delete mode 100644 docs/readthedocs/source/doc/Nano/Howto/Preprocessing/PyTorch/accelerate_pytorch_cv_data_pipeline.nblink
delete mode 100644 docs/readthedocs/source/doc/Nano/Howto/Preprocessing/PyTorch/index.rst
delete mode 100644 docs/readthedocs/source/doc/Nano/Howto/Preprocessing/index.rst
delete mode 100644 docs/readthedocs/source/doc/Nano/Howto/Training/General/choose_num_processes_training.md
delete mode 100644 docs/readthedocs/source/doc/Nano/Howto/Training/General/index.rst
delete mode 100644 docs/readthedocs/source/doc/Nano/Howto/Training/PyTorch/accelerate_pytorch_training_bf16.nblink
delete mode 100644 docs/readthedocs/source/doc/Nano/Howto/Training/PyTorch/accelerate_pytorch_training_ipex.nblink
delete mode 100644 docs/readthedocs/source/doc/Nano/Howto/Training/PyTorch/accelerate_pytorch_training_multi_instance.nblink
delete mode 100644 docs/readthedocs/source/doc/Nano/Howto/Training/PyTorch/convert_pytorch_training_torchnano.nblink
delete mode 100644 docs/readthedocs/source/doc/Nano/Howto/Training/PyTorch/index.rst
delete mode 100644 docs/readthedocs/source/doc/Nano/Howto/Training/PyTorch/pytorch_training_channels_last.nblink
delete mode 100644 docs/readthedocs/source/doc/Nano/Howto/Training/PyTorch/use_nano_decorator_pytorch_training.nblink
delete mode 100644 docs/readthedocs/source/doc/Nano/Howto/Training/PyTorchLightning/accelerate_pytorch_lightning_training_ipex.nblink
delete mode 100644 docs/readthedocs/source/doc/Nano/Howto/Training/PyTorchLightning/accelerate_pytorch_lightning_training_multi_instance.nblink
delete mode 100644 docs/readthedocs/source/doc/Nano/Howto/Training/PyTorchLightning/index.rst
delete mode 100644 docs/readthedocs/source/doc/Nano/Howto/Training/PyTorchLightning/pytorch_lightning_training_bf16.nblink
delete mode 100644 docs/readthedocs/source/doc/Nano/Howto/Training/PyTorchLightning/pytorch_lightning_training_channels_last.nblink
delete mode 100644 docs/readthedocs/source/doc/Nano/Howto/Training/TensorFlow/accelerate_tensorflow_training_multi_instance.nblink
delete mode 100644 docs/readthedocs/source/doc/Nano/Howto/Training/TensorFlow/index.rst
delete mode 100644 docs/readthedocs/source/doc/Nano/Howto/Training/TensorFlow/tensorflow_custom_training_multi_instance.nblink
delete mode 100644 docs/readthedocs/source/doc/Nano/Howto/Training/TensorFlow/tensorflow_training_bf16.nblink
delete mode 100644 docs/readthedocs/source/doc/Nano/Howto/Training/TensorFlow/tensorflow_training_embedding_sparseadam.nblink
delete mode 100644 docs/readthedocs/source/doc/Nano/Howto/Training/index.rst
delete mode 100644 docs/readthedocs/source/doc/Nano/Howto/index.rst
delete mode 100644 docs/readthedocs/source/doc/Nano/Image/learning_rate.png
delete mode 100644 docs/readthedocs/source/doc/Nano/Overview/hpo.rst
delete mode 100644 docs/readthedocs/source/doc/Nano/Overview/index.rst
delete mode 100644 docs/readthedocs/source/doc/Nano/Overview/install.md
delete mode 100644 docs/readthedocs/source/doc/Nano/Overview/known_issues.md
delete mode 100644 docs/readthedocs/source/doc/Nano/Overview/nano.md
delete mode 100644 docs/readthedocs/source/doc/Nano/Overview/pytorch_cuda_patch.md
delete mode 100644 docs/readthedocs/source/doc/Nano/Overview/pytorch_inference.md
delete mode 100644 docs/readthedocs/source/doc/Nano/Overview/pytorch_train.md
delete mode 100644 docs/readthedocs/source/doc/Nano/Overview/support.md
delete mode 100644 docs/readthedocs/source/doc/Nano/Overview/tensorflow_inference.md
delete mode 100644 docs/readthedocs/source/doc/Nano/Overview/tensorflow_train.md
delete mode 100644 docs/readthedocs/source/doc/Nano/Overview/troubshooting.md
delete mode 100644 docs/readthedocs/source/doc/Nano/Overview/userguide.rst
delete mode 100644 docs/readthedocs/source/doc/Nano/QuickStart/index.md
delete mode 100644 docs/readthedocs/source/doc/Nano/QuickStart/pytorch_nano.md
delete mode 100644 docs/readthedocs/source/doc/Nano/QuickStart/pytorch_onnxruntime.md
delete mode 100644 docs/readthedocs/source/doc/Nano/QuickStart/pytorch_openvino.md
delete mode 100644 docs/readthedocs/source/doc/Nano/QuickStart/pytorch_quantization_inc.md
delete mode 100644 docs/readthedocs/source/doc/Nano/QuickStart/pytorch_quantization_inc_onnx.md
delete mode 100644 docs/readthedocs/source/doc/Nano/QuickStart/pytorch_quantization_openvino.md
delete mode 100644 docs/readthedocs/source/doc/Nano/QuickStart/pytorch_train_quickstart.md
delete mode 100644 docs/readthedocs/source/doc/Nano/QuickStart/tensorflow_embedding.md
delete mode 100644 docs/readthedocs/source/doc/Nano/QuickStart/tensorflow_quantization_quickstart.md
delete mode 100644 docs/readthedocs/source/doc/Nano/QuickStart/tensorflow_train_quickstart.md
delete mode 100644 docs/readthedocs/source/doc/Nano/Tutorials/custom.nblink
delete mode 100644 docs/readthedocs/source/doc/Nano/Tutorials/seq_and_func.nblink
delete mode 100644 docs/readthedocs/source/doc/Nano/index.rst
delete mode 100644 docs/readthedocs/source/doc/Orca/Howto/autoestimator-pytorch-quickstart.md
delete mode 100644 docs/readthedocs/source/doc/Orca/Howto/autoxgboost-quickstart.md
delete mode 100644 docs/readthedocs/source/doc/Orca/Howto/index.rst
delete mode 100644 docs/readthedocs/source/doc/Orca/Howto/pytorch-quickstart-bigdl.md
delete mode 100644 docs/readthedocs/source/doc/Orca/Howto/pytorch-quickstart-ray.md
delete mode 100644 docs/readthedocs/source/doc/Orca/Howto/pytorch-quickstart.md
delete mode 100644 docs/readthedocs/source/doc/Orca/Howto/ray-quickstart.md
delete mode 100644 docs/readthedocs/source/doc/Orca/Howto/spark-dataframe.md
delete mode 100644 docs/readthedocs/source/doc/Orca/Howto/tf1-quickstart.md
delete mode 100644 docs/readthedocs/source/doc/Orca/Howto/tf1keras-quickstart.md
delete mode 100644 docs/readthedocs/source/doc/Orca/Howto/tf2keras-quickstart.md
delete mode 100644 docs/readthedocs/source/doc/Orca/Howto/xshards-pandas.md
delete mode 100644 docs/readthedocs/source/doc/Orca/Overview/data-parallel-processing.md
delete mode 100644 docs/readthedocs/source/doc/Orca/Overview/distributed-training-inference.md
delete mode 100644 docs/readthedocs/source/doc/Orca/Overview/distributed-tuning.md
delete mode 100644 docs/readthedocs/source/doc/Orca/Overview/index.rst
delete mode 100644 docs/readthedocs/source/doc/Orca/Overview/install.md
delete mode 100644 docs/readthedocs/source/doc/Orca/Overview/known_issues.md
delete mode 100644 docs/readthedocs/source/doc/Orca/Overview/orca-context.md
delete mode 100644 docs/readthedocs/source/doc/Orca/Overview/orca.md
delete mode 100644 docs/readthedocs/source/doc/Orca/Overview/ray.md
delete mode 100644 docs/readthedocs/source/doc/Orca/Tutorial/index.rst
delete mode 100644 docs/readthedocs/source/doc/Orca/Tutorial/k8s.md
delete mode 100644 docs/readthedocs/source/doc/Orca/Tutorial/yarn.md
delete mode 100644 docs/readthedocs/source/doc/Orca/index.rst
delete mode 100644 docs/readthedocs/source/doc/PPML/Dev/python_test.md
delete mode 100644 docs/readthedocs/source/doc/PPML/Dev/scala_test.md
delete mode 100644 docs/readthedocs/source/doc/PPML/Overview/ali_ecs_occlum_cn.md
delete mode 100644 docs/readthedocs/source/doc/PPML/Overview/attestation_basic.md
delete mode 100644 docs/readthedocs/source/doc/PPML/Overview/azure_ppml.md
delete mode 100644 docs/readthedocs/source/doc/PPML/Overview/azure_ppml_occlum.md
delete mode 100644 docs/readthedocs/source/doc/PPML/Overview/devguide.md
delete mode 100644 docs/readthedocs/source/doc/PPML/Overview/examples.rst
delete mode 100644 docs/readthedocs/source/doc/PPML/Overview/install.md
delete mode 100644 docs/readthedocs/source/doc/PPML/Overview/intro.md
delete mode 100644 docs/readthedocs/source/doc/PPML/Overview/misc.rst
delete mode 100644 docs/readthedocs/source/doc/PPML/Overview/ppml.md
delete mode 100644 docs/readthedocs/source/doc/PPML/Overview/quicktour.md
delete mode 100644 docs/readthedocs/source/doc/PPML/Overview/secure_lightgbm_on_spark.md
delete mode 100644 docs/readthedocs/source/doc/PPML/Overview/trusted_big_data_analytics_and_ml.md
delete mode 100644 docs/readthedocs/source/doc/PPML/Overview/trusted_fl.md
delete mode 100644 docs/readthedocs/source/doc/PPML/QuickStart/deploy_intel_sgx_device_plugin_for_kubernetes.md
delete mode 100644 docs/readthedocs/source/doc/PPML/QuickStart/deploy_ppml_in_production.md
delete mode 100644 docs/readthedocs/source/doc/PPML/QuickStart/end-to-end.md
delete mode 100644 docs/readthedocs/source/doc/PPML/QuickStart/install_sgx_driver.md
delete mode 100644 docs/readthedocs/source/doc/PPML/QuickStart/secure_your_services.md
delete mode 100644 docs/readthedocs/source/doc/PPML/QuickStart/tpc-ds_with_sparksql_on_k8s.md
delete mode 100644 docs/readthedocs/source/doc/PPML/QuickStart/tpc-h_with_sparksql_on_k8s.md
delete mode 100644 docs/readthedocs/source/doc/PPML/QuickStart/trusted-serving-on-k8s-guide.md
delete mode 100644 docs/readthedocs/source/doc/PPML/VFL/overview.md
delete mode 100644 docs/readthedocs/source/doc/PPML/VFL/user_guide.md
delete mode 100644 docs/readthedocs/source/doc/PPML/VFL/vfl_he.md
delete mode 100644 docs/readthedocs/source/doc/PPML/images/fl_architecture.png
delete mode 100644 docs/readthedocs/source/doc/PPML/images/fl_ckks.PNG
delete mode 100644 docs/readthedocs/source/doc/PPML/images/occlum_maa.png
delete mode 100644 docs/readthedocs/source/doc/PPML/images/ppml_azure_latest.png
delete mode 100644 docs/readthedocs/source/doc/PPML/images/ppml_azure_workflow.png
delete mode 100644 docs/readthedocs/source/doc/PPML/images/ppml_build_deploy.png
delete mode 100644 docs/readthedocs/source/doc/PPML/images/ppml_dev_basic.png
delete mode 100644 docs/readthedocs/source/doc/PPML/images/ppml_scope.png
delete mode 100644 docs/readthedocs/source/doc/PPML/images/ppml_sgx_enclave.png
delete mode 100644 docs/readthedocs/source/doc/PPML/images/ppml_test_dev.png
delete mode 100644 docs/readthedocs/source/doc/PPML/images/spark_sgx_azure.png
delete mode 100755 docs/readthedocs/source/doc/PPML/images/spark_sgx_occlum.png
delete mode 100644 docs/readthedocs/source/doc/PPML/index.rst
delete mode 100644 docs/readthedocs/source/doc/Serving/Example/cluster-serving-http-example.ipynb
delete mode 100644 docs/readthedocs/source/doc/Serving/Example/example.md
delete mode 100644 docs/readthedocs/source/doc/Serving/Example/keras-to-cluster-serving-example.ipynb
delete mode 100644 docs/readthedocs/source/doc/Serving/Example/l08c08_forecasting_with_lstm.py
delete mode 100644 docs/readthedocs/source/doc/Serving/Example/l10c03_nlp_constructing_text_generation_model.py
delete mode 100644 docs/readthedocs/source/doc/Serving/Example/tf1-to-cluster-serving-example.ipynb
delete mode 100644 docs/readthedocs/source/doc/Serving/Example/transfer_learning.py
delete mode 100644 docs/readthedocs/source/doc/Serving/FAQ/contribute-guide.md
delete mode 100644 docs/readthedocs/source/doc/Serving/FAQ/faq.md
delete mode 100644 docs/readthedocs/source/doc/Serving/Overview/cluster_serving_overview.jpg
delete mode 100644 docs/readthedocs/source/doc/Serving/Overview/cluster_serving_steps.jpg
delete mode 100644 docs/readthedocs/source/doc/Serving/Overview/serving.md
delete mode 100644 docs/readthedocs/source/doc/Serving/ProgrammingGuide/serving-inference.md
delete mode 100644 docs/readthedocs/source/doc/Serving/ProgrammingGuide/serving-installation.md
delete mode 100644 docs/readthedocs/source/doc/Serving/ProgrammingGuide/serving-start.md
delete mode 100644 docs/readthedocs/source/doc/Serving/QuickStart/serving-quickstart.md
delete mode 100644 docs/readthedocs/source/doc/Serving/index.rst
delete mode 100644 docs/readthedocs/source/doc/UseCase/tensorboard.md
delete mode 100644 docs/readthedocs/source/doc/UserGuide/colab.md
delete mode 100644 docs/readthedocs/source/doc/UserGuide/contributor.rst
delete mode 100644 docs/readthedocs/source/doc/UserGuide/databricks.md
delete mode 100644 docs/readthedocs/source/doc/UserGuide/develop.md
delete mode 100644 docs/readthedocs/source/doc/UserGuide/docker.md
delete mode 100644 docs/readthedocs/source/doc/UserGuide/documentation.md
delete mode 100644 docs/readthedocs/source/doc/UserGuide/hadoop.md
delete mode 100644 docs/readthedocs/source/doc/UserGuide/images/Databricks5.PNG
delete mode 100644 docs/readthedocs/source/doc/UserGuide/images/apply-all.png
delete mode 100644 docs/readthedocs/source/doc/UserGuide/images/cluster.png
delete mode 100644 docs/readthedocs/source/doc/UserGuide/images/config-init-script.png
delete mode 100644 docs/readthedocs/source/doc/UserGuide/images/copy-script-path.png
delete mode 100644 docs/readthedocs/source/doc/UserGuide/images/create-cluster.png
delete mode 100644 docs/readthedocs/source/doc/UserGuide/images/db-gloo-socket.png
delete mode 100644 docs/readthedocs/source/doc/UserGuide/images/dbfs.png
delete mode 100644 docs/readthedocs/source/doc/UserGuide/images/dllib-jar.png
delete mode 100644 docs/readthedocs/source/doc/UserGuide/images/dllib-whl.png
delete mode 100644 docs/readthedocs/source/doc/UserGuide/images/init-orca-context.png
delete mode 100644 docs/readthedocs/source/doc/UserGuide/images/install-zip.png
delete mode 100644 docs/readthedocs/source/doc/UserGuide/images/notebook1.jpg
delete mode 100644 docs/readthedocs/source/doc/UserGuide/images/notebook2.jpg
delete mode 100644 docs/readthedocs/source/doc/UserGuide/images/notebook3.jpg
delete mode 100644 docs/readthedocs/source/doc/UserGuide/images/notebook4.jpg
delete mode 100644 docs/readthedocs/source/doc/UserGuide/images/notebook5.jpg
delete mode 100644 docs/readthedocs/source/doc/UserGuide/images/orca-jar.png
delete mode 100644 docs/readthedocs/source/doc/UserGuide/images/orca-whl.png
delete mode 100644 docs/readthedocs/source/doc/UserGuide/images/spark-config.png
delete mode 100644 docs/readthedocs/source/doc/UserGuide/images/spark-context.png
delete mode 100644 docs/readthedocs/source/doc/UserGuide/images/token.png
delete mode 100644 docs/readthedocs/source/doc/UserGuide/images/upload-init-script.png
delete mode 100644 docs/readthedocs/source/doc/UserGuide/images/url.png
delete mode 100644 docs/readthedocs/source/doc/UserGuide/images/verify-dbfs.png
delete mode 100644 docs/readthedocs/source/doc/UserGuide/index.rst
delete mode 100644 docs/readthedocs/source/doc/UserGuide/k8s.md
delete mode 100644 docs/readthedocs/source/doc/UserGuide/known_issues.md
delete mode 100644 docs/readthedocs/source/doc/UserGuide/notebooks.md
delete mode 100644 docs/readthedocs/source/doc/UserGuide/python.md
delete mode 100644 docs/readthedocs/source/doc/UserGuide/scala.md
delete mode 100644 docs/readthedocs/source/doc/UserGuide/win.md
diff --git a/README.md b/README.md
index 4c6110fe..ab006507 100644
--- a/README.md
+++ b/README.md
@@ -1,38 +1,31 @@
-
+## IPEX-LLM
-

-
-
-
----
-## BigDL-LLM
-
-**`bigdl-llm`** is a library for running **LLM** (large language model) on Intel **XPU** (from *Laptop* to *GPU* to *Cloud*) using **INT4/FP4/INT8/FP8** with very low latency[^1] (for any **PyTorch** model).
+**`ipex-llm`** is a library for running **LLM** (large language model) on Intel **XPU** (from *Laptop* to *GPU* to *Cloud*) using **INT4/FP4/INT8/FP8** with very low latency[^1] (for any **PyTorch** model).
> *It is built on the excellent work of [llama.cpp](https://github.com/ggerganov/llama.cpp), [bitsandbytes](https://github.com/TimDettmers/bitsandbytes), [qlora](https://github.com/artidoro/qlora), [gptq](https://github.com/IST-DASLab/gptq), [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ), [awq](https://github.com/mit-han-lab/llm-awq), [AutoAWQ](https://github.com/casper-hansen/AutoAWQ), [vLLM](https://github.com/vllm-project/vllm), [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), [gptq_for_llama](https://github.com/qwopqwop200/GPTQ-for-LLaMa), [chatglm.cpp](https://github.com/li-plus/chatglm.cpp), [redpajama.cpp](https://github.com/togethercomputer/redpajama.cpp), [gptneox.cpp](https://github.com/byroneverson/gptneox.cpp), [bloomz.cpp](https://github.com/NouamaneTazi/bloomz.cpp/), etc.*
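The low-bit formats named above (INT4/FP4/INT8/FP8) matter mainly for memory footprint. As a back-of-the-envelope sketch (not from the original README, and ignoring activations, KV cache, and quantization metadata such as scales and zero-points), weight storage scales linearly with bits per parameter:

```python
# Rough weight-memory estimate for low-bit formats. Illustrative only:
# real deployments add overhead for activations, KV cache, and scales.

def weight_memory_gb(n_params: float, bits: int) -> float:
    """Approximate weight storage in GiB for a model with n_params weights."""
    return n_params * bits / 8 / (1024 ** 3)

for bits in (16, 8, 4, 2):
    print(f"7B model @ {bits:>2}-bit: {weight_memory_gb(7e9, bits):.2f} GiB")
```

By this estimate a 7B-parameter model drops from roughly 13 GiB of weights at FP16 to about 3.3 GiB at 4-bit, which is why low-bit loading makes laptop and iGPU inference feasible.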
### Latest update 🔥
-- [2024/03] **LangChain** added support for `bigdl-llm`; see the details [here](https://python.langchain.com/docs/integrations/llms/bigdl).
-- [2024/02] `bigdl-llm` now supports directly loading model from [ModelScope](python/llm/example/GPU/ModelScope-Models) ([éę](python/llm/example/CPU/ModelScope-Models)).
-- [2024/02] `bigdl-llm` added inital **INT2** support (based on llama.cpp [IQ2](python/llm/example/GPU/HF-Transformers-AutoModels/Advanced-Quantizations/GGUF-IQ2) mechanism), which makes it possible to run large-size LLM (e.g., Mixtral-8x7B) on Intel GPU with 16GB VRAM.
-- [2024/02] Users can now use `bigdl-llm` through [Text-Generation-WebUI](https://github.com/intel-analytics/text-generation-webui) GUI.
-- [2024/02] `bigdl-llm` now supports *[Self-Speculative Decoding](https://bigdl.readthedocs.io/en/latest/doc/LLM/Inference/Self_Speculative_Decoding.html)*, which in practice brings **~30% speedup** for FP16 and BF16 inference latency on Intel [GPU](python/llm/example/GPU/Speculative-Decoding) and [CPU](python/llm/example/CPU/Speculative-Decoding) respectively.
-- [2024/02] `bigdl-llm` now supports a comprehensive list of LLM finetuning on Intel GPU (including [LoRA](python/llm/example/GPU/LLM-Finetuning/LoRA), [QLoRA](python/llm/example/GPU/LLM-Finetuning/QLoRA), [DPO](python/llm/example/GPU/LLM-Finetuning/DPO), [QA-LoRA](python/llm/example/GPU/LLM-Finetuning/QA-LoRA) and [ReLoRA](python/llm/example/GPU/LLM-Finetuning/ReLora)).
-- [2024/01] Using `bigdl-llm` [QLoRA](python/llm/example/GPU/LLM-Finetuning/QLoRA), we managed to finetune LLaMA2-7B in **21 minutes** and LLaMA2-70B in **3.14 hours** on 8 Intel Max 1550 GPU for [Standford-Alpaca](python/llm/example/GPU/LLM-Finetuning/QLoRA/alpaca-qlora) (see the blog [here](https://www.intel.com/content/www/us/en/developer/articles/technical/finetuning-llms-on-intel-gpus-using-bigdl-llm.html)).
-- [2024/01] ššš ***The default `bigdl-llm` GPU Linux installation has switched from PyTorch 2.0 to PyTorch 2.1, which requires new oneAPI and GPU driver versions. (See the [GPU installation guide](https://bigdl.readthedocs.io/en/latest/doc/LLM/Overview/install_gpu.html) for more details.)***
-- [2023/12] `bigdl-llm` now supports [ReLoRA](python/llm/example/GPU/LLM-Finetuning/ReLora) (see *["ReLoRA: High-Rank Training Through Low-Rank Updates"](https://arxiv.org/abs/2307.05695)*).
-- [2023/12] `bigdl-llm` now supports [Mixtral-8x7B](python/llm/example/GPU/HF-Transformers-AutoModels/Model/mixtral) on both Intel [GPU](python/llm/example/GPU/HF-Transformers-AutoModels/Model/mixtral) and [CPU](python/llm/example/CPU/HF-Transformers-AutoModels/Model/mixtral).
-- [2023/12] `bigdl-llm` now supports [QA-LoRA](python/llm/example/GPU/LLM-Finetuning/QA-LoRA) (see *["QA-LoRA: Quantization-Aware Low-Rank Adaptation of Large Language Models"](https://arxiv.org/abs/2309.14717)*).
-- [2023/12] `bigdl-llm` now supports [FP8 and FP4 inference](python/llm/example/GPU/HF-Transformers-AutoModels/More-Data-Types) on Intel ***GPU***.
-- [2023/11] Initial support for directly loading [GGUF](python/llm/example/GPU/HF-Transformers-AutoModels/Advanced-Quantizations/GGUF), [AWQ](python/llm/example/GPU/HF-Transformers-AutoModels/Advanced-Quantizations/AWQ) and [GPTQ](python/llm/example/GPU/HF-Transformers-AutoModels/Advanced-Quantizations/GPTQ) models into `bigdl-llm` is available.
-- [2023/11] `bigdl-llm` now supports [vLLM continuous batching](python/llm/example/GPU/vLLM-Serving) on both Intel [GPU](python/llm/example/GPU/vLLM-Serving) and [CPU](python/llm/example/CPU/vLLM-Serving).
-- [2023/10] `bigdl-llm` now supports [QLoRA finetuning](python/llm/example/GPU/LLM-Finetuning/QLoRA) on both Intel [GPU](python/llm/example/GPU/LLM-Finetuning/QLoRA) and [CPU](python/llm/example/CPU/QLoRA-FineTuning).
-- [2023/10] `bigdl-llm` now supports [FastChat serving](python/llm/src/bigdl/llm/serving) on on both Intel CPU and GPU.
-- [2023/09] `bigdl-llm` now supports [Intel GPU](python/llm/example/GPU) (including iGPU, Arc, Flex and MAX).
-- [2023/09] `bigdl-llm` [tutorial](https://github.com/intel-analytics/bigdl-llm-tutorial) is released.
-- [2023/09] Over 40 models have been optimized/verified on `bigdl-llm`, including *LLaMA/LLaMA2, ChatGLM2/ChatGLM3, Mistral, Falcon, MPT, LLaVA, WizardCoder, Dolly, Whisper, Baichuan/Baichuan2, InternLM, Skywork, QWen/Qwen-VL, Aquila, MOSS,* and more; see the complete list [here](#verified-models).
-
-### `bigdl-llm` Demos
+- [2024/03] **LangChain** added support for `ipex-llm`; see the details [here](https://python.langchain.com/docs/integrations/llms/bigdl).
+- [2024/02] `ipex-llm` now supports directly loading models from [ModelScope](python/llm/example/GPU/ModelScope-Models) ([魔搭](python/llm/example/CPU/ModelScope-Models)).
+- [2024/02] `ipex-llm` added initial **INT2** support (based on the llama.cpp [IQ2](python/llm/example/GPU/HF-Transformers-AutoModels/Advanced-Quantizations/GGUF-IQ2) mechanism), which makes it possible to run large LLMs (e.g., Mixtral-8x7B) on Intel GPU with 16GB VRAM.
+- [2024/02] Users can now use `ipex-llm` through [Text-Generation-WebUI](https://github.com/intel-analytics/text-generation-webui) GUI.
+- [2024/02] `ipex-llm` now supports *[Self-Speculative Decoding](https://ipex-llm.readthedocs.io/en/latest/doc/LLM/Inference/Self_Speculative_Decoding.html)*, which in practice brings **~30% speedup** for FP16 and BF16 inference latency on Intel [GPU](python/llm/example/GPU/Speculative-Decoding) and [CPU](python/llm/example/CPU/Speculative-Decoding) respectively.
+- [2024/02] `ipex-llm` now supports a comprehensive list of LLM finetuning on Intel GPU (including [LoRA](python/llm/example/GPU/LLM-Finetuning/LoRA), [QLoRA](python/llm/example/GPU/LLM-Finetuning/QLoRA), [DPO](python/llm/example/GPU/LLM-Finetuning/DPO), [QA-LoRA](python/llm/example/GPU/LLM-Finetuning/QA-LoRA) and [ReLoRA](python/llm/example/GPU/LLM-Finetuning/ReLora)).
+- [2024/01] Using `ipex-llm` [QLoRA](python/llm/example/GPU/LLM-Finetuning/QLoRA), we managed to finetune LLaMA2-7B in **21 minutes** and LLaMA2-70B in **3.14 hours** on 8 Intel Max 1550 GPUs for [Stanford-Alpaca](python/llm/example/GPU/LLM-Finetuning/QLoRA/alpaca-qlora) (see the blog [here](https://www.intel.com/content/www/us/en/developer/articles/technical/finetuning-llms-on-intel-gpus-using-bigdl-llm.html)).
+- [2024/01] ***The default `ipex-llm` GPU Linux installation has switched from PyTorch 2.0 to PyTorch 2.1, which requires new oneAPI and GPU driver versions. (See the [GPU installation guide](https://ipex-llm.readthedocs.io/en/latest/doc/LLM/Overview/install_gpu.html) for more details.)***
+- [2023/12] `ipex-llm` now supports [ReLoRA](python/llm/example/GPU/LLM-Finetuning/ReLora) (see *["ReLoRA: High-Rank Training Through Low-Rank Updates"](https://arxiv.org/abs/2307.05695)*).
+- [2023/12] `ipex-llm` now supports [Mixtral-8x7B](python/llm/example/GPU/HF-Transformers-AutoModels/Model/mixtral) on both Intel [GPU](python/llm/example/GPU/HF-Transformers-AutoModels/Model/mixtral) and [CPU](python/llm/example/CPU/HF-Transformers-AutoModels/Model/mixtral).
+- [2023/12] `ipex-llm` now supports [QA-LoRA](python/llm/example/GPU/LLM-Finetuning/QA-LoRA) (see *["QA-LoRA: Quantization-Aware Low-Rank Adaptation of Large Language Models"](https://arxiv.org/abs/2309.14717)*).
+- [2023/12] `ipex-llm` now supports [FP8 and FP4 inference](python/llm/example/GPU/HF-Transformers-AutoModels/More-Data-Types) on Intel ***GPU***.
+- [2023/11] Initial support for directly loading [GGUF](python/llm/example/GPU/HF-Transformers-AutoModels/Advanced-Quantizations/GGUF), [AWQ](python/llm/example/GPU/HF-Transformers-AutoModels/Advanced-Quantizations/AWQ) and [GPTQ](python/llm/example/GPU/HF-Transformers-AutoModels/Advanced-Quantizations/GPTQ) models into `ipex-llm` is available.
+- [2023/11] `ipex-llm` now supports [vLLM continuous batching](python/llm/example/GPU/vLLM-Serving) on both Intel [GPU](python/llm/example/GPU/vLLM-Serving) and [CPU](python/llm/example/CPU/vLLM-Serving).
+- [2023/10] `ipex-llm` now supports [QLoRA finetuning](python/llm/example/GPU/LLM-Finetuning/QLoRA) on both Intel [GPU](python/llm/example/GPU/LLM-Finetuning/QLoRA) and [CPU](python/llm/example/CPU/QLoRA-FineTuning).
+- [2023/10] `ipex-llm` now supports [FastChat serving](python/llm/src/ipex_llm/llm/serving) on both Intel CPU and GPU.
+- [2023/09] `ipex-llm` now supports [Intel GPU](python/llm/example/GPU) (including iGPU, Arc, Flex and MAX).
+- [2023/09] `ipex-llm` [tutorial](https://github.com/intel-analytics/ipex-llm-tutorial) is released.
+- [2023/09] Over 40 models have been optimized/verified on `ipex-llm`, including *LLaMA/LLaMA2, ChatGLM2/ChatGLM3, Mistral, Falcon, MPT, LLaVA, WizardCoder, Dolly, Whisper, Baichuan/Baichuan2, InternLM, Skywork, QWen/Qwen-VL, Aquila, MOSS,* and more; see the complete list [here](#verified-models).
+
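The Self-Speculative Decoding entry in the list above follows the general draft-and-verify pattern: a cheap draft model proposes a few tokens, the full model checks them, and only agreed-upon tokens are kept. The toy greedy sketch below illustrates that pattern only; it is a hypothetical simplification, not ipex-llm's actual implementation.

```python
# Toy sketch of speculative decoding's draft-and-verify loop.
# The callables stand in for a cheap (e.g. low-bit) draft model and the
# full-precision target model; both are greedy next-token predictors here.

def speculative_decode(draft_next, target_next, prompt, n_tokens, k=4):
    """Generate n_tokens greedily, drafting k candidate tokens at a time."""
    out = list(prompt)
    while len(out) < len(prompt) + n_tokens:
        # Draft k candidate tokens cheaply.
        draft = []
        for _ in range(k):
            draft.append(draft_next(out + draft))
        # Verify: keep the longest prefix the target model agrees with.
        for tok in draft:
            expected = target_next(out)
            if tok == expected:
                out.append(tok)       # accepted draft token, "free" progress
            else:
                out.append(expected)  # target's correction counts as one token
                break
    return out[len(prompt):][:n_tokens]

# Toy models: draft repeats the last token; target counts upward mod 10.
draft_next = lambda seq: seq[-1]
target_next = lambda seq: (seq[-1] + 1) % 10
print(speculative_decode(draft_next, target_next, [0], 5))
```

When the draft agrees often, several tokens are accepted per full-model step, which is the source of the reported speedup; when it never agrees (as in the toy run above), generation degrades gracefully to one verified token per step.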
+### `ipex-llm` Demos
See the ***optimized performance*** of `chatglm2-6b` and `llama-2-13b-chat` models on 12th Gen Intel Core CPU and Intel Arc GPU below.
@@ -62,11 +55,11 @@ See the ***optimized performance*** of `chatglm2-6b` and `llama-2-13b-chat` mode
-### `bigdl-llm` quickstart
+### `ipex-llm` quickstart
-- [Windows GPU installation](https://bigdl.readthedocs.io/en/latest/doc/LLM/Quickstart/install_windows_gpu.html)
-- [Run BigDL-LLM in Text-Generation-WebUI](https://bigdl.readthedocs.io/en/latest/doc/LLM/Quickstart/webui_quickstart.html)
-- [Run BigDL-LLM using Docker](docker/llm)
+- [Windows GPU installation](https://ipex-llm.readthedocs.io/en/latest/doc/LLM/Quickstart/install_windows_gpu.html)
+- [Run IPEX-LLM in Text-Generation-WebUI](https://ipex-llm.readthedocs.io/en/latest/doc/LLM/Quickstart/webui_quickstart.html)
+- [Run IPEX-LLM using Docker](docker/llm)
- [CPU INT4](#cpu-int4)
- [GPU INT4](#gpu-int4)
- [More Low-Bit support](#more-low-bit-support)
@@ -74,12 +67,12 @@ See the ***optimized performance*** of `chatglm2-6b` and `llama-2-13b-chat` mode
#### CPU INT4
##### Install
-You may install **`bigdl-llm`** on Intel CPU as follows:
-> Note: See the [CPU installation guide](https://bigdl.readthedocs.io/en/latest/doc/LLM/Overview/install_cpu.html) for more details.
+You may install **`ipex-llm`** on Intel CPU as follows:
+> Note: See the [CPU installation guide](https://ipex-llm.readthedocs.io/en/latest/doc/LLM/Overview/install_cpu.html) for more details.
```bash
-pip install --pre --upgrade bigdl-llm[all]
+pip install --pre --upgrade ipex-llm[all]
```
-> Note: `bigdl-llm` has been tested on Python 3.9, 3.10 and 3.11
+> Note: `ipex-llm` has been tested on Python 3.9, 3.10 and 3.11
##### Run Model
You may apply INT4 optimizations to any Hugging Face *Transformers* models as follows.
@@ -100,13 +93,13 @@ output = tokenizer.batch_decode(output_ids)
#### GPU INT4
##### Install
-You may install **`bigdl-llm`** on Intel GPU as follows:
-> Note: See the [GPU installation guide](https://bigdl.readthedocs.io/en/latest/doc/LLM/Overview/install_gpu.html) for more details.
+You may install **`ipex-llm`** on Intel GPU as follows:
+> Note: See the [GPU installation guide](https://ipex-llm.readthedocs.io/en/latest/doc/LLM/Overview/install_gpu.html) for more details.
```bash
# The command below installs intel_extension_for_pytorch==2.1.10+xpu by default
-pip install --pre --upgrade bigdl-llm[xpu] -f https://developer.intel.com/ipex-whl-stable-xpu
+pip install --pre --upgrade ipex-llm[xpu] -f https://developer.intel.com/ipex-whl-stable-xpu
```
-> Note: `bigdl-llm` has been tested on Python 3.9, 3.10 and 3.11
+> Note: `ipex-llm` has been tested on Python 3.9, 3.10 and 3.11
##### Run Model
You may apply INT4 optimizations to any Hugging Face *Transformers* models as follows.
@@ -130,7 +123,7 @@ output = tokenizer.batch_decode(output_ids.cpu())
#### More Low-Bit Support
##### Save and load
-After the model is optimized using `bigdl-llm`, you may save and load the model as follows:
+After the model is optimized using `ipex-llm`, you may save and load the model as follows:
```python
model.save_low_bit(model_path)
new_model = AutoModelForCausalLM.load_low_bit(model_path)
@@ -138,7 +131,7 @@ new_model = AutoModelForCausalLM.load_low_bit(model_path)
*See the complete example [here](python/llm/example/CPU/HF-Transformers-AutoModels/Save-Load).*
##### Additional data types
-
+
In addition to INT4, you may apply other low-bit optimizations (such as *INT8*, *INT5*, *NF4*, etc.) as follows:
```python
model = AutoModelForCausalLM.from_pretrained('/path/to/model/', load_in_low_bit="sym_int8")
@@ -146,470 +139,62 @@ model = AutoModelForCausalLM.from_pretrained('/path/to/model/', load_in_low_bit=
*See the complete example [here](python/llm/example/CPU/HF-Transformers-AutoModels/More-Data-Types).*
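To make options like `sym_int8` concrete, here is a conceptual, pure-Python sketch of symmetric INT8 quantization (a single per-tensor scale mapping floats into [-127, 127]). It illustrates the general idea only and is not `ipex-llm`'s actual implementation; the helper names are hypothetical.

```python
# Conceptual sketch of symmetric INT8 ("sym_int8")-style quantization.
# NOT ipex-llm's kernel; for illustration only.

def sym_int8_quantize(values):
    """Map floats to int8 codes in [-127, 127] with one per-tensor scale."""
    scale = max(abs(v) for v in values) / 127.0
    if scale == 0.0:  # all-zero tensor: nothing to scale
        return [0] * len(values), 1.0
    quantized = [max(-127, min(127, round(v / scale))) for v in values]
    return quantized, scale

def sym_int8_dequantize(quantized, scale):
    """Recover approximate floats from int8 codes and the scale."""
    return [q * scale for q in quantized]

weights = [0.5, -1.27, 0.03, 1.0]
codes, scale = sym_int8_quantize(weights)
recovered = sym_int8_dequantize(codes, scale)
# Each recovered value differs from the original by at most one
# quantization step (the scale).
```

Lower-bit variants shrink the integer range (e.g., [-7, 7] for a symmetric INT4 format), trading accuracy for memory, while formats such as NF4 use a non-uniform codebook instead of evenly spaced levels.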
#### Verified Models
-Over 40 models have been optimized/verified on `bigdl-llm`, including *LLaMA/LLaMA2, ChatGLM/ChatGLM2, Mistral, Falcon, MPT, Baichuan/Baichuan2, InternLM, QWen* and more; see the example list below.
-
-| Model | CPU Example | GPU Example |
-|------------|----------------------------------------------------------------|-----------------------------------------------------------------|
-| LLaMA *(such as Vicuna, Guanaco, Koala, Baize, WizardLM, etc.)* | [link1](python/llm/example/CPU/Native-Models), [link2](python/llm/example/CPU/HF-Transformers-AutoModels/Model/vicuna) |[link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/vicuna)|
-| LLaMA 2 | [link1](python/llm/example/CPU/Native-Models), [link2](python/llm/example/CPU/HF-Transformers-AutoModels/Model/llama2) | [link1](python/llm/example/GPU/HF-Transformers-AutoModels/Model/llama2), [link2-low GPU memory example](python/llm/example/GPU/PyTorch-Models/Model/llama2#example-2---low-memory-version-predict-tokens-using-generate-api) |
-| ChatGLM | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/chatglm) | |
-| ChatGLM2 | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/chatglm2) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/chatglm2) |
-| ChatGLM3 | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/chatglm3) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/chatglm3) |
-| Mistral | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/mistral) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/mistral) |
-| Mixtral | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/mixtral) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/mixtral) |
-| Falcon | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/falcon) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/falcon) |
-| MPT | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/mpt) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/mpt) |
-| Dolly-v1 | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/dolly_v1) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/dolly-v1) |
-| Dolly-v2 | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/dolly_v2) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/dolly-v2) |
-| Replit Code| [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/replit) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/replit) |
-| RedPajama | [link1](python/llm/example/CPU/Native-Models), [link2](python/llm/example/CPU/HF-Transformers-AutoModels/Model/redpajama) | |
-| Phoenix | [link1](python/llm/example/CPU/Native-Models), [link2](python/llm/example/CPU/HF-Transformers-AutoModels/Model/phoenix) | |
-| StarCoder | [link1](python/llm/example/CPU/Native-Models), [link2](python/llm/example/CPU/HF-Transformers-AutoModels/Model/starcoder) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/starcoder) |
-| Baichuan | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/baichuan) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/baichuan) |
-| Baichuan2 | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/baichuan2) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/baichuan2) |
-| InternLM | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/internlm) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/internlm) |
-| Qwen | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/qwen) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/qwen) |
-| Qwen1.5 | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/qwen1.5) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/qwen1.5) |
-| Qwen-VL | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/qwen-vl) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/qwen-vl) |
-| Aquila | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/aquila) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/aquila) |
-| Aquila2 | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/aquila2) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/aquila2) |
-| MOSS | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/moss) | |
-| Whisper | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/whisper) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/whisper) |
-| Phi-1_5 | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/phi-1_5) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/phi-1_5) |
-| Flan-t5 | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/flan-t5) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/flan-t5) |
-| LLaVA | [link](python/llm/example/CPU/PyTorch-Models/Model/llava) | [link](python/llm/example/GPU/PyTorch-Models/Model/llava) |
-| CodeLlama | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/codellama) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/codellama) |
-| Skywork | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/skywork) | |
-| InternLM-XComposer | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/internlm-xcomposer) | |
-| WizardCoder-Python | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/wizardcoder-python) | |
-| CodeShell | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/codeshell) | |
-| Fuyu | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/fuyu) | |
-| Distil-Whisper | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/distil-whisper) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/distil-whisper) |
-| Yi | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/yi) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/yi) |
-| BlueLM | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/bluelm) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/bluelm) |
-| Mamba | [link](python/llm/example/CPU/PyTorch-Models/Model/mamba) | [link](python/llm/example/GPU/PyTorch-Models/Model/mamba) |
-| SOLAR | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/solar) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/solar) |
-| Phixtral | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/phixtral) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/phixtral) |
-| InternLM2 | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/internlm2) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/internlm2) |
-| RWKV4 | | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/rwkv4) |
-| RWKV5 | | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/rwkv5) |
-| Bark | [link](python/llm/example/CPU/PyTorch-Models/Model/bark) | [link](python/llm/example/GPU/PyTorch-Models/Model/bark) |
-| SpeechT5 | | [link](python/llm/example/GPU/PyTorch-Models/Model/speech-t5) |
-| DeepSeek-MoE | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/deepseek-moe) | |
-| Ziya-Coding-34B-v1.0 | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/ziya) | |
-| Phi-2 | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/phi-2) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/phi-2) |
-| Yuan2 | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/yuan2) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/yuan2) |
-| Gemma | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/gemma) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/gemma) |
-| DeciLM-7B | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/deciLM-7b) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/deciLM-7b) |
-| Deepseek | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/deepseek) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/deepseek) |
-
-
-***For more details, please refer to the `bigdl-llm` [Document](https://test-bigdl-llm.readthedocs.io/en/main/doc/LLM/index.html), [Readme](python/llm), [Tutorial](https://github.com/intel-analytics/bigdl-llm-tutorial) and [API Doc](https://bigdl.readthedocs.io/en/latest/doc/PythonAPI/LLM/index.html).***
-
----
-## Overview of the complete BigDL project
-
-BigDL seamlessly scales your data analytics & AI applications from laptop to cloud, with the following libraries:
-
-- [LLM](python/llm): Low-bit (INT3/INT4/INT5/INT8) large language model library for Intel CPU/GPU
-
-- [Orca](#orca): Distributed Big Data & AI (TF & PyTorch) Pipeline on Spark and Ray
-
-- [Nano](#nano): Transparent Acceleration of Tensorflow & PyTorch Programs on Intel CPU/GPU
-
-- [DLlib](#dllib): "Equivalent of Spark MLlib" for Deep Learning
-
-- [Chronos](#chronos): Scalable Time Series Analysis using AutoML
-
-- [Friesian](#friesian): End-to-End Recommendation Systems
-
-- [PPML](#ppml): Secure Big Data and AI (with SGX/TDX Hardware Security)
-
-For more information, you may [read the docs](https://bigdl.readthedocs.io/).
-
----
-
-## Choosing the right BigDL library
-```mermaid
-flowchart TD;
- Feature1{{HW Secured Big Data & AI?}};
- Feature1-- No -->Feature2{{Python vs. Scala/Java?}};
- Feature1-- "Yes" -->ReferPPML([PPML]);
- Feature2-- Python -->Feature3{{What type of application?}};
- Feature2-- Scala/Java -->ReferDLlib([DLlib]);
- Feature3-- "Large Language Model" -->ReferLLM([LLM]);
- Feature3-- "Big Data + AI (TF/PyTorch)" -->ReferOrca([Orca]);
- Feature3-- Accelerate TensorFlow / PyTorch -->ReferNano([Nano]);
- Feature3-- DL for Spark MLlib -->ReferDLlib2([DLlib]);
- Feature3-- High Level App Framework -->Feature4{{Domain?}};
- Feature4-- Time Series -->ReferChronos([Chronos]);
- Feature4-- Recommender System -->ReferFriesian([Friesian]);
-
- click ReferLLM "https://github.com/intel-analytics/bigdl/tree/main/python/llm"
- click ReferNano "https://github.com/intel-analytics/bigdl#nano"
- click ReferOrca "https://github.com/intel-analytics/bigdl#orca"
- click ReferDLlib "https://github.com/intel-analytics/bigdl#dllib"
- click ReferDLlib2 "https://github.com/intel-analytics/bigdl#dllib"
- click ReferChronos "https://github.com/intel-analytics/bigdl#chronos"
- click ReferFriesian "https://github.com/intel-analytics/bigdl#friesian"
- click ReferPPML "https://github.com/intel-analytics/bigdl#ppml"
-
- classDef ReferStyle1 fill:#5099ce,stroke:#5099ce;
- classDef Feature fill:#FFF,stroke:#08409c,stroke-width:1px;
- class ReferLLM,ReferNano,ReferOrca,ReferDLlib,ReferDLlib2,ReferChronos,ReferFriesian,ReferPPML ReferStyle1;
- class Feature1,Feature2,Feature3,Feature4,Feature5,Feature6,Feature7 Feature;
-
-```
----
-## Installing
-
- - To install BigDL, we recommend using a [conda](https://docs.conda.io/projects/conda/en/latest/user-guide/install/) environment:
-
- ```bash
- conda create -n my_env
- conda activate my_env
- pip install bigdl
- ```
- To install latest nightly build, use `pip install --pre --upgrade bigdl`; see [Python](https://bigdl.readthedocs.io/en/latest/doc/UserGuide/python.html) and [Scala](https://bigdl.readthedocs.io/en/latest/doc/UserGuide/scala.html) user guide for more details.
-
- - To install each individual library, such as Chronos, use `pip install bigdl-chronos`; see the [document website](https://bigdl.readthedocs.io/) for more details.
----
-
-## Getting Started
-### Orca
-
-- The _Orca_ library seamlessly scales out your single node **TensorFlow**, **PyTorch** or **OpenVINO** programs across large clusters (so as to process distributed Big Data).
-
- Show Orca example
-
-
- You can build end-to-end, distributed data processing & AI programs using _Orca_ in 4 simple steps:
-
- ```python
- # 1. Initialize Orca Context (to run your program on K8s, YARN or local laptop)
- from bigdl.orca import init_orca_context, OrcaContext
- sc = init_orca_context(cluster_mode="k8s", cores=4, memory="10g", num_nodes=2)
-
- # 2. Perform distributed data processing (supporting Spark DataFrames,
- # TensorFlow Dataset, PyTorch DataLoader, Ray Dataset, Pandas, Pillow, etc.)
- spark = OrcaContext.get_spark_session()
- df = spark.read.parquet(file_path)
- df = df.withColumn('label', df.label-1)
- ...
-
- # 3. Build deep learning models using standard framework APIs
- # (supporting TensorFlow, PyTorch, Keras, OpenVino, etc.)
- from tensorflow import keras
- ...
- model = keras.models.Model(inputs=[user, item], outputs=predictions)
- model.compile(...)
-
- # 4. Use Orca Estimator for distributed training/inference
- from bigdl.orca.learn.tf.estimator import Estimator
- est = Estimator.from_keras(keras_model=model)
- est.fit(data=df,
- feature_cols=['user', 'item'],
- label_cols=['label'],
- ...)
- ```
-
-
-
- *See Orca [user guide](https://bigdl.readthedocs.io/en/latest/doc/Orca/Overview/orca.html), as well as [TensorFlow](https://bigdl.readthedocs.io/en/latest/doc/Orca/Howto/tf2keras-quickstart.html) and [PyTorch](https://bigdl.readthedocs.io/en/latest/doc/Orca/Howto/pytorch-quickstart.html) quickstarts, for more details.*
-
-- In addition, you can also run standard **Ray** programs on Spark cluster using _**RayOnSpark**_ in Orca.
-
- Show RayOnSpark example
-
-
- You can not only run Ray programs on a Spark cluster, but also write Ray code inline with Spark code (so as to process in-memory Spark RDDs or DataFrames) using _RayOnSpark_ in Orca.
-
- ```python
- # 1. Initialize Orca Context (to run your program on K8s, YARN or local laptop)
- from bigdl.orca import init_orca_context, OrcaContext
- sc = init_orca_context(cluster_mode="yarn", cores=4, memory="10g", num_nodes=2, init_ray_on_spark=True)
-
- # 2. Distributed data processing using Spark
- spark = OrcaContext.get_spark_session()
- df = spark.read.parquet(file_path).withColumn(...)
-
- # 3. Convert Spark DataFrame to Ray Dataset
- from bigdl.orca.data import spark_df_to_ray_dataset
- dataset = spark_df_to_ray_dataset(df)
-
- # 4. Use Ray to operate on Ray Datasets
- import ray
-
- @ray.remote
- def consume(data) -> int:
- num_batches = 0
- for batch in data.iter_batches(batch_size=10):
- num_batches += 1
- return num_batches
-
- print(ray.get(consume.remote(dataset)))
- ```
-
-
-
- *See RayOnSpark [user guide](https://bigdl.readthedocs.io/en/latest/doc/Orca/Overview/ray.html) and [quickstart](https://bigdl.readthedocs.io/en/latest/doc/Orca/Howto/ray-quickstart.html) for more details.*
-### Nano
-You can transparently accelerate your TensorFlow or PyTorch programs on your laptop or server using *Nano*. With minimal code changes, *Nano* automatically applies modern CPU optimizations (e.g., SIMD, multiprocessing, low precision, etc.) to standard TensorFlow and PyTorch code, with up to 10x speedup.
-
-Show Nano inference example
-
-
-You can automatically optimize a trained PyTorch model for inference or deployment using _Nano_:
-
-```python
-model = ResNet18().load_state_dict(...)
-train_dataloader = ...
-val_dataloader = ...
-def accuracy (pred, target):
- ...
-
-from bigdl.nano.pytorch import InferenceOptimizer
-optimizer = InferenceOptimizer()
-optimizer.optimize(model,
- training_data=train_dataloader,
- validation_data=val_dataloader,
- metric=accuracy)
-new_model, config = optimizer.get_best_model()
-
-optimizer.summary()
-```
-The output of `optimizer.summary()` will be something like:
-```
- -------------------------------- ---------------------- -------------- ----------------------
-| method | status | latency(ms) | metric value |
- -------------------------------- ---------------------- -------------- ----------------------
-| original | successful | 45.145 | 0.975 |
-| bf16 | successful | 27.549 | 0.975 |
-| static_int8 | successful | 11.339 | 0.975 |
-| jit_fp32_ipex | successful | 40.618 | 0.975* |
-| jit_fp32_ipex_channels_last | successful | 19.247 | 0.975* |
-| jit_bf16_ipex | successful | 10.149 | 0.975 |
-| jit_bf16_ipex_channels_last | successful | 9.782 | 0.975 |
-| openvino_fp32 | successful | 22.721 | 0.975* |
-| openvino_int8 | successful | 5.846 | 0.962 |
-| onnxruntime_fp32 | successful | 20.838 | 0.975* |
-| onnxruntime_int8_qlinear | successful | 7.123 | 0.981 |
- -------------------------------- ---------------------- -------------- ----------------------
-* means we assume the metric value of the traced model does not change, so we don't recompute metric value to save time.
-Optimization cost 60.8s in total.
-```
-
-
-
-Show Nano Training example
-
-You may easily accelerate PyTorch training (e.g., IPEX, BF16, Multi-Instance Training, etc.) using Nano:
-
-```python
-model = ResNet18()
-optimizer = torch.optim.SGD(...)
-train_loader = ...
-val_loader = ...
-
-from bigdl.nano.pytorch import TorchNano
-
-# Define your training loop inside `TorchNano.train`
-class Trainer(TorchNano):
- def train(self):
- # call `setup` to prepare the model, optimizer(s) and dataloader(s) for accelerated training
- model, optimizer, (train_loader, val_loader) = self.setup(model, optimizer,
- train_loader, val_loader)
-
- for epoch in range(num_epochs):
- model.train()
- for data, target in train_loader:
- optimizer.zero_grad()
- output = model(data)
- # replace the loss.backward() with self.backward(loss)
- loss = loss_func(output, target)
- self.backward(loss)
- optimizer.step()
-
-# Accelerated training (IPEX, BF16 and Multi-Instance Training)
-Trainer(use_ipex=True, precision='bf16', num_processes=2).train()
-```
-
-
-
-*See Nano [user guide](https://bigdl.readthedocs.io/en/latest/doc/Nano/Overview/nano.html) and [tutorial](https://github.com/intel-analytics/BigDL/tree/main/python/nano/tutorial) for more details.*
-
-### DLlib
-
-With _DLlib_, you can write distributed deep learning applications as standard (**Scala** or **Python**) Spark programs, using the same **Spark DataFrames** and **ML Pipeline** APIs.
-
-Show DLlib Scala example
-
-
-You can build distributed deep learning applications for Spark using *DLlib* Scala APIs in 3 simple steps:
-
-```scala
-// 1. Call `initNNContext` at the beginning of the code:
-import com.intel.analytics.bigdl.dllib.NNContext
-val sc = NNContext.initNNContext()
-
-// 2. Define the deep learning model using Keras-style API in DLlib:
-import com.intel.analytics.bigdl.dllib.keras.layers._
-import com.intel.analytics.bigdl.dllib.keras.Model
-val input = Input[Float](inputShape = Shape(10))
-val dense = Dense[Float](12).inputs(input)
-val output = Activation[Float]("softmax").inputs(dense)
-val model = Model(input, output)
-
-// 3. Use `NNEstimator` to train/predict/evaluate the model using Spark DataFrame and ML pipeline APIs
-import org.apache.spark.sql.SparkSession
-import org.apache.spark.ml.feature.MinMaxScaler
-import org.apache.spark.ml.Pipeline
-import com.intel.analytics.bigdl.dllib.nnframes.NNEstimator
-import com.intel.analytics.bigdl.dllib.nn.CrossEntropyCriterion
-import com.intel.analytics.bigdl.dllib.optim.Adam
-val spark = SparkSession.builder().getOrCreate()
-val trainDF = spark.read.parquet("train_data")
-val validationDF = spark.read.parquet("val_data")
-val scaler = new MinMaxScaler().setInputCol("in").setOutputCol("value")
-val estimator = NNEstimator(model, CrossEntropyCriterion())
- .setBatchSize(128).setOptimMethod(new Adam()).setMaxEpoch(5)
-val pipeline = new Pipeline().setStages(Array(scaler, estimator))
-
-val pipelineModel = pipeline.fit(trainDF)
-val predictions = pipelineModel.transform(validationDF)
-```
-
-
-
-Show DLlib Python example
-
-
-You can build distributed deep learning applications for Spark using *DLlib* Python APIs in 3 simple steps:
-
-```python
-# 1. Call `init_nncontext` at the beginning of the code:
-from bigdl.dllib.nncontext import init_nncontext
-sc = init_nncontext()
-
-# 2. Define the deep learning model using Keras-style API in DLlib:
-from bigdl.dllib.keras.layers import Input, Dense, Activation
-from bigdl.dllib.keras.models import Model
-input = Input(shape=(10,))
-dense = Dense(12)(input)
-output = Activation("softmax")(dense)
-model = Model(input, output)
-
-# 3. Use `NNEstimator` to train/predict/evaluate the model using Spark DataFrame and ML pipeline APIs
-from pyspark.sql import SparkSession
-from pyspark.ml.feature import MinMaxScaler
-from pyspark.ml import Pipeline
-from bigdl.dllib.nnframes import NNEstimator
-from bigdl.dllib.nn.criterion import CrossEntropyCriterion
-from bigdl.dllib.optim.optimizer import Adam
-spark = SparkSession.builder.getOrCreate()
-train_df = spark.read.parquet("train_data")
-validation_df = spark.read.parquet("val_data")
-scaler = MinMaxScaler().setInputCol("in").setOutputCol("value")
-estimator = NNEstimator(model, CrossEntropyCriterion())\
- .setBatchSize(128)\
- .setOptimMethod(Adam())\
- .setMaxEpoch(5)
-pipeline = Pipeline(stages=[scaler, estimator])
-
-pipelineModel = pipeline.fit(train_df)
-predictions = pipelineModel.transform(validation_df)
-```
-
-
-
-*See DLlib [NNFrames](https://bigdl.readthedocs.io/en/latest/doc/DLlib/Overview/nnframes.html) and [Keras API](https://bigdl.readthedocs.io/en/latest/doc/DLlib/Overview/keras-api.html) user guides for more details.*
-
-### Chronos
-
-The *Chronos* library makes it easy to build fast, accurate and scalable **time series analysis** applications (with AutoML).
-
-Show Chronos example
-
-
-You can train a time series forecaster using _Chronos_ in 3 simple steps:
-
-```python
-from bigdl.chronos.forecaster import TCNForecaster
-from bigdl.chronos.data.repo_dataset import get_public_dataset
-
-# 1. Process time series data using `TSDataset`
-tsdata_train, tsdata_val, tsdata_test = get_public_dataset(name='nyc_taxi')
-for tsdata in [tsdata_train, tsdata_val, tsdata_test]:
- tsdata.roll(lookback=100, horizon=1)
-
-# 2. Create a `TCNForecaster` (automatically configured based on `tsdata_train`)
-forecaster = TCNForecaster.from_tsdataset(tsdata_train)
-
-# 3. Train the forecaster for prediction
-forecaster.fit(tsdata_train)
-
-pred = forecaster.predict(tsdata_test)
-```
-
-To apply AutoML, use `AutoTSEstimator` instead of normal forecasters.
-```python
-# Create and fit an `AutoTSEstimator`
-from bigdl.chronos.autots import AutoTSEstimator
-autotsest = AutoTSEstimator(model="tcn", future_seq_len=10)
-
-tsppl = autotsest.fit(data=tsdata_train, validation_data=tsdata_val)
-pred = tsppl.predict(tsdata_test)
-```
-
-
-
-*See Chronos [user guide](https://bigdl.readthedocs.io/en/latest/doc/Chronos/index.html) and [quick start](https://bigdl.readthedocs.io/en/latest/doc/Chronos/QuickStart/chronos-autotsest-quickstart.html) for more details.*
-
-### Friesian
-The *Friesian* library makes it easy to build end-to-end, large-scale **recommendation systems** (including *offline* feature transformation and training, *near-line* feature and model updates, and *online* serving pipelines).
-
-*See Friesian [readme](https://github.com/intel-analytics/BigDL/blob/main/python/friesian/README.md) for more details.*
-
-### PPML
-
-*BigDL PPML* provides a **hardware (Intel SGX) protected** *Trusted Cluster Environment* for running distributed Big Data & AI applications (in a secure fashion on private or public cloud).
-
-*See PPML [user guide](https://bigdl.readthedocs.io/en/latest/doc/PPML/Overview/ppml.html) and [tutorial](https://github.com/intel-analytics/BigDL/blob/main/ppml/README.md) for more details.*
-
-## Getting Support
-
-- [Mail List](mailto:bigdl-user-group+subscribe@googlegroups.com)
-- [User Group](https://groups.google.com/forum/#!forum/bigdl-user-group)
-- [Github Issues](https://github.com/intel-analytics/BigDL/issues)
----
-
-## Citation
-
-If you've found BigDL useful for your project, you may cite our papers as follows:
-
-- *[BigDL 2.0](https://arxiv.org/abs/2204.01715): Seamless Scaling of AI Pipelines from Laptops to Distributed Cluster*
- ```
- @INPROCEEDINGS{9880257,
- title={BigDL 2.0: Seamless Scaling of AI Pipelines from Laptops to Distributed Cluster},
- author={Dai, Jason Jinquan and Ding, Ding and Shi, Dongjie and Huang, Shengsheng and Wang, Jiao and Qiu, Xin and Huang, Kai and Song, Guoqiong and Wang, Yang and Gong, Qiyuan and Song, Jiaming and Yu, Shan and Zheng, Le and Chen, Yina and Deng, Junwei and Song, Ge},
- booktitle={2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
- year={2022},
- pages={21407-21414},
- doi={10.1109/CVPR52688.2022.02076}
- }
- ```
-
-[^1]: Performance varies by use, configuration and other factors. `bigdl-llm` may not optimize to the same degree for non-Intel products. Learn more at www.Intel.com/PerformanceIndex.
-
-- *[BigDL](https://arxiv.org/abs/1804.05839): A Distributed Deep Learning Framework for Big Data*
- ```
- @INPROCEEDINGS{10.1145/3357223.3362707,
- title = {BigDL: A Distributed Deep Learning Framework for Big Data},
- author = {Dai, Jason Jinquan and Wang, Yiheng and Qiu, Xin and Ding, Ding and Zhang, Yao and Wang, Yanzhang and Jia, Xianyan and Zhang, Cherry Li and Wan, Yan and Li, Zhichao and Wang, Jiao and Huang, Shengsheng and Wu, Zhongyuan and Wang, Yang and Yang, Yuhao and She, Bowen and Shi, Dongjie and Lu, Qi and Huang, Kai and Song, Guoqiong},
- booktitle = {Proceedings of the ACM Symposium on Cloud Computing (SoCC)},
- year = {2019},
- pages = {50-60},
- doi = {10.1145/3357223.3362707}
- }
- ```
-
+Over 40 models have been optimized/verified on `ipex-llm`, including *LLaMA/LLaMA2, ChatGLM/ChatGLM2, Mistral, Falcon, MPT, Baichuan/Baichuan2, InternLM, QWen* and more; see the example list below.
+
+| Model | CPU Example | GPU Example |
+| ---------------------------------------- | ---------------------------------------- | ---------------------------------------- |
+| LLaMA *(such as Vicuna, Guanaco, Koala, Baize, WizardLM, etc.)* | [link1](python/llm/example/CPU/Native-Models), [link2](python/llm/example/CPU/HF-Transformers-AutoModels/Model/vicuna) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/vicuna) |
+| LLaMA 2 | [link1](python/llm/example/CPU/Native-Models), [link2](python/llm/example/CPU/HF-Transformers-AutoModels/Model/llama2) | [link1](python/llm/example/GPU/HF-Transformers-AutoModels/Model/llama2), [link2-low GPU memory example](python/llm/example/GPU/PyTorch-Models/Model/llama2#example-2---low-memory-version-predict-tokens-using-generate-api) |
+| ChatGLM | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/chatglm) | |
+| ChatGLM2 | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/chatglm2) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/chatglm2) |
+| ChatGLM3 | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/chatglm3) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/chatglm3) |
+| Mistral | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/mistral) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/mistral) |
+| Mixtral | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/mixtral) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/mixtral) |
+| Falcon | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/falcon) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/falcon) |
+| MPT | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/mpt) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/mpt) |
+| Dolly-v1 | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/dolly_v1) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/dolly-v1) |
+| Dolly-v2 | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/dolly_v2) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/dolly-v2) |
+| Replit Code | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/replit) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/replit) |
+| RedPajama | [link1](python/llm/example/CPU/Native-Models), [link2](python/llm/example/CPU/HF-Transformers-AutoModels/Model/redpajama) | |
+| Phoenix | [link1](python/llm/example/CPU/Native-Models), [link2](python/llm/example/CPU/HF-Transformers-AutoModels/Model/phoenix) | |
+| StarCoder | [link1](python/llm/example/CPU/Native-Models), [link2](python/llm/example/CPU/HF-Transformers-AutoModels/Model/starcoder) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/starcoder) |
+| Baichuan | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/baichuan) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/baichuan) |
+| Baichuan2 | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/baichuan2) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/baichuan2) |
+| InternLM | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/internlm) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/internlm) |
+| Qwen | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/qwen) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/qwen) |
+| Qwen1.5 | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/qwen1.5) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/qwen1.5) |
+| Qwen-VL | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/qwen-vl) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/qwen-vl) |
+| Aquila | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/aquila) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/aquila) |
+| Aquila2 | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/aquila2) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/aquila2) |
+| MOSS | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/moss) | |
+| Whisper | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/whisper) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/whisper) |
+| Phi-1_5 | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/phi-1_5) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/phi-1_5) |
+| Flan-t5 | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/flan-t5) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/flan-t5) |
+| LLaVA | [link](python/llm/example/CPU/PyTorch-Models/Model/llava) | [link](python/llm/example/GPU/PyTorch-Models/Model/llava) |
+| CodeLlama | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/codellama) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/codellama) |
+| Skywork | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/skywork) | |
+| InternLM-XComposer | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/internlm-xcomposer) | |
+| WizardCoder-Python | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/wizardcoder-python) | |
+| CodeShell | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/codeshell) | |
+| Fuyu | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/fuyu) | |
+| Distil-Whisper | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/distil-whisper) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/distil-whisper) |
+| Yi | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/yi) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/yi) |
+| BlueLM | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/bluelm) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/bluelm) |
+| Mamba | [link](python/llm/example/CPU/PyTorch-Models/Model/mamba) | [link](python/llm/example/GPU/PyTorch-Models/Model/mamba) |
+| SOLAR | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/solar) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/solar) |
+| Phixtral | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/phixtral) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/phixtral) |
+| InternLM2 | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/internlm2) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/internlm2) |
+| RWKV4 | | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/rwkv4) |
+| RWKV5 | | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/rwkv5) |
+| Bark | [link](python/llm/example/CPU/PyTorch-Models/Model/bark) | [link](python/llm/example/GPU/PyTorch-Models/Model/bark) |
+| SpeechT5 | | [link](python/llm/example/GPU/PyTorch-Models/Model/speech-t5) |
+| DeepSeek-MoE | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/deepseek-moe) | |
+| Ziya-Coding-34B-v1.0 | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/ziya) | |
+| Phi-2 | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/phi-2) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/phi-2) |
+| Yuan2 | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/yuan2) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/yuan2) |
+| Gemma | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/gemma) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/gemma) |
+| DeciLM-7B | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/deciLM-7b) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/deciLM-7b) |
+| Deepseek | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/deepseek) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/deepseek) |
+
+
+***For more details, please refer to the `ipex-llm` [Documentation](https://test-ipex-llm.readthedocs.io/en/main/doc/LLM/index.html), [README](python/llm), [Tutorial](https://github.com/intel-analytics/ipex-llm-tutorial), and [API Doc](https://ipex-llm.readthedocs.io/en/latest/doc/PythonAPI/LLM/index.html).***
diff --git a/docs/readthedocs/source/doc/Application/blogs.md b/docs/readthedocs/source/doc/Application/blogs.md
deleted file mode 100644
index 783bc202..00000000
--- a/docs/readthedocs/source/doc/Application/blogs.md
+++ /dev/null
@@ -1,49 +0,0 @@
-Blogs
----
-**2023**
-- [Large-scale Offline Book Recommendation with BigDL at Dangdang.com](https://www.intel.com/content/www/us/en/developer/articles/technical/dangdang-offline-recommendation-service-with-bigdl.html)
-
-**2022**
-- [Optimized Large-Scale Item Search with Intel BigDL at Yahoo! JAPAN Shopping](https://www.intel.com/content/www/us/en/developer/articles/technical/offline-item-search-with-bigdl-at-yahoo-japan.html)
-- [Tencent Trusted Computing Solution on SGX with Intel BigDL PPML](https://www.intel.com/content/www/us/en/developer/articles/technical/tencent-trusted-computing-solution-with-bigdl-ppml.html)
-- [BigDL Privacy Preserving Machine Learning with Occlum OSS on Azure Confidential Computing](https://techcommunity.microsoft.com/t5/azure-confidential-computing/bigdl-privacy-preserving-machine-learning-with-occlum-oss-on/ba-p/3658667)
-- ["AI at Scale" in Mastercard with BigDL](https://www.intel.com/content/www/us/en/developer/articles/technical/ai-at-scale-in-mastercard-with-bigdl.html)
-- [BigDL 2.0: Seamless Scaling of AI Pipelines from Laptops to Distributed Cluster](https://arxiv.org/abs/2204.01715)
-- [Project Bose: A smart way to enable sustainable 5G networks in Capgemini](https://www.capgemini.com/insights/expert-perspectives/project-bose-a-smart-way-to-enable-sustainable-5g-networks/)
-- [Intelligent Power Prediction Solution in Goldwind](https://www.intel.com/content/www/us/en/customer-spotlight/stories/goldwind-customer-story.html)
-- [5G Core Network Power Saving using BigDL Chronos Framework in China Unicom](https://www.intel.cn/content/www/cn/zh/customer-spotlight/cases/china-unicom-bigdl-chronos-framework-5gc.html) (in Chinese)
-
-**2021**
-- [From Ray to Chronos: Build end-to-end AI use cases using BigDL on top of Ray](https://www.anyscale.com/blog/from-ray-to-chronos-build-end-to-end-ai-use-cases-using-bigdl-on-top-of-ray)
-- [Scalable AutoXGBoost Using Analytics Zoo AutoML](https://medium.com/intel-analytics-software/scalable-autoxgboost-using-analytics-zoo-automl-30d576cb138a)
-- [Intelligent 5G L2 MAC Scheduler: Powered by Capgemini NetAnticipate 5G on Intel Architecture](https://networkbuilders.intel.com/solutionslibrary/intelligent-5g-l2-mac-scheduler-powered-by-capgemini-netanticipate-5g-on-intel-architecture)
-- [Better Together: Privacy-Preserving Machine Learning Powered by Intel SGX and Intel DL Boost](https://www.intel.com/content/www/us/en/artificial-intelligence/posts/alibaba-privacy-preserving-machine-learning.html)
-
-**2020**
-- [SK Telecom, Intel Build AI Pipeline to Improve Network Quality](https://networkbuilders.intel.com/solutionslibrary/sk-telecom-intel-build-ai-pipeline-to-improve-network-quality)
-- [Build End-to-End AI Pipelines Using Ray and Apache Spark](https://medium.com/distributed-computing-with-ray/build-end-to-end-ai-pipeline-using-ray-and-apache-spark-23f70f36115e)
-- [Tencent Cloud Leverages Analytics Zoo to Improve Performance of TI-ONE ML Platform](https://www.intel.com/content/www/us/en/developer/articles/technical/tencent-cloud-leverages-analytics-zoo-to-improve-performance-of-ti-one-ml-platform.html)
-- [Context-Aware Fast Food Recommendation at Burger King with RayOnSpark](https://medium.com/riselab/context-aware-fast-food-recommendation-at-burger-king-with-rayonspark-2e7a6009dd2d)
-- [Seamlessly Scaling AI for Distributed Big Data](https://medium.com/swlh/seamlessly-scaling-ai-for-distributed-big-data-5b589ead2434)
-- [Distributed Inference Made Easy with Analytics Zoo Cluster Serving](https://www.intel.com/content/www/us/en/developer/articles/technical/distributed-inference-made-easy-with-analytics-zoo-cluster-serving.html)
-
-**2019**
-- [BigDL: A Distributed Deep-Learning Framework for Big Data](https://arxiv.org/abs/1804.05839)
-- [Scalable AutoML for Time-Series Prediction Using Ray and BigDL & Analytics Zoo](https://medium.com/riselab/scalable-automl-for-time-series-prediction-using-ray-and-analytics-zoo-b79a6fd08139)
-- [RayOnSpark: Run Emerging AI Applications on Big Data Clusters with Ray and BigDL & Analytics Zoo](https://medium.com/riselab/rayonspark-running-emerging-ai-applications-on-big-data-clusters-with-ray-and-analytics-zoo-923e0136ed6a)
-- [Real-time Product Recommendations for Office Depot Using Apache Spark and Analytics Zoo on AWS](https://www.intel.com/content/www/us/en/developer/articles/technical/real-time-product-recommendations-for-office-depot-using-apache-spark-and-analytics-zoo-on.html)
-- [Machine Learning Pipelines for High Energy Physics Using Apache Spark with BigDL and Analytics Zoo](https://db-blog.web.cern.ch/blog/luca-canali/machine-learning-pipelines-high-energy-physics-using-apache-spark-bigdl)
-- [Deep Learning with Analytic Zoo Optimizes Mastercard Recommender AI Service](https://www.intel.com/content/www/us/en/developer/articles/technical/deep-learning-with-analytic-zoo-optimizes-mastercard-recommender-ai-service.html)
-- [Using Intel Analytics Zoo to Inject AI into Customer Service Platform (Part II)](https://www.infoq.com/articles/analytics-zoo-qa-module/)
-- [Talroo Uses Analytics Zoo and AWS to Leverage Deep Learning for Job Recommendations](https://www.intel.com/content/www/us/en/developer/articles/technical/talroo-uses-analytics-zoo-and-aws-to-leverage-deep-learning-for-job-recommendations.html)
-
-**2018**
-- [Analytics Zoo: Unified Analytics + AI Platform for Distributed Tensorflow, and BigDL on Apache Spark](https://www.infoq.com/articles/analytics-zoo/)
-- [Industrial Inspection Platform in Midea and KUKA: Using Distributed TensorFlow on Analytics Zoo](https://www.intel.com/content/www/us/en/developer/articles/technical/industrial-inspection-platform-in-midea-and-kuka-using-distributed-tensorflow-on-analytics.html)
-- [Use Analytics Zoo to Inject AI Into Customer Service Platforms on Microsoft Azure](https://www.intel.com/content/www/us/en/developer/articles/technical/use-analytics-zoo-to-inject-ai-into-customer-service-platforms-on-microsoft-azure-part-1.html)
-- [LSTM-Based Time Series Anomaly Detection Using Analytics Zoo for Apache Spark and BigDL at Baosight](https://www.intel.com/content/www/us/en/developer/articles/technical/lstm-based-time-series-anomaly-detection-using-analytics-zoo-for-apache-spark-and-bigdl.html)
-
-**2017**
-- [Accelerating Deep-Learning Training with BigDL and Drizzle on Apache Spark](https://rise.cs.berkeley.edu/blog/accelerating-deep-learning-training-with-bigdl-and-drizzle-on-apache-spark)
-- [Using BigDL to Build Image Similarity-Based House Recommendations](https://www.intel.com/content/www/us/en/developer/articles/technical/using-bigdl-to-build-image-similarity-based-house-recommendations.html)
-- [Building Large-Scale Image Feature Extraction with BigDL at JD.com](https://www.intel.com/content/www/us/en/developer/articles/technical/building-large-scale-image-feature-extraction-with-bigdl-at-jdcom.html)
diff --git a/docs/readthedocs/source/doc/Application/index.rst b/docs/readthedocs/source/doc/Application/index.rst
deleted file mode 100644
index 7ec694eb..00000000
--- a/docs/readthedocs/source/doc/Application/index.rst
+++ /dev/null
@@ -1,2 +0,0 @@
-Real-World Application
-=========================
\ No newline at end of file
diff --git a/docs/readthedocs/source/doc/Application/powered-by.md b/docs/readthedocs/source/doc/Application/powered-by.md
deleted file mode 100644
index 61c6b7a4..00000000
--- a/docs/readthedocs/source/doc/Application/powered-by.md
+++ /dev/null
@@ -1,93 +0,0 @@
-# Powered By
----
-
-* __Alibaba__
-
⢠[Alibaba Cloud and Intel synergize BigDL PPML and Alibaba Cloud Data Trust to protect E2E privacy of AI and big data](https://www.intel.com/content/www/us/en/customer-spotlight/stories/alibaba-cloud-ppml-customer-story.html)
-
⢠[Better Together: Alibaba Cloud Realtime Compute and Distributed AI Inference](https://www.intel.cn/content/dam/www/central-libraries/cn/zh/documents/better-together-alibaba-cloud-realtime-compute-and-distibuted-ai-inference.pdf) (in Chinese)
-
⢠[Better Together: Privacy-Preserving Machine Learning](https://www.intel.com/content/www/us/en/artificial-intelligence/posts/alibaba-privacy-preserving-machine-learning.html)
-* __AsiaInfo__
-
⢠[AsiaInfo Technology Leverages Hardware and Software Products and Technologies to Create New Intelligent Energy Saving Solutions for 5G Cloud Based Base Station Products](https://www.intel.cn/content/www/cn/zh/communications/asiainfo-create-intelligent-energy-saving-solution.html) (in Chinese)
-
⢠[Network AI Applications using BigDL and oneAPI toolkit on Intel Xeon](https://www.intel.cn/content/www/cn/zh/customer-spotlight/cases/asiainfo-taps-intelligent-network-applications.html)
-* __Baosight__
-
⢠[LSTM-Based Time Series Anomaly Detection Using Analytics Zoo for Apache Spark and BigDL at Baosight](https://www.intel.com/content/www/us/en/developer/articles/technical/lstm-based-time-series-anomaly-detection-using-analytics-zoo-for-apache-spark-and-bigdl.html)
-* __BBVA__
-
⢠[A Graph Convolutional Network Implementation](https://emartinezs44.medium.com/graph-convolutions-networks-ad8295b3ce57)
-* __Burger King__
-
⢠[Context-Aware Fast Food Recommendation at Burger King with RayOnSpark](https://medium.com/riselab/context-aware-fast-food-recommendation-at-burger-king-with-rayonspark-2e7a6009dd2d)
-
⢠[How Intel and Burger King built an order recommendation system that preserves customer privacy](https://venturebeat.com/2021/04/06/how-intel-and-burger-king-built-an-order-recommendation-system-that-preserves-customer-privacy/)
-
⢠[Burger King: Context-Aware Recommendations (video)](https://www.intel.com/content/www/us/en/customer-spotlight/stories/burger-king-ai-customer-story.html)
-* __Capgemini__
-
⢠[Project Bose: A smart way to enable sustainable 5G networks in Capgemini](https://www.capgemini.com/insights/expert-perspectives/project-bose-a-smart-way-to-enable-sustainable-5g-networks/)
-
⢠[Intelligent 5G L2 MAC Scheduler: Powered by Capgemini NetAnticipate 5G on Intel Architecture](https://networkbuilders.intel.com/solutionslibrary/intelligent-5g-l2-mac-scheduler-powered-by-capgemini-netanticipate-5g-on-intel-architecture)
-* __China Unicom__
-
⢠[China Unicom Data Center Energy Saving and Emissions Reduction with Intel Intelligent Energy Management](https://www.intel.com/content/www/us/en/content-details/768821/china-unicom-data-center-energy-saving-and-emissions-reduction-with-intel-intelligent-energy-management.html)
-
⢠[Cloud Data Center Power Saving using BigDL Chronos in China Unicom](https://www.intel.cn/content/www/cn/zh/customer-spotlight/cases/china-unicom-bigdl-chronos-framework-5gc.html)
-* __CERN__
-
⢠[Deep Learning Pipelines for High Energy Physics using Apache Spark with Distributed Keras on Analytics Zoo](https://databricks.com/session_eu19/deep-learning-pipelines-for-high-energy-physics-using-apache-spark-with-distributed-keras-on-analytics-zoo)
-
⢠[Topology classification at CERN's Large Hadron Collider using Analytics Zoo](https://db-blog.web.cern.ch/blog/luca-canali/machine-learning-pipelines-high-energy-physics-using-apache-spark-bigdl)
-
⢠[Deep Learning on Apache Spark at CERN's Large Hadron Collider with Intel Technologies](https://databricks.com/session/deep-learning-on-apache-spark-at-cerns-large-hadron-collider-with-intel-technologies)
-* __China Telecom__
-
⢠[Face Recognition Application and Practice Based on Intel Analytics Zoo: Part 1](https://mp.weixin.qq.com/s/FEiXoTDi-yy04PJ2Mlfl4A) (in Chinese)
-
⢠[Face Recognition Application and Practice Based on Intel Analytics Zoo: Part 2](https://mp.weixin.qq.com/s/VIyWRORTAVAAsC4v6Fi0xw) (in Chinese)
-* __Cray__
-
⢠[A deep learning approach for precipitation nowcasting with RNN using Analytics Zoo in Cray](https://conferences.oreilly.com/strata/strata-ny-2018/public/schedule/detail/69413)
-* __Dangdang__
-
⢠[Large-scale Offline Book Recommendation with BigDL at Dangdang.com](https://www.intel.com/content/www/us/en/developer/articles/technical/dangdang-offline-recommendation-service-with-bigdl.html)
-* __Dell EMC__
-
⢠[AI-assisted Radiology Using Distributed Deep
-Learning on Apache Spark and Analytics Zoo](https://www.dellemc.com/resources/en-us/asset/white-papers/solutions/h17686_hornet_wp.pdf)
-
⢠[Using Deep Learning on Apache Spark to Diagnose Thoracic Pathology from Chest X-rays](https://databricks.com/session/using-deep-learning-on-apache-spark-to-diagnose-thoracic-pathology-from-chest-x-rays)
-* __GoldWind__
-
⢠[Goldwind SE: Intelligent Power Prediction Solution](https://www.intel.com/content/www/us/en/customer-spotlight/stories/goldwind-customer-story.html)
-
⢠[Intel big data analysis + AI platform helps GoldWind to build a new energy intelligent power prediction solution](https://www.intel.cn/content/www/cn/zh/analytics/artificial-intelligence/create-power-forecasting-solutions.html)
-* __Inspur__
-
⢠[Inspurās Big Data Intelligent Computing AIO Solution Based on Intel Architecture](https://dpgresources.intel.com/asset-library/inspur-insight-big-data-platform-solution-icx-prc/)
-
⢠[Inspur E2E Smart Transportation CV application](https://jason-dai.github.io/cvpr2021/slides/Inspur%20E2E%20Smart%20Transportation%20CV%20application%20-CVPR21.pdf)
-
⢠[Inspur End-to-End Smart Computing Solution with Intel Analytics Zoo](https://dpgresources.intel.com/asset-library/inspur-end-to-end-smart-computing-solution-with-intel-analytics-zoo/)
-* __JD__
-
⢠[Object Detection and Image Feature Extraction at JD.com](https://software.intel.com/en-us/articles/building-large-scale-image-feature-extraction-with-bigdl-at-jdcom)
-* __MasterCard__
-
⢠["AI at Scale" in Mastercard with BigDL](https://www.intel.com/content/www/us/en/developer/articles/technical/ai-at-scale-in-mastercard-with-bigdl0.html)
-
⢠[Deep Learning with Analytic Zoo Optimizes Mastercard Recommender AI Service](https://www.intel.com/content/www/us/en/developer/articles/technical/deep-learning-with-analytic-zoo-optimizes-mastercard-recommender-ai-service.html)
-* __Microsoft Azure__
-
⢠[Use Analytics Zoo to Inject AI Into Customer Service Platforms on Microsoft Azure: Part 1](https://www.intel.com/content/www/us/en/developer/articles/technical/use-analytics-zoo-to-inject-ai-into-customer-service-platforms-on-microsoft-azure-part-1.html)
-
⢠[Use Analytics Zoo to Inject AI Into Customer Service Platforms on Microsoft Azure: Part 2](https://www.infoq.com/articles/analytics-zoo-qa-module/?from=timeline&isappinstalled=0)
-* __Midea__
-
⢠[Industrial Inspection Platform in Midea and KUKA: Using Distributed TensorFlow on Analytics Zoo](https://www.intel.com/content/www/us/en/developer/articles/technical/industrial-inspection-platform-in-midea-and-kuka-using-distributed-tensorflow-on-analytics.html)
-
⢠[Ability to add "eyes" and "brains" to smart manufacturing](https://www.intel.cn/content/www/cn/zh/analytics/artificial-intelligence/midea-case-study.html) (in Chinese)
-* __MLSListings__
-
⢠[Image Similarity-Based House Recommendations and Search](https://www.intel.com/content/www/us/en/developer/articles/technical/using-bigdl-to-build-image-similarity-based-house-recommendations.html)
-* __NeuSoft/BMW__
-
⢠[Neusoft RealSight APM partners with Intel to create an application performance management platform with active defense capabilities](https://platform.neusoft.com/2020/01/17/xw-intel.html) (in Chinese)
-* __NeuSoft/Mazda__
-
⢠[JD, Neusoft and Intel Jointly Building Intelligent and Connected Vehicle Cloud for HaiMa(former Hainan Mazda)](https://www.neusoft.com/Products/Platforms/2472/4735110231.html)
-
⢠[JD, Neusoft and Intel Jointly Building Intelligent and Connected Vehicle Cloud for Hainan-Mazda](https://platform.neusoft.com/2020/06/11/jjfa-haimaqiche.html) (in Chinese)
-* __Office Depot__
-
⢠[Real-time Product Recommendations for Office Depot Using Apache Spark and Analytics Zoo on AWS](https://www.intel.com/content/www/us/en/developer/articles/technical/real-time-product-recommendations-for-office-depot-using-apache-spark-and-analytics-zoo-on.html)
-
⢠[Office Depot product recommender using Analytics Zoo on AWS](https://conferences.oreilly.com/strata/strata-ca/public/schedule/detail/73079)
-* __SK Telecom__
-
⢠[Reference Architecture for Confidential Computing on SKT 5G MEC](https://networkbuilders.intel.com/solutionslibrary/reference-architecture-for-confidential-computing-on-skt-5g-mec)
-
⢠[SK Telecom, Intel Build AI Pipeline to Improve Network Quality](https://networkbuilders.intel.com/solutionslibrary/sk-telecom-intel-build-ai-pipeline-to-improve-network-quality)
-
⢠[Vectorized Deep Learning Acceleration from Preprocessing to Inference and Training on Apache Spark in SK Telecom](https://databricks.com/session_na20/vectorized-deep-learning-acceleration-from-preprocessing-to-inference-and-training-on-apache-spark-in-sk-telecom)
-
⢠[Apache Spark AI Use Case in Telco: Network Quality Analysis and Prediction with Geospatial Visualization](https://databricks.com/session_eu19/apache-spark-ai-use-case-in-telco-network-quality-analysis-and-prediction-with-geospatial-visualization)
- * __Talroo__
-
⢠[Uses Analytics Zoo and AWS to Leverage Deep Learning for Job Recommendations](https://www.intel.com/content/www/us/en/developer/articles/technical/talroo-uses-analytics-zoo-and-aws-to-leverage-deep-learning-for-job-recommendations.html)
-
⢠[Job recommendations leveraging deep learning using Analytics Zoo on Apache Spark and BigDL](https://conferences.oreilly.com/strata/strata-ny-2018/public/schedule/detail/69113)
-* __Telefonica__
-
⢠[Running Analytics Zoo jobs on Telefónica Open Cloudās MRS Service](https://medium.com/@fernando.delaiglesia/running-analytics-zoo-jobs-on-telef%C3%B3nica-open-clouds-mrs-service-2e64bc823c50)
-* __Tencent__
-
⢠[Tencent Trusted Computing Solution on SGX with Intel BigDL PPML](https://www.intel.com/content/www/us/en/developer/articles/technical/tencent-trusted-computing-solution-with-bigdl-ppml.html)
-
⢠[Analytics Zoo helps Tencent Cloud improve the performance of its intelligent titanium machine learning platform](https://www.intel.cn/content/www/cn/zh/service-providers/analytics-zoo-helps-tencent-cloud-improve-ti-ml-platform-performance.html)
-
⢠[Tencent Cloud Leverages Analytics Zoo to Improve Performance of TI-ONE ML Platform](https://software.intel.com/content/www/us/en/develop/articles/tencent-cloud-leverages-analytics-zoo-to-improve-performance-of-ti-one-ml-platform.html)
-
⢠[Enhance Tencent's TUSI Identity Practice with Intel Analytics Zoo](https://mp.weixin.qq.com/s?__biz=MzAwNzc5NzM5Mw==&mid=2651030944&idx=1&sn=d6e06c6e14a7355971953a501689b232&chksm=808f8a5eb7f80348fc8e88c4c9e415341bf43ef6bdf3fd4f3001da89e2c9ba7fa2ed5deeb09a&mpshare=1&scene=1&srcid=0412WxM3eWdsLLoO2TYJGWbS&pass_ticket=E6l%2FfOZNKjhr05lsU7inAVCi7mAy5LFEehvEJOS2ZGdHg6%2FH%2BeBQisHA9sfXDOoy#rd) (in Chinese)
-* __UC Berkeley RISELab__
-
⢠[RayOnSpark: Running Emerging AI Applications on Big Data Clusters with Ray and Analytics Zoo](https://medium.com/riselab/rayonspark-running-emerging-ai-applications-on-big-data-clusters-with-ray-and-analytics-zoo-923e0136ed6a)
-
⢠[Scalable AutoML for Time Series Prediction Using Ray and Analytics Zoo](https://medium.com/riselab/scalable-automl-for-time-series-prediction-using-ray-and-analytics-zoo-b79a6fd08139)
-* __UnionPay__
-
⢠[Technical Verification of SGX and BigDL Based Privacy Computing for Multi Source Financial Big Data](https://www.intel.cn/content/www/cn/zh/now/data-centric/sgx-bigdl-financial-big-data.html) (in Chinese)
-* __World Bank__
-
⢠[Using Crowdsourced Images to Create Image Recognition Models with Analytics Zoo using BigDL](https://databricks.com/session/using-crowdsourced-images-to-create-image-recognition-models-with-bigdl)
-* __Yahoo! JAPAN__
-
⢠[Optimized Large-Scale Item Search with Intel BigDL at Yahoo! JAPAN Shopping](https://www.intel.com/content/www/us/en/developer/articles/technical/offline-item-search-with-bigdl-at-yahoo-japan.html)
-* __Yunda__
-
⢠[Intelligent transformation brings "quality change" to the express delivery industry](https://www.intel.cn/content/www/cn/zh/analytics/artificial-intelligence/yunda-brings-quality-change-to-the-express-delivery-industry.html) (in Chinese)
diff --git a/docs/readthedocs/source/doc/Application/presentations.md b/docs/readthedocs/source/doc/Application/presentations.md
deleted file mode 100644
index 2a87e6f5..00000000
--- a/docs/readthedocs/source/doc/Application/presentations.md
+++ /dev/null
@@ -1,99 +0,0 @@
-# Presentations
----
-
-**Tutorial:**
-- Seamlessly Scaling out Big Data AI on Ray and Apache Spark, [CVPR 2021](https://cvpr2021.thecvf.com/program) [tutorial](https://jason-dai.github.io/cvpr2021/), June 2021 ([slides](https://jason-dai.github.io/cvpr2021/slides/End-to-End%20Big%20Data%20AI%20Pipeline%20using%20Analytics%20Zoo%20-%20CVPR21.pdf))
-
-- Automated Machine Learning Workflow for Distributed Big Data Using Analytics Zoo, [CVPR 2020](https://cvpr2020.thecvf.com/program/tutorials) [tutorial](https://jason-dai.github.io/cvpr2020/), June 2020 ([slides](https://jason-dai.github.io/cvpr2020/slides/AIonBigData_cvpr20.pdf))
-
-- Building Deep Learning Applications for Big Data, [AAAI 2019](https://aaai.org/Conferences/AAAI-19/aaai19tutorials/#sp2) [tutorial](https://jason-dai.github.io/aaai2019/), January 2019 ([slides](https://jason-dai.github.io/aaai2019/slides/AI%20on%20Big%20Data%20(Jason%20Dai).pdf))
-
-- Analytics Zoo: Distributed TensorFlow and Keras on Apache Spark, [AI conference](https://conferences.oreilly.com/artificial-intelligence/ai-ca-2019/public/schedule/detail/77069), Sep 2019, San Jose ([slides](https://github.com/analytics-zoo/analytics-zoo.github.io/blob/master/presentations/Tutorial%20Analytics%20ZOO.pdf))
-
-- Building Deep Learning Applications on Big Data Platforms, [CVPR 2018](https://cvpr2018.thecvf.com/) [tutorial](https://jason-dai.github.io/cvpr2018/), June 2018 ([slides](https://jason-dai.github.io/cvpr2018/slides/BigData_DL_Jason-CVPR.pdf))
-
-**Talks:**
-- BigDL 2.0: Seamlessly scaling end-to-end AI pipelines, [Ray Summit 2022](https://www.anyscale.com/ray-summit-2022/agenda/sessions/174), August 2022 ([slides](https://github.com/analytics-zoo/analytics-zoo.github.io/blob/master/presentations/BigDL-2.0-Seamlessly-scaling-end-to-end-AI-pipelines.pdf))
-
-- Exploration on Confidential Computing for Big Data & AI, [oneAPI DevSummit for AI 2022](https://www.oneapi.io/event-sessions/exploration-on-confidential-computing-for-big-data-ai-ai-2022/), July 2022 ([slides](https://simplecore.intel.com/oneapi-io/wp-content/uploads/sites/98/Qiyuan-Gong-and-Chunyang-Hui-Exploration-on-Confidential-Computing-for-Big-Data-AI.pdf))
-
-- Privacy Preserving Machine Learning and Big Data Analytics Using Apache Spark, [Data + AI Summit 2022](https://www.databricks.com/dataaisummit/session/privacy-preserving-machine-learning-and-big-data-analytics-using-apache-spark), June 2022 ([slides](https://microsites.databricks.com/sites/default/files/2022-07/Privacy-Preserving-Machine-Learning-and-Big-Data-Analytics-Using-Apache-Spark.pdf))
-
-- E2E Smart Transportation CV application in Inspur (using Insight Data-Intelligence platform), [CVPR 2021](https://jason-dai.github.io/cvpr2021/), July 2021 ([slides](https://jason-dai.github.io/cvpr2021/slides/Inspur%20E2E%20Smart%20Transportation%20CV%20application%20-CVPR21.pdf))
-
-- Mobile Order Click-Through Rate (CTR) Recommendation with Ray on Apache Spark at Burger King, [Ray Summit 2021](https://www.anyscale.com/events/2021/06/22/mobile-order-click-through-rate-ctr-recommendation-with-ray-on-apache-spark-at-burger-king), June 2021 ([slides](https://files.speakerdeck.com/presentations/1870110b5adf4bfc8f0c76255a417f09/Kai_Huang_and_Luyang_Wang.pdf))
-
-- Deep Reinforcement Learning Recommenders using RayOnSpark, *Data + AI Summit 2021*, May 2021 ([slides](https://github.com/analytics-zoo/analytics-zoo.github.io/blob/master/presentations/210527DeepReinforcementLearningRecommendersUsingRayOnSpark2.pdf))
-
-- Cluster Serving: Deep Learning Model Serving for Big Data, *Data + AI Summit 2021*, May 2021 ([slides](https://github.com/analytics-zoo/analytics-zoo.github.io/blob/master/presentations/210526Cluster-Serving.pdf))
-
-- Offer Recommendation System with Apache Spark at Burger King, [Data + AI Summit 2021](https://databricks.com/session_na21/offer-recommendation-system-with-apache-spark-at-burger-king), May 2021 ([slides](https://github.com/analytics-zoo/analytics-zoo.github.io/blob/master/presentations/20210526Offer%20Recommendation.pdf))
-
-- Context-aware Fast Food Recommendation with Ray on Apache Spark at Burger King, [Data + AI Summit Europe 2020](https://databricks.com/session_eu20/context-aware-fast-food-recommendation-with-ray-on-apache-spark-at-burger-king), November 2020 ([slides](https://github.com/analytics-zoo/analytics-zoo.github.io/blob/master/presentations/1118%20Context-aware%20Fast%20Food%20Recommendation%20with%20Ray%20on%20Apache%20Spark%20at%20Burger%20King.pdf))
-
-- Cluster Serving: Distributed Model Inference using Apache Flink in Analytics Zoo, [Flink Forward 2020](https://www.flink-forward.org/global-2020/conference-program#cluster-serving--distributed-model-inference-using-apache-flink-in-analytics-zoo), October 2020 ([slides](https://github.com/analytics-zoo/analytics-zoo.github.io/blob/master/presentations/1020%20Cluster%20Serving%20Distributed%20Model%20Inference%20using%20Apache%20Flink%20in%20Analytics%20Zoo%20.pdf))
-
-- Project Zouwu: Scalable AutoML for Telco Time Series Analysis using Ray and Analytics Zoo, [Ray Summit Connect 2020](https://anyscale.com/blog/videos-and-slides-for-the-fourth-ray-summit-connect-august-12-2020/), August 2020 ([slides](https://anyscale.com/wp-content/uploads/2020/08/Ding-Ding-Connect-slides.pdf))
-
-- Cluster Serving: Distributed Model Inference using Big Data Streaming in Analytics Zoo, [OpML 2020](https://www.usenix.org/conference/opml20/presentation/song), July 2020 ([slides](https://www.usenix.org/sites/default/files/conference/protected-files/opml20_talks_43_slides_song.pdf))
-
-- Scalable AutoML for Time Series Forecasting using Ray, [OpML 2020](https://www.usenix.org/conference/opml20/presentation/huang), July 2020 ([slides](https://www.usenix.org/sites/default/files/conference/protected-files/opml20_talks_84_slides_huang.pdf))
-
-- Scalable AutoML for Time Series Forecasting using Ray, [Spark + AI Summit 2020](https://databricks.com/session_na20/scalable-automl-for-time-series-forecasting-using-ray), June 2020 ([slides](https://www.slideshare.net/databricks/scalable-automl-for-time-series-forecasting-using-ray))
-
-- Running Emerging AI Applications on Big Data Platforms with Ray On Apache Spark, [Spark + AI Summit 2020](https://databricks.com/session_na20/running-emerging-ai-applications-on-big-data-platforms-with-ray-on-apache-spark), June 2020 ([slides](https://www.slideshare.net/databricks/running-emerging-ai-applications-on-big-data-platforms-with-ray-on-apache-spark))
-
-- Vectorized Deep Learning Acceleration from Preprocessing to Inference and Training on Apache Spark in SK Telecom, [Spark + AI Summit 2020](https://databricks.com/session_na20/vectorized-deep-learning-acceleration-from-preprocessing-to-inference-and-training-on-apache-spark-in-sk-telecom), June 2020 ([slides](https://www.slideshare.net/databricks/vectorized-deep-learning-acceleration-from-preprocessing-to-inference-and-training-on-apache-spark-in-sk-telecom?from_action=save))
-
-- Architecture and practice of big data analysis and deep learning model inference using Analytics Zoo on Flink, [Flink Forward Asia 2019](https://developer.aliyun.com/special/ffa2019-conference?spm=a2c6h.13239638.0.0.21f27955PCNMUB#), Nov 2019, Beijing ([slides](https://github.com/analytics-zoo/analytics-zoo.github.io/blob/master/presentations/Architecture%20and%20practice%20of%20big%20data%20analysis%20and%20deep%20learning%20model%20inference%20using%20Analytics%20Zoo%20on%20Flink(FFA2019)%20.pdf))
-
-- Data analysis + AI platform technology and case studies, [AICon BJ 2019](https://aicon.infoq.cn/2019/beijing/), Nov 2019, Beijing ([slides](https://github.com/analytics-zoo/analytics-zoo.github.io/blob/master/presentations/AICON%20AZ%20Cluster%20Serving%20Beijing%20Qiyuan_v5.pdf))
-
-- Architectural practices for building a unified big data AI application with Analytics-Zoo, [QCon SH 2019](https://qcon.infoq.cn/2019/shanghai/presentation/1921), Oct 2019, Shanghai ([slides](https://github.com/analytics-zoo/analytics-zoo.github.io/blob/master/presentations/Architectural%20practices%20for%20building%20a%20unified%20big%20data%20AI%20application%20with%20Analytics-Zoo.pdf))
-
-- Building AI to play the FIFA video game using distributed TensorFlow, [TensorFlow World](https://conferences.oreilly.com/tensorflow/tf-ca/public/schedule/detail/78309), Oct 2019, Santa Clara ([slides](https://github.com/analytics-zoo/analytics-zoo.github.io/blob/master/presentations/Building%20AI%20to%20play%20the%20FIFA%20video%20game%20using%20distributed%20TensorFlow.pdf))
-
-- Deep Learning Pipelines for High Energy Physics using Apache Spark with Distributed Keras on Analytics Zoo, [Spark+AI Summit](https://databricks.com/session_eu19/deep-learning-pipelines-for-high-energy-physics-using-apache-spark-with-distributed-keras-on-analytics-zoo), Oct 2019, Amsterdam ([slides](https://www.slideshare.net/databricks/deep-learning-pipelines-for-high-energy-physics-using-apache-spark-with-distributed-keras-on-analytics-zoo))
-
-- Apache Spark AI Use Case in Telco: Network Quality Analysis and Prediction with Geospatial Visualization, [Spark+AI Summit](https://databricks.com/session_eu19/apache-spark-ai-use-case-in-telco-network-quality-analysis-and-prediction-with-geospatial-visualization), Oct 2019, Amsterdam ([slides](https://www.slideshare.net/databricks/apache-spark-ai-use-case-in-telco-network-quality-analysis-and-prediction-with-geospatial-visualization))
-
-- LSTM-based time series anomaly detection using Analytics Zoo for Spark and BigDL, [Strata Data conference](https://conferences.oreilly.com/strata/strata-eu/public/schedule/detail/74077), May 2019, London ([slides](https://cdn.oreillystatic.com/en/assets/1/event/292/LSTM-based%20time%20series%20anomaly%20detection%20using%20Analytics%20Zoo%20for%20Spark%20and%20BigDL%20Presentation.pptx))
-
-- Game Playing Using AI on Apache Spark, [Spark+AI Summit](https://databricks.com/session/game-playing-using-ai-on-apache-spark), April 2019, San Francisco ([slides](https://github.com/analytics-zoo/analytics-zoo.github.io/blob/master/presentations/game-playing-using-ai-on-apache-spark.pdf))
-
-- Using Deep Learning on Apache Spark to Diagnose Thoracic Pathology from Chest X-rays in DELL EMC, [Spark+AI Summit](https://databricks.com/session/using-deep-learning-on-apache-spark-to-diagnose-thoracic-pathology-from-chest-x-rays), April 2019, San Francisco ([slides](https://github.com/analytics-zoo/analytics-zoo.github.io/blob/master/presentations/Using%20Deep%20Learning%20on%20Apache%20Spark%20to%20diagnose%20thoracic%20pathology%20from%20.._.pdf))
-
-- Leveraging NLP and Deep Learning for Document Recommendation in the Cloud, [Spark+AI Summit](https://databricks.com/session/leveraging-nlp-and-deep-learning-for-document-recommendations-in-the-cloud), April 2019, San Francisco ([slides](https://github.com/analytics-zoo/analytics-zoo.github.io/blob/master/presentations/Leveraging%20NLP%20and%20Deep%20Learning%20for%20Document%20Recommendation%20in%20the%20Cloud.pdf))
-
-- Analytics Zoo: Distributed Tensorflow, Keras and BigDL in production on Apache Spark, [Strata Data conference](https://conferences.oreilly.com/strata/strata-ca/public/schedule/detail/72802), March 2019, San Francisco ([slides](https://github.com/analytics-zoo/analytics-zoo.github.io/blob/master/presentations/Analytics%20Zoo-Distributed%20Tensorflow%2C%20Keras%20and%20BigDL%20in%20production%20on%20Apache%20Spark.pdf))
-
-- User-based real-time product recommendations leveraging deep learning using Analytics Zoo on Apache Spark in Office Depot, [Strata Data conference](https://conferences.oreilly.com/strata/strata-ca/public/schedule/detail/73079), March 2019, San Francisco ([slides](https://github.com/analytics-zoo/analytics-zoo.github.io/blob/master/presentations/User-based%20real-time%20product%20recommendations%20leveraging%20deep%20learning%20using%20Analytics%20Zoo%20on%20Apache%20Spark%20and%20BigDL%20Presentation.pdf))
-
-- Analytics Zoo: Unifying Big Data Analytics and AI for Apache Spark, [Shanghai Apache Spark + AI meetup](https://www.meetup.com/Shanghai-Apache-Spark-AI-Meetup/events/255788956/), Nov 2018, Shanghai ([slides](https://github.com/analytics-zoo/analytics-zoo.github.io/blob/master/presentations/Analytics%20Zoo-Unifying%20Big%20Data%20Analytics%20and%20AI%20for%20Apache%20Spark.pdf))
-
-- Use Intel Analytics Zoo to build an intelligent QA Bot for Microsoft Azure, [Shanghai Apache Spark + AI meetup](https://www.meetup.com/Shanghai-Apache-Spark-AI-Meetup/events/255788956/), Nov 2018, Shanghai ([slides](https://github.com/analytics-zoo/analytics-zoo.github.io/blob/master/presentations/Use%20Intel%20Analytics%20Zoo%20to%20build%20an%20intelligent%20QA%20Bot%20for%20Microsoft%20Azure.pdf))
-
-- A deep learning approach for precipitation nowcasting with RNN using Analytics Zoo in Cray, [Strata Data conference](https://conferences.oreilly.com/strata/strata-ny-2018/public/schedule/detail/69413), Sep 2018, New York ([slides](https://github.com/analytics-zoo/analytics-zoo.github.io/blob/master/presentations/A%20deep%20learning%20approach%20for%20precipitation%20nowcasting%20with%20RNN%20using%20Analytics%20Zoo%20on%20BigDL.pdf))
-
-- Job recommendations leveraging deep learning using Analytics Zoo on Apache Spark in Talroo, [Strata Data conference](https://conferences.oreilly.com/strata/strata-ny-2018/public/schedule/detail/69113), Sep 2018, New York ([slides](https://cdn.oreillystatic.com/en/assets/1/event/278/Job%20recommendations%20leveraging%20deep%20learning%20using%20Analytics%20Zoo%20on%20Apache%20Spark%20and%20BigDL%20Presentation.pdf))
-
-- Accelerating Deep Learning Training with BigDL and Drizzle on Apache Spark, [Spark + AI Summit](https://databricks.com/session/accelerating-deep-learning-training-with-bigdl-and-drizzle-on-apache-spark), June 2018, San Francisco ([slides](https://github.com/analytics-zoo/analytics-zoo.github.io/blob/master/presentations/Accelerating%20deep%20learning%20on%20apache%20spark%20Using%20BigDL%20with%20coarse-grained%20scheduling.pdf))
-
-- Using Crowdsourced Images to Create Image Recognition Models with Analytics Zoo in World Bank, [Spark + AI Summit](https://databricks.com/session/using-crowdsourced-images-to-create-image-recognition-models-with-bigdl), June 2018, San Francisco ([slides](https://github.com/analytics-zoo/analytics-zoo.github.io/blob/master/presentations/Using%20Crowdsourced%20Images%20to%20Create%20Image%20Recognition%20Models%20with%20Analytics%20Zoo%20using%20BigDL.pdf))
-
-- Building Deep Reinforcement Learning Applications on Apache Spark with Analytics Zoo using BigDL, [Spark + AI Summit](https://databricks.com/session/building-deep-reinforcement-learning-applications-on-apache-spark-using-bigdl), June 2018, San Francisco ([slides](https://github.com/analytics-zoo/analytics-zoo.github.io/blob/master/presentations/Building%20Deep%20Reinforcement%20Learning%20Applications%20on%20Apache%20Spark%20with%20Analytics%20Zoo%20using%20BigDL.pdf))
-
-- Using BigDL on Apache Spark to Improve the MLS Real Estate Search Experience at Scale, [Spark + AI Summit](https://databricks.com/session/using-bigdl-on-apache-spark-to-improve-the-mls-real-estate-search-experience-at-scale), June 2018, San Francisco
-
-- Analytics Zoo: Building Analytics and AI Pipeline for Apache Spark and BigDL, [Spark + AI Summit](https://databricks.com/session/analytics-zoo-building-analytics-and-ai-pipeline-for-apache-spark-and-bigdl), June 2018, San Francisco
-
-- Using Siamese CNNs for removing duplicate entries from real estate listing databases, [Strata Data conference](https://conferences.oreilly.com/strata/strata-eu-2018/public/schedule/detail/65518), May 2018, London ([slides](https://cdn.oreillystatic.com/en/assets/1/event/267/Using%20Siamese%20CNNs%20for%20removing%20duplicate%20entries%20from%20real%20estate%20listing%20databases%20Presentation.pdf))
-
-- Classifying images on Spark in World Bank, [AI conference](https://conferences.oreilly.com/artificial-intelligence/ai-ny-2018/public/schedule/detail/64939), May 2018, New York ([slides](https://cdn.oreillystatic.com/en/assets/1/event/280/Classifying%20images%20in%20Spark%20Presentation.pdf))
-
-- Improving user-merchant propensity modeling using neural collaborative filtering and wide and deep models on Spark BigDL in Mastercard, [Strata Data conference](https://conferences.oreilly.com/strata/strata-ca-2018/public/schedule/detail/63897), March 2018, San Jose ([slides](https://cdn.oreillystatic.com/en/assets/1/event/269/Improving%20user-merchant%20propensity%20modeling%20using%20neural%20collaborative%20filtering%20and%20wide%20and%20deep%20models%20on%20Spark%20BigDL%20at%20scale%20Presentation.pdf))
-
-- Accelerating deep learning on Apache Spark using BigDL with coarse-grained scheduling, [Strata Data conference](https://conferences.oreilly.com/strata/strata-ca-2018/public/schedule/detail/63960), March 2018, San Jose ([slides](https://cdn.oreillystatic.com/en/assets/1/event/269/Accelerating%20deep%20learning%20on%20Apache%20Spark%20using%20BigDL%20with%20coarse-grained%20scheduling%20Presentation.pptx))
-
-- Automatic 3D MRI knee damage classification with 3D CNN using BigDL on Spark in UCSF, [Strata Data conference](https://conferences.oreilly.com/strata/strata-ca-2018/public/schedule/detail/64023), March 2018, San Jose ([slides](https://cdn.oreillystatic.com/en/assets/1/event/269/Automatic%203D%20MRI%20knee%20damage%20classification%20with%203D%20CNN%20using%20BigDL%20on%20Spark%20Presentation.pdf))
-
diff --git a/docs/readthedocs/source/doc/Chronos/Howto/docker_guide_single_node.md b/docs/readthedocs/source/doc/Chronos/Howto/docker_guide_single_node.md
deleted file mode 100644
index fb5933ed..00000000
--- a/docs/readthedocs/source/doc/Chronos/Howto/docker_guide_single_node.md
+++ /dev/null
@@ -1,139 +0,0 @@
-# Use Chronos in Container (docker)
-This page helps users build and use a docker image with the Chronos nightly build deployed.
-
-## Download image from Docker Hub
-We provide a docker image with the Chronos nightly build deployed on [Docker Hub](https://hub.docker.com/r/intelanalytics/bigdl-chronos/tags). You can download it directly by running:
-```bash
-docker pull intelanalytics/bigdl-chronos:latest
-```
-
-## Build an image (Optional)
-**If you have already downloaded the docker image, you can skip this part and go on to [Use Chronos](#use-chronos).**
-
-First, clone the `BigDL` repo to your local machine.
-```bash
-git clone https://github.com/intel-analytics/BigDL.git
-```
-Then `cd` to the root directory of `BigDL` and copy the Dockerfile into it.
-```bash
-cd BigDL
-cp docker/chronos-nightly/Dockerfile ./Dockerfile
-```
-When building the image, you can specify build args to install Chronos with the dependencies you need.
-The build args are similar to the install options in [Chronos documentation](https://bigdl.readthedocs.io/en/latest/doc/Chronos/Overview/install.html).
-
-```
-model: which model or framework you want.
- value: pytorch
- tensorflow
- prophet
- arima
- ml (default, for machine learning models).
-
-auto_tuning: whether to enable auto tuning.
- value: y (for yes)
- n (default, for no).
-
-hardware: run chronos on a single machine or a cluster.
- value: single (default)
- cluster
-
-inference: whether to install dependencies for inference optimization (e.g. onnx, openvino, ...).
- value: y (for yes)
- n (default, for no)
-
-extra_dep: whether to install some extra dependencies.
- value: y (for yes)
- n (default, for no)
- if specified to y, the following dependencies will be installed:
- tsfresh, pyarrow, prometheus_pandas, xgboost, jupyter, matplotlib
-```
-
-If you want to build the image with the default options, you can simply use the following command:
-```bash
-sudo docker build -t intelanalytics/bigdl-chronos:latest . # You may choose any NAME:TAG you want.
-```
-
-You can also build with other options by specifying the build args:
-```bash
-sudo docker build \
- --build-arg model=pytorch \
- --build-arg auto_tuning=y \
- --build-arg hardware=single \
- --build-arg inference=n \
- --build-arg extra_dep=n \
- -t intelanalytics/bigdl-chronos:latest . # You may choose any NAME:TAG you want.
-```
-
-(Optional) If you need a proxy, you can add two additional build args to specify it:
-```bash
-# typically, you need a proxy for building since there will be some downloading.
-sudo docker build \
-    --build-arg http_proxy=http://<proxy_host>:<proxy_port> \ #optional
-    --build-arg https_proxy=http://<proxy_host>:<proxy_port> \ #optional
- -t intelanalytics/bigdl-chronos:latest . # You may choose any NAME:TAG you want.
-```
-Depending on your network status, the build will take **15-30 mins**.
-
-**Tips:** Errors like `failed: Connection timed out.` are usually caused by a bad network connection. Please build with a proxy.
-
-## Run the image
-```bash
-sudo docker run -it --rm --net=host intelanalytics/bigdl-chronos:latest bash
-```
-
-## Use Chronos
-A conda environment is created for you automatically. `bigdl-chronos` and the necessary dependencies (based on the build args used when building the image) are installed inside this environment.
-```bash
-(chronos) root@icx-5:/opt/work#
-```
-```eval_rst
-.. important::
-
-    Considering the image size, we build the docker image with the default args and upload it to Docker Hub. If you use it directly, only ``bigdl-chronos`` is installed inside this environment. There are two methods to install other necessary dependencies according to your own needs:
-
-    1. Make sure the network is available and run the install command following `Install using Conda `_, such as ``pip install --pre --upgrade bigdl-chronos[pytorch]``.
-
-    2. Make sure the network is available and run ``bash /opt/install-python-env.sh`` with build args. The values are introduced in `Build an image <#build-an-image-optional>`_.
-
-    .. code-block:: bash
-
- # bash /opt/install-python-env.sh ${model} ${auto_tuning} ${hardware} ${inference} ${extra_dep}
- # For example, if you want to install bigdl-chronos[pytorch,inference]
- bash /opt/install-python-env.sh pytorch n single y n
-
-```
-
-## Run unittest examples on Jupyter Notebook for a quick use
-> Note: To use Jupyter Notebook, you need to set the build arg `extra_dep` to `y`.
-
-You can run these examples in Jupyter Notebook on a single-node server for a quick try of Chronos.
-```bash
-(chronos) root@icx-5:/opt/work# cd /opt/work/colab-notebook #Unittest examples are here.
-```
-```bash
-(chronos) root@icx-5:/opt/work/colab-notebook# jupyter notebook --notebook-dir=./ --ip=* --allow-root #Start the Jupyter Notebook services.
-```
-After the Jupyter Notebook service is successfully started, you can connect to the Jupyter Notebook service from a browser.
-1. Get the IP address of the container
-2. Launch a browser, and connect to the Jupyter Notebook service with the URL:
-`https://container-ip-address:port-number/?token=your-token`
-As a result, you will see the Jupyter Notebook opened.
-3. Open one of these `.ipynb` files, run through the example and learn how to use Chronos to predict time series.
-
-## Shut down docker container
-You should shut down the BigDL Docker container after using it.
-1. First, use `Ctrl+P` followed by `Ctrl+Q` to detach from the container while you are still in it.
-2. Then, you can list all the active Docker containers by command line:
- ```bash
- sudo docker ps
- ```
- You will see your docker containers:
- ```bash
- CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
- ef133bd732d1 intelanalytics/bigdl-chronos:latest "bash" 2 hours ago Up 2 hours happy_babbage
- ```
-3. Shut down the corresponding docker container by its ID:
- ```bash
- sudo docker rm -f ef133bd732d1
- ```
diff --git a/docs/readthedocs/source/doc/Chronos/Howto/how_to_choose_forecasting_alg.md b/docs/readthedocs/source/doc/Chronos/Howto/how_to_choose_forecasting_alg.md
deleted file mode 100644
index a1b5fa4f..00000000
--- a/docs/readthedocs/source/doc/Chronos/Howto/how_to_choose_forecasting_alg.md
+++ /dev/null
@@ -1,48 +0,0 @@
-# Choose proper forecasting model
-
-How to choose a forecasting model among the many built-in models in Chronos (or build one by yourself)? That's a common question when users build their first forecasting model. Different forecasting models suit different data and different goals (accuracy or performance).
-
-The flowchart below is designed to guide users on which forecasting model to try on their own data. Click on a block in the chart below to see its documentation/examples.
-
-```eval_rst
-.. note::
-
-    The following flowchart may take some time to load.
-```
-
-
-```eval_rst
-.. mermaid::
-
- flowchart TD
- StartPoint[I want to build a forecasting model]
- StartPoint-- always start from --> TCN[TCNForecaster]
-        TCN -- performance is not satisfying --> TCN_OPT[Make sure optimizations are deployed]
-        TCN_OPT -- further performance improvement is needed --> SER[Performance-aware Hyperparameter Optimization]
- SER -- only 1 step to be predicted --> LSTMForecaster
- SER -- only 1 var to be predicted --> NBeatsForecaster
- LSTMForecaster -- does not work --> CUS[customized model]
- NBeatsForecaster -- does not work --> CUS[customized model]
-
- TCN -- accuracy is not satisfying --> Tune[Hyperparameter Optimization]
- Tune -- only 1 step to be predicted --> LSTMForecaster2[LSTMForecaster]
- LSTMForecaster2 -- does not work --> AutoformerForecaster
- Tune -- more than 1 step to be predicted --> AutoformerForecaster
- AutoformerForecaster -- does not work --> Seq2SeqForecaster
- Seq2SeqForecaster -- does not work --> CUS[customized model]
-
- click TCN "https://bigdl.readthedocs.io/en/latest/doc/Chronos/Overview/forecasting.html#tcnforecaster"
- click LSTMForecaster "https://bigdl.readthedocs.io/en/latest/doc/Chronos/Overview/forecasting.html#lstmforecaster"
- click LSTMForecaster2 "https://bigdl.readthedocs.io/en/latest/doc/Chronos/Overview/forecasting.html#lstmforecaster"
- click NBeatsForecaster "https://bigdl.readthedocs.io/en/latest/doc/Chronos/Overview/forecasting.html#nbeatsforecaster"
- click Seq2SeqForecaster "https://bigdl.readthedocs.io/en/latest/doc/Chronos/Overview/forecasting.html#seq2seqforecaster"
- click AutoformerForecaster "https://bigdl.readthedocs.io/en/latest/doc/Chronos/Overview/forecasting.html#AutoformerForecaster"
-
- click TCN_OPT "https://bigdl.readthedocs.io/en/latest/doc/Chronos/Overview/speed_up.html"
- click SER "https://github.com/intel-analytics/BigDL/blob/main/python/chronos/example/hpo/muti_objective_hpo_with_builtin_latency_tutorial.ipynb"
- click Tune "https://bigdl.readthedocs.io/en/latest/doc/Chronos/Howto/how_to_tune_forecaster_model.html"
- click CUS "https://bigdl.readthedocs.io/en/latest/doc/Chronos/Overview/speed_up.html"
-
- classDef Model fill:#FFF,stroke:#0f29ba,stroke-width:1px;
- class TCN,LSTMForecaster,NBeatsForecaster,LSTMForecaster2,AutoformerForecaster,Seq2SeqForecaster Model;
-```
\ No newline at end of file
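The decision flow in the flowchart above can also be sketched as a small helper. This is an illustrative sketch only (the function name and arguments are not part of Chronos): it simply encodes the flowchart's branches as an ordered list of forecasters to try.

```python
def candidate_forecasters(concern, output_steps=1, output_vars=1):
    """Encode the flowchart: which forecasters to try, in order.

    concern: "performance" or "accuracy", i.e. which aspect is still
    unsatisfying after starting from TCNForecaster.
    """
    candidates = ["TCNForecaster"]  # always start from TCN
    if concern == "performance":
        # after making sure optimizations are deployed and running
        # performance-aware hyperparameter optimization:
        if output_steps == 1:
            candidates.append("LSTMForecaster")
        elif output_vars == 1:
            candidates.append("NBeatsForecaster")
        candidates.append("customized model")
    else:
        # accuracy branch: hyperparameter optimization first
        if output_steps == 1:
            candidates.append("LSTMForecaster")
        candidates += ["AutoformerForecaster", "Seq2SeqForecaster",
                       "customized model"]
    return candidates

print(candidate_forecasters("accuracy", output_steps=24))
# ['TCNForecaster', 'AutoformerForecaster', 'Seq2SeqForecaster', 'customized model']
```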
diff --git a/docs/readthedocs/source/doc/Chronos/Howto/how_to_create_forecaster.nblink b/docs/readthedocs/source/doc/Chronos/Howto/how_to_create_forecaster.nblink
deleted file mode 100644
index 6a1c5320..00000000
--- a/docs/readthedocs/source/doc/Chronos/Howto/how_to_create_forecaster.nblink
+++ /dev/null
@@ -1,3 +0,0 @@
-{
- "path": "../../../../../../python/chronos/colab-notebook/howto/how-to-create-forecaster.ipynb"
-}
\ No newline at end of file
diff --git a/docs/readthedocs/source/doc/Chronos/Howto/how_to_evaluate_a_forecaster.nblink b/docs/readthedocs/source/doc/Chronos/Howto/how_to_evaluate_a_forecaster.nblink
deleted file mode 100644
index 917ed6ec..00000000
--- a/docs/readthedocs/source/doc/Chronos/Howto/how_to_evaluate_a_forecaster.nblink
+++ /dev/null
@@ -1,3 +0,0 @@
-{
- "path": "../../../../../../python/chronos/colab-notebook/howto/how_to_evaluate_a_forecaster.ipynb"
-}
\ No newline at end of file
diff --git a/docs/readthedocs/source/doc/Chronos/Howto/how_to_export_data_processing_pipeline_to_torchscript.nblink b/docs/readthedocs/source/doc/Chronos/Howto/how_to_export_data_processing_pipeline_to_torchscript.nblink
deleted file mode 100644
index eadc2331..00000000
--- a/docs/readthedocs/source/doc/Chronos/Howto/how_to_export_data_processing_pipeline_to_torchscript.nblink
+++ /dev/null
@@ -1,3 +0,0 @@
-{
- "path": "../../../../../../python/chronos/colab-notebook/howto/how_to_export_data_processing_pipeline_to_torchscript.ipynb"
-}
\ No newline at end of file
diff --git a/docs/readthedocs/source/doc/Chronos/Howto/how_to_export_onnx_files.nblink b/docs/readthedocs/source/doc/Chronos/Howto/how_to_export_onnx_files.nblink
deleted file mode 100644
index 744723e3..00000000
--- a/docs/readthedocs/source/doc/Chronos/Howto/how_to_export_onnx_files.nblink
+++ /dev/null
@@ -1,3 +0,0 @@
-{
- "path": "../../../../../../python/chronos/colab-notebook/howto/how_to_export_onnx_files.ipynb"
-}
\ No newline at end of file
diff --git a/docs/readthedocs/source/doc/Chronos/Howto/how_to_export_openvino_files.nblink b/docs/readthedocs/source/doc/Chronos/Howto/how_to_export_openvino_files.nblink
deleted file mode 100644
index a139b146..00000000
--- a/docs/readthedocs/source/doc/Chronos/Howto/how_to_export_openvino_files.nblink
+++ /dev/null
@@ -1,3 +0,0 @@
-{
- "path": "../../../../../../python/chronos/colab-notebook/howto/how_to_export_openvino_files.ipynb"
-}
\ No newline at end of file
diff --git a/docs/readthedocs/source/doc/Chronos/Howto/how_to_export_torchscript_files.nblink b/docs/readthedocs/source/doc/Chronos/Howto/how_to_export_torchscript_files.nblink
deleted file mode 100644
index a4deeb6e..00000000
--- a/docs/readthedocs/source/doc/Chronos/Howto/how_to_export_torchscript_files.nblink
+++ /dev/null
@@ -1,3 +0,0 @@
-{
- "path": "../../../../../../python/chronos/colab-notebook/howto/how_to_export_torchscript_files.ipynb"
-}
\ No newline at end of file
diff --git a/docs/readthedocs/source/doc/Chronos/Howto/how_to_generate_confidence_interval_for_prediction.nblink b/docs/readthedocs/source/doc/Chronos/Howto/how_to_generate_confidence_interval_for_prediction.nblink
deleted file mode 100644
index 21a5df68..00000000
--- a/docs/readthedocs/source/doc/Chronos/Howto/how_to_generate_confidence_interval_for_prediction.nblink
+++ /dev/null
@@ -1,3 +0,0 @@
-{
- "path": "../../../../../../python/chronos/colab-notebook/howto/how_to_generate_confidence_interval_for_prediction.ipynb"
-}
\ No newline at end of file
diff --git a/docs/readthedocs/source/doc/Chronos/Howto/how_to_optimize_a_forecaster.nblink b/docs/readthedocs/source/doc/Chronos/Howto/how_to_optimize_a_forecaster.nblink
deleted file mode 100644
index 7785f137..00000000
--- a/docs/readthedocs/source/doc/Chronos/Howto/how_to_optimize_a_forecaster.nblink
+++ /dev/null
@@ -1,3 +0,0 @@
-{
- "path": "../../../../../../python/chronos/colab-notebook/howto/how_to_optimize_a_forecaster.ipynb"
-}
\ No newline at end of file
diff --git a/docs/readthedocs/source/doc/Chronos/Howto/how_to_preprocess_my_data.nblink b/docs/readthedocs/source/doc/Chronos/Howto/how_to_preprocess_my_data.nblink
deleted file mode 100644
index 6a0cef76..00000000
--- a/docs/readthedocs/source/doc/Chronos/Howto/how_to_preprocess_my_data.nblink
+++ /dev/null
@@ -1,3 +0,0 @@
-{
- "path": "../../../../../../python/chronos/colab-notebook/howto/how_to_preprocess_my_data.ipynb"
-}
\ No newline at end of file
diff --git a/docs/readthedocs/source/doc/Chronos/Howto/how_to_process_data_in_production_environment.nblink b/docs/readthedocs/source/doc/Chronos/Howto/how_to_process_data_in_production_environment.nblink
deleted file mode 100644
index 50c5564c..00000000
--- a/docs/readthedocs/source/doc/Chronos/Howto/how_to_process_data_in_production_environment.nblink
+++ /dev/null
@@ -1,3 +0,0 @@
-{
- "path": "../../../../../../python/chronos/colab-notebook/howto/how_to_process_data_in_production_environment.ipynb"
-}
\ No newline at end of file
diff --git a/docs/readthedocs/source/doc/Chronos/Howto/how_to_save_and_load_forecaster.nblink b/docs/readthedocs/source/doc/Chronos/Howto/how_to_save_and_load_forecaster.nblink
deleted file mode 100644
index cf0b97af..00000000
--- a/docs/readthedocs/source/doc/Chronos/Howto/how_to_save_and_load_forecaster.nblink
+++ /dev/null
@@ -1,3 +0,0 @@
-{
- "path": "../../../../../../python/chronos/colab-notebook/howto/how_to_save_and_load_forecaster.ipynb"
-}
\ No newline at end of file
diff --git a/docs/readthedocs/source/doc/Chronos/Howto/how_to_speedup_inference_of_forecaster_through_ONNXRuntime.nblink b/docs/readthedocs/source/doc/Chronos/Howto/how_to_speedup_inference_of_forecaster_through_ONNXRuntime.nblink
deleted file mode 100644
index 3c6a4e9c..00000000
--- a/docs/readthedocs/source/doc/Chronos/Howto/how_to_speedup_inference_of_forecaster_through_ONNXRuntime.nblink
+++ /dev/null
@@ -1,3 +0,0 @@
-{
- "path": "../../../../../../python/chronos/colab-notebook/howto/how_to_speedup_inference_of_forecaster_through_ONNXRuntime.ipynb"
-}
\ No newline at end of file
diff --git a/docs/readthedocs/source/doc/Chronos/Howto/how_to_speedup_inference_of_forecaster_through_OpenVINO.nblink b/docs/readthedocs/source/doc/Chronos/Howto/how_to_speedup_inference_of_forecaster_through_OpenVINO.nblink
deleted file mode 100644
index 32cb876c..00000000
--- a/docs/readthedocs/source/doc/Chronos/Howto/how_to_speedup_inference_of_forecaster_through_OpenVINO.nblink
+++ /dev/null
@@ -1,3 +0,0 @@
-{
- "path": "../../../../../../python/chronos/colab-notebook/howto/how_to_speedup_inference_of_forecaster_through_OpenVINO.ipynb"
-}
\ No newline at end of file
diff --git a/docs/readthedocs/source/doc/Chronos/Howto/how_to_train_forecaster_on_one_node.nblink b/docs/readthedocs/source/doc/Chronos/Howto/how_to_train_forecaster_on_one_node.nblink
deleted file mode 100644
index cf39d394..00000000
--- a/docs/readthedocs/source/doc/Chronos/Howto/how_to_train_forecaster_on_one_node.nblink
+++ /dev/null
@@ -1,3 +0,0 @@
-{
- "path": "../../../../../../python/chronos/colab-notebook/howto/how_to_train_forecaster_on_one_node.ipynb"
-}
\ No newline at end of file
diff --git a/docs/readthedocs/source/doc/Chronos/Howto/how_to_tune_forecaster_model.nblink b/docs/readthedocs/source/doc/Chronos/Howto/how_to_tune_forecaster_model.nblink
deleted file mode 100644
index 10d6ab10..00000000
--- a/docs/readthedocs/source/doc/Chronos/Howto/how_to_tune_forecaster_model.nblink
+++ /dev/null
@@ -1,3 +0,0 @@
-{
- "path": "../../../../../../python/chronos/colab-notebook/howto/how_to_tune_forecaster_model.ipynb"
-}
\ No newline at end of file
diff --git a/docs/readthedocs/source/doc/Chronos/Howto/how_to_use_benchmark_tool.md b/docs/readthedocs/source/doc/Chronos/Howto/how_to_use_benchmark_tool.md
deleted file mode 100644
index 87032714..00000000
--- a/docs/readthedocs/source/doc/Chronos/Howto/how_to_use_benchmark_tool.md
+++ /dev/null
@@ -1,174 +0,0 @@
-# Use Chronos benchmark tool
-This page demonstrates how to use the Chronos benchmark tool to benchmark forecasting performance on your platform.
-
-## Basic Usage
-The benchmark tool is installed automatically when `bigdl-chronos` is installed. It reports performance information (currently for forecasting only) on your own machine.
-
-Run the benchmark tool with default options using the following command:
-```bash
-benchmark-chronos -l 96 -o 720
-```
-```eval_rst
-.. note::
- **Required Options**:
-
-    ``-l/--lookback`` and ``-o/--horizon`` are required options for the Chronos benchmark tool. Use ``-l/--lookback`` to specify the number of history time steps and ``-o/--horizon`` to specify the number of output time steps. For more details, please refer to `here `_.
-```
-By default, the tool will load the `tsinghua_electricity` dataset and train a `TCNForecaster` with the given lookback and horizon parameters under the `PyTorch` framework. As it loads, it prints information about hardware, environment variables and benchmark parameters. When benchmarking is completed, it reports the average throughput during the training process. Users may be able to improve forecasting performance by following the suggested changes to Nano environment variables.
-
-Besides the default usage, more execution parameters can be set to obtain more benchmark results. Read on to learn more about the configuration options available in the Chronos benchmark tool.
-
-## Configuration Options
-The benchmark tool provides various options for configuring execution parameters. Some key configuration options are introduced in this part and a list of all options is given in [**Advanced Options**](#advanced-options).
-
-### Model
-The tool provides several built-in time series forecasting models, including TCN, LSTM, Seq2Seq, NBeats and Autoformer. To specify which model to use, run the benchmark tool with `-m/--model`. If not specified, TCN is used as the default.
-```bash
-benchmark-chronos -m lstm -l 96 -o 720
-```
-
-### Stage
-For a model, the training and inference stages are of most concern. By setting the `-s/--stage` parameter, users can obtain the throughput during training (`-s train`), the accuracy after training (`-s accuracy`), the throughput during inference (`-s throughput`) and the latency of inference (`-s latency`). If not specified, train is used as the default.
-```bash
-benchmark-chronos -s latency -l 96 -o 720
-```
-```eval_rst
-.. note::
- **More About Accuracy Results**:
-
-    After setting ``-s accuracy``, the tool will load the dataset and split it into train, validation and test sets with a ratio of 7:1:2. The validation loss is monitored during training epochs, and the checkpoint of the epoch with the smallest loss is loaded after training. The trained forecaster is then evaluated with the metrics specified by ``--metrics``.
-```
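The 7:1:2 split can be made concrete with a minimal sketch (plain Python, not the tool's actual implementation; names are illustrative):

```python
def train_val_test_split(n, ratios=(7, 1, 2)):
    """Split n consecutive samples into train/validation/test index
    ranges with the given ratio (the test set absorbs any rounding
    remainder)."""
    total = sum(ratios)
    train_end = n * ratios[0] // total
    val_end = train_end + n * ratios[1] // total
    return range(0, train_end), range(train_end, val_end), range(val_end, n)

train, val, test = train_val_test_split(1000)
print(len(train), len(val), len(test))  # 700 100 200
```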
-
-### Dataset
-Several built-in datasets can be chosen, including nyc_taxi and tsinghua_electricity. If you have a poor Internet connection and find it hard to download a dataset, run the benchmark tool with `-d synthetic_dataset` to use a synthetic dataset. The default is tsinghua_electricity if the `-d/--dataset` parameter is not specified.
-```bash
-benchmark-chronos -d nyc_taxi -l 96 -o 720
-```
-```eval_rst
-.. note::
- **Download tsinghua_electricity Dataset**:
-
-    The tsinghua_electricity dataset does not support automatic downloading. Users can download it manually from `here `_ to the path "~/.chronos/dataset/".
-```
-
-### Framework
-PyTorch and TensorFlow are both supported and can be specified by setting `-f torch` or `-f tensorflow`. The default framework is PyTorch.
-```bash
-benchmark-chronos -f tensorflow -l 96 -o 720
-```
-```eval_rst
-.. note::
- NBeats and Autoformer do not support the TensorFlow backend yet.
-```
-
-### Core number
-By default, the benchmark tool runs on all physical cores. Users can explicitly specify the number of cores through the `-c/--cores` parameter.
-```bash
-benchmark-chronos -c 4 -l 96 -o 720
-```
-
-### Lookback
-Forecasting aims at predicting the future using knowledge from the history. The required option `-l/--lookback` corresponds to the length of historical data along the time dimension.
-```bash
-benchmark-chronos -l 96 -o 720
-```
-
-### Horizon
-Forecasting aims at predicting the future using knowledge from the history. The required option `-o/--horizon` corresponds to the length of predicted data along the time dimension.
-```bash
-benchmark-chronos -l 96 -o 720
-```
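Together, lookback and horizon define how a series is sliced into (history, future) pairs. A minimal sketch with a hypothetical helper, not part of the tool itself:

```python
# Hypothetical helper: slice a series into (history, future) sample pairs,
# where each history has `lookback` steps and each future has `horizon` steps.
def make_samples(series, lookback, horizon):
    return [
        (series[i:i + lookback], series[i + lookback:i + lookback + horizon])
        for i in range(len(series) - lookback - horizon + 1)
    ]

samples = make_samples(list(range(10)), lookback=4, horizon=2)
print(len(samples))   # 5
print(samples[0])     # ([0, 1, 2, 3], [4, 5])
```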
-
-## Advanced Options
-When `-s/--stage accuracy` is set, users can further specify evaluation metrics through `--metrics`, which defaults to mse and mae.
-```bash
-benchmark-chronos --stage accuracy --metrics mse rmse -l 96 -o 720
-```
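The two default metrics are simple averages over the prediction horizon. A minimal sketch of their definitions (illustrative, not the tool's implementation):

```python
# Minimal sketch of the default metrics: mean squared error (mse) and
# mean absolute error (mae) between true and predicted values.
def mse(y_true, y_pred):
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def mae(y_true, y_pred):
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

print(mse([1.0, 2.0, 3.0], [1.0, 2.0, 5.0]))  # 1.3333333333333333
print(mae([1.0, 2.0, 3.0], [1.0, 2.0, 5.0]))  # 0.6666666666666666
```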
-
-To improve model accuracy, the tool provides a normalization trick to alleviate distribution shift. Once `--normalization` is enabled, the normalization trick will be applied to the forecaster.
-```bash
-benchmark-chronos --stage accuracy --normalization -l 96 -o 720
-```
-```eval_rst
-.. note::
- Only TCNForecaster supports normalization trick now.
-```
-
-Besides, the number of processes and epochs can be set by `--training_processes` and `--training_epochs`. Users can also tune the batch size during training and inference through `--training_batchsize` and `--inference_batchsize` respectively.
-```bash
-benchmark-chronos --training_processes 2 --training_epochs 3 --training_batchsize 32 --inference_batchsize 128 -l 96 -o 720
-```
-
-To speed up inference, accelerators like ONNXRuntime and OpenVINO are often used. To benchmark inference performance with or without an accelerator, run the tool with `--inference_framework` to specify no accelerator (`--inference_framework torch`), ONNXRuntime (`--inference_framework onnx`), OpenVINO (`--inference_framework openvino`) or jit (`--inference_framework jit`).
-```bash
-benchmark-chronos --inference_framework onnx -l 96 -o 720
-```
-
-When the benchmark tool is run with `--ipex` enabled, intel-extension-for-pytorch will be used as the accelerator for the trainer.
-
-To predict with a quantized model, run the benchmark tool with `--quantize` enabled; the quantization framework can be specified by `--quantize_type`. The parameter `--quantize_type` needs to be set to pytorch_ipex when users want to use pytorch_ipex as the quantization type. Otherwise, the default quantization type is selected according to `--inference_framework`: pytorch_fx when PyTorch is the inference framework, onnxrt_qlinearops when ONNXRuntime is chosen, and openvino when OpenVINO is chosen.
-```bash
-benchmark-chronos --ipex --quantize --quantize_type pytorch_ipex -l 96 -o 720
-```
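The default-selection rule above can be summarized as a small lookup. This is a hypothetical sketch based on the option list, not the tool's source code:

```python
# Hypothetical sketch: resolve the default quantize type from
# --inference_framework when --quantize_type is not given explicitly.
DEFAULT_QUANTIZE_TYPE = {
    "torch": "pytorch_fx",
    "onnx": "onnxrt_qlinearops",
    "openvino": "openvino",
}

def resolve_quantize_type(inference_framework, quantize_type=None):
    if quantize_type is not None:  # e.g. --quantize_type pytorch_ipex
        return quantize_type
    return DEFAULT_QUANTIZE_TYPE[inference_framework]

print(resolve_quantize_type("onnx"))                   # onnxrt_qlinearops
print(resolve_quantize_type("torch", "pytorch_ipex"))  # pytorch_ipex
```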
-
-
-Moreover, to benchmark the inference performance of a trained model, run the benchmark tool with `--ckpt` to specify the checkpoint path of the model. By default, the model for inference will be trained first according to the input parameters.
-
-Running the benchmark tool with `-h/--help` yields the following usage message, which contains all configuration options:
-```bash
-benchmark-chronos -h
-```
-```eval_rst
-.. code-block:: python
-
- usage: benchmark-chronos [-h] [-m] [-s] [-d] [-f] [-c] -l lookback -o horizon
- [--training_processes] [--training_batchsize]
- [--training_epochs] [--inference_batchsize]
- [--quantize] [--inference_framework [...]] [--ipex]
- [--quantize_type] [--ckpt] [--metrics [...]]
- [--normalization]
-
- Benchmarking Parameters
-
- optional arguments:
- -h, --help show this help message and exit
- -m, --model model name, choose from
- tcn/lstm/seq2seq/nbeats/autoformer, default to "tcn".
- -s, --stage stage name, choose from
- train/latency/throughput/accuracy, default to "train".
- -d, --dataset dataset name, choose from
- nyc_taxi/tsinghua_electricity/synthetic_dataset,
- default to "tsinghua_electricity".
- -f, --framework framework name, choose from torch/tensorflow, default
- to "torch".
- -c, --cores core number, default to all physical cores.
- -l lookback, --lookback lookback
- required, the history time steps (i.e. lookback).
- -o horizon, --horizon horizon
- required, the output time steps (i.e. horizon).
- --training_processes
- number of processes when training, default to 1.
- --training_batchsize
- batch size when training, default to 32.
- --training_epochs number of epochs when training, default to 1.
- --inference_batchsize
- batch size when infering, default to 1.
- --quantize if use the quantized model to predict, default to
- False.
- --inference_framework [ ...]
- predict without/with accelerator, choose from
- torch/onnx/openvino/jit, default to "torch" (i.e. predict
- without accelerator).
- --ipex if use ipex as accelerator for trainer, default to
- False.
- --quantize_type quantize framework, choose from
- pytorch_fx/pytorch_ipex/onnxrt_qlinearops/openvino,
- default to "pytorch_fx".
- --ckpt checkpoint path of a trained model, e.g.
- "checkpoints/tcn", default to "checkpoints/tcn".
- --metrics [ ...] evaluation metrics of a trained model, e.g.
- "mse"/"mae", default to "mse, mae".
- --normalization if to use normalization trick to alleviate
- distribution shift.
-```
-
diff --git a/docs/readthedocs/source/doc/Chronos/Howto/how_to_use_built-in_datasets.nblink b/docs/readthedocs/source/doc/Chronos/Howto/how_to_use_built-in_datasets.nblink
deleted file mode 100755
index cf1456b0..00000000
--- a/docs/readthedocs/source/doc/Chronos/Howto/how_to_use_built-in_datasets.nblink
+++ /dev/null
@@ -1,3 +0,0 @@
-{
- "path": "../../../../../../python/chronos/colab-notebook/howto/how_to_use_built-in_datasets.ipynb"
-}
\ No newline at end of file
diff --git a/docs/readthedocs/source/doc/Chronos/Howto/how_to_use_forecaster_to_predict_future_data.nblink b/docs/readthedocs/source/doc/Chronos/Howto/how_to_use_forecaster_to_predict_future_data.nblink
deleted file mode 100644
index 486ca63b..00000000
--- a/docs/readthedocs/source/doc/Chronos/Howto/how_to_use_forecaster_to_predict_future_data.nblink
+++ /dev/null
@@ -1,3 +0,0 @@
-{
- "path": "../../../../../../python/chronos/colab-notebook/howto/how_to_use_forecaster_to_predict_future_data.ipynb"
-}
\ No newline at end of file
diff --git a/docs/readthedocs/source/doc/Chronos/Howto/index.rst b/docs/readthedocs/source/doc/Chronos/Howto/index.rst
deleted file mode 100644
index f93a6941..00000000
--- a/docs/readthedocs/source/doc/Chronos/Howto/index.rst
+++ /dev/null
@@ -1,52 +0,0 @@
-Chronos How-to Guides
-=========================
-How-to guides are bite-sized, executable examples that users can consult when they encounter a specific topic during usage.
-
-Installation
--------------------------
-
-* `Install Chronos on Windows `__
-* `Use Chronos in a container (docker) `__
-
-Data Processing
--------------------------
-* `Preprocess my data `__
-* `Built-in dataset `__
-
-
-Forecasting
--------------------------
-
-Develop a forecaster
-~~~~~~~~~~~~~~~~~~~~~~~~~
-* `Choose a forecaster algorithm `__
-* `Create a forecaster `__
-* `Train forecaster on single node `__
-* `Tune forecaster on single node `__
-* `Evaluate a forecaster `__
-* `Use forecaster to predict future data `__
-* `Generate confidence interval for prediction `__
-
-Speed up a forecaster
-~~~~~~~~~~~~~~~~~~~~~~~~~
-* `Speed up inference of forecaster through ONNXRuntime `__
-* `Speed up inference of forecaster through OpenVINO `__
-* `Optimize a forecaster by searching the best accelerate method `__
-
-Persist a forecaster
-~~~~~~~~~~~~~~~~~~~~~~~~~
-* `Save and load a forecaster `__
-* `Export the ONNX model files to disk `__
-* `Export the OpenVINO model files to disk `__
-* `Export the TorchScript model files to disk `__
-
-Benchmark a forecaster
-~~~~~~~~~~~~~~~~~~~~~~~~~
-* `Use Chronos benchmark tool `__
-
-Deploy a forecaster
-~~~~~~~~~~~~~~~~~~~~~~~~~
-* `A whole workflow in production environment after my forecaster is developed `__
-* `Export data processing pipeline to torchscript for further deployment without Python environment `__
diff --git a/docs/readthedocs/source/doc/Chronos/Howto/windows_guide.md b/docs/readthedocs/source/doc/Chronos/Howto/windows_guide.md
deleted file mode 100644
index 400de173..00000000
--- a/docs/readthedocs/source/doc/Chronos/Howto/windows_guide.md
+++ /dev/null
@@ -1,91 +0,0 @@
-# Install Chronos on Windows
-
-There are 2 ways to install Chronos on Windows: using WSL2 or on native Windows. With WSL2, all the features of Chronos are available, while on native Windows there are currently some limitations.
-
-## Install using WSL2
-### Step 1: Install WSL2
-
-Follow [BigDL Windows User guide](../../UserGuide/win.md) to install WSL2.
-
-
-### Step 2: Install Chronos
-
-Follow the [Chronos Installation guide](../Overview/chronos.md#install) to install Chronos.
-
-## Install on native Windows
-
-### Step 1: Install conda
-
-We recommend using conda to manage the Chronos python environment. For more information on installing conda on Windows, you can refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
-
-Once conda is successfully installed, open the Anaconda Powershell Prompt and create a conda environment using the following command:
-
-```
-# create a conda environment for chronos
-conda create -n my_env python=3.7 setuptools=58.0.4 # you could change my_env to any name you want
-```
-
-### Step 2: Install Chronos from PyPI
-You can simply install Chronos from PyPI using the following command:
-
-```
-# activate your conda environment
-conda activate my_env
-
-# install Chronos nightly build version (2.1.0 stable release is not supported on native Windows)
-pip install --pre --upgrade bigdl-chronos[pytorch]
-```
-
-You can use the [install panel](https://bigdl.readthedocs.io/en/latest/doc/Chronos/Overview/install.html#install-using-conda) to select the proper install options based on your need, but there are some limitations now:
-
-- `bigdl-chronos[distributed]` is not supported.
-
-- `intel_extension_for_pytorch (ipex)` is unavailable for Windows now, so the related feature is not supported.
-
-### Known Issues on Native Windows
-
-#### Fail to Install Neural-compressor via pip
-
-**Problem description**
-
-Installing neural-compressor via pip may get stuck when installing pycocotools.
-
-**Solution**
-
-Install pycocotools using conda:
-
-`conda install pycocotools -c esri`
-
-Then neural-compressor can be successfully installed using pip. We recommend installing neural-compressor 1.13.1 or higher:
-
-`pip install neural-compressor==1.13.1`
-
-#### RuntimeError during Quantization
-
-**Problem description**
-
-Calling `forecaster.quantize()` without specifying the `metric` parameter (e.g. `forecaster.quantize(train_data)`) will raise a runtime error. This may happen when the neural-compressor version is lower than `1.13.1`:
-
-> [ERROR] Unexpected exception AssertionError('please use start() before end()') happened during tuning.
->
-> RuntimeError: Found no quantized model satisfying accuracy criterion.
-
-**Solution**
-
-Upgrade neural-compressor to 1.13.1 or higher.
-
-`pip install neural-compressor==1.13.1`
-
-#### RuntimeError during forecaster.fit
-
-**Problem description**
-
-`ProphetForecaster.fit` and `ProphetModel.fit_eval` may raise runtime error on native Windows.
-
-> RuntimeError: Error during optimization!
->
-> [ERROR] Chain [1] error: terminated by signal 3221225657
-
-According to our tests, this issue only arises on some machines or environments. You can check it by running `ProphetForecaster.fit` and `ProphetModel.fit_eval` on your own machine or environment.
-
-There is a similar [issue](https://github.com/facebook/prophet/issues/2227) in the prophet repo; we will stay tuned for its progress.
\ No newline at end of file
diff --git a/docs/readthedocs/source/doc/Chronos/Image/aiops-workflow.png b/docs/readthedocs/source/doc/Chronos/Image/aiops-workflow.png
deleted file mode 100644
index ee1589b13536feaf53cfaaa6ec11f83698391240..0000000000000000000000000000000000000000
GIT binary patch
literal 0
HcmV?d00001
zRbB%S=>H-0&0U&5r)v4*j_`%7D|ZtVvo422j`Y7c6Y#hRgK;DR{>NUaR9XG}0}!-BO_ihEzr~jBR
zYMqaLNn|YPS>_?!5fIL~y1obMPp!BMvIzx7+;-R#>tOV7Nzi!x#x>V=l!*`@-|Pw_
z(x|9bv*nd?_6sH@#EE)pP3nga4uX)`x3R{**f;lKWps_-mMf+y!pExnx&R@R)*eS8
z;`h~Uy)Wsb=t@kP(2v+qlS7b3uTiirUZwxgR(tPk^zLqVq=+pktsPrl-w?lgoKA>`9|16mmRjS-WSQ*kPn=PJe
zoJK8gI%=?zp5Z;lnZA8bGbdoUkUeRtk({N2PQLe2HUZ_*8~j^DSWC*|=w$l-#&lDo
ze}FF}y{tPJG3@54nR=0GDSsF6A*VxOdNf;H-*Plt?7q)#H`c*{1N_*|eQCOiX{VRU
z+CuESJMSvM2_ju#%Upp2mB1domqRpqFK8?_uQxe07nLDlWnwguFjv^woT5z4`uT>X
zZSJLeekUDo9Tty+xd{VoDFY#q-+Jx@_haGWy!K*!
ztZXSxiSYqQPlegj+}@`e-Vpl&In?g?xTiHq+bV0`+@bT5p}l#EOjqC(^@mc$(&@_X
zd(X;#nn><2v@`q%x>DyKbmeVdGmvAo$Z{=5Z&;;Tcs-t(dI>=?H%-hAv3FB9ieLUc
zPhEe{Q##iiwx%HI&5KKsoE+7Zh<(!^SW@j3wF@V%#}L&LbcU?FfYZ5rka%=$rfL4Qy>$9$b9j
zwxXqLXlQ5Rx1#`!BN7oImJ)jJVxDuQxZ+8}l6ccnM^DiO{sC3}FvuG>9uOG$!vw$a
zdNXc79soJH+}MK4PXVqT7c%4QLEPtj$Kqt{pVdk_>Y~p9^N%xx@ms^9{o~oC28|US
zZyDgN6e4s=`nM0vW%G|QLcZ?vkUJZ7t1Q-`JoDgqqdU6#qC#~_>czQ`6T!k{#rn}M
ziwr9h=M{|;EqMhc>AJo5EY)Y!OAOWdvw4X}R_n3l+s+%rq&BQD{$OmH@M>>T2%C&8
z?+NY~vkk?RPp-vARI7Ig3@(4wtw{V!&YXHZeYN8Uwm-|wpz0_4wQN6xA=}xOve%tD
zjx8;cfMzBe`i6?0g|%oUnbJ06DW~Gt^=xNYk6v1Z)a}PiR2=L@97TI_LQ3IJ-sw5R
zp2&>BSX6UVkJlT{PsZK!Q_zLs4stZDPmo8?AhjKW{#Q*JQPlM>HPWvt{Y5lc+2Ph2
zs1*_Ln`l*WJG4~6H4F_4!=wn|R)77x+h%2P(?RUoAty=B#`fN^hfVVvb+mOV&0(MG
z!A#g>PWcl0L6e3U8Lv$ej8wVRJlDNY&NNy%d-**BM349K80^y8Z@rXO*jliAyKmCuX#&<39X|II1$z7+X$$o2eqV2HKoM*OUBi6;J@Pnr#RZ*-2#cVG;X
zcD`H{bu`|$TR#jZRpg};`66|N`YLoe^h#S=S`OR!^6ib|sDnAceb$`h>1X7GB70>4
za%sZxaNQ?YdJgxZd!dLYLWYM2QwZTgPFjZ32x)jjq83@yX5*
z(9iH=kah@Ty7&_CHC0}J-1^uz#wesZxmsaXNnxOYU#L-E6SteV6p9tdMFoAB(Q{+GHj$
zJg7yBFK=K)8gx9GefFEdJ_5{&YNCD%e1rycQOnV}AM*O0g9j}ADP&c>xLmS;{X=wT
z-+XWL@@8_viB8m^eN~g*@83$Ed=X9Ax!$YSTE#c_0PHYcE@(->Q@Z*^!I#L9Eavzb
zyUf1BXmVJy>d}eu(DreWP_(4amGwa{nhP@CyZSCCqT~tXqq8=RFZfo^WQoFW9QM|i
zMy_#ra}^basZ$U1*BcDn%IYI40|$|R4av`$F!!fjNJx7mECY2f+>NsY+BXqV2d+!EqC4_#7ym6n#b^ubRX%62+bV|m;0Wv7{mL!vQ<;5u3q-Q(slft}J*
zdPARlJN|QW>j*=}7#^LkwePQ-!`t&rcd(w3a;n
zevZqu?u1bK7tfhXgn@E6Kw4)S@%UIQSHD-C)mM6LhW8_79Xgd0nl*IcLfvH{IwdsD
zc~XnZ4I!OS)ElQ&f)REPG~A7}hkk@o11Ah2?ggBs`jtzTsJBjH?T
zu4JciAg|l5c14;_es0z%c-cb}N+<&*cq-b(0RE+GtvQggkQ3
z>wOU-3oZOpX#X~v$x#GEzD|I1JiR|@MS)Km$)Xa>9zO-`9;t{q8wR4+T9Eu_`89OQ
z7UR;F6Ko}ohQ$qe?nAQNLl&laK+>$$ILO;sja-21o08g^Ty=PoW$q2AmGy)Ep}S?h
ze~8l7br8WKAM_d4m;gv&>@HJ#7Qm#?hAiIb@}Z^q2B;p-@Q
zP`yt->TBK7ZWjGfU{*=khXH33?BF&+D(p1Fp-L-z_ol_S`{nnRljgRKZNjRd!
zw;Bf_jop)QAqK!?TMSDm+McL6mOeJlV(HOl7mJ!#44`J&iSsBD)=a7Kulon!*G(VI
z);H}4(-3O|{9#^Qe@>sy0?IA8$$niO1gPKj1W>+*wgEz>8RX>Xl+}u?70^N7BbVWrHv=<-FeB%g(3F!q7j3NzB0-eS;d}-P3jw5pFtYx%|T7e1Pssg4Ard
z$fd0N=>Y_9z~&yz&z}KPPzp@YZ&5e?HpTfZ!6c5-V561ux4smo~rRA0f0*52Sg
zPE2an)ZlpgJeaH1Osv7*74@v*yU8O=OC3b<3~Q;=i;D)#)dW?X@sscq>q`gupD#x6
zGwbyG7tsXfLlWO^g)BYLaNFqWc{qRB6ZODYnO(cZe`h|`uKD&0s0|f4aGHY$Vu3oS
zAkVYH%5-n~_7ayg5ZNf)Q8&Yr7c^fouYDOAyuPO187O0e3}+732|6C$B%I(UF{eByha}-jwQGSP+%Sv|Bv4kZ_CEqTo#*lcb3NAXTlq8X4_#Eiw;k6*#|t
zLn%d*Hw6h^Ky0JwzZ-wuJ!#-0;j$k^Ir8Y1R=vT#^Fe(Pc!m~fyf5QA{8ITGf_(Ca
zCDd&@s7KpAFWDq-*QJziARZpo`+(-qI{K+a_gljcm^SGxpb+wOjdrkGS_bmaWmziM
zjfPo`mQw2m-YTK}*sx;>>GmoWn42k{Rx;FwK~kF67G-g~H#E|(D{2&$f1gLqqJ?iF
z%&paYZ8qcKIAe~B;(bu>OH9zYFWW*K`q*mch{wIo}$iX1w0|aaS+n
zO!IUi`SzkG+(^
z4a^We64Mbh&86uD%XWrsEDiw2dLj^U?smCDPBoZFr{9@Q$l^@Of>Xp+#bhjnI82lq
z2OLUlyfu;Qf4*V5e`D2s-va3n{$we2a9QDSO0W3IT})nSsm(pYLvH<2x4fZ_I?;~8
zs}KUW@@^hwlozn48X=u4UoTZ>#1a%^pWNBondBIqwYYl1^J#YD=!BYKM!V%AVmNNq
z)7@@MNeovi;$8HyxF_5E8Y3S#b3$MHx7P3XU0uY^I+;#68BVYIPdV)__4zJXxK8Pf
z4)~Uygm0RD+`hqTGJc-rI*-AEYP5}6o|t`?p1v{Su&-zXu_g3S4_^#%G}HaEZ5~j
zj2!+NM0H)avn3ncfF^$?{T*rra8WBvQL%}3vsIe@#s)RjkM|@`T(|$;X(0?;oQ2n#
zLw$Z8^bVE{J=os2pk|cI%fl^)_UexJVs*20#!`HvcF{!fBeucK{xa{3@{`vJ;D*z(
z=7aBPmd)znQuY}VOGYj$1~c4Y^cXWMO^`fMqzGVb2E(vQ6Qps6#`&|qkGen2Hq1Gh
zflld5e1n*N$PxTmxKBL)wbt2TpIGumTO&5GY{%WIbtfKqR+ovuJBU^pmbk;vdL!&}H{9V_?9kju1*(zAzI9*?-V>1vj_K_k_
zqued6T>`fdFU+F`Q_*nOLEc_v4Leeza!{qp<6b6v7tD?-bh(*+*eiwq#7z@J5xL@?
zGa1k4qX|ykw}M|CRPh!lnRK1?b5G#hm!xrLV16#ePhgpTAceRbD-6DZcl>cjn0SCcpk=N0%?+O@y1
zA59vq)ng{?pn0=ie=h2C-6Ap;^X7@^CGb8yI?}FG5z*nsGt1X$wM!Y_73nedoJLgZ
z&|%oAp1hwA?&i#fs|LHhnfAwTY|ivFW2#VfIxgbXvO8e-Yvtg9jkYy1r90t`j;LZ@
zZqW(j()<RjgkUd0bKaH&t2;UzT|y
zGWOkl-$^UY-0Y|z7_DR5=nk#=J;ENp16NR?|wCLbVQS-o=GTDYYDl#
zel3R*ZkQ1CS&GVJUG47rSpnT`neMv5ivkte>3{@8xq4T4dO_UN1_@lt^*YXlL3G4e
z#n;r*17TttU7oqxug{g}w-NW($6X7|pB6=WN2%)F@T<)vwu}R_5(yWALaGb2rv(6+
zSEOctF-Jv-^8#L%12%g-^s{rZmBLFvj|M!QC_;OR}ppvh|nz%x3MnZG248L%D3j
znP)ASf-ev+779@27VMX?#yo7D=7t}!=UlM?^FEV~@-rh9cluss@!;5(Y^+(BK3%L(
z?0i%fqx~`9WbbZ1;oG=(#*tgkrtKAgqLs;F*iGg8?ytXSr*oghu!PRLe^Q1C&`&(u0Q)vUX|q?IS6)XzjkM3h}ItDP9of}d8>QI~f
zQqiRk&B*G`lx3kB9Nh2;ID*gcxKe@?WT`Z9?93aTsaDoM9;SQe!#zSNV|Fe0uQSJW
zXE1{>gowfc>x>ikUWi8Cz~{^mtwM7c7sMZ)c&~Fo$}59-ybMTO>H)5)ezq_X%@)n<
zG=JLF8f}(f);tCLWA4S1A68Asy0j=ozR)Vg_(&u9uv2gvRb32b@F9WxAZkxJ*>%MGkVUG
zFBjFzyDB}hU*pO)?|jwx^h7raBts1!(xsmEUwA(DeXTcuP{F;SmPV+m=7XajgE$We
zu$NU!P?{N1rLZv>jr`|I4-+&VbN?Qf#{*n{fWGSHCw7|IfFMwlnUHGQj24+$*%UH;
za9TD*{-5vhVw+_@KI8ZsBC6zLdsSR)j(yj5He?A*=$%#U25UMJsZu|S`ztyX+I&3a
zRH8E$3*1LeWV3P%6>7BdPLQ}xm9exUA>=FD#uh+^+e_Pn+u)5F=*RHkXD3UtZoi?V
z=M!GPumFYr4sJRM{0F@AMRT!+h96{D2p+YRCs6}E{B<+$5@JIV*K
zpS3J|KB85-3j!?3?&Ka7T;Fucw-lVCRfHWXP+@^DCW|jsZZeVcJ>$8!eO$DZss4?9
zI=y-$E}Vp7UN8VJXYz0!{0=H8Zkp4xuL{%Dj~L$4aDT@@a2#=JgCP%bhYr;2sk0U=
z4k(OJeH7OpeIw@(V4kJ$-#!BX7hHHMcGCm4rm)V_+d!^|8k@hCcVO#18ykLFWbwTa
zrAK;^B^{3>#vyuD6S=CjvHEswe&&DNGO3zJS}Pewl{vT4UN%RBh{bt>cHaB+oQ}50
z6l+PdglN+F
zDL@8~|2{hWuz9upDH4YqN!?g0DqjmLhMkXDm1a>T2JXjp4vNK!F5U1wb~~<6SHEQX
zcui|)NGXU2CVR70wMB|IgAbhJl030vmpdoyY-;90bQeMzHPTwt=T1RSChmv-SiD=`lX404i7X}
z>e+s0{Wl2`gw=mf6mdXuEqS_3IXRyA>$Hp8kK7_8?hH9f8IuY!wM5Hy<3N~k$95Z?r5?_nTpdZ1@6eSK{(jIj^3g~4ss3X<
zME8>Dd$jvOE&r2N`2N1hVV51Sac&U_TV?n}%{&5Le4!UvJ8$~n(FMLwT9()!rgkq(?wo^=AH5{|^n(GhwoEN$R!4c_gOH54hw<#1!jPy{mvJwV
zF(;AvLVq>G6sj2a{ezSPrf3t4f~jjro!kzBw+UB()(@`3kSWQui`Q{UPwJxxm?Gx9
zAO*x*|B{>SppH!!)UN9B{il&S6WpQR{PXf3n17=M6
zA(7z?s|+J$aCpHilkE#CtRY_v*CYh(enL~3NDw9yTK0T)Xm|)x^sJ`xjMbVdj8Y5f
zpnK_<0lxnhu22O1XMWr&BWAF|st2T7L2V%e`9wGl!0W-)chX=BqhlcTqZT
zC~5j{vF3^xe?61-fhQG#0C4`em#Vs5QoM`1Y@2T2;t(j%oK
zH`&)sZ%?MXoYbepi=jI6G>+ZypRT80FOG#8;r
zhTEEw(F#oV&c!~^TQvB!c1e~f*k<7_(FzkIiupLT%@n)n^0&zXy95Uit$tS$E$lJ(W`z
zpjJ23q%@2RIP3_U{Pe~p3x4SgCDZ~YmwGrd+sahWNw&!>{fXX`i@E8n59g)GR_C_1{H0yAPe8~6vbp*OVeoKmzLc$G
zsC&tpQ;tN2oWB2_a~`1oO20hLI!y&p(k2q_lVo*{#3EpI
z6B@%U3jqG$1BDF7nY+5*RmPZ1!DLQu7Mf#6xqW5&9m#g)&ZL@6D6aXQA$-NxT*#a+lmZrWsHYf@l$8|e!
z$-*F==+oIT62qFoVyoDAv1(WaxwK2<(dt8xBHPm;?;c!?E1g!9>N6oPyp`uKHOMYc
z2+n!L(QT?fzW3-WdG!AM;|e_$R>>8i5&J1y6O-x(=9Kf0kb)NrDH%j&uoMkFTQM6}
zSEd4b<(ERawozd6V4Y;XjufU4-jy5APvMaN>2wPZefnOf)2)}k2=4RNhCRo-r>a?h
z*^7OK>5{d;=j@cTjy$$=KCZXuFXyV!lB=Yk55p!4Ng__`8q9iXte9hK))*VcJBB>l
zT?0_R#F!CV4mwPXrlK#H8YB&RXJ@8rbp_~!d~sa`H)q*frn4Q0_*AAOU&7&%Mh1gB
zby19rfXZ--GK&;(f(Q*+JZ$6%z_#UPS@iquU+nkjV?UdnaPkV+8rT)bf5?Nk{2rfT
z(M`?y*mD34)}DmsU`$d%>Rg+UoJWP%fb7x2<|stYKHz-OeH>X*;ngu9{a80&-PS}5
zz>Zzg#5Zh5k=iVRKEbHQNT8yY;(_h_QW!Ip2SZRxhYstVD|vh*NuA0s$|xVwcfgyG
zbfH|WAs1u{BchoW??{-4<RS)O9slWG9yF1m*!
z%P7@Yvn?AbQ9(Fr()8PXr|7yNp|un=f>eE0D`k8UQ^^~En9^M|d3G+PGOB?4mwz*x
z1(;MMYKl%Rl()Mn*i|r+b6|^?kX$mIhb9&pnOBfV7)SJq=wWD*s2pOCzH9??F$@z9
zd|sz|7wm-ald2Ufs?8$>^IM~=MGM8*A61)cke1dc`TRFhx`aJSN8Rqe^oZUpM&0j^
z4^x_hdTV=oK-YE0J4Mz%J!8L73tVb?#CI%Hw!?@|r3S(@pa6Y~
z*Hg?F!ch0U%>bOckUx+6OEcg->lm4yw`Fr4afV-=rSUN_QuP&N(59*VW!acgDJ9(+
z>p@0S1kh$
zj{&O|h8VK5OG
zAvFs(cu`=eeoKp&vZg*)R1jVyuU856<}Fvl>IWIvv+(?1N>~9s5?Q!=;nfsd9g91>
z2;$$-fZxIX)j8H#6Ff<%JJV37V@pN`zD<#8bpf_HX);&UyWXFMnt8zL{_q;QBDWVY
zI*?EPmn}Y5ra0LO6fnDx0w(($3&c5^Syeg}xLfs7VIJznGaH%_uf8?Wke)(ty{PBf
zL|$uw?Mt|S`S5o#WHT*UmN`BjH3CCS##Rdn{d~AG_%rRMrD4DNFjGhYNylrvkR^F=
z>!)Bhy6Y1jOpQD0k9D*aB=ydx6}=2ozZ_YmZlDji-0n>Fo_UGfL(8WAIK0SyR
zPW{1m+XqD$wVrdf5wdUs&B9O1F(Vl7->W<2A22#JG{&i+L)q&2S)0Swu3@rtW1aS{
zX(G@GETeiXX8jp|C<>BGr;@!$deHa2^Y?d(Jx`$qQJuWLF#PTWfIjf%%*GZ`cl7TK
zF)6s|$gth)yrV_Dh|X67gb1?P4!6iU_4FB5A%}itYTNJG&71(u{fwSxI3*(hCIuZt
zcgE!6KHgBy?Z`|lCo<(?7T?kTB87iEvLz23$Z>tLT)JY^Go(nj=+w{sQ-65C*t_Qr
zFL{fopW%@XZHy(A$13R7{1(@TvFkc_QwYx9obp_jqK;G_r|IvNwfof}jBe6WilYw|
znEe`ESiOr=T7!gYbzD1GPwgN|mp^g%B-V58H4Mvs*pUy1>=zR*rWK1>xo@uC8mWuw
zEGVur3wbyZ@W}P$WEV5Sjq@Z=9?p8mMpYkKN_7L8WljEjB@z!tuH5lK1(?~SHAVYa
zIp{KxnJIn1F?cy9EmyBZLoE3HDz^JMsXCHdS6$#|6VJ5^rd
zns{Aj{mayW*$F{;E3aXLXB1FSlsbXp-UQLem4zX{+daR2rHEAy?K-onW?BsOo8JiR
z`@sd`mh$Ic6aSu%Kj3_ba^bL^#b;SC(0qZ__gN9vB|fN;+_|PFLbn#Si*5$~85FJk
z%MIUOE{OdCY0sjn=Wzjo#wGflK95sg2B)!ci8IF&@WRn-+b<$Nz19&}`rNcp@L(A47hcQ-6mR>
znM&RQnFr4HaZAhOFl{+|=RJv|{f6r3Uoo+U?gAvaSsJgX;XDP@pIO*vZJm4%=**n@
z`By7@Ym!GlEIb=D`LP;Vmclf(*CBGPH_c;{S-?#O!GkaEMHar#zkFJIQ5}udURBoO
zFM0g>n#Jw#fwg&l%D;s0o^FDvGUJ
z*Ntuj5djrMB!gr@$r6MnYa=;HR*{^PC>aGok)T_eoTGq%XD4X_&-{Z#BG&!04k7K!vI;(#I#OPN=-|>Uh
z17gff<)VrWS#u)%a+g5W^S?_Wi`7Sx7=>#Oyi5XbrG%rMXEKS5Ij#O)TMHvCYbS-O
zg1%Q9_K8ngv6u7RCtYx}W7QA!wg+p<6Cs`Oky58h-LhB9%P0NU3qOxya5RZIYz<^NBv;7oo#Wn`TZuA>E6f)AcURQxq26qE0MjD}G->_mK%<+8d%CQ&hMfD**JjbV7&NTm(+y
zvk%zVv=McC=VZNlc6Q!hr?YyFw|6b%W{xN;t?=y<`s{-M8I^@gM?Z2&mqgPS>6Vca
z<69poAMZ!E^KKT3lr~PV;`1SHM-Ap$?GMRidmiO2_&PV3=eW9)YS>osd6PfdC|-8A
zN5WID{JKtiYfor1hYp~75B&T;fLRiJ)I+)X1tRu$JwLPl&LAq})QWNHAJSR)Ex!4Z
zZ26KBg^U*2&0HC*N^R(e6B#aR!va$5EcQ6ZqWP%rR;i&OdA_i6$u(}2bugc|U<`Gv
z`nw@?+iOu-G<3o@b((HyF~~IuESaENxjv;4ePp%O*)j6hJ^5UH$yWXC%L=Je^2I0B
zXie}#>N>4BtsIKo>xEqNACH#gx+m8A#%b=&0u5U2*zg)rbm?lKrEGiq1B`Sm>Vpd2
zF=gx$NzA!r<*ny1Wt47YKYWfAJ}z6x_RaheD`7L9P*YU$=t+YncJPsRw0nVNDccoPf&yY?&cN{~wrp%DrcynGu795@MfvD+N@-c1;p_&Y+!aC^UQaDZEI3
z*H|HLUkdf1@}`b=s%^q-dCP8l-s9fW9k5<>?%Qtm+nBulQ75a^&?~`UXQOsxb7wZF
z)OFx!yR>RBZ79(PpnqU-htj+fW0EB3uSC>WOOJB|5JVPvyQq4s^OC?j!^vE{j0K=Llh4
zhJj{EN?^UtoOGyn3x6uh_?ZPk0!oh_zL-H+P*8BoW#XgTmQXl3bOMYN*@;{hyIbv=
zeN^9%EB1o$eFqRyjKf==&rg9Ej{5mVUSN%x-k*vrL0(dI<4nl)6t%0sU_^{jA-hlU
z68@kSk<+bUyl?^?+w0t^ytVaKCFTnbF>3gi;e9N9=*w{@_1JF?T?TI}-P`h?f?
z(5jkgvSi;)5GE2LTYwiBYGKLUs%&c14uuw3>>np!EFFeg{g`?mpPkhUD75$M(|Act
zUNgQ%TQ2(w)ClIKeX3USVEZ%xx{qsM5HK@mk6QiS$f&J8xP<=4&RZabGtInl{Jj$U
zTCsk6TYLayPMr$8G7P7%^p6~)8#$N{qQbbVP?e*jFPx94Fdo+X*zX`*ZiMKiS`aGf
zo>YBk(Q8#?vKUxOj6#bdYL|fj_(Zt0$Cp}Kt5!sB1agU6O+LnR`+gahP_KQEGbk>q
z6|Hkx7@Dvd|2SbYsy;h&0`-C4OJ#0fdXjn%2Q6@{Rzyk=j;r;2F>3wJt{{F2k-Q?D
z>s{~p(d!j(e93yY{c6}`|JlV1O7vZVz)i_F$aSkZFq)T{6n2Id4#6eX2XR12mrc0w3V#8-C=xM7KOU%ML*{uHA(76GB)as@VXIjy3qYH
zLswqO-3bdsr?k-=YMZe#&H{EIzCy~AC-)?}kfUeBNq#xrBh&C^OI$gk;LL9qy9?053P*IkS4x7r-nwb
zaCGrJc)2@e8x~t}#<*x(%q34KzG~UTWwAW)nCmT8>TMA?)yIA*MS`L4FF#^J)YruK
zz$-`~L}6$CKf*FZnKl?5CX`#cy?i~6E$kTWje;KZT6+8o_e|cAkw9J5wSG(EaQYFmuw=y`ZR#S(cg3&g%O{`rS{k!~lIGS?F!0x%n|)1UL8e
z_aVS35ge#2JHxM8AbKC&Ef{%kbd;>ydp+0a=;`pzXYFm`qyw)HuJOap`*vpA&K{K_
zXw$h$bA{d<_s#%I(%zzdx*2|g-v}yP@f#irgLnj^Mv`JO^ZA@dB?Yd9?nNW=Pt_+*
zC!zmR^ZT@Oxnz&81;m5oCFmHbXWLeifAbsWOWf}F&SN_88Y18A3fQ?y7-j2w)2{&hWgH*N|iX)W`7u|LSC3jJ$WbtW5_}6uSU42B3TI=#tUFJ&H=|g8O0NbItu=YXp~-^vpCCf9lOy5EAQI!D#0a)C*;g<^1D!6X4B*K5;LtqGx1$upYVb
z{as>(2YlZy+$*48&aU|PK&r?IsV2wtQp%EhA*U>j6ojJjTLjP4F(W^V`~-3tIaL)4
z8N>_WxrDCi=7n67?X4;WsC+5hyI@`g>LYY(AH*S}<>&a|eBsLrLC^zbm=XVsuJC^n
zJMvAi{l7`e|M7F-v#(W9VIYwlur;WDp_9#J*jkHr^}Votc|h
zF}lFLJ<2xWuaDh?W~(c-<51M4nc$RrbuQ{W3KP!B
zH9KJt%l89}s8l5;^lSUC>Tf2;cO2?+BUE9bR}6ol!rvuM)b?rLU3v2RtS|w(Vgacy
zEz=?45B$AqcJh^IEfiI}f(wj3y|*fv&j%hF=0&uu2}6TmI@aMNVeB{q(M)h;gzbmf
z8}LKULYuPlR$>-BCUj8G%qY>*UAqT<8U#ayg^U(pxL{$8w1aq^D+Bybt8&$Q@)Q$N
z^y+EZU}VSRi%Djs60LPkkUVVB6Q+bm~SX`lzybMHJY}%i~H3anyM-Q#uvGmyg*{*;n-ZK&5%}CY
zO=M7NFyK3+I41LB1K<*)OwFe1o*(xUVR#$m<_e{cqa-f{#@%M)?d(jW1uWGc&^yK(
zxzDqdl`h4*FJ#U9CnDH){Wa7#NCtr*`X%B^{5LPPkfzzu#n#AcuIXd8BgM_w!tGBd
zNdcRwa&t2r;dqA!rZI|op`NLU;U#Tq)_w4*ls<;Jt^-G`{n#HGE8BitJGH{HKYh)|
zGJ;#*tQE1-MhpU`-$9j7>+iYl`tjBdV4_*()#BXs6Zp+TMSwDhGEdw}Rj?orI>#)3MO0-VZH8olnsKy)vERpU!13Et0@hqMyB_yf
zSsvRfTh52wyT!pd10i6$MaBU?`>G
zF7}C-9N`Z6N$|3990GCa13$!G-7$w+Q79zAZ3+av;QTpaKAGrmArQ=>7y(Gq)R45x
zhBDaSQ}uzPCge9=9)I7`eM*Xd8gWmrIy+1g2@9E(>KABzs#UseyZ4HXa*aQ~gn~Ds
zZ8cN;x1o}_od%Dr&7DwD7%t}#SUS511TqwgTJQQlr?Z9#C1Y}3L64b1Ec=dcMeOVQ
zygVU(v$hYka#orzAOH9=yqhpHU!cfS(a_G3ht$Ane~{a!Tv-HEjrFX#j4pwB++%qr
z3xs!BWgMePGA}4_G!;ni;uv$VWG<;lOpyIU5{O>=22fj;xxBF`2?Y}4WAgV2(nWDo
zrE}G5i)xRghv}ooTqnJRqOPPWgq4owZ}W~NIIQ;+o<-CUJ;;#F5h33MOAv{dIRh`7
zH7j!pE67X>6xhf#meeHIEznsO0@HFYfXMy3TMTr*89Lh@RVih0BLLaW2wA=(XQQ)U
zamna~*WL@;?aScYynDRUFVKyaI|nw&HSt*?i)>9L@sFLZ9y&S^JAX{Q%!zwB+h7|Q
z>RKLJzO8UfTs9ipvKq%dmIOj`>rK50+-3@L8z%MatS;R^#(zu;cmhvhzKj8yx1e5s
z3wr#WM2T^uv)hQtsE$<{?xbf+4IdGLs@5klbn2Faq6Q6>o70$F1)bmdRV$*d*Mw`r
z@J-q8K;c@140IaK%j&8dq+2oz9Wvg)ro6p02j
z1-J~{t?=F&HaE=Le0QEsXUd+$=b-Pg3Fnk`gR>pgmRP|NZof*s@VMH@MB{JvNgHXv
zhA=d1JYuqmcoRC%-@^c&9~OrHJy?~QJdH-D?@j6@ew3y9vX