Chronos: document regular update (#4241)

* add information changes

* update typos

* add typo changes

* update documents

* update chronos.md

* add updates

* fix rst

* add speed-up md

* fix broken links and typos, add tutorials
This commit is contained in:
Junwei Deng 2022-04-24 15:55:23 +08:00 committed by GitHub
parent 96fa40b4f7
commit ada7b4b978
8 changed files with 277 additions and 85 deletions

View file

@ -21,6 +21,11 @@ View anomaly detection [notebook][AIOps_anomaly_detect_unsupervised] and [AEDete
DBScanDetector uses the DBSCAN clustering algorithm for anomaly detection.
```eval_rst
.. note::
Users may install `scikit-learn-intelex` to accelerate this detector. Chronos will detect whether `scikit-learn-intelex` is installed and use it automatically if it is. For more details, please refer to: https://intel.github.io/scikit-learn-intelex/installation.html
```
View anomaly detection [notebook][AIOps_anomaly_detect_unsupervised] and [DBScanDetector API Doc](../../PythonAPI/Chronos/anomaly_detectors.html#chronos-model-anomaly-dbscan-detector) for more details.
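A minimal usage sketch is shown below. The import path and the `eps`/`min_samples` arguments are assumptions for illustration; please check the API doc above for the exact signature.
```python
import numpy as np
from bigdl.chronos.detector.anomaly import DBScanDetector  # assumed import path

y = np.random.randn(1000)                      # 1-d series to be checked
y[500] += 10.0                                 # inject an obvious outlier
ad = DBScanDetector(eps=0.1, min_samples=6)    # assumed DBSCAN hyper-parameters
ad.fit(y)
anomaly_indexes = ad.anomaly_indexes()         # indexes of the detected anomalies
```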

View file

@ -9,13 +9,12 @@ You can use _Chronos_ to do:
- **Time Series Forecasting** (using [Standalone Forecasters](./forecasting.html#use-standalone-forecaster-pipeline), [Auto Models](./forecasting.html#use-auto-forecasting-model) (with HPO) or [AutoTS](./forecasting.html#use-autots-pipeline) (full AutoML enabled pipelines))
- **Anomaly Detection** (using [Anomaly Detectors](./anomaly_detection.html#anomaly-detection))
- **Synthetic Data Generation** (using [Simulators](./simulation.html#generate-synthetic-data))
Furthermore, Chronos integrates many optimized libraries and best known methods (BKMs) for accuracy and performance improvement.
- **Speed up or tune your customized time-series model** (using TSTrainer and [AutoTS](./forecasting.html#use-autots-pipeline))
---
### **2. Install**
Install `bigdl-chronos` from PyPI. We recommend installing it within a conda virtual environment.
Install `bigdl-chronos` from PyPI. We recommend installing it within a conda virtual environment. To install Conda, please refer to https://docs.conda.io/en/latest/miniconda.html#.
```bash
conda create -n my_env python=3.7
conda activate my_env
@ -27,6 +26,16 @@ You may also install `bigdl-chronos` with target `[all]` to install the addition
pip install bigdl-chronos[all]
# nightly built version
pip install --pre --upgrade bigdl-chronos[all]
# set env variables for your conda environment
source bigdl-nano-init
```
Some dependencies are optional and not included in `bigdl-chronos[all]`. You may install them when you want to use the corresponding functionalities. These include:
```bash
pip install tsfresh==0.17.0
pip install bigdl-nano[tensorflow]
pip install pmdarima==1.8.2
pip install prophet==1.0.1
pip install neural-compressor==1.8.1
```
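If you are unsure which optional dependencies are already present in your environment, a quick (and entirely generic) sanity check is:
```python
# check which optional Chronos dependencies are importable in the current environment
for mod in ("tsfresh", "pmdarima", "prophet", "neural_compressor"):
    try:
        __import__(mod)
        print(f"{mod}: installed")
    except ImportError:
        print(f"{mod}: not installed")
```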
```eval_rst
.. note::
@ -34,6 +43,12 @@ pip install --pre --upgrade bigdl-chronos[all]
Chronos is thoroughly tested on Ubuntu (16.04/18.04/20.04). If you are a Windows user, the most convenient way to use Chronos on a Windows laptop might be WSL2; you may refer to https://docs.microsoft.com/en-us/windows/wsl/setup/environment or simply install an Ubuntu virtual machine.
```
```eval_rst
.. note::
**Supported Python Version**:
Chronos is thoroughly tested on Python 3.6/3.7. Still, it is highly recommended to use Python 3.7.
```
---
### **3. Run**
Various Python programming environments are supported to run a _Chronos_ application.
@ -83,6 +98,7 @@ View [Orca Context](../../Orca/Overview/orca-context.md) for more details. Note
```python
from bigdl.orca import init_orca_context, stop_orca_context
if __name__ == "__main__":
# run in local mode
init_orca_context(cluster_mode="local", cores=4, init_ray_on_spark=True)
# run on K8s cluster
@ -107,8 +123,9 @@ from bigdl.chronos.autots import AutoTSEstimator
from bigdl.orca import init_orca_context, stop_orca_context
from sklearn.preprocessing import StandardScaler
if __name__ == "__main__":
# initialize orca context
init_orca_context(cluster_mode="local", cores=4, memory="8g")
init_orca_context(cluster_mode="local", cores=4, memory="8g", init_ray_on_spark=True)
# load dataset
tsdata_train, tsdata_val, tsdata_test = get_public_dataset(name='nyc_taxi')
@ -124,7 +141,7 @@ autotsest = AutoTSEstimator(model="tcn",
future_seq_len=10)
# AutoTSEstimator fitting
tsppl = autotsest.fit(tsdata_train,
tsppl = autotsest.fit(data=tsdata_train,
validation_data=tsdata_val)
# Evaluation
@ -141,6 +158,7 @@ _Chronos_ provides flexible components for forecasting, detection, simulation an
- [Time Series Anomaly Detection Overview](./anomaly_detection.html)
- [Generate Synthetic Sequential Data Overview](./simulation.html)
- [Useful Functionalities Overview](./useful_functionalities.html)
- [Speed up Chronos built-in/customized models](./speed_up.html)
- [Chronos API Doc](../../PythonAPI/Chronos/index.html)
### **6. Examples and Demos**
@ -156,6 +174,7 @@ _Chronos_ provides flexible components for forecasting, detection, simulation an
- [Use ONNXRuntime to accelerate the inference of AutoTSEstimator][onnx_autotsestimator_nyc_taxi]
- [Use ONNXRuntime to accelerate the inference of Seq2SeqForecaster][onnx_forecaster_network_traffic]
- [Generate synthetic data with DPGANSimulator in a data-driven fashion][simulator]
- [Quantize your forecaster to speed up inference][quantization]
- Use cases
- [Unsupervised Anomaly Detection][AIOps_anomaly_detect_unsupervised]
- [Unsupervised Anomaly Detection based on Forecasts][AIOps_anomaly_detect_unsupervised_forecast_based]
@ -165,6 +184,8 @@ _Chronos_ provides flexible components for forecasting, detection, simulation an
- [Network Traffic Forecasting (using multivariate time series data)][network_traffic_model_forecasting]
- [Network Traffic Forecasting (using multistep time series data)][network_traffic_multivariate_multistep_tcnforecaster]
- [Network Traffic Forecasting with Customized Model][network_traffic_autots_customized_model]
- [Help pytorch-forecasting improve the training speed of DeepAR model][pytorch_forecasting_deepar]
- [Help pytorch-forecasting improve the training speed of TFT model][pytorch_forecasting_tft]
<!--Reference links in article-->
[autolstm_nyc_taxi]: <https://github.com/intel-analytics/BigDL/blob/main/python/chronos/example/auto_model/autolstm_nyc_taxi.py>
@ -182,3 +203,6 @@ _Chronos_ provides flexible components for forecasting, detection, simulation an
[network_traffic_model_forecasting]: <https://github.com/intel-analytics/BigDL/blob/main/python/chronos/use-case/network_traffic/network_traffic_model_forecasting.ipynb>
[network_traffic_multivariate_multistep_tcnforecaster]: <https://github.com/intel-analytics/BigDL/blob/main/python/chronos/use-case/network_traffic/network_traffic_multivariate_multistep_tcnforecaster.ipynb>
[network_traffic_autots_customized_model]: <https://github.com/intel-analytics/BigDL/blob/main/python/chronos/use-case/network_traffic/network_traffic_autots_customized_model.ipynb>
[quantization]: <https://github.com/intel-analytics/BigDL/blob/main/python/chronos/example/quantization/quantization_tcnforecaster_nyc_taxi.py>
[pytorch_forecasting_deepar]: <https://github.com/intel-analytics/BigDL/tree/main/python/chronos/use-case/pytorch-forecasting/DeepAR>
[pytorch_forecasting_tft]: <https://github.com/intel-analytics/BigDL/tree/main/python/chronos/use-case/pytorch-forecasting/TFT>

View file

@ -5,6 +5,7 @@ Chronos Deep Dive
* `Time Series Forecasting <forecasting.html>`__ introduces how to build a time series forecasting application.
* `Time Series Anomaly Detection <anomaly_detection.html>`__ introduces how to build an anomaly detection application.
* `Generate Synthetic Sequential Data <simulation.html>`__ introduces how to build a synthetic sequential data generation application.
* `Speed up Chronos built-in/customized models <speed_up.html>`__ introduces how to speed up Chronos built-in models/customized time-series models.
* `Useful Functionalities <useful_functionalities.html>`__ introduces some functionalities provided by Chronos that can help you improve accuracy/performance or scale the application to larger data.
.. toctree::
@ -15,4 +16,5 @@ Chronos Deep Dive
forecasting.md
anomaly_detection.md
simulation.md
speed_up.md
useful_functionalities.md

View file

@ -25,11 +25,11 @@ There are three ways to do forecasting:
| Model | Style | Multi-Variate | Multi-Step | Exogenous Variables | Distributed | ONNX | Quantization | Auto Models | AutoTS | Backend |
| ----------------- | ----- | ------------- | ---------- | ------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- |
| LSTM | RR | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | pytorch |
| Seq2Seq | RR | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | pytorch |
| LSTM | RR | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | pytorch/tf2 |
| Seq2Seq | RR | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | pytorch/tf2 |
| TCN | RR | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | pytorch |
| NBeats | RR | ❌ | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | pytorch |
| MTNet | RR | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✳️\*\* | tensorflow |
| MTNet | RR | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | ✳️\*\* | tf2 |
| TCMF | TS | ✅ | ✅ | ✅ | ✳️\* | ❌ | ❌ | ❌ | ❌ | pytorch |
| Prophet | TS | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | ❌ | prophet |
| ARIMA | TS | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | ❌ | pmdarima |
@ -111,9 +111,10 @@ auto_estimator = AutoTSEstimator(model='lstm',
```
We prebuild three default search spaces for each built-in model, which you can use by setting `search_space` to "minimal", "normal", or "large", or you can define your own search space in a dictionary. The larger the search space, the better accuracy you may get and the more time it will cost.
`past_seq_len` can be set as a hp sample function; the proper range is highly related to your data. A range between 0.5 cycle and 3 cycles is reasonable.
`past_seq_len` can be set as a hp sample function; the proper range is highly related to your data. A range between 0.5 cycle and 2 cycles is reasonable. You may also set it to `"auto"`, in which case a cycle length will be detected automatically and this parameter will be set to a random search between 0.5 and 2 cycle lengths.
`selected_features` is set to `"auto"` by default, where the `AutoTSEstimator` will find the best subset of extra features to help the forecasting task.
`selected_features` is set to "auto" by default, where the `AutoTSEstimator` will find the best subset of extra features to help the forecasting task.
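Below is a minimal sketch of setting `past_seq_len` as a hp sample function together with `selected_features="auto"`. The `hp` import path and the assumed cycle length of 96 steps (15-minute sampling) are illustrative only.
```python
from bigdl.orca.automl import hp               # assumed import path for the hp sampling helpers
from bigdl.chronos.autots import AutoTSEstimator

# assuming a daily cycle of ~96 steps: 0.5 cycle -> 48, 2 cycles -> 192
auto_estimator = AutoTSEstimator(model="tcn",
                                 search_space="normal",
                                 past_seq_len=hp.randint(48, 192),
                                 future_seq_len=10,
                                 selected_features="auto")
```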
##### **2.3 Fit on AutoTSEstimator**
Fitting on `AutoTSEstimator` is fairly easy. A `TSPipeline` will be returned once fitting is completed.
```python
@ -172,56 +173,86 @@ The input data can be easily obtained from `TSDataset`.
View [Quick Start](../QuickStart/chronos-tsdataset-forecaster-quickstart.md) for a more detailed example. Refer to [API docs](../../PythonAPI/Chronos/forecasters.html) of each Forecaster for detailed usage instructions and examples.
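A minimal end-to-end sketch of the `TSDataset` -> `Forecaster` data flow is shown below. The import paths, column names and hyper-parameters are assumptions for illustration; see the Quick Start above for the authoritative example.
```python
import pandas as pd
from bigdl.chronos.data import TSDataset
from bigdl.chronos.forecaster import TCNForecaster   # assumed import path

# hypothetical csv with a "datetime" column and a univariate "value" column
df = pd.read_csv("my_timeseries.csv")
tsdata = TSDataset.from_pandas(df, dt_col="datetime", target_col="value")
x, y = tsdata.roll(lookback=100, horizon=10).to_numpy()

forecaster = TCNForecaster(past_seq_len=100,
                           future_seq_len=10,
                           input_feature_num=1,
                           output_feature_num=1)
forecaster.fit((x, y), epochs=3)
pred = forecaster.predict(x)
```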
<span id="LSTMForecaster"></span>
###### **3.1 LSTMForecaster**
##### **3.1 LSTMForecaster**
LSTMForecaster wraps a vanilla LSTM model, and is suitable for univariate time series forecasting.
View Network Traffic Prediction [notebook][network_traffic_model_forecasting] and [LSTMForecaster API Doc](../../PythonAPI/Chronos/forecasters.html#lstmforecaster) for more details.
<span id="Seq2SeqForecaster"></span>
###### **3.2 Seq2SeqForecaster**
##### **3.2 Seq2SeqForecaster**
Seq2SeqForecaster wraps a sequence to sequence model based on LSTM, and is suitable for multivariate & multistep time series forecasting.
View [Seq2SeqForecaster API Doc](../../PythonAPI/Chronos/forecasters.html#seq2seqforecaster) for more details.
<span id="TCNForecaster"></span>
###### **3.3 TCNForecaster**
##### **3.3 TCNForecaster**
Temporal Convolutional Network (TCN) is a neural network that uses a convolutional architecture rather than recurrent networks. It supports multi-step and multi-variate cases. Causal convolutions enable large-scale parallel computing, which gives TCN lower inference time than RNN-based models such as LSTM.
View Network Traffic multivariate multistep Prediction [notebook][network_traffic_multivariate_multistep_tcnforecaster] and [TCNForecaster API Doc](../../PythonAPI/Chronos/forecasters.html#tcnforecaster) for more details.
<span id="MTNetForecaster"></span>
###### **3.4 MTNetForecaster**
##### **3.4 MTNetForecaster**
```eval_rst
.. note::
**Additional Dependencies**:
You need to install `bigdl-nano[tensorflow]` to enable this built-in model.
``pip install bigdl-nano[tensorflow]``
```
MTNetForecaster wraps a MTNet model. The model architecture mostly follows the [MTNet paper](https://arxiv.org/abs/1809.02105) with slight modifications, and is suitable for multivariate time series forecasting.
View Network Traffic Prediction [notebook][network_traffic_model_forecasting] and [MTNetForecaster API Doc](../../PythonAPI/Chronos/forecasters.html#mtnetforecaster) for more details.
<span id="TCMFForecaster"></span>
###### **3.5 TCMFForecaster**
##### **3.5 TCMFForecaster**
TCMFForecaster wraps a model architecture that follows implementation of the paper [DeepGLO paper](https://arxiv.org/abs/1905.03806) with slight modifications. It is especially suitable for extremely high dimensional (up-to millions) multivariate time series forecasting.
View High-dimensional Electricity Data Forecasting [example][run_electricity] and [TCMFForecaster API Doc](../../PythonAPI/Chronos/forecasters.html#tcmfforecaster) for more details.
<span id="ARIMAForecaster"></span>
###### **3.6 ARIMAForecaster**
##### **3.6 ARIMAForecaster**
```eval_rst
.. note::
**Additional Dependencies**:
You need to install `pmdarima` to enable this built-in model.
``pip install pmdarima==1.8.2``
```
ARIMAForecaster wraps an ARIMA model and is suitable for univariate time series forecasting. It works best with data that show evidence of non-stationarity in the sense of mean, where an initial differencing step (corresponding to the "I, integrated" part of the model) can be applied one or more times to eliminate the non-stationarity of the mean function.
View [ARIMAForecaster API Doc](../../PythonAPI/Chronos/forecasters.html#arimaforecaster) for more details.
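A toy numpy illustration of the differencing ("I") step (not Chronos API; it merely shows how first-order differencing removes a trend in the mean):
```python
import numpy as np

# series with a roughly linear trend: its mean is non-stationary
y = np.array([10.0, 12.1, 14.0, 16.2, 18.1, 20.0])

# first-order differencing (the "I" part, d=1) yields a roughly constant mean
dy = np.diff(y)
print(dy)   # approximately [2.1 1.9 2.2 1.9 1.9]
```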
<span id="ProphetForecaster"></span>
###### **3.7 ProphetForecaster**
##### **3.7 ProphetForecaster**
```eval_rst
.. note::
**Additional Dependencies**:
You need to install `prophet` to enable this built-in model.
``pip install prophet==1.0.1``
```
```eval_rst
.. note::
**Acceleration Note**:
Intel® Distribution for Python may improve the speed of Prophet's training and inference. You may install it by referring to https://www.intel.com/content/www/us/en/developer/tools/oneapi/distribution-for-python.html.
```
ProphetForecaster wraps the Prophet model ([site](https://github.com/facebook/prophet)), an additive model where non-linear trends are fit with yearly, weekly, and daily seasonality plus holiday effects, and is suitable for univariate time series forecasting. It works best with time series that have strong seasonal effects and several seasons of historical data; it is robust to missing data and shifts in the trend, and typically handles outliers well.
View Stock Prediction [notebook][stock_prediction_prophet] and [ProphetForecaster API Doc](../../PythonAPI/Chronos/forecasters.html#prophetforecaster) for more details.
<span id="NBeatsForecaster"></span>
###### **3.8 NBeatsForecaster**
##### **3.8 NBeatsForecaster**
Neural basis expansion analysis for interpretable time series forecasting ([N-BEATS](https://arxiv.org/abs/1905.10437)) is a deep neural architecture based on backward and forward residual links and a very deep stack of fully-connected layers. N-BEATS can solve univariate time series point forecasting problems, is interpretable, and is fast to train.

View file

@ -10,4 +10,9 @@ Chronos provides simulators to generate synthetic time series data for users who
## **1. DPGANSimulator**
`DPGANSimulator` adopts DoppelGANger, proposed in [Using GANs for Sharing Networked Time Series Data: Challenges, Initial Promise, and Open Questions](http://arxiv.org/abs/1909.13403). The method is a data-driven, unsupervised method based on a deep learning model with a GAN (Generative Adversarial Network) structure. The model features a pair of separate attribute and feature generators with their corresponding discriminators. `DPGANSimulator` also supports a rich and comprehensive input data (training data) format and outperforms other algorithms in many evaluation metrics.
```eval_rst
.. note::
We reimplement this model in PyTorch (the original implementation was based on TF1) for better performance (both speed and memory).
```
Users may refer to the detailed [API doc](../../PythonAPI/Chronos/simulator.html#module-bigdl.chronos.simulator.doppelganger_simulator).

View file

@ -0,0 +1,143 @@
# Speed up Chronos built-in models/customized time-series models
Chronos provides transparent acceleration for Chronos built-in models and customized time-series models. In this deep-dive page, we will introduce how to enable/disable these optimizations.
We will focus on **single node acceleration for forecasting models' training and inferencing** in this page. Other topics, such as:
- Distributed time series data processing - [XShardsTSDataset (based on Spark, powered by `bigdl.orca.data`)](./useful_functionalities.html#xshardstsdataset)
- Distributed training on a cluster - [Distributed training (based on Ray/Spark/Horovod, powered by `bigdl.orca.learn`)](./useful_functionalities.html#distributed-training)
- Non-forecasting models / non-deep-learning models - [Prophet with Intel Python](./forecasting.html#prophetforecaster), [DBScan Detector with Intel Sklearn](./anomaly_detection.html#dbscandetector), [DPGANSimulator pytorch implementation](./simulation.html#dpgansimulator).
You may refer to the pages linked above for these topics.
### **1. Overview**
Time series models, especially deep learning models, often suffer from slow training and unsatisfying inference speed. Chronos integrates many optimized libraries and best known methods (BKMs) to improve the performance of built-in models and customized models.
### **2. Training Acceleration**
Training acceleration is transparent in Chronos's API. Transparency means that Chronos users will enjoy the acceleration without changing their code (unless some expert users want to change some advanced settings).
```eval_rst
.. note::
**Write your script under** ``if __name__=="__main__":``:
Chronos will automatically utilize the computation resources on the hardware. This may include multi-process training on a single node. Using this guard will prevent many strange behaviors.
```
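A minimal sketch of the recommended script structure (the forecaster type and hyper-parameters below are placeholders):
```python
from bigdl.chronos.forecaster import TCNForecaster   # assumed import path

def main():
    # build, train and evaluate the forecaster inside main()
    forecaster = TCNForecaster(past_seq_len=100,
                               future_seq_len=10,
                               input_feature_num=1,
                               output_feature_num=1)
    # forecaster.fit(...), forecaster.predict(...), etc.

if __name__ == "__main__":
    # the guard matters because Chronos may launch multiple training processes
    main()
```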
#### **2.1 `Forecaster` Training Acceleration**
Currently, transparent acceleration for `LSTMForecaster`, `Seq2SeqForecaster`, `TCNForecaster` and `NBeatsForecaster` is **automatically enabled** and tested. Chronos will set various environment variables and configure multi-process training according to the hardware parameters (e.g., number of cores).
Currently, this function is under active development and **some expert users may want to change some configs or disable some acceleration tricks**. Here are some instructions.
Users may unset the environment variables by:
```bash
source bigdl-nano-unset-env
```
Users may set the number of processes to use in training by:
```python
print(forecaster.num_processes) # num_processes is automatically optimized by Chronos
forecaster.num_processes = 1 # disable multi-processing training
forecaster.num_processes = 10 # You may set it to any number you want
```
Users may enable or disable IPEX (Intel® Extension for PyTorch) in training by:
```python
print(forecaster.use_ipex) # use_ipex is automatically optimized by Chronos
forecaster.use_ipex = True # enable ipex during training
forecaster.use_ipex = False # disable ipex during training
```
#### **2.2 Customized Model Training Acceleration**
We provide an optimized pytorch-lightning Trainer, `TSTrainer`, to accelerate customized time series models defined in PyTorch. A typical use case is using `pytorch-forecasting`'s built-in models (which are defined as pytorch-lightning LightningModules) together with Chronos's `TSTrainer` to accelerate the training process.
`TSTrainer` requires very few code changes to your original code. Here is a quick guide:
```python
# from pytorch-lightning import Trainer
from bigdl.chronos.pytorch import TSTrainer as Trainer
trainer = Trainer(...
# set number of processes for training
num_processes=8,
# disable GPU training; TSTrainer is currently only available for CPU
gpus=0,
...)
```
We have examples adapted from `pytorch-forecasting`'s examples to show the significant speed-up by using `TSTrainer` in our [use-case](https://github.com/intel-analytics/BigDL/tree/main/python/chronos/use-case/pytorch-forecasting).
#### **2.3 Auto Tuning Acceleration**
We are still working on the acceleration of `AutoModel` and `AutoTSEstimator`. For now, please unset the environment variables by:
```bash
source bigdl-nano-unset-env
```
### **3. Inference Acceleration**
Inference has become a critical part of a time series model's performance. It may be divided into two parts:
- Throughput: how many samples can be predicted in a certain amount of time.
- Latency: how much time is used to predict 1 sample.
Typically, throughput and latency are a trade-off pair. We have three optimization options for inference in Chronos; a simple way to measure both metrics is sketched after the list below.
- **Default**: Generally useful for both throughput and latency.
- **ONNX Runtime**: Users may export their trained (with or without auto tuning) model to an ONNX file and deploy it on other services. Chronos also provides internal onnxruntime inference support for those users who pursue low latency and higher throughput during inference on a single node.
- **Quantization**: Quantization refers to processes that enable lower-precision inference. In Chronos, post-training quantization is supported, relying on [Intel® Neural Compressor](https://intel.github.io/neural-compressor/README.html).
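As mentioned above, a simple (hypothetical) way to measure both metrics for a trained forecaster `f` on a numpy test set `x_test`:
```python
import time

# throughput: predict a whole batch at once
start = time.time()
f.predict(x_test)
print(f"throughput: {len(x_test) / (time.time() - start):.1f} samples/s")

# latency: predict a single sample
start = time.time()
f.predict(x_test[:1])
print(f"latency   : {(time.time() - start) * 1000:.3f} ms/sample")
```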
```eval_rst
.. note::
**Additional Dependencies**:
You need to install `neural-compressor` to enable quantization related methods.
``pip install neural-compressor==1.8.1``
```
#### **3.1 `Forecaster` Inference Acceleration**
##### **3.1.1 Default Acceleration**
Nothing needs to be done. Chronos has deployed acceleration for inference. **Some expert users may want to change some configs or disable some acceleration tricks**. Here are some instructions:
Users may unset the environment variables by:
```bash
source bigdl-nano-unset-env
```
##### **3.1.2 ONNX Runtime**
LSTM, TCN, Seq2seq and NBeats have supported ONNX in their forecasters. When users use these built-in models, they may call `predict_with_onnx`/`evaluate_with_onnx` for prediction or evaluation. They may also call `export_onnx_file` to export the ONNX model file and `build_onnx` to change onnxruntime's settings (not necessary).
```python
f = Forecaster(...)
f.fit(...)
f.predict_with_onnx(...)
```
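A sketch of exporting the ONNX file and tweaking the onnxruntime session is shown below. The `dirname` and `thread_num` parameter names are assumptions; see the Forecaster API docs for the exact signatures.
```python
# assuming `f` is a trained built-in forecaster (e.g. TCNForecaster) and `x_test` a numpy array
f.export_onnx_file(dirname="fp32_onnx")   # export the ONNX model file for deployment elsewhere
f.build_onnx(thread_num=1)                # (optional) rebuild the onnxruntime session, e.g. to pin threads
f.predict_with_onnx(x_test)
```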
##### **3.1.3 Quantization**
LSTM, TCN and NBeats have supported quantization in their forecasters.
```python
# init
f = Forecaster(...)
# train the forecaster
f.fit(train_data, ...)
# quantize the forecaster
f.quantize(train_data, ..., framework=...)
# predict with int8 model with better inference throughput
f.predict/predict_with_onnx(test_data, quantize=True)
# predict with fp32
f.predict/predict_with_onnx(test_data, quantize=False)
# save
f.save(checkpoint_file="fp32.model",
       quantize_checkpoint_file="int8.model")
# load
f.load(checkpoint_file="fp32.model",
       quantize_checkpoint_file="int8.model")
```
Please refer to [Forecaster API Docs](../../PythonAPI/Chronos/forecasters.html) for details.
#### **3.2 `TSPipeline` Inference Acceleration**
Basically the same as [`Forecaster`](#31-forecaster-inference-acceleration).
##### **3.2.1 Default Acceleration**
Basically the same as [`Forecaster`](#31-forecaster-inference-acceleration).
##### **3.2.2 ONNX Runtime**
```python
tsppl.predict_with_onnx(...)
```
##### **3.2.3 Quantization**
```python
tsppl.quantize(...)
tsppl.predict/predict_with_onnx(test_data, quantize=True/False)
```
Please refer to [TSPipeline API doc](../../PythonAPI/Chronos/autotsestimator.html#tspipeline) for details.

View file

@ -50,17 +50,7 @@ You can enable a tensorboard view in jupyter notebook by the following code.
%tensorboard --logdir <logs_dir>/<name>_leaderboard/
```
#### **2. ONNX/ONNX Runtime support**
Users may export their trained (with or without auto tuning) model to an ONNX file and deploy it on other services. Chronos also provides internal onnxruntime inference support for those **users who pursue low latency and higher throughput during inference on a single node**.
LSTM, TCN and Seq2seq have supported ONNX in their forecasters, auto models and AutoTS. When users use these built-in models, they may call `predict_with_onnx`/`evaluate_with_onnx` for prediction or evaluation. They may also call `export_onnx_file` to export the ONNX model file and `build_onnx` to change onnxruntime's settings (not necessary).
```python
f = Forecaster(...)
f.fit(...)
f.predict_with_onnx(...)
```
#### **3. Distributed training**
#### **2. Distributed training**
LSTM, TCN and Seq2seq users can easily train their forecasters in a distributed fashion to **handle extra-large datasets and utilize a cluster**. The functionality is powered by Project Orca.
```python
f = Forecaster(..., distributed=True)
@ -69,7 +59,7 @@ f.predict(...)
f.to_local() # collect the forecaster to single node
f.predict_with_onnx(...) # onnxruntime only supports single node
```
#### **4. XShardsTSDataset**
#### **3. XShardsTSDataset**
```eval_rst
.. warning::
`XShardsTSDataset` is still experimental.
@ -90,29 +80,3 @@ f = Forecaster(..., distributed=True)
f.fit(tsdata_xshards, ...)
f.predict(test_tsdata_xshards, ...)
```
#### **5. Quantization**
Quantization refers to processes that enable lower-precision inference. In Chronos, post-training quantization is supported, relying on [Intel® Neural Compressor](https://intel.github.io/neural-compressor/README.html).
```python
# init
f = Forecaster(...)
# train the forecaster
f.fit(train_data, ...)
# quantize the forecaster
f.quantize(train_data, ...)
# predict with int8 model with better inference throughput
f.predict(test_data, quantize=True)
# predict with fp32
f.predict(test_data, quantize=False)
# save
f.save(checkpoint_file="fp32.model",
       quantize_checkpoint_file="int8.model")
# load
f.load(checkpoint_file="fp32.model",
       quantize_checkpoint_file="int8.model")
```

View file

@ -90,6 +90,22 @@
We demonstrate how to leverage Chronos's built-in models, i.e. MTNet, to do time series forecasting, and then perform anomaly detection on the predicted values with [ThresholdDetector][Threshold].
---------------------------
- [**Help pytorch-forecasting improve the training speed of DeepAR model**][pytorch_forecasting_deepar]
> ![](../../../../image/GitHub-Mark-32px.png)[View source on GitHub][pytorch_forecasting_deepar]
Chronos can help a third-party time series library improve performance (both training and inference) and accuracy. This use case shows how Chronos can easily help pytorch-forecasting speed up the training of the DeepAR model.
---------------------------
- [**Help pytorch-forecasting improve the training speed of TFT model**][pytorch_forecasting_tft]
> ![](../../../../image/GitHub-Mark-32px.png)[View source on GitHub][pytorch_forecasting_tft]
Chronos can help a third-party time series library improve performance (both training and inference) and accuracy. This use case shows how Chronos can easily help pytorch-forecasting speed up the training of the TFT model.
[DBScan]: <../../PythonAPI/Chronos/anomaly_detectors.html#dbscandetector>
[AE]: <../../PythonAPI/Chronos/anomaly_detectors.html#aedetector>
@ -109,3 +125,5 @@
[stock_prediction_prophet]: <https://github.com/intel-analytics/BigDL/blob/main/python/chronos/use-case/fsi/stock_prediction_prophet.ipynb>
[AIOps_anomaly_detect_unsupervised]: <https://github.com/intel-analytics/BigDL/blob/main/python/chronos/use-case/AIOps/AIOps_anomaly_detect_unsupervised.ipynb>
[AIOps_anomaly_detect_unsupervised_forecast_based]: <https://github.com/intel-analytics/BigDL/blob/main/python/chronos/use-case/AIOps/AIOps_anomaly_detect_unsupervised_forecast_based.ipynb>
[pytorch_forecasting_deepar]: <https://github.com/intel-analytics/BigDL/tree/main/python/chronos/use-case/pytorch-forecasting/DeepAR>
[pytorch_forecasting_tft]: <https://github.com/intel-analytics/BigDL/tree/main/python/chronos/use-case/pytorch-forecasting/TFT>