Commit 72ab641
Abdul Fatir Ansari (abdulfatir), Caner Turkmen (canerturkmen), and Lorenzo Stella (lostella) authored

⚡ Add support for Chronos-Bolt models (#204)
*Issue #, if available:* N/A

*Description of changes:* This PR adds support for Chronos-Bolt models.

TODOs:

- [x] Update evaluation script
- [x] Fix and add tests for Bolt
- [x] Update docstrings
- [x] Update README example and mention Chronos-Bolt
- [x] Update results bar plot in README
- [x] Add versions for libraries in `pyproject.toml`
- [x] Check that the training and eval scripts work
- [x] Change `autogluon` -> `amazon` in model names

Post Merge:

- [ ] Update citation style in README, both GitHub and HuggingFace repos
- [ ] Remove note about AutoGluon
- [ ] Update READMEs of original Chronos models to refer to Chronos-Bolt

NOTE: To be merged after Chronos-Bolt models are available under the `amazon` namespace on HF.

By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

---------

Co-authored-by: Abdul Fatir Ansari <[email protected]>
Co-authored-by: Caner Turkmen <[email protected]>
Co-authored-by: Lorenzo Stella <[email protected]>
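The user-visible change summarized in the README diff below is the output convention: original Chronos models return sample forecasts of shape `[num_series, num_samples, prediction_length]`, while Chronos-Bolt models return quantile forecasts of shape `[num_series, num_quantiles, prediction_length]`. The following is a minimal sketch of how downstream code could reduce either convention to a median path; the `median_forecast` helper and the nine quantile levels `0.1, ..., 0.9` are illustrative assumptions, not APIs added by this PR:

```python
from typing import Optional

import numpy as np
import torch


# Hypothetical helper, for illustration only (not part of this PR).
def median_forecast(
    forecast: torch.Tensor, quantile_levels: Optional[list] = None
) -> np.ndarray:
    if quantile_levels is not None:
        # Chronos-Bolt convention: [num_series, num_quantiles, prediction_length];
        # select the entry whose quantile level is 0.5 (assumes 0.5 is included).
        return forecast[:, quantile_levels.index(0.5), :].numpy()
    # Original Chronos convention: [num_series, num_samples, prediction_length];
    # estimate the median across the sample dimension.
    return np.quantile(forecast.numpy(), 0.5, axis=1)


# Stand-in tensors with the documented shapes:
samples = torch.randn(2, 20, 12)    # as returned by original Chronos models
quantiles = torch.randn(2, 9, 12)   # as returned by Chronos-Bolt models
print(median_forecast(samples).shape)  # (2, 12)
print(
    median_forecast(
        quantiles,
        quantile_levels=[0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9],
    ).shape
)  # (2, 12)
```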
1 parent d0c114c commit 72ab641

42 files changed (+6693, -84 lines)

README.md (27 additions, 16 deletions)
@@ -17,7 +17,8 @@

 ## 🚀 News

-- **27 June 2024**: 🚀 [Released datasets](https://huggingface.co/datasets/autogluon/chronos_datasets) used in the paper and an [evaluation script](./scripts/README.md#evaluating-chronos-models) to compute the WQL and MASE scores reported in the paper.
+- **26 Nov 2024**: ⚡️ Chronos-Bolt models released [on HuggingFace](https://huggingface.co/collections/amazon/chronos-models-65f1791d630a8d57cb718444). Chronos-Bolt models are more accurate (5% lower error), up to 250x faster and 20x more memory efficient than the original Chronos models of the same size!
+- **27 Jun 2024**: 🚀 [Released datasets](https://huggingface.co/datasets/autogluon/chronos_datasets) used in the paper and an [evaluation script](./scripts/README.md#evaluating-chronos-models) to compute the WQL and MASE scores reported in the paper.
 - **17 May 2024**: 🐛 Fixed an off-by-one error in bin indices in the `output_transform`. This simple fix significantly improves the overall performance of Chronos. We will update the results in the next revision on ArXiv.
 - **10 May 2024**: 🚀 We added the code for pretraining and fine-tuning Chronos models. You can find it in [this folder](./scripts/training). We also added [a script](./scripts/kernel-synth.py) for generating synthetic time series data from Gaussian processes (KernelSynth; see Section 4.2 in the paper for details). Check out the [usage examples](./scripts/).
 - **19 Apr 2024**: 🚀 Chronos is now supported on [AutoGluon-TimeSeries](https://auto.gluon.ai/stable/tutorials/timeseries/index.html), the powerful AutoML package for time series forecasting which enables model ensembles, cloud deployments, and much more. Get started with the [tutorial](https://auto.gluon.ai/stable/tutorials/timeseries/forecasting-chronos.html).

@@ -52,62 +53,72 @@ The models in this repository are based on the [T5 architecture](https://arxiv.o
 | [**chronos-t5-small**](https://huggingface.co/amazon/chronos-t5-small) | 46M | [t5-efficient-small](https://huggingface.co/google/t5-efficient-small) |
 | [**chronos-t5-base**](https://huggingface.co/amazon/chronos-t5-base) | 200M | [t5-efficient-base](https://huggingface.co/google/t5-efficient-base) |
 | [**chronos-t5-large**](https://huggingface.co/amazon/chronos-t5-large) | 710M | [t5-efficient-large](https://huggingface.co/google/t5-efficient-large) |
+| [**chronos-bolt-tiny**](https://huggingface.co/amazon/chronos-bolt-tiny) | 9M | [t5-efficient-tiny](https://huggingface.co/google/t5-efficient-tiny) |
+| [**chronos-bolt-mini**](https://huggingface.co/amazon/chronos-bolt-mini) | 21M | [t5-efficient-mini](https://huggingface.co/google/t5-efficient-mini) |
+| [**chronos-bolt-small**](https://huggingface.co/amazon/chronos-bolt-small) | 48M | [t5-efficient-small](https://huggingface.co/google/t5-efficient-small) |
+| [**chronos-bolt-base**](https://huggingface.co/amazon/chronos-bolt-base) | 205M | [t5-efficient-base](https://huggingface.co/google/t5-efficient-base) |

 </div>

 ### Zero-Shot Results

-The following figure showcases the remarkable **zero-shot** performance of Chronos models on 27 datasets against local models, task-specific models and other pretrained models. For details on the evaluation setup and other results, please refer to [the paper](https://arxiv.org/abs/2403.07815).
+The following figure showcases the remarkable **zero-shot** performance of Chronos and Chronos-Bolt models on 27 datasets against local models, task-specific models and other pretrained models. For details on the evaluation setup and other results, please refer to [the paper](https://arxiv.org/abs/2403.07815).

 <p align="center">
-  <img src="figures/zero_shot-agg_scaled_score.png" width="80%">
+  <img src="figures/zero_shot-agg_scaled_score.svg" width="100%">
   <br />
   <span>
-    Fig. 2: Performance of different models on Benchmark II, comprising 27 datasets <b>not seen</b> by Chronos models during training. This benchmark provides insights into the zero-shot performance of Chronos models against local statistical models, which fit parameters individually for each time series, task-specific models <i>trained on each task</i>, and pretrained models trained on a large corpus of time series. Pretrained Models (Other) indicates that some (or all) of the datasets in Benchmark II may have been in the training corpus of these models. The probabilistic (WQL) and point (MASE) forecasting metrics were normalized using the scores of the Seasonal Naive baseline and aggregated through a geometric mean to obtain the Agg. Relative WQL and MASE, respectively.
+    Fig. 2: Performance of different models on Benchmark II, comprising 27 datasets <b>not seen</b> by Chronos and Chronos-Bolt models during training. This benchmark provides insights into the zero-shot performance of Chronos and Chronos-Bolt models against local statistical models, which fit parameters individually for each time series, task-specific models <i>trained on each task</i>, and pretrained models trained on a large corpus of time series. Pretrained Models (Other) indicates that some (or all) of the datasets in Benchmark II may have been in the training corpus of these models. The probabilistic (WQL) and point (MASE) forecasting metrics were normalized using the scores of the Seasonal Naive baseline and aggregated through a geometric mean to obtain the Agg. Relative WQL and MASE, respectively.
   </span>
 </p>

 ## 📈 Usage

-To perform inference with Chronos models, install this package by running:
+To perform inference with Chronos or Chronos-Bolt models, install this package by running:

 ```
 pip install git+https://github.com/amazon-science/chronos-forecasting.git
 ```
 > [!TIP]
-> The recommended way of using Chronos for production use cases is through [AutoGluon](https://auto.gluon.ai), which features ensembling with other statistical and machine learning models for time series forecasting as well as seamless deployments on AWS with SageMaker 🧠. Check out the AutoGluon Chronos [tutorial](https://auto.gluon.ai/stable/tutorials/timeseries/forecasting-chronos.html).
+> This repository is intended for research purposes and provides a minimal interface to Chronos models. The recommended way of using Chronos for production use cases is through [AutoGluon](https://auto.gluon.ai), which features effortless fine-tuning, augmenting Chronos models with exogenous information through covariate regressors, ensembling with other statistical and machine learning models, as well as seamless deployments on AWS with SageMaker 🧠. Check out the AutoGluon Chronos [tutorial](https://auto.gluon.ai/stable/tutorials/timeseries/forecasting-chronos.html).

 ### Forecasting

-A minimal example showing how to perform forecasting using Chronos models:
+A minimal example showing how to perform forecasting using Chronos and Chronos-Bolt models:

 ```python
 import pandas as pd  # requires: pip install pandas
 import torch
-from chronos import ChronosPipeline
+from chronos import BaseChronosPipeline

-pipeline = ChronosPipeline.from_pretrained(
-    "amazon/chronos-t5-small",
+pipeline = BaseChronosPipeline.from_pretrained(
+    "amazon/chronos-t5-small",  # use "amazon/chronos-bolt-small" for the corresponding Chronos-Bolt model
     device_map="cuda",  # use "cpu" for CPU inference and "mps" for Apple Silicon
     torch_dtype=torch.bfloat16,
 )

-df = pd.read_csv("https://raw.githubusercontent.com/AileenNielsen/TimeSeriesAnalysisWithPython/master/data/AirPassengers.csv")
+df = pd.read_csv(
+    "https://raw.githubusercontent.com/AileenNielsen/TimeSeriesAnalysisWithPython/master/data/AirPassengers.csv"
+)

 # context must be either a 1D tensor, a list of 1D tensors,
 # or a left-padded 2D tensor with batch as the first dimension
-# forecast shape: [num_series, num_samples, prediction_length]
+# The original Chronos models generate forecast samples, so forecast has shape
+# [num_series, num_samples, prediction_length].
+# Chronos-Bolt models generate quantile forecasts, so forecast has shape
+# [num_series, num_quantiles, prediction_length].
 forecast = pipeline.predict(
-    context=torch.tensor(df["#Passengers"]),
-    prediction_length=12,
-    num_samples=20,
+    context=torch.tensor(df["#Passengers"]), prediction_length=12
 )
 ```

 More options for `pipeline.predict` can be found with:

 ```python
-print(ChronosPipeline.predict.__doc__)
+from chronos import ChronosPipeline, ChronosBoltPipeline
+
+print(ChronosPipeline.predict.__doc__)  # for Chronos models
+print(ChronosBoltPipeline.predict.__doc__)  # for Chronos-Bolt models
 ```

 We can now visualize the forecast:
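The hunk is truncated right before the plotting code it refers to. Below is a sketch of what that visualization step could look like, assuming the README example above was run with an original (sample-based) Chronos model, so `df` and `forecast` come from that snippet, and assuming `matplotlib` is available; for Chronos-Bolt output, dimension 1 of `forecast` indexes quantile levels rather than samples:

```python
import matplotlib.pyplot as plt  # requires: pip install matplotlib
import numpy as np

forecast_index = range(len(df), len(df) + 12)
# Reduce the sample paths to a median and an 80% prediction interval.
low, median, high = np.quantile(forecast[0].numpy(), [0.1, 0.5, 0.9], axis=0)

plt.figure(figsize=(8, 4))
plt.plot(df["#Passengers"], color="royalblue", label="historical data")
plt.plot(forecast_index, median, color="tomato", label="median forecast")
plt.fill_between(
    forecast_index, low, high,
    color="tomato", alpha=0.3, label="80% prediction interval",
)
plt.legend()
plt.grid()
plt.show()
```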
Binary file not shown (-318 KB).
