A decoder-only foundation model for time-series forecasting
2023-10-14
Abhimanyu Das, Weihao Kong, Rajat Sen, Yichen Zhou
Code
- github.com/google-research/timesfm (official, JAX) ★ 10,100
- github.com/etna-team/etna (PyTorch) ★ 193
Abstract
Motivated by recent advances in large language models for Natural Language Processing (NLP), we design a time-series foundation model for forecasting whose out-of-the-box zero-shot performance on a variety of public datasets comes close to the accuracy of state-of-the-art supervised forecasting models for each individual dataset. Our model is based on pretraining a patched-decoder style attention model on a large time-series corpus, and can work well across different forecasting history lengths, prediction lengths and temporal granularities.
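The patched-decoder design lends itself to a compact illustration. Below is a minimal JAX sketch, not the authors' implementation: the context series is split into fixed-length patches, each patch is embedded as a token, a causal self-attention layer runs over the patch tokens, and the last token's output is projected to a patch of future values. All names, sizes, and the single-layer depth here are assumptions chosen for brevity.

```python
import jax
import jax.numpy as jnp

PATCH_LEN = 32   # input patch length (assumed value)
HORIZON = 32     # output patch length (assumed value)
D_MODEL = 64     # model width (assumed; the real model is far wider)

def init_params(key):
    ks = jax.random.split(key, 5)
    s = 0.02
    return {
        "embed": s * jax.random.normal(ks[0], (PATCH_LEN, D_MODEL)),
        "wq": s * jax.random.normal(ks[1], (D_MODEL, D_MODEL)),
        "wk": s * jax.random.normal(ks[2], (D_MODEL, D_MODEL)),
        "wv": s * jax.random.normal(ks[3], (D_MODEL, D_MODEL)),
        "out": s * jax.random.normal(ks[4], (D_MODEL, HORIZON)),
    }

def forecast(params, series):
    # Split the context into non-overlapping patches: (num_patches, PATCH_LEN).
    n = series.shape[0] // PATCH_LEN
    patches = series[: n * PATCH_LEN].reshape(n, PATCH_LEN)
    tokens = patches @ params["embed"]                  # (n, D_MODEL)

    # One causal self-attention layer over the patch tokens.
    q, k, v = tokens @ params["wq"], tokens @ params["wk"], tokens @ params["wv"]
    scores = (q @ k.T) / jnp.sqrt(D_MODEL)              # (n, n)
    causal = jnp.tril(jnp.ones((n, n), dtype=bool))     # no attending to future patches
    scores = jnp.where(causal, scores, -jnp.inf)
    attended = jax.nn.softmax(scores, axis=-1) @ v      # (n, D_MODEL)

    # Decode the last token into the next HORIZON values.
    return attended[-1] @ params["out"]                 # (HORIZON,)

params = init_params(jax.random.PRNGKey(0))
context = jnp.sin(jnp.linspace(0.0, 12.0, 256))         # toy 256-step history
print(forecast(params, context).shape)                  # -> (32,)
```

Zero-shot use then amounts to running this forward pass with pretrained weights over whatever context is available, which is why the model can serve different history lengths, prediction lengths, and granularities without per-dataset training.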
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| ETTh1 (horizon 336), multivariate | TimesFM | MAE | 0.44 | — | Unverified |