Only the Curve Shape Matters: Training Foundation Models for Zero-Shot Multivariate Time Series Forecasting through Next Curve Shape Prediction
Cheng Feng, Long Huang, Denis Krompass
Abstract
We present General Time Transformer (GTT), an encoder-only style foundation model for zero-shot multivariate time series forecasting. GTT is pretrained on a large dataset of 200M high-quality time series samples spanning diverse domains. In our proposed framework, the task of multivariate time series forecasting is formulated as a channel-wise next curve shape prediction problem, where each time series sample is represented as a sequence of non-overlapping curve shapes with a unified numerical magnitude. GTT is trained to predict the next curve shape based on a window of past curve shapes in a channel-wise manner. Experimental results demonstrate that GTT exhibits superior zero-shot multivariate forecasting capabilities on unseen time series datasets, even surpassing state-of-the-art supervised baselines. Additionally, we investigate the impact of varying GTT model parameters and training dataset scales, observing that the scaling law also holds in the context of zero-shot multivariate time series forecasting.
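The channel-wise curve shape representation described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's exact procedure: the patch length and the per-patch normalization scheme (subtract the patch mean, divide by the patch standard deviation to obtain a unified numerical magnitude) are assumptions for demonstration.

```python
import numpy as np

def to_curve_shapes(series, patch_len):
    """Split a 1-D series into non-overlapping patches ("curve shapes").

    Each patch is rescaled to a unified numerical magnitude so that only
    its shape, not its absolute scale, is retained. The normalization here
    (per-patch standardization) is an illustrative assumption.
    """
    n_patches = len(series) // patch_len
    patches = series[: n_patches * patch_len].reshape(n_patches, patch_len)
    mean = patches.mean(axis=1, keepdims=True)
    std = patches.std(axis=1, keepdims=True) + 1e-8  # avoid divide-by-zero
    return (patches - mean) / std

# Channel-wise use: each variable of a multivariate series is
# converted to a sequence of curve shapes independently.
multivariate = np.random.randn(3, 96)  # 3 channels, 96 time steps
shapes = np.stack([to_curve_shapes(ch, patch_len=16) for ch in multivariate])
print(shapes.shape)  # (3, 6, 16): 3 channels, 6 curve shapes of length 16
```

A model trained under this framing would take a window of past curve shapes per channel as input and predict the next patch, so that forecasts depend on curve shape rather than absolute magnitude.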
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| ETTh1 (336) Multivariate | GTT-Large | MSE | 0.42 | — | Unverified |
| ETTh1 (336) Multivariate | GTT-Large(Fine-tune) | MSE | 0.43 | — | Unverified |
| ETTh1 (336) Multivariate | GTT-Small | MSE | 0.46 | — | Unverified |
| ETTh1 (336) Multivariate | GTT-Tiny | MSE | 0.47 | — | Unverified |
| ETTh1 (336) Multivariate | GTT-Large (100M training samples) | MSE | 0.47 | — | Unverified |
| ETTh1 (336) Multivariate | GTT-Large (50M training samples) | MSE | 0.48 | — | Unverified |