SOTAVerified

Only the Curve Shape Matters: Training Foundation Models for Zero-Shot Multivariate Time Series Forecasting through Next Curve Shape Prediction

2024-02-12

Cheng Feng, Long Huang, Denis Krompass

Verification status: Unverified (no reproductions reported).


Abstract

We present General Time Transformer (GTT), an encoder-only style foundation model for zero-shot multivariate time series forecasting. GTT is pretrained on a large dataset of 200M high-quality time series samples spanning diverse domains. In our proposed framework, the task of multivariate time series forecasting is formulated as a channel-wise next curve shape prediction problem, where each time series sample is represented as a sequence of non-overlapping curve shapes with a unified numerical magnitude. GTT is trained to predict the next curve shape based on a window of past curve shapes in a channel-wise manner. Experimental results demonstrate that GTT exhibits superior zero-shot multivariate forecasting capabilities on unseen time series datasets, even surpassing state-of-the-art supervised baselines. Additionally, we investigate the impact of varying GTT model parameters and training dataset scales, observing that the scaling law also holds in the context of zero-shot multivariate time series forecasting.
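To make the framing concrete, here is a minimal sketch of channel-wise curve shape construction: a single channel is split into non-overlapping fixed-length patches ("curve shapes"), and a training pair is formed from a window of past shapes and the next shape. The patch length, context size, and standardization-based rescaling are illustrative assumptions; the paper's exact magnitude-unification scheme may differ.

```python
import numpy as np

def make_curve_shapes(series, patch_len=32):
    """Split a 1-D series into non-overlapping patches ("curve shapes")."""
    n = len(series) // patch_len
    return series[: n * patch_len].reshape(n, patch_len)

def next_shape_example(series, patch_len=32, context=4):
    """Build one (past shapes -> next shape) training pair, channel-wise.

    The window is rescaled to a unified numerical magnitude by
    standardizing with the context's mean and std (an assumption,
    not necessarily the paper's exact scaling rule).
    """
    shapes = make_curve_shapes(series, patch_len)
    x = shapes[:context]   # window of past curve shapes
    y = shapes[context]    # next curve shape to predict
    mu, sigma = x.mean(), x.std() + 1e-8
    return (x - mu) / sigma, (y - mu) / sigma

# usage: one sine-wave channel, 5 patches of length 32
t = np.arange(5 * 32, dtype=float)
x, y = next_shape_example(np.sin(0.1 * t))
print(x.shape, y.shape)  # (4, 32) (32,)
```

In a multivariate setting this construction is simply applied to each channel independently, which is what makes the resulting model applicable zero-shot to datasets with arbitrary numbers of channels.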


Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| ETTh1 (336) Multivariate | GTT-Large | MSE | 0.42 | — | Unverified |
| ETTh1 (336) Multivariate | GTT-Large (fine-tuned) | MSE | 0.43 | — | Unverified |
| ETTh1 (336) Multivariate | GTT-Small | MSE | 0.46 | — | Unverified |
| ETTh1 (336) Multivariate | GTT-Tiny | MSE | 0.47 | — | Unverified |
| ETTh1 (336) Multivariate | GTT-Large (100M training samples) | MSE | 0.47 | — | Unverified |
| ETTh1 (336) Multivariate | GTT-Large (50M training samples) | MSE | 0.48 | — | Unverified |
