SOTAVerified

Spatial-Temporal-Decoupled Masked Pre-training for Spatiotemporal Forecasting

2023-12-01

Haotian Gao, Renhe Jiang, Zheng Dong, Jinliang Deng, Yuxin Ma, Xuan Song


Abstract

Spatiotemporal forecasting techniques are important in domains such as transportation, energy, and weather. Accurate prediction of spatiotemporal series remains challenging due to complex spatiotemporal heterogeneity. In particular, current end-to-end models are limited by input length and thus often fall into the spatiotemporal mirage, i.e., similar input time series followed by dissimilar future values, and vice versa. To address these problems, we propose Spatial-Temporal-Decoupled Masked Pre-training (STD-MAE), a novel self-supervised pre-training framework that employs two decoupled masked autoencoders to reconstruct spatiotemporal series along the spatial and temporal dimensions. Rich-context representations learned through such reconstruction can be seamlessly integrated into downstream predictors of arbitrary architecture to improve their performance. A series of quantitative and qualitative evaluations on six widely used benchmarks (PEMS03, PEMS04, PEMS07, PEMS08, METR-LA, and PEMS-BAY) validates the state-of-the-art performance of STD-MAE. Code is available at https://github.com/Jimmy-7664/STD-MAE.
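The core pre-training idea described above can be illustrated with a toy sketch. This is not the paper's implementation; the array shapes, masking ratio, and zero-fill masking below are illustrative assumptions, and a real STD-MAE uses learned encoders/decoders rather than the placeholder reconstruction shown here. The key point is the decoupling: one autoencoder masks along the temporal axis, the other along the spatial (sensor) axis, and the loss is computed only on masked positions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy spatiotemporal series: N sensors x T time steps (shapes are illustrative)
N, T = 8, 24
x = rng.normal(size=(N, T))

def temporal_mask(x, ratio=0.75, rng=rng):
    """Hide a random subset of time steps (MAE-style masking applied
    along the temporal axis). Returns the masked copy and the boolean
    mask (True = hidden from the encoder)."""
    mask = rng.random(x.shape[1]) < ratio   # one decision per time step
    x_masked = x.copy()
    x_masked[:, mask] = 0.0                 # zero-fill is a stand-in for mask tokens
    return x_masked, mask

def spatial_mask(x, ratio=0.75, rng=rng):
    """Hide a random subset of sensors (nodes) instead of time steps."""
    mask = rng.random(x.shape[0]) < ratio   # one decision per node
    x_masked = x.copy()
    x_masked[mask, :] = 0.0
    return x_masked, mask

xt, mt = temporal_mask(x)
xs, ms = spatial_mask(x)

# A real model would encode the visible entries and reconstruct the hidden
# ones; pre-training loss is measured only on masked positions, e.g.:
recon = np.zeros_like(x)  # placeholder for a decoder's output
loss_t = np.mean((recon[:, mt] - x[:, mt]) ** 2)
print(xt.shape, xs.shape, float(loss_t))
```

After pre-training, the two encoders' hidden representations (not shown here) would be concatenated to the input of any downstream predictor, which is what makes the framework architecture-agnostic.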

Benchmark Results

| Dataset   | Model   | Metric        | Claimed | Verified | Status     |
|-----------|---------|---------------|---------|----------|------------|
| EXPY-TKY  | STD-MAE | 1 step MAE    | 5.73    | —        | Unverified |
| METR-LA   | STD-MAE | MAE @ 12 step | 3.4     | —        | Unverified |
| PeMS04    | STD-MAE | 12 Steps MAE  | 17.8    | —        | Unverified |
| PeMS07    | STD-MAE | MAE @ 1h      | 18.31   | —        | Unverified |
| PEMS-BAY  | STD-MAE | MAE @ 12 step | 1.77    | —        | Unverified |
| PeMSD3    | STD-MAE | 12 steps MAE  | 13.8    | —        | Unverified |
| PeMSD4    | STD-MAE | 12 steps MAE  | 17.8    | —        | Unverified |
| PeMSD7    | STD-MAE | 12 steps MAE  | 18.31   | —        | Unverified |
| PeMSD7(L) | STD-MAE | 12 steps MAE  | 2.64    | —        | Unverified |
| PeMSD7(M) | STD-MAE | 12 steps MAE  | 2.52    | —        | Unverified |
| PeMSD8    | STD-MAE | 12 steps MAE  | 13.44   | —        | Unverified |