
STAEformer: Spatio-Temporal Adaptive Embedding Makes Vanilla Transformer SOTA for Traffic Forecasting

2023-08-21

Hangchen Liu, Zheng Dong, Renhe Jiang, Jiewen Deng, Jinliang Deng, Quanjun Chen, Xuan Song


Abstract

With the rapid development of the Intelligent Transportation System (ITS), accurate traffic forecasting has emerged as a critical challenge. The key bottleneck lies in capturing the intricate spatio-temporal traffic patterns. In recent years, numerous neural networks with complicated architectures have been proposed to address this issue. However, the advancements in network architectures have encountered diminishing performance gains. In this study, we present a novel component called spatio-temporal adaptive embedding that can yield outstanding results with vanilla transformers. Our proposed Spatio-Temporal Adaptive Embedding transformer (STAEformer) achieves state-of-the-art performance on five real-world traffic forecasting datasets. Further experiments demonstrate that spatio-temporal adaptive embedding plays a crucial role in traffic forecasting by effectively capturing intrinsic spatio-temporal relations and chronological information in traffic time series.
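The core idea sketched in the abstract is a learnable spatio-temporal adaptive embedding: a trainable tensor indexed by input step and node that is concatenated with the feature and periodicity embeddings before the vanilla transformer layers. The following NumPy mock-up illustrates only the tensor shapes involved; the dimensions, variable names, and the use of random arrays in place of learned parameters are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

# Illustrative dimensions (assumptions, not the paper's exact config):
T, N = 12, 207               # input steps, sensor nodes (e.g. METR-LA has 207)
d_f, d_t, d_a = 24, 24, 80   # feature, periodicity, adaptive embedding widths

rng = np.random.default_rng(0)

# Projected input features: one vector per (step, node) pair
feat_emb = rng.standard_normal((T, N, d_f))

# Periodicity (e.g. time-of-day) embedding, shared across nodes
tod_emb = np.broadcast_to(rng.standard_normal((T, 1, d_t)), (T, N, d_t))

# Spatio-temporal adaptive embedding: in the model this is a single
# learnable (T, N, d_a) parameter shared across samples and trained
# end-to-end; here it is a random stand-in
adaptive_emb = rng.standard_normal((T, N, d_a))

# Concatenate along the hidden dimension to form the transformer input
h = np.concatenate([feat_emb, tod_emb, adaptive_emb], axis=-1)
print(h.shape)  # (12, 207, 128)
```

Because the adaptive embedding depends only on the step and node indices, it lets even a plain transformer pick up node-specific and chronology-specific structure without a hand-designed graph.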

Benchmark Results

| Dataset  | Model      | Metric                | Claimed | Verified | Status     |
|----------|------------|-----------------------|---------|----------|------------|
| METR-LA  | STAEformer | MAE @ 12 steps (1 h)  | 3.34    | —        | Unverified |
| PeMS04   | STAEformer | MAE @ 12 steps (1 h)  | 18.22   | —        | Unverified |
| PeMS07   | STAEformer | MAE @ 12 steps (1 h)  | 19.14   | —        | Unverified |
| PeMS08   | STAEformer | MAE @ 12 steps (1 h)  | 13.46   | —        | Unverified |
| PEMS-BAY | STAEformer | MAE @ 12 steps (1 h)  | 1.91    | —        | Unverified |
| PeMSD7   | STAEformer | MAE @ 12 steps (1 h)  | 19.14   | —        | Unverified |

Reproductions