SOTAVerified

MSPred: Video Prediction at Multiple Spatio-Temporal Scales with Hierarchical Recurrent Networks

2022-03-17 · Code Available

Angel Villar-Corrales, Ani Karapetyan, Andreas Boltres, Sven Behnke


Abstract

Autonomous systems not only need to understand their current environment, but should also be able to predict future actions conditioned on past states, for instance based on captured camera frames. However, existing models mainly focus on forecasting future video frames over short time horizons, and are hence of limited use for long-term action planning. We propose Multi-Scale Hierarchical Prediction (MSPred), a novel video prediction model able to simultaneously forecast possible future outcomes at different levels of granularity and different spatio-temporal scales. By combining spatial and temporal downsampling, MSPred efficiently predicts abstract representations, such as human poses or object locations, over long time horizons, while maintaining competitive performance for video frame prediction. In our experiments, we demonstrate that MSPred accurately predicts future video frames as well as high-level representations (e.g., keypoints or semantics) on bin-picking and action-recognition datasets, while consistently outperforming popular approaches for future frame prediction. Furthermore, we ablate different modules and design choices in MSPred, experimentally validating that combining features of different spatial and temporal granularity leads to superior performance. Code and models to reproduce our experiments can be found at https://github.com/AIS-Bonn/MSPred.
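The abstract's core idea is a hierarchy of recurrent levels in which coarser levels operate on spatially downsampled features and update their state less frequently (temporal downsampling). A minimal toy sketch of that update schedule is shown below; the three levels, their update periods, the tanh-RNN cell, and the average-pooling stand-in are all illustrative assumptions, not the paper's actual ConvLSTM architecture.

```python
import numpy as np

# Toy 3-level hierarchical recurrent predictor in the spirit of MSPred.
# Each level k keeps its own hidden state, sees spatially downsampled
# features, and updates only every PERIODS[k] frames.
rng = np.random.default_rng(0)

LEVELS = 3
PERIODS = [1, 2, 4]   # level k updates every PERIODS[k] time steps
DIMS = [16, 8, 4]     # feature size per level (coarser = smaller)

# Random toy weights per level: input->hidden and hidden->hidden.
W_in = [rng.standard_normal((DIMS[k], DIMS[k])) * 0.1 for k in range(LEVELS)]
W_h = [rng.standard_normal((DIMS[k], DIMS[k])) * 0.1 for k in range(LEVELS)]

def downsample(x, dim):
    """Crude stand-in for spatial pooling: average-pool the vector to `dim`."""
    return x.reshape(dim, -1).mean(axis=1)

def step(states, frame_feat, t):
    """Advance the whole hierarchy by one time step t."""
    new_states = []
    for k in range(LEVELS):
        h = states[k]
        if t % PERIODS[k] == 0:                    # temporal downsampling
            x = downsample(frame_feat, DIMS[k])    # spatial downsampling
            h = np.tanh(W_in[k] @ x + W_h[k] @ h)  # simple RNN cell update
        new_states.append(h)
    return new_states

# Run over a short sequence of toy frame features and count updates per level.
states = [np.zeros(DIMS[k]) for k in range(LEVELS)]
updates = [0] * LEVELS
for t in range(8):
    prev = states
    states = step(states, rng.standard_normal(16), t)
    for k in range(LEVELS):
        if not np.array_equal(prev[k], states[k]):
            updates[k] += 1

print(updates)  # coarser levels update less often: [8, 4, 2]
```

The point of the schedule is that a level with period 4 sees only every fourth frame, so its recurrence effectively spans a 4x longer horizon per update, which is how coarse, abstract predictions (poses, locations) can reach further into the future at the same recurrent depth.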

Benchmark Results

Dataset        Model   Metric  Claimed  Verified  Status
KTH            MSPred  SSIM    0.95     —         Unverified
Moving MNIST   MSPred  MSE     34.44    —         Unverified
SynpickVP      MSPred  LPIPS   0.03     —         Unverified

Reproductions