
Video Object Detection with an Aligned Spatial-Temporal Memory

2017-12-18 · ECCV 2018 · Code Available

Fanyi Xiao, Yong Jae Lee


Abstract

We introduce Spatial-Temporal Memory Networks for video object detection. At its core, a novel Spatial-Temporal Memory module (STMM) serves as the recurrent computation unit to model long-term temporal appearance and motion dynamics. The STMM's design enables full integration of pretrained backbone CNN weights, which we find to be critical for accurate detection. Furthermore, in order to tackle object motion in videos, we propose a novel MatchTrans module to align the spatial-temporal memory from frame to frame. Our method produces state-of-the-art results on the benchmark ImageNet VID dataset, and our ablative studies clearly demonstrate the contribution of our different design choices. We release our code and models at http://fanyix.cs.ucdavis.edu/project/stmn/project.html.
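The abstract describes two components: a recurrent STMM that carries a spatial-temporal memory across frames, and a MatchTrans module that warps that memory to compensate for object motion before each update. The sketch below is a minimal, hypothetical NumPy illustration of this two-step pattern, not the paper's implementation: alignment is shown as a correlation-weighted average over a small spatial neighborhood, and the recurrent update as a ConvGRU-style gated blend (with 1x1 "convolutions" written as per-pixel matrix products). The function names, the softmax weighting, and the gating form are all assumptions for illustration.

```python
import numpy as np

def match_trans_align(mem, feat_prev, feat_cur, k=1):
    """Hypothetical MatchTrans-style alignment sketch.

    For each location (y, x), re-sample the memory as a weighted average
    over a (2k+1)x(2k+1) neighborhood in the previous frame, with weights
    from the feature correlation between the current and previous frame.
    """
    H, W, C = feat_cur.shape
    aligned = np.zeros_like(mem)
    for y in range(H):
        for x in range(W):
            scores, cells = [], []
            for yy in range(max(0, y - k), min(H, y + k + 1)):
                for xx in range(max(0, x - k), min(W, x + k + 1)):
                    # correlation between current query and previous key
                    scores.append(feat_cur[y, x] @ feat_prev[yy, xx])
                    cells.append(mem[yy, xx])
            w = np.exp(np.array(scores) - np.max(scores))  # softmax weights
            w /= w.sum()
            aligned[y, x] = (w[:, None] * np.array(cells)).sum(axis=0)
    return aligned

def stmm_update(mem_aligned, feat_cur, Wz, Uz, Wm, Um):
    """Gated recurrent memory update (ConvGRU-like sketch, an assumption).

    feat_cur drives a candidate memory; an update gate z decides how much
    of the aligned old memory to overwrite at each spatial location.
    """
    z = 1.0 / (1.0 + np.exp(-(feat_cur @ Wz + mem_aligned @ Uz)))  # update gate
    cand = np.maximum(0.0, feat_cur @ Wm + mem_aligned @ Um)       # ReLU candidate
    return (1.0 - z) * mem_aligned + z * cand
```

Per frame, the memory would first be aligned to the current frame with `match_trans_align`, then updated with `stmm_update` using the current frame's backbone features; the updated memory feeds the detection head.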
