SOTAVerified

Learn to cycle: Time-consistent feature discovery for action recognition

2020-06-15 · Code Available

Alexandros Stergiou, Ronald Poppe


Abstract

Generalizing over temporal variations is a prerequisite for effective action recognition in videos. Despite significant advances in deep neural networks, it remains a challenge to focus on short-term discriminative motions in relation to the overall performance of an action. We address this challenge by allowing some flexibility in discovering relevant spatio-temporal features. We introduce Squeeze and Recursion Temporal Gates (SRTG), an approach that favors inputs with similar activations with potential temporal variations. We implement this idea with a novel CNN block that uses an LSTM to encapsulate feature dynamics, in conjunction with a temporal gate that is responsible for evaluating the consistency of the discovered dynamics and the modeled features. We show consistent improvement when using SRTG blocks, with only a minimal increase in the number of GFLOPs. On Kinetics-700, we perform on par with current state-of-the-art models, and outperform these on HACS, Moments in Time, UCF-101 and HMDB-51.
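The abstract describes the SRTG block as a squeeze step (pooling spatial features), a recurrent model of the pooled dynamics, and a temporal gate that checks whether the modeled dynamics are consistent with the features before exciting them. The following is a minimal NumPy sketch of that idea, not the authors' implementation: the LSTM is replaced by a simple exponential moving average, and the gate is a cosine-similarity threshold, both of which are assumptions made for illustration.

```python
import numpy as np

def srtg_block_sketch(x, threshold=0.5):
    """Hedged sketch of a Squeeze-and-Recursion Temporal Gate.

    x: feature map of shape (T, C, H, W).
    The real SRTG uses an LSTM to encapsulate feature dynamics;
    here an exponential moving average stands in for it.
    """
    T, C, H, W = x.shape

    # Squeeze: global average pool over the spatial dimensions -> (T, C)
    pooled = x.mean(axis=(2, 3))

    # Recursion stand-in: EMA over time, approximating recurrent dynamics
    dynamics = np.zeros_like(pooled)
    state = np.zeros(C)
    for t in range(T):
        state = 0.5 * state + 0.5 * pooled[t]
        dynamics[t] = state

    # Temporal gate: per-frame cosine similarity between the modeled
    # dynamics and the pooled features, averaged over time
    num = (dynamics * pooled).sum(axis=1)
    den = (np.linalg.norm(dynamics, axis=1)
           * np.linalg.norm(pooled, axis=1) + 1e-8)
    gate_open = (num / den).mean() > threshold

    if gate_open:
        # Consistent dynamics: excite channels with sigmoid-squashed dynamics
        scale = 1.0 / (1.0 + np.exp(-dynamics))          # (T, C)
        return x * scale[:, :, None, None]
    return x  # inconsistent dynamics: pass features through unchanged
```

The gate is the key design point the abstract emphasizes: when the discovered dynamics disagree with the modeled features, the block falls back to the identity rather than injecting noisy excitation.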

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| HACS | SRTG r(2+1)d-101 | Top 1 Accuracy | 84.33 | | Unverified |
| HACS | SRTG r(2+1)d-50 | Top 1 Accuracy | 83.77 | | Unverified |
| HACS | SRTG r3d-101 | Top 1 Accuracy | 81.66 | | Unverified |
| HACS | SRTG r(2+1)d-34 | Top 1 Accuracy | 80.39 | | Unverified |
| HACS | SRTG r3d-50 | Top 1 Accuracy | 80.36 | | Unverified |
| HACS | SRTG r3d-34 | Top 1 Accuracy | 78.6 | | Unverified |

Reproductions