SOTAVerified

Learning Self-Similarity in Space and Time as Generalized Motion for Video Action Recognition

2021-02-14 · ICCV 2021 · Code Available

Heeseung Kwon, Manjin Kim, Suha Kwak, Minsu Cho


Abstract

Spatio-temporal convolution often fails to learn motion dynamics in videos, so an effective motion representation is required for video understanding in the wild. In this paper, we propose a rich and robust motion representation based on spatio-temporal self-similarity (STSS). Given a sequence of frames, STSS represents each local region as similarities to its neighbors in space and time. By converting appearance features into relational values, it enables the learner to better recognize structural patterns in space and time. We leverage the whole volume of STSS and let our model learn to extract an effective motion representation from it. The proposed neural block, dubbed SELFY, can be easily inserted into neural architectures and trained end-to-end without additional supervision. With a sufficient volume of the neighborhood in space and time, it effectively captures long-term interaction and fast motion in the video, leading to robust action recognition. Our experimental analysis demonstrates its superiority over previous methods for motion modeling as well as its complementarity to spatio-temporal features from direct convolution. On the standard action recognition benchmarks, Something-Something-V1 & V2, Diving-48, and FineGym, the proposed method achieves state-of-the-art results.
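The core idea, representing each local region by its similarities to spatio-temporal neighbors, can be sketched directly. The snippet below is a minimal illustrative implementation, not the authors' SELFY block: it assumes a `(T, H, W, C)` feature tensor and computes cosine similarities to a `(2·temporal_radius+1) × (2·spatial_radius+1)²` neighborhood, zero-padding at the boundaries.

```python
import numpy as np

def stss(features, spatial_radius=1, temporal_radius=1):
    """Spatio-temporal self-similarity (STSS) sketch.

    features: (T, H, W, C) array of per-location appearance features.
    Returns a (T, H, W, L, U, V) tensor of cosine similarities between
    each location and its neighbors, where L/U/V are the temporal and
    spatial neighborhood sizes. Out-of-range neighbors score zero.
    """
    T, H, W, C = features.shape
    # L2-normalize so dot products become cosine similarities.
    norm = np.linalg.norm(features, axis=-1, keepdims=True)
    f = features / np.maximum(norm, 1e-8)
    L = 2 * temporal_radius + 1
    U = V = 2 * spatial_radius + 1
    out = np.zeros((T, H, W, L, U, V), dtype=f.dtype)
    # Zero-pad so shifted views stay aligned with the original grid.
    pad = np.pad(f, ((temporal_radius,) * 2, (spatial_radius,) * 2,
                     (spatial_radius,) * 2, (0, 0)))
    for l in range(L):
        for u in range(U):
            for v in range(V):
                shifted = pad[l:l + T, u:u + H, v:v + W]
                out[..., l, u, v] = np.sum(f * shifted, axis=-1)
    return out
```

The resulting relational tensor replaces raw appearance values, which is what lets a downstream learner pick up structural motion patterns; SELFY learns to extract motion features from this volume end-to-end.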

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| Something-Something V1 | SELFYNet-TSM-R50En (8+16 frames, ImageNet pretrained, 2 clips) | Top-1 Accuracy | 56.6 | — | Unverified |
| Something-Something V1 | SELFYNet-TSM-R50En (8+16 frames, ImageNet pretrained, single clip) | Top-1 Accuracy | 55.8 | — | Unverified |
| Something-Something V1 | SELFYNet-TSM-R50 (16 frames, ImageNet pretrained) | Top-1 Accuracy | 54.3 | — | Unverified |
| Something-Something V2 | SELFYNet-TSM-R50En (8+16 frames, ImageNet pretrained, 2 clips) | Top-1 Accuracy | 67.7 | — | Unverified |
| Something-Something V2 | SELFYNet-TSM-R50En (8+16 frames, ImageNet pretrained, single clip) | Top-1 Accuracy | 67.4 | — | Unverified |
| Something-Something V2 | SELFYNet-TSM-R50 (16 frames, ImageNet pretrained) | Top-1 Accuracy | 65.7 | — | Unverified |

Reproductions