SOTAVerified

More Is Less: Learning Efficient Video Representations by Big-Little Network and Depthwise Temporal Aggregation

2019-12-02 · NeurIPS 2019 · Code Available

Quanfu Fan, Chun-Fu Chen, Hilde Kuehne, Marco Pistoia, David Cox

Code Available — Be the first to reproduce this paper.


Abstract

Current state-of-the-art models for video action recognition are mostly based on expensive 3D ConvNets. This results in a need for large GPU clusters to train and evaluate such architectures. To address this problem, we present a lightweight and memory-friendly architecture for action recognition that performs on par with or better than current architectures while using only a fraction of the resources. The proposed architecture is based on a combination of a deep subnet operating on low-resolution frames with a compact subnet operating on high-resolution frames, allowing for high efficiency and accuracy at the same time. We demonstrate that our approach achieves a reduction by 3~4 times in FLOPs and ~2 times in memory usage compared to the baseline. This enables training deeper models with more input frames under the same computational budget. To further obviate the need for large-scale 3D convolutions, a temporal aggregation module is proposed to model temporal dependencies in a video at very small additional computational costs. Our models achieve strong performance on several action recognition benchmarks including Kinetics, Something-Something and Moments-in-time. The code and models are available at https://github.com/IBM/bLVNet-TAM.
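The abstract describes two ideas: a big-little two-branch design that spends most of the 2D computation on low-resolution frames, and a depthwise temporal aggregation module that mixes information across frames with per-channel 1-D convolutions instead of full 3D convolutions. The sketch below illustrates both in PyTorch; the module names, channel widths, and fusion-by-summation strategy are illustrative assumptions rather than the authors' exact design, for which see the linked repository.

```python
# Minimal PyTorch sketch of the two ideas from the abstract. Names
# (BigLittleBlock, TemporalAggregation), channel sizes, and the fusion strategy
# are illustrative assumptions, not the official bLVNet-TAM implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TemporalAggregation(nn.Module):
    """Depthwise 1-D convolution over the frame axis (one filter per channel),
    adding temporal context at a small cost compared to 3D convolutions."""

    def __init__(self, channels: int, n_frames: int, kernel_size: int = 3):
        super().__init__()
        self.n_frames = n_frames
        self.conv = nn.Conv1d(
            channels, channels, kernel_size,
            padding=kernel_size // 2, groups=channels, bias=False,
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch * n_frames, channels, height, width), frames stacked in the batch dim
        nt, c, h, w = x.shape
        n = nt // self.n_frames
        # Rearrange so the temporal axis becomes the 1-D convolution dimension.
        x = x.view(n, self.n_frames, c, h * w).permute(0, 3, 2, 1)   # (n, h*w, c, t)
        x = x.reshape(n * h * w, c, self.n_frames)
        x = self.conv(x)
        x = x.reshape(n, h * w, c, self.n_frames).permute(0, 3, 2, 1)
        return x.reshape(nt, c, h, w)


class BigLittleBlock(nn.Module):
    """Deep ('big') branch on downsampled frames plus compact ('little') branch
    on full-resolution frames, fused by upsampling the big branch and summing."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.big = nn.Sequential(      # deeper branch, low-resolution input
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.little = nn.Sequential(   # compact branch, high-resolution input
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        big = self.big(F.avg_pool2d(x, 2))                 # heavy compute at half resolution
        big = F.interpolate(big, size=x.shape[-2:], mode="bilinear", align_corners=False)
        return big + self.little(x)                        # cheap fusion of the two pathways


if __name__ == "__main__":
    frames = torch.randn(2 * 8, 3, 112, 112)               # 2 clips of 8 frames each
    feats = BigLittleBlock(3, 16)(frames)
    feats = TemporalAggregation(16, n_frames=8)(feats)
    print(feats.shape)                                      # torch.Size([16, 16, 112, 112])
```

Because the temporal convolution is depthwise, its parameter and FLOP cost grows only with the number of channels and the kernel size, which is consistent with the abstract's claim that temporal modelling adds only a very small computational overhead.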

Tasks

Benchmark Results

Dataset | Model | Metric | Claimed | Verified | Status
Something-Something V2 | bLVNet | Top-1 Accuracy | 65.2 | — | Unverified

Reproductions