Learning Spatio-Temporal Representation with Pseudo-3D Residual Networks

2017-11-28 · ICCV 2017 · Code Available

Zhaofan Qiu, Ting Yao, Tao Mei

Abstract

Convolutional Neural Networks (CNN) have been regarded as a powerful class of models for image recognition problems. Nevertheless, it is not trivial when utilizing a CNN for learning spatio-temporal video representation. A few studies have shown that performing 3D convolutions is a rewarding approach to capture both spatial and temporal dimensions in videos. However, the development of a very deep 3D CNN from scratch results in expensive computational cost and memory demand. A valid question is why not recycle off-the-shelf 2D networks for a 3D CNN. In this paper, we devise multiple variants of bottleneck building blocks in a residual learning framework by simulating 3×3×3 convolutions with 1×3×3 convolutional filters on the spatial domain (equivalent to 2D CNN) plus 3×1×1 convolutions to construct temporal connections on adjacent feature maps in time. Furthermore, we propose a new architecture, named Pseudo-3D Residual Net (P3D ResNet), that exploits all the variants of blocks but composes each in a different placement of ResNet, following the philosophy that enhancing structural diversity with going deep could improve the power of neural networks. Our P3D ResNet achieves clear improvements on the Sports-1M video classification dataset against 3D CNN and frame-based 2D CNN by 5.3% and 1.8%, respectively. We further examine the generalization performance of video representation produced by our pre-trained P3D ResNet on five different benchmarks and three different tasks, demonstrating superior performances over several state-of-the-art techniques.
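The pseudo-3D decomposition described in the abstract is easy to illustrate. Below is a minimal sketch in PyTorch (not the authors' implementation) of one bottleneck block in the cascaded P3D-A style, where a 1×3×3 spatial convolution feeds into a 3×1×1 temporal convolution inside a standard residual bottleneck; the layer names and channel sizes are illustrative assumptions.

```python
# Minimal sketch of a pseudo-3D bottleneck block (cascaded P3D-A style).
# Not the authors' code; names and channel sizes are illustrative.
import torch
import torch.nn as nn


class P3DABottleneck(nn.Module):
    def __init__(self, channels: int, mid_channels: int):
        super().__init__()
        # 1x1x1 convolution to reduce channels (standard ResNet bottleneck).
        self.reduce = nn.Conv3d(channels, mid_channels, kernel_size=1, bias=False)
        self.bn1 = nn.BatchNorm3d(mid_channels)
        # S: 1x3x3 spatial convolution, equivalent to a 2D filter per frame.
        self.spatial = nn.Conv3d(mid_channels, mid_channels,
                                 kernel_size=(1, 3, 3), padding=(0, 1, 1), bias=False)
        self.bn2 = nn.BatchNorm3d(mid_channels)
        # T: 3x1x1 temporal convolution connecting adjacent feature maps in time.
        self.temporal = nn.Conv3d(mid_channels, mid_channels,
                                  kernel_size=(3, 1, 1), padding=(1, 0, 0), bias=False)
        self.bn3 = nn.BatchNorm3d(mid_channels)
        # 1x1x1 convolution to restore the channel dimension.
        self.expand = nn.Conv3d(mid_channels, channels, kernel_size=1, bias=False)
        self.bn4 = nn.BatchNorm3d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        identity = x
        out = self.relu(self.bn1(self.reduce(x)))
        out = self.relu(self.bn2(self.spatial(out)))   # spatial (2D) filtering
        out = self.relu(self.bn3(self.temporal(out)))  # temporal (1D) filtering
        out = self.bn4(self.expand(out))
        return self.relu(out + identity)               # residual connection


if __name__ == "__main__":
    # Input layout: (batch, channels, frames, height, width), e.g. a 16-frame clip.
    clip = torch.randn(1, 256, 16, 56, 56)
    block = P3DABottleneck(channels=256, mid_channels=64)
    print(block(clip).shape)  # torch.Size([1, 256, 16, 56, 56])
```

The paper's other variants rewire the same two filters rather than changing them: P3D-B runs the spatial and temporal convolutions in parallel and sums their outputs, while P3D-C adds a shortcut from the spatial output around the temporal filter; P3D ResNet then mixes the block types across the network to increase structural diversity.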

Benchmark Results

Dataset      Model                       Metric           Claimed  Verified  Status
ActivityNet  P3D                         mAP              78.9     —         Unverified
Sports-1M    P3D                         Video hit@1      66.4     —         Unverified
UCF101       P3D (ImageNet + Sports1M)   3-fold Accuracy  88.6     —         Unverified

Reproductions

No reproductions yet.