
ConvNet Architecture Search for Spatiotemporal Feature Learning

2017-08-16 · Code Available

Du Tran, Jamie Ray, Zheng Shou, Shih-Fu Chang, Manohar Paluri



Abstract

Learning image representations with ConvNets by pre-training on ImageNet has proven useful across many visual understanding tasks, including object detection, semantic segmentation, and image captioning. Although any image representation can be applied to video frames, a dedicated spatiotemporal representation is still vital in order to incorporate motion patterns that cannot be captured by appearance-based models alone. This paper presents an empirical ConvNet architecture search for spatiotemporal feature learning, culminating in a deep 3-dimensional (3D) Residual ConvNet. Our proposed architecture outperforms C3D by a good margin on Sports-1M, UCF101, HMDB51, THUMOS14, and ASLAN, while being 2 times faster at inference time, 2 times smaller in model size, and having a more compact representation.
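The core building block of a 3D Residual ConvNet is a residual unit whose convolutions span time as well as space. The sketch below is illustrative only, assuming PyTorch; it shows a basic 3x3x3 residual block with an identity shortcut, not the paper's exact Res3D configuration.

```python
# Hypothetical sketch of a 3D residual block: two 3x3x3 convolutions over
# (time, height, width) with an identity shortcut, as commonly used in 3D
# ResNets. This is NOT the paper's exact architecture.
import torch
import torch.nn as nn


class Residual3DBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv3d(channels, channels, kernel_size=3,
                               padding=1, bias=False)
        self.bn1 = nn.BatchNorm3d(channels)
        self.conv2 = nn.Conv3d(channels, channels, kernel_size=3,
                               padding=1, bias=False)
        self.bn2 = nn.BatchNorm3d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        # Identity shortcut: input and output shapes match, so they add.
        return self.relu(out + x)


# A video clip tensor: (batch, channels, frames, height, width)
clip = torch.randn(1, 64, 8, 56, 56)
out = Residual3DBlock(64)(clip)
```

Because padding matches the kernel and the stride is 1, the block preserves the clip's shape, which is what allows the parameter-free identity shortcut.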

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| HMDB-51 | Res3D | Average accuracy of 3 splits | 54.9 | | Unverified |
| UCF101 | Res3D | 3-fold Accuracy | 85.8 | | Unverified |

Reproductions