SOTAVerified

Weakly Supervised Action Localization by Sparse Temporal Pooling Network

2017-12-14 · CVPR 2018 · Code Available

Phuc Nguyen, Ting Liu, Gautam Prasad, Bohyung Han


Abstract

We propose a weakly supervised temporal action localization algorithm for untrimmed videos using convolutional neural networks. Our algorithm learns from video-level class labels and predicts temporal intervals of human actions without requiring temporal localization annotations. We design our network to identify a sparse subset of key segments associated with target actions in a video using an attention module, and fuse the key segments through adaptive temporal pooling. Our loss function comprises two terms that minimize the video-level action classification error and enforce sparsity of the segment selection. At inference time, we extract and score temporal proposals using temporal class activations and class-agnostic attentions to estimate the time intervals that correspond to target actions. The proposed algorithm attains state-of-the-art results on the THUMOS14 dataset and outstanding performance on ActivityNet1.3 despite its weak supervision.
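The mechanism the abstract describes (class-agnostic attention over segments, attention-weighted temporal pooling, and a classification-plus-sparsity loss) can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the linear attention weights `w_att`, the classifier `w_cls`, and the sparsity coefficient `beta` are all assumptions for illustration.

```python
import numpy as np

def stpn_forward(features, w_att, w_cls, label, beta=0.1):
    """Simplified STPN-style forward pass (illustrative sketch only).

    features: (T, D) per-segment feature vectors
    w_att:    (D,)   hypothetical linear attention module (class-agnostic)
    w_cls:    (D, C) hypothetical video-level classifier
    label:    index of the ground-truth video-level class
    """
    # Class-agnostic attention in [0, 1] for each of the T segments (sigmoid).
    att = 1.0 / (1.0 + np.exp(-(features @ w_att)))      # (T,)
    # Attention-weighted temporal average pooling: fuse the key segments
    # into a single video-level representation.
    pooled = (att[:, None] * features).mean(axis=0)      # (D,)
    logits = pooled @ w_cls                              # (C,)
    # Softmax over classes for the video-level prediction.
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    # Loss = video-level classification error + L1 sparsity on the attention,
    # encouraging only a sparse subset of segments to be selected.
    cls_loss = -np.log(probs[label])
    sparsity_loss = beta * np.abs(att).mean()
    return att, probs, cls_loss + sparsity_loss

# Toy usage with random features and weights.
rng = np.random.default_rng(0)
T, D, C = 8, 4, 3
features = rng.normal(size=(T, D))
w_att = rng.normal(size=(D,))
w_cls = rng.normal(size=(D, C))
att, probs, loss = stpn_forward(features, w_att, w_cls, label=1)
```

At inference, the paper scores temporal proposals from the temporal class activations and these class-agnostic attentions; the sketch above only covers the training-time forward pass and loss.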

Tasks

Weakly Supervised Temporal Action Localization

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| ActivityNet-1.3 | STPN | mAP@0.5 | 29.3 | | Unverified |
| THUMOS 2014 | STPN | mAP@0.1:0.7 | 27 | | Unverified |

Reproductions