SOTAVerified

Summarizing Videos with Attention

2018-12-05 · Code Available

Jiri Fajtl, Hajar Sadeghi Sokeh, Vasileios Argyriou, Dorothy Monekosso, Paolo Remagnino


Abstract

In this work we propose a novel method for supervised, keyshot-based video summarization by applying a conceptually simple and computationally efficient soft self-attention mechanism. Current state-of-the-art methods leverage bi-directional recurrent networks such as BiLSTM combined with attention. These networks are complex to implement and computationally demanding compared to fully connected networks. To that end, we propose a simple self-attention-based network for video summarization which performs the entire sequence-to-sequence transformation in a single feed-forward pass and a single backward pass during training. Our method sets new state-of-the-art results on two benchmarks commonly used in this domain, TvSum and SumMe.
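The core idea in the abstract is scoring every frame with one soft self-attention pass over the whole sequence, with no recurrence. The following is a minimal NumPy sketch of that idea, assuming per-frame CNN features as input; the weight matrices here are random placeholders, not the paper's trained VASNet parameters, and the regression head is a simple sigmoid projection.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention_scores(features, rng=None):
    """Score each frame with a single soft self-attention pass.

    features: (T, D) array of per-frame features.
    Returns a (T,) array of importance scores in (0, 1).
    All weights are random placeholders (hypothetical), for illustration only.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    T, D = features.shape
    W_q = rng.standard_normal((D, D)) / np.sqrt(D)   # query projection
    W_k = rng.standard_normal((D, D)) / np.sqrt(D)   # key projection
    W_out = rng.standard_normal((D, 1)) / np.sqrt(D) # scoring head

    Q = features @ W_q                               # (T, D) queries
    K = features @ W_k                               # (T, D) keys
    attn = softmax(Q @ K.T / np.sqrt(D), axis=-1)    # (T, T) attention weights
    context = attn @ features                        # (T, D) attended features
    scores = 1.0 / (1.0 + np.exp(-(context @ W_out)))  # sigmoid frame scores
    return scores.ravel()

frames = np.random.default_rng(1).standard_normal((8, 16))
scores = self_attention_scores(frames)  # one feed-forward pass, no recurrence
```

Because every frame attends to every other frame in one matrix product, the whole sequence-to-sequence transformation happens in a single forward pass, in contrast to the step-by-step unrolling a BiLSTM requires.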

Benchmark Results

Dataset | Model  | Metric               | Claimed | Verified | Status
SumMe   | VASNet | F1-score (Canonical) | 49.71   | —        | Unverified
TvSum   | VASNet | F1-score (Canonical) | 61.42   | —        | Unverified
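The F1-score in the table compares a predicted keyshot summary against a ground-truth one as the harmonic mean of per-frame precision and recall, which is the standard protocol on these benchmarks. A small sketch with illustrative binary frame masks (the example values are hypothetical, not taken from the datasets):

```python
def keyshot_f1(pred, gt):
    """F1 overlap between two binary per-frame summary masks."""
    overlap = sum(p and g for p, g in zip(pred, gt))  # frames selected by both
    if overlap == 0:
        return 0.0
    precision = overlap / sum(pred)  # fraction of predicted frames that match
    recall = overlap / sum(gt)       # fraction of ground-truth frames found
    return 2 * precision * recall / (precision + recall)

pred = [1, 1, 0, 0, 1, 0]  # hypothetical predicted summary mask
gt   = [1, 0, 0, 1, 1, 0]  # hypothetical ground-truth mask
keyshot_f1(pred, gt)  # precision = recall = 2/3, so F1 = 2/3
```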

Reproductions