
Integrate the temporal scheme for unsupervised video summarization via attention mechanism

2025-02-26 · IEEE Access 2025 · Code Available

Bang Q. Vo, Viet H. Vo


Abstract

In this work, we present SegSum, a novel unsupervised scheme for video summarization through the creation of video skims. Most contemporary methods train a summarizer to assign importance scores to individual video frames, which are then aggregated to score the video segments produced by methods such as Kernel Temporal Segmentation (KTS). This methodology, however, denies the summarizer information vital to generating the summary, namely the spatial-temporal relationships within video segments. Our proposed method instead incorporates the segment information obtained from KTS into the learning process of a summarizer built on a concentrated attention architecture. In our experiments, we extensively evaluated the method across several datasets and multiple architectural frameworks for unsupervised video summarization. By incorporating a concentrated attention module, we secured top F1-scores on established benchmarks, recording 54% on the SumMe dataset and 62% on the TVSum dataset. Furthermore, even with a straightforward Regressor network, SegSum remains competitive, producing summaries that closely align with human annotations.
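The abstract describes the conventional pipeline that SegSum builds on: per-frame importance scores are averaged within each KTS segment, and segments are then selected under a length budget (typically via 0/1 knapsack, the standard protocol in SumMe/TVSum evaluation). The sketch below illustrates that aggregation and selection step only; it is not the paper's code, and the function names, the toy scores, and the 50% budget are illustrative assumptions.

```python
import numpy as np

def segment_scores(frame_scores, change_points):
    """Average per-frame importance scores within each KTS segment.

    change_points: list of (start, end) frame indices, end exclusive.
    """
    return np.array([frame_scores[s:e].mean() for s, e in change_points])

def knapsack_select(seg_scores, seg_lengths, budget):
    """0/1 knapsack: pick segments maximizing total score under a frame budget."""
    n = len(seg_scores)
    dp = np.zeros((n + 1, budget + 1))
    for i in range(1, n + 1):
        w, v = seg_lengths[i - 1], seg_scores[i - 1]
        for c in range(budget + 1):
            dp[i, c] = dp[i - 1, c]
            if w <= c and dp[i - 1, c - w] + v > dp[i, c]:
                dp[i, c] = dp[i - 1, c - w] + v
    # Backtrack to recover which segments were chosen.
    selected, c = [], budget
    for i in range(n, 0, -1):
        if dp[i, c] != dp[i - 1, c]:
            selected.append(i - 1)
            c -= seg_lengths[i - 1]
    return sorted(selected)

# Toy example: 6 frames split by KTS into 3 segments of 2 frames each.
scores = np.array([0.1, 0.9, 0.8, 0.6, 0.3, 0.2])
cps = [(0, 2), (2, 4), (4, 6)]
seg_s = segment_scores(scores, cps)          # [0.5, 0.7, 0.25]
lens = [e - s for s, e in cps]
picked = knapsack_select(seg_s, lens, budget=int(0.5 * len(scores)))  # → [1]
```

SegSum's contribution, per the abstract, is to move the KTS segment boundaries from this post-hoc scoring stage into the summarizer's training itself, so the attention module sees segment structure while learning.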
