
Relational Reasoning Over Spatial-Temporal Graphs for Video Summarization

2022-04-06 · IEEE Transactions on Image Processing, 2022

Wencheng Zhu, Yucheng Han, Jiwen Lu, Jie Zhou


Abstract

In this paper, we propose a dynamic graph modeling approach to learn spatial-temporal representations for video summarization. Most existing video summarization methods extract image-level features with ImageNet pre-trained deep models. In contrast, our method exploits object-level and relation-level information to capture spatial-temporal dependencies. Specifically, our method builds spatial graphs on the detected object proposals. Then, we construct a temporal graph from the aggregated representations of the spatial graphs. Afterward, we perform relational reasoning over the spatial and temporal graphs with graph convolutional networks and extract spatial-temporal representations for importance score prediction and key-shot selection. To eliminate relation clutter caused by densely connected nodes, we further design a self-attention edge pooling module that discards meaningless relations in the graphs. We conduct extensive experiments on two popular benchmarks, the SumMe and TVSum datasets. Experimental results demonstrate that the proposed method achieves superior performance against state-of-the-art video summarization methods.
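The pipeline described in the abstract (per-frame spatial graphs over object proposals → edge pooling → spatial GCN → aggregation into temporal-graph nodes → temporal GCN → per-frame importance scores) can be sketched roughly as below. This is a minimal numpy illustration, not the authors' implementation: the function names, the chain-shaped temporal adjacency, and the dot-product edge-scoring rule are all assumptions made for the sketch.

```python
import numpy as np

def normalize_adj(A):
    """Symmetric GCN normalization with self-loops: D^-1/2 (A + I) D^-1/2."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return d_inv_sqrt @ A_hat @ d_inv_sqrt

def gcn_layer(A_norm, H, W):
    """One graph-convolution layer: ReLU(A_norm @ H @ W)."""
    return np.maximum(A_norm @ H @ W, 0.0)

def edge_pool(H, A, keep_ratio=0.5):
    """Toy stand-in for self-attention edge pooling: score each existing edge
    by dot-product similarity of its endpoint features, keep the strongest
    keep_ratio fraction, and drop the rest (pruning cluttered relations)."""
    edges = np.argwhere(A > 0)
    if len(edges) == 0:
        return A
    scores = np.array([H[i] @ H[j] for i, j in edges])
    k = max(1, int(len(edges) * keep_ratio))
    keep = edges[np.argsort(scores)[-k:]]
    A_new = np.zeros_like(A)
    for i, j in keep:
        A_new[i, j] = 1.0
    return A_new

rng = np.random.default_rng(0)
T, N, D = 4, 5, 8                        # frames, proposals per frame, feature dim
W_s = rng.normal(size=(D, D)) * 0.1      # spatial GCN weights (random for the sketch)
W_t = rng.normal(size=(D, D)) * 0.1      # temporal GCN weights
w_out = rng.normal(size=(D,)) * 0.1      # importance-score head

frame_feats = []
for _ in range(T):
    H = rng.normal(size=(N, D))                       # object-proposal features
    A = (rng.random((N, N)) > 0.5).astype(float)      # densely connected spatial graph
    np.fill_diagonal(A, 0.0)
    A = edge_pool(H, A)                               # prune meaningless relations
    H = gcn_layer(normalize_adj(A), H, W_s)           # spatial relational reasoning
    frame_feats.append(H.mean(axis=0))                # aggregate graph -> frame node

F = np.stack(frame_feats)                             # temporal-graph node features
A_t = np.eye(T, k=1) + np.eye(T, k=-1)                # assumed chain adjacency over frames
F = gcn_layer(normalize_adj(A_t), F, W_t)             # temporal relational reasoning
scores = 1.0 / (1.0 + np.exp(-(F @ w_out)))           # per-frame importance in (0, 1)
print(scores.shape)
```

In a real system the importance scores would then drive key-shot selection (e.g. picking the highest-scoring shots under a summary-length budget), and the weight matrices would be learned rather than random.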
