SOTAVerified

Unsupervised Video Summarization

Unsupervised video summarization approaches remove the need for ground-truth data (whose production requires time-consuming and laborious manual annotation), relying instead on learning mechanisms that need only a sufficiently large collection of original videos for training. Specifically, training is guided by heuristic criteria, such as the sparsity, representativeness, and diversity of the utilized input features/characteristics.
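As a rough illustration of the heuristic criteria mentioned above, the sketch below scores a candidate set of selected frames by its diversity (mean pairwise dissimilarity among selected frame features) and its representativeness (how well the selection covers all frames). The function names, the use of cosine similarity, and the specific formulas are illustrative assumptions, not taken from any particular paper listed here.

```python
import numpy as np

def diversity_score(features, selected):
    """Mean pairwise cosine dissimilarity among the selected frames.

    features: (num_frames, dim) array of frame descriptors.
    selected: indices of frames chosen for the summary.
    Higher values mean the selected frames are less redundant.
    (Illustrative criterion; formula is an assumption.)
    """
    sel = features[np.asarray(selected)]
    sel = sel / np.linalg.norm(sel, axis=1, keepdims=True)
    n = len(selected)
    if n < 2:
        return 0.0
    sim = sel @ sel.T  # pairwise cosine similarities
    # diagonal of (1 - sim) is zero, so summing covers only off-diagonal pairs
    return float(np.sum(1.0 - sim) / (n * (n - 1)))

def representativeness_score(features, selected):
    """Negative mean distance from each frame to its nearest selected frame.

    Higher (closer to zero) means the selection covers the video better.
    (Illustrative criterion; formula is an assumption.)
    """
    sel = features[np.asarray(selected)]
    dists = np.linalg.norm(features[:, None, :] - sel[None, :, :], axis=2)
    return float(-dists.min(axis=1).mean())
```

A training objective in this spirit would reward frame selections that raise both scores, without ever consulting human-annotated summaries.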

Papers

Showing 11–20 of 31 papers

Title | Status | Hype
Masked Autoencoder for Unsupervised Video Summarization | — | 0
Learning to Summarize Videos by Contrasting Clips | — | 0
Contrastive Losses Are Natural Criteria for Unsupervised Video Summarization | Code | 1
Summarizing Videos using Concentrated Attention and Considering the Uniqueness and Diversity of the Video Frames | Code | 1
ERA: Entity Relationship Aware Video Summarization with Wasserstein GAN | Code | 0
Self-Attention Recurrent Summarization Network with Reinforcement Learning for Video Summarization Task | Code | 1
Unsupervised Video Summarization via Multi-source Features | Code | 1
Unsupervised Video Summarization with a Convolutional Attentive Adversarial Network | — | 0
AC-SUM-GAN: Connecting Actor-Critic and Generative Adversarial Networks for Unsupervised Video Summarization | Code | 1
Global-and-Local Relative Position Embedding for Unsupervised Video Summarization | — | 0
Page 2 of 4

No leaderboard results yet.