
Supervised Video Summarization

Supervised video summarization methods rely on datasets with human-labeled ground-truth annotations (either complete video summaries, as in the SumMe dataset, or frame-level importance scores, as in the TVSum dataset), from which they try to learn the underlying criterion for selecting video frames/fragments and composing a summary.

Source: Video Summarization Using Deep Neural Networks: A Survey
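The supervised pipeline described above can be sketched in two steps: fit a model that maps frame features to the human importance scores, then assemble a summary by picking fragments under a length budget (evaluation on SumMe/TVSum conventionally caps the summary at about 15% of the video length). The sketch below is a minimal, hypothetical illustration using ridge regression and a greedy knapsack-style selection; the function names, feature dimensions, and fragment layout are assumptions, not a specific paper's method.

```python
import numpy as np


def train_importance_regressor(features, scores, l2=1.0):
    """Fit a ridge regressor mapping per-frame features to human importance scores."""
    d = features.shape[1]
    A = features.T @ features + l2 * np.eye(d)
    return np.linalg.solve(A, features.T @ scores)


def select_fragments(frag_scores, frag_lengths, budget):
    """Greedy knapsack: add fragments in order of score density until the budget is hit."""
    frag_scores = np.asarray(frag_scores, dtype=float)
    frag_lengths = np.asarray(frag_lengths, dtype=int)
    order = np.argsort(-frag_scores / frag_lengths)
    chosen, used = [], 0
    for i in order:
        if used + frag_lengths[i] <= budget:
            chosen.append(int(i))
            used += frag_lengths[i]
    return sorted(chosen)


# Toy example: 200 frames, 16-d features, synthetic "human" scores.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))                             # frame features
y = X @ rng.normal(size=16) + 0.1 * rng.normal(size=200)   # importance labels
w = train_importance_regressor(X, y)
pred = X @ w                                               # predicted frame importance

# Aggregate frames into 20 equal fragments of 10 frames and select under a 15% budget.
frag_scores = pred.reshape(20, 10).mean(axis=1)
frag_lengths = np.full(20, 10)
summary = select_fragments(frag_scores, frag_lengths, budget=int(0.15 * 200))
```

In practice the regressor is replaced by a sequence model (LSTM, attention, or graph networks, as in the papers listed below), and fragment boundaries come from shot detection rather than fixed-size chunks; the budgeted selection step, however, is common across methods.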

Papers

Showing 11-20 of 28 papers

Title | Status | Hype
Relational Reasoning Over Spatial-Temporal Graphs for Video Summarization | | 0
Joint Video Summarization and Moment Localization by Cross-Task Sample Transfer | | 0
A Stacking Ensemble Approach for Supervised Video Summarization | | 0
Diverse Sequential Subset Selection for Supervised Video Summarization | | 0
FullTransNet: Full Transformer with Local-Global Attention for Video Summarization | | 0
Hierarchical Multimodal Transformer to Summarize Videos | | 0
How Good is a Video Summary? A New Benchmarking Dataset and Evaluation Framework Towards Realistic Video Summarization | | 0
How Local is the Local Diversity? Reinforcing Sequential Determinantal Point Processes with Dynamic Ground Sets for Supervised Video Summarization | | 0
TRIM: A Self-Supervised Video Summarization Framework Maximizing Temporal Relative Information and Representativeness | | 0
Use of Affective Visual Information for Summarization of Human-Centric Videos | | 0
Page 2 of 3

No leaderboard results yet.