SOTAVerified

Video Description

The goal of automatic Video Description is to tell a story about events happening in a video. While early Video Description methods produced captions for short clips that were manually segmented to contain a single event of interest, more recently dense video captioning has been proposed to both segment distinct events in time and describe them in a series of coherent sentences. This problem is a generalization of dense image region captioning and has many practical applications, such as generating textual summaries for the visually impaired, or detecting and describing important events in surveillance footage.

Source: Joint Event Detection and Description in Continuous Video Streams
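As a minimal illustration of the output a dense video captioning system produces (all data and names below are hypothetical, not drawn from any paper listed on this page), each detected event can be represented as a time span in seconds plus a descriptive sentence, then rendered as a temporally ordered summary:

```python
# Hypothetical dense video captioning result: a list of
# (start_sec, end_sec, sentence) tuples, one per detected event.
events = [
    (0.0, 4.2, "A man walks into the kitchen."),
    (4.2, 9.8, "He pours coffee into a mug."),
    (9.8, 15.0, "He sits down and reads a newspaper."),
]

def to_summary(events):
    """Render events as a coherent description, sorted by start time."""
    lines = []
    for start, end, sentence in sorted(events, key=lambda e: e[0]):
        # Zero-padded timestamps keep the transcript aligned.
        lines.append(f"[{start:05.1f}-{end:05.1f}s] {sentence}")
    return "\n".join(lines)

print(to_summary(events))
```

This sketch only shows the data shape; the papers below differ precisely in how the time spans are segmented and how the sentences are generated.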

Papers

Showing 1-10 of 104 papers

Title (Status; Hype)

DANTE-AD: Dual-Vision Attention Network for Long-Term Audio Description (Hype: 0)
HOIGen-1M: A Large-scale Dataset for Human-Object Interaction Video Generation (Hype: 0)
Cross-Modal Learning for Music-to-Music-Video Description Generation (Hype: 0)
VideoA11y: Method and Dataset for Accessible Video Description (Hype: 0)
AVD2: Accident Video Diffusion for Accident Video Description (Hype: 0)
Enhancing Video Understanding: Deep Neural Networks for Spatiotemporal Analysis (Hype: 0)
Towards Zero-Shot & Explainable Video Description by Reasoning over Graphs of Events in Space and Time (Hype: 0)
Tarsier2: Advancing Large Vision-Language Models from Detailed Video Description to Comprehensive Video Understanding (Code; Hype: 4)
Implicit Location-Caption Alignment via Complementary Masking for Weakly-Supervised Dense Video Captioning (Code; Hype: 0)
StoryTeller: Improving Long Video Description through Global Audio-Visual Character Identification (Code; Hype: 2)

No leaderboard results yet.