
Video Description

The goal of automatic video description is to tell a story about the events happening in a video. While early video description methods produced captions for short clips that had been manually segmented to contain a single event of interest, dense video captioning has more recently been proposed to both segment distinct events in time and describe them in a series of coherent sentences. This problem is a generalization of dense image region captioning and has many practical applications, such as generating textual summaries for the visually impaired, or detecting and describing important events in surveillance footage.

Source: Joint Event Detection and Description in Continuous Video Streams
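To make the task definition concrete, here is a minimal sketch of the output a dense video captioning system produces: a set of temporal segments, each paired with a sentence, which can be ordered in time to form a coherent description. The `Event` structure and the sample captions are hypothetical, purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class Event:
    """One detected event: a temporal segment plus its caption (illustrative only)."""
    start: float  # segment start, in seconds
    end: float    # segment end, in seconds
    caption: str

def describe(events: list[Event]) -> str:
    """Order events by start time and join their captions into a running description."""
    ordered = sorted(events, key=lambda e: e.start)
    return " ".join(e.caption for e in ordered)

# Hypothetical dense-captioning output for a short clip (segments may be detected out of order).
timeline = [
    Event(12.0, 18.5, "The man rides the bicycle down the street."),
    Event(0.0, 7.2, "A man puts on a helmet."),
]
print(describe(timeline))  # → "A man puts on a helmet. The man rides the bicycle down the street."
```

The key difference from single-clip captioning is that the segment boundaries (`start`, `end`) are predicted by the model rather than given, so the system must jointly localize and describe events.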

Papers

Showing 1–10 of 104 papers

Title | Status | Hype
Tarsier2: Advancing Large Vision-Language Models from Detailed Video Description to Comprehensive Video Understanding | Code | 4
Tarsier: Recipes for Training and Evaluating Large Video Description Models | Code | 4
Panda-70M: Captioning 70M Videos with Multiple Cross-Modality Teachers | Code | 4
Hawk: Learning to Understand Open-World Video Anomalies | Code | 3
StoryTeller: Improving Long Video Description through Global Audio-Visual Character Identification | Code | 2
TrafficVLM: A Controllable Visual Language Model for Traffic Video Captioning | Code | 2
FunQA: Towards Surprising Video Comprehension | Code | 1
Fine-grained Audible Video Description | Code | 1
Thinking Hallucination for Video Captioning | Code | 1
What's in a Caption? Dataset-Specific Linguistic Diversity and Its Effect on Visual Description Models and Metrics | Code | 1

No leaderboard results yet.