SOTAVerified

Video Description

The goal of automatic Video Description is to tell a story about events happening in a video. While early Video Description methods produced captions for short clips that were manually segmented to contain a single event of interest, more recently dense video captioning has been proposed to both segment distinct events in time and describe them in a series of coherent sentences. This problem is a generalization of dense image region captioning and has many practical applications, such as generating textual summaries for the visually impaired, or detecting and describing important events in surveillance footage.

Source: Joint Event Detection and Description in Continuous Video Streams

Papers

Showing 51–60 of 104 papers

Title | Status | Hype
PV-VTT: A Privacy-Centric Dataset for Mission-Specific Anomaly Detection and Natural Language Interpretation | | 0
Relational Graph Learning for Grounded Video Description Generation | | 0
Saarland: Vector-based models of semantic textual similarity | | 0
Semantic Neighborhoods as Hypergraphs | | 0
SHEF-Multimodal: Grounding Machine Translation on Images | | 0
SRIUBC: Simple Similarity Features for Semantic Textual Similarity | | 0
Synchronized Audio-Visual Frames with Fractional Positional Encoding for Transformers in Video-to-Text Translation | | 0
Task-Driven Dynamic Fusion: Reducing Ambiguity in Video Description | | 0
Technical Report: Competition Solution For Modelscope-Sora | | 0
The Role of the Input in Natural Language Video Description | | 0
Page 6 of 11

No leaderboard results yet.