SOTAVerified

Video Description

The goal of automatic Video Description is to tell a story about events happening in a video. While early Video Description methods produced captions for short clips that were manually segmented to contain a single event of interest, more recently dense video captioning has been proposed to both segment distinct events in time and describe them in a series of coherent sentences. This problem is a generalization of dense image region captioning and has many practical applications, such as generating textual summaries for the visually impaired, or detecting and describing important events in surveillance footage.

Source: Joint Event Detection and Description in Continuous Video Streams
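The dense video captioning setting described above produces timed event segments plus a sentence for each. As a minimal sketch of what such a model's output looks like (the data class, function, and sample captions below are illustrative assumptions, not part of any cited system):

```python
from dataclasses import dataclass

@dataclass
class EventCaption:
    """One described event in a dense video captioning output."""
    start_s: float  # event start time, in seconds
    end_s: float    # event end time, in seconds
    caption: str    # natural-language description of the event

def format_dense_captions(events: list[EventCaption]) -> str:
    """Render timed captions as one coherent summary, ordered by start time."""
    ordered = sorted(events, key=lambda e: e.start_s)
    return " ".join(
        f"[{e.start_s:.1f}-{e.end_s:.1f}s] {e.caption}" for e in ordered
    )

# Hypothetical output of a dense captioning model on a short cooking clip:
events = [
    EventCaption(12.0, 20.5, "The chef chops the onions."),
    EventCaption(0.0, 11.5, "A chef lays out ingredients on the counter."),
]
print(format_dense_captions(events))
```

This separates the two subproblems the description mentions: temporal segmentation (the `start_s`/`end_s` pairs) and description (the `caption` text), which together generalize single-clip captioning.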

Papers

Showing 41–50 of 104 papers

Title | Status | Hype
HENRY-CORE: Domain Adaptation and Stacking for Text Similarity | | 0
Hierarchical Boundary-Aware Neural Encoder for Video Captioning | | 0
HOIGen-1M: A Large-scale Dataset for Human-Object Interaction Video Generation | | 0
Bridge Video and Text with Cascade Syntactic Structure | | 0
AVD2: Accident Video Diffusion for Accident Video Description | | 0
FIOVA: A Multi-Annotator Benchmark for Human-Aligned Video Captioning | | 0
Prediction and Description of Near-Future Activities in Video | | 0
Incorporating Background Knowledge into Video Description Generation | | 0
Incorporating Global Visual Features into Attention-based Neural Machine Translation. | | 0
Analyzing Political Figures in Real-Time: Leveraging YouTube Metadata for Sentiment Analysis | | 0

No leaderboard results yet.