SOTAVerified

Video Description

The goal of automatic Video Description is to tell a story about events happening in a video. While early Video Description methods produced captions for short clips that were manually segmented to contain a single event of interest, more recently dense video captioning has been proposed to both segment distinct events in time and describe them in a series of coherent sentences. This problem is a generalization of dense image region captioning and has many practical applications, such as generating textual summaries for the visually impaired, or detecting and describing important events in surveillance footage.

Source: Joint Event Detection and Description in Continuous Video Streams
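The output of dense video captioning, as described above, is a set of temporally localized events, each paired with a sentence, which are then read off in order as a coherent story. A minimal sketch of that output structure (the `EventCaption` type and `to_story` helper are illustrative assumptions, not an API from any paper listed here):

```python
from dataclasses import dataclass

@dataclass
class EventCaption:
    """One detected event in a dense video captioning output."""
    start: float   # event start time in seconds
    end: float     # event end time in seconds
    sentence: str  # natural-language description of the event

def to_story(events: list[EventCaption]) -> str:
    """Order events by start time and join their sentences into a narrative."""
    ordered = sorted(events, key=lambda e: e.start)
    return " ".join(e.sentence for e in ordered)

# Hypothetical model output for a short clip (times and sentences invented):
events = [
    EventCaption(12.0, 20.5, "A dog catches the frisbee."),
    EventCaption(0.0, 8.3, "A man throws a frisbee in a park."),
]
print(to_story(events))
# A man throws a frisbee in a park. A dog catches the frisbee.
```

Real systems additionally score each proposed segment and must handle overlapping events; this sketch only shows how localized segments and sentences combine into the final multi-sentence description.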

Papers

Showing 51–100 of 104 papers

Title | Hype
PV-VTT: A Privacy-Centric Dataset for Mission-Specific Anomaly Detection and Natural Language Interpretation | 0
Relational Graph Learning for Grounded Video Description Generation | 0
Saarland: Vector-based models of semantic textual similarity | 0
Semantic Neighborhoods as Hypergraphs | 0
SHEF-Multimodal: Grounding Machine Translation on Images | 0
SRIUBC: Simple Similarity Features for Semantic Textual Similarity | 0
Synchronized Audio-Visual Frames with Fractional Positional Encoding for Transformers in Video-to-Text Translation | 0
Task-Driven Dynamic Fusion: Reducing Ambiguity in Video Description | 0
Technical Report: Competition Solution For Modelscope-Sora | 0
The Role of the Input in Natural Language Video Description | 0
Towards Zero-Shot & Explainable Video Description by Reasoning over Graphs of Events in Space and Time | 0
Unbox the Blackbox: Predict and Interpret YouTube Viewership Using Deep Learning | 0
Vectors of Locally Aggregated Centers for Compact Video Representation | 0
VideoA11y: Method and Dataset for Accessible Video Description | 0
VideoCLIP-XL: Advancing Long Description Understanding for Video CLIP Models | 0
Video Description: A Survey of Methods, Datasets and Evaluation Metrics | 0
VideoMCC: a New Benchmark for Video Comprehension | 0
Visual-aware Attention Dual-stream Decoder for Video Captioning | 0
A Comprehensive Review on Recent Methods and Challenges of Video Description | 0
X-VARS: Introducing Explainability in Football Refereeing with Multi-Modal Large Language Model | 0
ActionHub: A Large-scale Action Video Description Dataset for Zero-shot Action Recognition | 0
Active Learning for Video Description With Cluster-Regularized Ensemble Ranking | 0
A Dataset for Telling the Stories of Social Media Videos | 0
A Labelled Dataset for Sentiment Analysis of Videos on YouTube, TikTok, and Other Sources about the 2024 Outbreak of Measles | 0
A Multi-scale Multiple Instance Video Description Network | 0
Analyzing Political Figures in Real-Time: Leveraging YouTube Metadata for Sentiment Analysis | 0
An Efficient Keyframes Selection Based Framework for Video Captioning | 0
End-to-End Video Captioning | 0
A Thousand Frames in Just a Few Words: Lingual Description of Videos through Latent Topics and Sparse Object Stitching | 0
Attend and Interact: Higher-Order Object Interactions for Video Understanding | 0
Attention Based Encoder Decoder Model for Video Captioning in Nepali (2023) | 0
Attention-Based Multimodal Fusion for Video Description | 0
Attentive Sequence to Sequence Translation for Localizing Clips of Interest by Natural Language Descriptions | 0
AVD2: Accident Video Diffusion for Accident Video Description | 0
Better Exploiting Motion for Better Action Recognition | 0
Bidirectional Long-Short Term Memory for Video Description | 0
Boosting Video Captioning with Dynamic Loss Network | 0
Bridge Video and Text with Cascade Syntactic Structure | 0
FIOVA: A Multi-Annotator Benchmark for Human-Aligned Video Captioning | 0
Prediction and Description of Near-Future Activities in Video | 0
CLearViD: Curriculum Learning for Video Description | 0
Coherent Multi-Sentence Video Description with Variable Level of Detail | 0
Cross-Modal Learning for Music-to-Music-Video Description Generation | 0
DANTE-AD: Dual-Vision Attention Network for Long-Term Audio Description | 0
Efficient data-driven encoding of scene motion using Eccentricity | 0
Enhancing Video Understanding: Deep Neural Networks for Spatiotemporal Analysis | 0
Generating Video Description using Sequence-to-sequence Model with Temporal Attention | 0
HENRY-CORE: Domain Adaptation and Stacking for Text Similarity | 0
Hierarchical Boundary-Aware Neural Encoder for Video Captioning | 0
HOIGen-1M: A Large-scale Dataset for Human-Object Interaction Video Generation | 0
Page 2 of 3

No leaderboard results yet.