SOTAVerified

Video Description

The goal of automatic Video Description is to tell a story about events happening in a video. While early Video Description methods produced captions for short clips that were manually segmented to contain a single event of interest, more recently dense video captioning has been proposed to both segment distinct events in time and describe them in a series of coherent sentences. This problem is a generalization of dense image region captioning and has many practical applications, such as generating textual summaries for the visually impaired, or detecting and describing important events in surveillance footage.

Source: Joint Event Detection and Description in Continuous Video Streams
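To make the task concrete, here is a minimal sketch of what a dense video captioning output might look like: a set of temporally localized events, each with a caption, joined into a coherent multi-sentence story. All names (`CaptionedEvent`, `to_paragraph`) and the sample data are illustrative assumptions, not part of any specific method listed below.

```python
from dataclasses import dataclass

@dataclass
class CaptionedEvent:
    """One temporally localized event in a dense video captioning output."""
    start_s: float  # event start time, in seconds (assumed representation)
    end_s: float    # event end time, in seconds
    caption: str    # natural-language description of the event

def to_paragraph(events: list[CaptionedEvent]) -> str:
    """Order events by start time and join their captions into a story."""
    ordered = sorted(events, key=lambda e: e.start_s)
    return " ".join(e.caption for e in ordered)

# Invented example output for a short cooking clip:
events = [
    CaptionedEvent(12.0, 20.5, "She whisks the eggs in a bowl."),
    CaptionedEvent(0.0, 11.5, "A woman cracks two eggs into a bowl."),
    CaptionedEvent(21.0, 30.0, "She pours the mixture into a hot pan."),
]
print(to_paragraph(events))
```

A dense captioning model must produce both parts of each `CaptionedEvent`: the temporal segmentation (`start_s`, `end_s`) and the description (`caption`), whereas early clip-captioning methods received the segmentation as input.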

Papers

Showing 1–50 of 104 papers

Each entry lists the paper title, a [Code] tag where an implementation is available, and its hype score.

- DANTE-AD: Dual-Vision Attention Network for Long-Term Audio Description (Hype: 0)
- HOIGen-1M: A Large-scale Dataset for Human-Object Interaction Video Generation (Hype: 0)
- Cross-Modal Learning for Music-to-Music-Video Description Generation (Hype: 0)
- VideoA11y: Method and Dataset for Accessible Video Description (Hype: 0)
- AVD2: Accident Video Diffusion for Accident Video Description (Hype: 0)
- Enhancing Video Understanding: Deep Neural Networks for Spatiotemporal Analysis (Hype: 0)
- Towards Zero-Shot & Explainable Video Description by Reasoning over Graphs of Events in Space and Time (Hype: 0)
- Tarsier2: Advancing Large Vision-Language Models from Detailed Video Description to Comprehensive Video Understanding [Code] (Hype: 4)
- Implicit Location-Caption Alignment via Complementary Masking for Weakly-Supervised Dense Video Captioning [Code] (Hype: 0)
- StoryTeller: Improving Long Video Description through Global Audio-Visual Character Identification [Code] (Hype: 2)
- PV-VTT: A Privacy-Centric Dataset for Mission-Specific Anomaly Detection and Natural Language Interpretation (Hype: 0)
- FIOVA: A Multi-Annotator Benchmark for Human-Aligned Video Captioning (Hype: 0)
- VideoCLIP-XL: Advancing Long Description Understanding for Video CLIP Models (Hype: 0)
- Technical Report: Competition Solution For Modelscope-Sora (Hype: 0)
- Kubrick: Multimodal Agent Collaborations for Synthetic Video Generation (Hype: 0)
- SUSTechGAN: Image Generation for Object Detection in Adverse Conditions of Autonomous Driving [Code] (Hype: 0)
- https://arxiv.org/abs/2407.00634 [Code] (Hype: 0)
- Tarsier: Recipes for Training and Evaluating Large Video Description Models [Code] (Hype: 4)
- LLAVIDAL: A Large LAnguage VIsion Model for Daily Activities of Living (Hype: 0)
- A Labelled Dataset for Sentiment Analysis of Videos on YouTube, TikTok, and Other Sources about the 2024 Outbreak of Measles (Hype: 0)
- Hawk: Learning to Understand Open-World Video Anomalies [Code] (Hype: 3)
- TrafficVLM: A Controllable Visual Language Model for Traffic Video Captioning [Code] (Hype: 2)
- X-VARS: Introducing Explainability in Football Refereeing with Multi-Modal Large Language Model (Hype: 0)
- JMI at SemEval 2024 Task 3: Two-step approach for multimodal ECAC using in-context learning with GPT and instruction-tuned Llama models [Code] (Hype: 0)
- Panda-70M: Captioning 70M Videos with Multiple Cross-Modality Teachers [Code] (Hype: 4)
- Multi-modal News Understanding with Professionally Labelled Videos (ReutersViLNews) (Hype: 0)
- ActionHub: A Large-scale Action Video Description Dataset for Zero-shot Action Recognition (Hype: 0)
- Attention Based Encoder Decoder Model for Video Captioning in Nepali (2023) (Hype: 0)
- Multi Sentence Description of Complex Manipulation Action Videos (Hype: 0)
- CLearViD: Curriculum Learning for Video Description (Hype: 0)
- Analyzing Political Figures in Real-Time: Leveraging YouTube Metadata for Sentiment Analysis (Hype: 0)
- FunQA: Towards Surprising Video Comprehension [Code] (Hype: 1)
- MSVD-Indonesian: A Benchmark for Multimodal Video-Text Tasks in Indonesian [Code] (Hype: 0)
- Edit As You Wish: Video Caption Editing with Multi-grained User Control [Code] (Hype: 0)
- Fine-grained Audible Video Description [Code] (Hype: 1)
- Thinking Hallucination for Video Captioning [Code] (Hype: 1)
- What's in a Caption? Dataset-Specific Linguistic Diversity and Its Effect on Visual Description Models and Metrics [Code] (Hype: 1)
- Learn to Understand Negation in Video Retrieval [Code] (Hype: 0)
- Synchronized Audio-Visual Frames with Fractional Positional Encoding for Transformers in Video-to-Text Translation (Hype: 0)
- Relational Graph Learning for Grounded Video Description Generation (Hype: 0)
- An Efficient Keyframes Selection Based Framework for Video Captioning (Hype: 0)
- NarrationBot and InfoBot: A Hybrid System for Automated Video Description (Hype: 0)
- Visual-aware Attention Dual-stream Decoder for Video Captioning (Hype: 0)
- Boosting Video Captioning with Dynamic Loss Network (Hype: 0)
- Efficient data-driven encoding of scene motion using Eccentricity (Hype: 0)
- The Role of the Input in Natural Language Video Description (Hype: 0)
- Unbox the Blackbox: Predict and Interpret YouTube Viewership Using Deep Learning (Hype: 0)
- MSVD-Turkish: A Comprehensive Multimodal Dataset for Integrated Vision and Language Research in Turkish (Hype: 0)
- A Comprehensive Review on Recent Methods and Challenges of Video Description (Hype: 0)
- Identity-Aware Multi-Sentence Video Description [Code] (Hype: 1)
Page 1 of 3

No leaderboard results yet.