SOTAVerified

Video Description

The goal of automatic Video Description is to tell a story about events happening in a video. While early Video Description methods produced captions for short clips that were manually segmented to contain a single event of interest, more recently dense video captioning has been proposed to both segment distinct events in time and describe them in a series of coherent sentences. This problem is a generalization of dense image region captioning and has many practical applications, such as generating textual summaries for the visually impaired, or detecting and describing important events in surveillance footage.

Source: Joint Event Detection and Description in Continuous Video Streams
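Concretely, a dense video captioning system emits a set of temporally localized events, each with its own sentence, which are then ordered into a coherent multi-sentence description. A minimal sketch of that output structure (the `EventCaption` type and `to_summary` helper are illustrative assumptions, not an API from any listed paper):

```python
from dataclasses import dataclass

@dataclass
class EventCaption:
    start: float  # event start time in seconds
    end: float    # event end time in seconds
    text: str     # natural-language description of the event

def to_summary(events: list[EventCaption]) -> str:
    """Order detected events by start time and join their captions
    into one multi-sentence video description."""
    ordered = sorted(events, key=lambda e: e.start)
    return " ".join(e.text for e in ordered)

# Hypothetical detections, possibly returned out of order by the model:
events = [
    EventCaption(12.0, 18.5, "A dog catches the frisbee."),
    EventCaption(0.0, 10.0, "A man throws a frisbee in a park."),
]
print(to_summary(events))
# prints: A man throws a frisbee in a park. A dog catches the frisbee.
```

Real systems differ in how segments are proposed and how cross-sentence coherence is enforced, but the (start, end, caption) triple is the common output unit.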

Papers

Showing 1–50 of 104 papers

Title | Status | Hype
Panda-70M: Captioning 70M Videos with Multiple Cross-Modality Teachers | Code | 4
Tarsier2: Advancing Large Vision-Language Models from Detailed Video Description to Comprehensive Video Understanding | Code | 4
Tarsier: Recipes for Training and Evaluating Large Video Description Models | Code | 4
Hawk: Learning to Understand Open-World Video Anomalies | Code | 3
StoryTeller: Improving Long Video Description through Global Audio-Visual Character Identification | Code | 2
TrafficVLM: A Controllable Visual Language Model for Traffic Video Captioning | Code | 2
FunQA: Towards Surprising Video Comprehension | Code | 1
VATEX: A Large-Scale, High-Quality Multilingual Dataset for Video-and-Language Research | Code | 1
Fine-grained Audible Video Description | Code | 1
Identity-Aware Multi-Sentence Video Description | Code | 1
What's in a Caption? Dataset-Specific Linguistic Diversity and Its Effect on Visual Description Models and Metrics | Code | 1
Using Descriptive Video Services to Create a Large Data Source for Video Annotation Research | Code | 1
Grounded Video Description | Code | 1
Delving Deeper into the Decoder for Video Captioning | Code | 1
Audio Visual Scene-Aware Dialog (AVSD) Challenge at DSTC7 | Code | 1
Thinking Hallucination for Video Captioning | Code | 1
Describing Unseen Videos via Multi-Modal Cooperative Dialog Agents | Code | 0
A Mid-level Video Representation based on Binary Descriptors: A Case Study for Pornography Detection | Code | 0
Implicit Location-Caption Alignment via Complementary Masking for Weakly-Supervised Dense Video Captioning | Code | 0
Video Description using Bidirectional Recurrent Neural Networks | Code | 0
https://arxiv.org/abs/2407.00634 | Code | 0
VizSeq: A Visual Analysis Toolkit for Text Generation Tasks | Code | 0
TGIF: A New Dataset and Benchmark on Animated GIF Description | Code | 0
Adversarial Inference for Multi-Sentence Video Description | Code | 0
Predicting Visual Features from Text for Image and Video Caption Retrieval | Code | 0
MSVD-Indonesian: A Benchmark for Multimodal Video-Text Tasks in Indonesian | Code | 0
SUSTechGAN: Image Generation for Object Detection in Adverse Conditions of Autonomous Driving | Code | 0
Egocentric Video Description based on Temporally-Linked Sequences | Code | 0
JMI at SemEval 2024 Task 3: Two-step approach for multimodal ECAC using in-context learning with GPT and instruction-tuned Llama models | Code | 0
Describing Videos by Exploiting Temporal Structure | Code | 0
Edit As You Wish: Video Caption Editing with Multi-grained User Control | Code | 0
Learn to Understand Negation in Video Retrieval | Code | 0
Memory-augmented Attention Modelling for Videos | Code | 0
End-to-End Audio Visual Scene-Aware Dialog using Multimodal Attention-Based Video Features | Code | 0
Improving LSTM-based Video Description with Linguistic Knowledge Mined from Text | Code | 0
Attention-Based Multimodal Fusion for Video Description | - | 0
DANTE-AD: Dual-Vision Attention Network for Long-Term Audio Description | - | 0
Attention Based Encoder Decoder Model for Video Captioning in Nepali (2023) | - | 0
Cross-Modal Learning for Music-to-Music-Video Description Generation | - | 0
Coherent Multi-Sentence Video Description with Variable Level of Detail | - | 0
Attend and Interact: Higher-Order Object Interactions for Video Understanding | - | 0
CLearViD: Curriculum Learning for Video Description | - | 0
Prediction and Description of Near-Future Activities in Video | - | 0
A Thousand Frames in Just a Few Words: Lingual Description of Videos through Latent Topics and Sparse Object Stitching | - | 0
A Labelled Dataset for Sentiment Analysis of Videos on YouTube, TikTok, and Other Sources about the 2024 Outbreak of Measles | - | 0
Active Learning for Video Description With Cluster-Regularized Ensemble Ranking | - | 0
FIOVA: A Multi-Annotator Benchmark for Human-Aligned Video Captioning | - | 0
Incorporating Background Knowledge into Video Description Generation | - | 0
Incorporating Global Visual Features into Attention-based Neural Machine Translation | - | 0
HOIGen-1M: A Large-scale Dataset for Human-Object Interaction Video Generation | - | 0

No leaderboard results yet.