SOTAVerified

Video Understanding

A central task in Video Understanding is to recognise and localise, in space and time, the different actions or events appearing in a video.

Source: Action Detection from a Robot-Car Perspective

Papers

Showing 151–200 of 1149 papers

Title | Status | Hype
M^3-VOS: Multi-Phase, Multi-Transition, and Multi-Scenery Video Object Segmentation | Code | 1
Self-supervised Learning of Echocardiographic Video Representations via Online Cluster Distillation | Code | 1
CyberV: Cybernetics for Test-time Scaling in Video Understanding | Code | 1
SiLVR: A Simple Language-based Video Reasoning Framework | Code | 1
DisTime: Distribution-based Time Representation for Video Large Language Models | Code | 1
VideoCAD: A Large-Scale Video Dataset for Learning UI Interactions and 3D Reasoning from CAD Software | Code | 1
PreFM: Online Audio-Visual Event Parsing via Predictive Future Modeling | Code | 1
VideoReasonBench: Can MLLMs Perform Vision-Centric Complex Video Reasoning? | Code | 1
VidText: Towards Comprehensive Evaluation for Video Text Understanding | Code | 1
MUSEG: Reinforcing Video Temporal Understanding via Timestamp-Aware Multi-Segment Grounding | Code | 1
Fact-R1: Towards Explainable Video Misinformation Detection with Deep Reasoning | Code | 1
LoVR: A Benchmark for Long Video Retrieval in Multimodal Contexts | Code | 1
Uncertainty-Weighted Image-Event Multimodal Fusion for Video Anomaly Detection | Code | 1
TEMPURA: Temporal Event Masked Prediction and Understanding for Reasoning in Action | Code | 1
VideoHallu: Evaluating and Mitigating Multi-modal Hallucinations on Synthetic Video Understanding | Code | 1
VideoMultiAgents: A Multi-Agent Framework for Video Question Answering | Code | 1
IV-Bench: A Benchmark for Image-Grounded Video Perception and Reasoning in Multimodal LLMs | Code | 1
VistaDPO: Video Hierarchical Spatial-Temporal Direct Preference Optimization for Large Video Models | Code | 1
Multimodal Long Video Modeling Based on Temporal Dynamic Context | Code | 1
F^3Set: Towards Analyzing Fast, Frequent, and Fine-grained Events from Videos | Code | 1
Slow-Fast Architecture for Video Multi-Modal Large Language Models | Code | 1
BOLT: Boost Large Vision-Language Model Without Training for Long-form Video Understanding | Code | 1
Exploring Hallucination of Large Multimodal Models in Video Understanding: Benchmark, Analysis and Mitigation | Code | 1
PAVE: Patching and Adapting Video Large Language Models | Code | 1
MammAlps: A multi-view video behavior monitoring dataset of wild mammals in the Swiss Alps | Code | 1
V2P-Bench: Evaluating Video-Language Understanding with Visual Prompts for Better Human-Model Interaction | Code | 1
STOP: Integrated Spatial-Temporal Dynamic Prompting for Video Understanding | Code | 1
Agentic Keyframe Search for Video Question Answering | Code | 1
Hybrid-Level Instruction Injection for Video Token Compression in Multi-modal Large Language Models | Code | 1
Does Your Vision-Language Model Get Lost in the Long Video Sampling Dilemma? | Code | 1
TimeLoc: A Unified End-to-End Framework for Precise Timestamp Localization in Long Videos | Code | 1
Modeling Fine-Grained Hand-Object Dynamics for Egocentric Video Representation Learning | Code | 1
Task Graph Maximum Likelihood Estimation for Procedural Activity Understanding in Egocentric Videos | Code | 1
VRoPE: Rotary Position Embedding for Video Large Language Models | Code | 1
video-SALMONN-o1: Reasoning-enhanced Audio-visual Large Language Model | Code | 1
Hier-EgoPack: Hierarchical Egocentric Video Understanding with Diverse Task Perspectives | Code | 1
TUMTraffic-VideoQA: A Benchmark for Unified Spatio-Temporal Video Understanding in Traffic Scenes | Code | 1
∞-Video: A Training-Free Approach to Long Video Understanding via Continuous-Time Memory Consolidation | Code | 1
Facial Dynamics in Video: Instruction Tuning for Improved Facial Expression Perception and Contextual Awareness | Code | 1
AVS-Mamba: Exploring Temporal and Multi-modal Mamba for Audio-Visual Segmentation | Code | 1
MECD+: Unlocking Event-Level Causal Graph Discovery for Video Reasoning | Code | 1
VidChain: Chain-of-Tasks with Metric-based Direct Preference Optimization for Dense Video Captioning | Code | 1
From My View to Yours: Ego-Augmented Learning in Large Vision Language Models for Understanding Exocentric Daily Living Activities | Code | 1
Unifying Specialized Visual Encoders for Video Language Models | Code | 1
Language-Guided Audio-Visual Learning for Long-Term Sports Assessment | Code | 1
Mamba4D: Efficient 4D Point Cloud Video Understanding with Disentangled Spatial-Temporal State Space Models | Code | 1
ReTaKe: Reducing Temporal and Knowledge Redundancy for Long Video Understanding | Code | 1
Do Language Models Understand Time? | Code | 1
FlashVTG: Feature Layering and Adaptive Score Handling Network for Video Temporal Grounding | Code | 1
Inst-IT: Boosting Multimodal Instance Understanding via Explicit Visual Prompt Instruction Tuning | Code | 1

No leaderboard results yet.