
Video Understanding

A crucial task of Video Understanding is to recognise and localise (in space and time) different actions or events appearing in the video.

Source: Action Detection from a Robot-Car Perspective
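Spatio-temporal action detection is commonly represented as "action tubes": a class label plus one bounding box per frame over the interval where the action occurs. A minimal sketch of that data structure, with a temporal intersection-over-union used when matching detections to ground truth — the names `ActionTube` and `temporal_iou` are illustrative, not from any particular library:

```python
from dataclasses import dataclass


@dataclass
class ActionTube:
    """One detected action: a label plus a per-frame box sequence."""
    label: str
    start_frame: int
    boxes: list  # (x1, y1, x2, y2) box for each frame from start_frame

    @property
    def end_frame(self) -> int:
        # Last frame covered by this tube (inclusive).
        return self.start_frame + len(self.boxes) - 1


def temporal_iou(a: ActionTube, b: ActionTube) -> float:
    """Temporal overlap of two tubes as intersection-over-union of frame ranges."""
    inter = max(0, min(a.end_frame, b.end_frame) - max(a.start_frame, b.start_frame) + 1)
    union = len(a.boxes) + len(b.boxes) - inter
    return inter / union
```

Evaluation protocols typically extend this with a per-frame spatial IoU as well, scoring a detection as correct only when both the temporal and spatial overlaps exceed a threshold.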

Papers

Showing 176–200 of 1149 papers

Title | Status | Hype
V2P-Bench: Evaluating Video-Language Understanding with Visual Prompts for Better Human-Model Interaction | Code | 1
STOP: Integrated Spatial-Temporal Dynamic Prompting for Video Understanding | Code | 1
Agentic Keyframe Search for Video Question Answering | Code | 1
Hybrid-Level Instruction Injection for Video Token Compression in Multi-modal Large Language Models | Code | 1
Does Your Vision-Language Model Get Lost in the Long Video Sampling Dilemma? | Code | 1
TimeLoc: A Unified End-to-End Framework for Precise Timestamp Localization in Long Videos | Code | 1
Modeling Fine-Grained Hand-Object Dynamics for Egocentric Video Representation Learning | Code | 1
Task Graph Maximum Likelihood Estimation for Procedural Activity Understanding in Egocentric Videos | Code | 1
VRoPE: Rotary Position Embedding for Video Large Language Models | Code | 1
video-SALMONN-o1: Reasoning-enhanced Audio-visual Large Language Model | Code | 1
Hier-EgoPack: Hierarchical Egocentric Video Understanding with Diverse Task Perspectives | Code | 1
TUMTraffic-VideoQA: A Benchmark for Unified Spatio-Temporal Video Understanding in Traffic Scenes | Code | 1
∞-Video: A Training-Free Approach to Long Video Understanding via Continuous-Time Memory Consolidation | Code | 1
Facial Dynamics in Video: Instruction Tuning for Improved Facial Expression Perception and Contextual Awareness | Code | 1
AVS-Mamba: Exploring Temporal and Multi-modal Mamba for Audio-Visual Segmentation | Code | 1
MECD+: Unlocking Event-Level Causal Graph Discovery for Video Reasoning | Code | 1
VidChain: Chain-of-Tasks with Metric-based Direct Preference Optimization for Dense Video Captioning | Code | 1
From My View to Yours: Ego-Augmented Learning in Large Vision Language Models for Understanding Exocentric Daily Living Activities | Code | 1
Unifying Specialized Visual Encoders for Video Language Models | Code | 1
Language-Guided Audio-Visual Learning for Long-Term Sports Assessment | Code | 1
Mamba4D: Efficient 4D Point Cloud Video Understanding with Disentangled Spatial-Temporal State Space Models | Code | 1
ReTaKe: Reducing Temporal and Knowledge Redundancy for Long Video Understanding | Code | 1
Do Language Models Understand Time? | Code | 1
FlashVTG: Feature Layering and Adaptive Score Handling Network for Video Temporal Grounding | Code | 1
Inst-IT: Boosting Multimodal Instance Understanding via Explicit Visual Prompt Instruction Tuning | Code | 1

No leaderboard results yet.