SOTAVerified

Video Understanding

A crucial task in Video Understanding is to recognise and localise, in space and time, the different actions or events appearing in a video.

Source: Action Detection from a Robot-Car Perspective
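To localise an action "in space and time", detection methods commonly output an action tube: a class label, a confidence score, and one bounding box per frame the action spans. A minimal sketch of such a structure (the class and field names here are illustrative assumptions, not any particular paper's API):

```python
from dataclasses import dataclass, field


@dataclass
class ActionTube:
    """A detected action localised in space and time.

    Hypothetical structure for illustration: a class label, a
    confidence score, and one bounding box per frame the action spans.
    """
    label: str
    score: float
    # frame index -> (x1, y1, x2, y2) box in pixel coordinates
    boxes: dict = field(default_factory=dict)

    @property
    def temporal_extent(self):
        """First and last frame in which the action is detected."""
        frames = sorted(self.boxes)
        return frames[0], frames[-1]


# Example: a pedestrian-crossing action tracked over frames 10-14
# (label and coordinates are made up for the example).
tube = ActionTube(label="pedestrian_crossing", score=0.91)
for f in range(10, 15):
    tube.boxes[f] = (120.0 + f, 80.0, 180.0 + f, 240.0)

print(tube.temporal_extent)  # (10, 14)
```

The spatial part of the localisation is the per-frame box; the temporal part is the span of frames the tube covers.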

Papers

Showing 51–75 of 1,149 papers

Title | Status | Hype
LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via a Hybrid Architecture | Code | 3
Harnessing Temporal Causality for Advanced Temporal Action Detection | Code | 3
SlowFast-LLaVA: A Strong Training-Free Baseline for Video Large Language Models | Code | 3
VideoGPT+: Integrating Image and Video Encoders for Enhanced Video Understanding | Code | 3
Flash-VStream: Memory-Based Real-Time Understanding for Long Video Streams | Code | 3
MLVU: Benchmarking Multi-task Long Video Understanding | Code | 3
Hawk: Learning to Understand Open-World Video Anomalies | Code | 3
MA-LMM: Memory-Augmented Large Multimodal Model for Long-Term Video Understanding | Code | 3
Video Mamba Suite: State Space Model as a Versatile Alternative for Video Understanding | Code | 3
Video ReCap: Recursive Captioning of Hour-Long Videos | Code | 3
Video-ChatGPT: Towards Detailed Video Understanding via Large Vision and Language Models | Code | 3
VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training | Code | 3
Omni-Video: Democratizing Unified Video Understanding and Generation | Code | 2
LLaVA-Scissor: Token Compression with Semantic Connected Components for Video LLMs | Code | 2
video-SALMONN 2: Captioning-Enhanced Audio-Visual Large Language Models | Code | 2
VideoDeepResearch: Long Video Understanding With Agentic Tool Using | Code | 2
Reinforcement Learning Tuning for VideoLLMs: Reward Design and Data Efficiency | Code | 2
One Trajectory, One Token: Grounded Video Tokenization via Panoptic Sub-object Trajectory | Code | 2
VideoREPA: Learning Physics for Video Generation through Relational Alignment with Foundation Models | Code | 2
QuickVideo: Real-Time Long Video Understanding with System Algorithm Co-Design | Code | 2
Video Compression Commander: Plug-and-Play Inference Acceleration for Video Large Language Models | Code | 2
TinyLLaVA-Video-R1: Towards Smaller LMMs for Video Reasoning | Code | 2
Re-thinking Temporal Search for Long-Form Video Understanding | Code | 2
Scaling Video-Language Models to 10K Frames via Hierarchical Differential Distillation | Code | 2
SpaceR: Reinforcing MLLMs in Video Spatial Reasoning | Code | 2
Page 3 of 46

No leaderboard results yet.