SOTAVerified

Video Understanding

A crucial task in video understanding is to recognise and localise, in both space and time, the different actions or events appearing in a video.

Source: Action Detection from a Robot-Car Perspective
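The task above is often evaluated with "action tubes": a class label plus a span of frames, each with a bounding box. The following is a minimal illustrative sketch of that representation and of temporal intersection-over-union, a common matching criterion; the names `ActionTube` and `temporal_iou` are assumptions for illustration, not the API of any particular benchmark.

```python
from dataclasses import dataclass, field

@dataclass
class ActionTube:
    """One detected (or ground-truth) action instance in a video.

    Hypothetical representation: a class label, a temporal extent in
    frames, and optionally one bounding box per frame.
    """
    label: str                  # action class, e.g. "opening a door"
    start_frame: int            # first frame the action is visible
    end_frame: int              # last frame (inclusive)
    boxes: dict = field(default_factory=dict)  # frame -> (x1, y1, x2, y2)

def temporal_iou(a: ActionTube, b: ActionTube) -> float:
    """Temporal intersection-over-union between two tubes,
    used when matching a prediction against ground truth."""
    inter = min(a.end_frame, b.end_frame) - max(a.start_frame, b.start_frame) + 1
    if inter <= 0:
        return 0.0
    len_a = a.end_frame - a.start_frame + 1
    len_b = b.end_frame - b.start_frame + 1
    return inter / (len_a + len_b - inter)

pred = ActionTube("wave", 10, 30)
gt = ActionTube("wave", 15, 35)
print(round(temporal_iou(pred, gt), 3))  # 16 overlapping frames / 26-frame union
```

Spatial localisation is typically scored the same way, with box IoU averaged over the overlapping frames.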

Papers

Showing 51–100 of 1149 papers

Title | Status | Hype
LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via a Hybrid Architecture | Code | 3
Harnessing Temporal Causality for Advanced Temporal Action Detection | Code | 3
SlowFast-LLaVA: A Strong Training-Free Baseline for Video Large Language Models | Code | 3
VideoGPT+: Integrating Image and Video Encoders for Enhanced Video Understanding | Code | 3
Flash-VStream: Memory-Based Real-Time Understanding for Long Video Streams | Code | 3
MLVU: Benchmarking Multi-task Long Video Understanding | Code | 3
Hawk: Learning to Understand Open-World Video Anomalies | Code | 3
MA-LMM: Memory-Augmented Large Multimodal Model for Long-Term Video Understanding | Code | 3
Video Mamba Suite: State Space Model as a Versatile Alternative for Video Understanding | Code | 3
Video ReCap: Recursive Captioning of Hour-Long Videos | Code | 3
Video-ChatGPT: Towards Detailed Video Understanding via Large Vision and Language Models | Code | 3
VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training | Code | 3
Omni-Video: Democratizing Unified Video Understanding and Generation | Code | 2
LLaVA-Scissor: Token Compression with Semantic Connected Components for Video LLMs | Code | 2
video-SALMONN 2: Captioning-Enhanced Audio-Visual Large Language Models | Code | 2
VideoDeepResearch: Long Video Understanding With Agentic Tool Using | Code | 2
Reinforcement Learning Tuning for VideoLLMs: Reward Design and Data Efficiency | Code | 2
One Trajectory, One Token: Grounded Video Tokenization via Panoptic Sub-object Trajectory | Code | 2
VideoREPA: Learning Physics for Video Generation through Relational Alignment with Foundation Models | Code | 2
QuickVideo: Real-Time Long Video Understanding with System Algorithm Co-Design | Code | 2
Video Compression Commander: Plug-and-Play Inference Acceleration for Video Large Language Models | Code | 2
TinyLLaVA-Video-R1: Towards Smaller LMMs for Video Reasoning | Code | 2
Scaling Video-Language Models to 10K Frames via Hierarchical Differential Distillation | Code | 2
Re-thinking Temporal Search for Long-Form Video Understanding | Code | 2
SpaceR: Reinforcing MLLMs in Video Spatial Reasoning | Code | 2
Exploring the Effect of Reinforcement Learning on Video Understanding: Insights from SEED-Bench-R1 | Code | 2
Mobile-VideoGPT: Fast and Accurate Video Understanding Language Model | Code | 2
ViSpeak: Visual Instruction Feedback in Streaming Videos | Code | 2
AdaReTaKe: Adaptive Redundancy Reduction to Perceive Longer for Video-language Understanding | Code | 2
QuoTA: Query-oriented Token Assignment via CoT Query Decouple for Long Video Comprehension | Code | 2
SVBench: A Benchmark with Temporal Multi-Turn Dialogues for Streaming Video Understanding | Code | 2
AIN: The Arabic INclusive Large Multimodal Model | Code | 2
TinyLLaVA-Video: A Simple Framework of Small-scale Large Multimodal Models for Video Understanding | Code | 2
Streaming Video Understanding and Multi-round Interaction with Memory-enhanced Knowledge | Code | 2
MMVU: Measuring Expert-Level Multi-Discipline Video Understanding | Code | 2
OVO-Bench: How Far is Your Video-LLMs from Real-World Online Video Understanding? | Code | 2
Adaptive Keyframe Sampling for Long Video Understanding | Code | 2
Online Video Understanding: OVBench and VideoChat-Online | Code | 2
FrameFusion: Combining Similarity and Importance for Video Token Reduction on Large Visual Language Models | Code | 2
PruneVid: Visual Token Pruning for Efficient Video Large Language Models | Code | 2
InstructSeg: Unifying Instructed Visual Segmentation with Multi-modal Large Language Models | Code | 2
Uni-AdaFocus: Spatial-temporal Dynamic Computation for Video Recognition | Code | 2
Neptune: The Long Orbit to Benchmarking Long Video Understanding | Code | 2
LinVT: Empower Your Image-level Large Language Model to Understand Videos | Code | 2
AIM: Adaptive Inference of Multi-Modal LLMs via Token Merging and Pruning | Code | 2
LongVALE: Vision-Audio-Language-Event Benchmark Towards Time-Aware Omni-Modal Perception of Long Videos | Code | 2
TimeMarker: A Versatile Video-LLM for Long and Short Video Understanding with Superior Temporal Localization Ability | Code | 2
StreamingBench: Assessing the Gap for MLLMs to Achieve Streaming Video Understanding | Code | 2
PPLLaVA: Varied Video Sequence Understanding With Prompt Guidance | Code | 2
TimeSuite: Improving MLLMs for Long Video Understanding via Grounded Tuning | Code | 2
Page 2 of 23

No leaderboard results yet.