SOTAVerified

Video Understanding

A crucial task in video understanding is to recognise and localise, in space and time, the different actions or events appearing in a video.

Source: Action Detection from a Robot-Car Perspective
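
The "localise in space and time" part of the task above can be made concrete with a minimal sketch. The class below is a hypothetical illustration (not from any listed paper) of an "action tube": one detected action carrying a temporal extent (frame range) and a per-frame spatial bounding box.

```python
from dataclasses import dataclass, field

@dataclass
class ActionTube:
    """Hypothetical sketch of a spatio-temporal action detection.

    Spatial localisation: one bounding box per frame.
    Temporal localisation: the [start_frame, end_frame] interval.
    """
    label: str                # action class, e.g. "crossing road"
    start_frame: int          # first frame the action is visible
    end_frame: int            # last frame the action is visible
    boxes: dict = field(default_factory=dict)  # frame -> (x1, y1, x2, y2)

    def duration(self) -> int:
        """Number of frames the action spans (inclusive range)."""
        return self.end_frame - self.start_frame + 1

# Example: a pedestrian crossing the road over frames 10-12,
# with the box drifting slightly to the right each frame.
tube = ActionTube("crossing road", 10, 12)
for f, box in [(10, (5, 5, 50, 120)),
               (11, (8, 5, 53, 120)),
               (12, (11, 5, 56, 120))]:
    tube.boxes[f] = box

print(tube.duration())  # 3
```

A real detector would additionally attach a confidence score and link per-frame boxes across time; the sketch only fixes the output structure the task definition implies.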

Papers

Showing 101-150 of 1149 papers

Title | Status | Hype
VideoMultiAgents: A Multi-Agent Framework for Video Question Answering | Code | 1
TimeChat-Online: 80% Visual Tokens are Naturally Redundant in Streaming Videos | Code | 3
TimeSoccer: An End-to-End Multimodal Large Language Model for Soccer Commentary Generation | - | 0
DyMU: Dynamic Merging and Virtual Unmerging for Efficient VLMs | - | 0
IV-Bench: A Benchmark for Image-Grounded Video Perception and Reasoning in Multimodal LLMs | Code | 1
Eagle 2.5: Boosting Long-Context Post-Training for Frontier Vision-Language Models | Code | 4
An LMM for Efficient Video Understanding via Reinforced Compression of Video Cubes | - | 0
Grounding-MD: Grounded Video-language Pre-training for Open-World Moment Detection | - | 0
OmniV-Med: Scaling Medical Vision-Language Model for Universal Visual Understanding | - | 0
ResNetVLLM -- Multi-modal Vision LLM for the Video Understanding Task | - | 0
Are Vision LLMs Road-Ready? A Comprehensive Benchmark for Safety-Critical Driving Video Understanding | Code | 0
How Well Can General Vision-Language Models Learn Medicine By Watching Public Educational Videos? | - | 0
VistaDPO: Video Hierarchical Spatial-Temporal Direct Preference Optimization for Large Video Models | Code | 1
Prototypes are Balanced Units for Efficient and Effective Partially Relevant Video Retrieval | - | 0
Perception Encoder: The best visual embeddings are not at the output of the network | Code | 8
PerceptionLM: Open-Access Data and Models for Detailed Visual Understanding | Code | 7
Self-alignment of Large Video Language Models with Refined Regularized Preference Optimization | - | 0
OmniVDiff: Omni Controllable Video Diffusion for Generation and Understanding | - | 0
PVUW 2025 Challenge Report: Advances in Pixel-level Understanding of Complex Videos in the Wild | - | 0
Mavors: Multi-granularity Video Representation for Multimodal Large Language Model | - | 0
Multimodal Long Video Modeling Based on Temporal Dynamic Context | Code | 1
TinyLLaVA-Video-R1: Towards Smaller LMMs for Video Reasoning | Code | 2
F^3Set: Towards Analyzing Fast, Frequent, and Fine-grained Events from Videos | Code | 1
Towards Efficient and Robust Moment Retrieval System: A Unified Framework for Multi-Granularity Models and Temporal Reranking | - | 0
How Can Objects Help Video-Language Understanding? | - | 0
VideoExpert: Augmented LLM for Temporal-Sensitive Video Understanding | - | 0
SF2T: Self-supervised Fragment Finetuning of Video-LLMs for Fine-Grained Understanding | - | 0
VideoChat-R1: Enhancing Spatio-Temporal Perception via Reinforcement Fine-Tuning | Code | 3
From 128K to 4M: Efficient Training of Ultra-Long Context Large Language Models | - | 0
From Broadcast to Minimap: Achieving State-of-the-Art SoccerNet Game State Reconstruction | - | 0
InstructionBench: An Instructional Video Understanding Benchmark | - | 0
Re-thinking Temporal Search for Long-Form Video Understanding | Code | 2
Learning Audio-guided Video Representation with Gated Attention for Video-Text Retrieval | - | 0
Moment Quantization for Video Temporal Grounding | - | 0
Scaling Video-Language Models to 10K Frames via Hierarchical Differential Distillation | Code | 2
Aligned Better, Listen Better for Audio-Visual Large Language Models | - | 0
Is Temporal Prompting All We Need For Limited Labeled Action Recognition? | - | 0
TimeSearch: Hierarchical Video Search with Spotlight and Reflection for Human-like Long Video Understanding | - | 0
SpaceR: Reinforcing MLLMs in Video Spatial Reasoning | Code | 2
Slow-Fast Architecture for Video Multi-Modal Large Language Models | Code | 1
Exploring the Effect of Reinforcement Learning on Video Understanding: Insights from SEED-Bench-R1 | Code | 2
H2VU-Benchmark: A Comprehensive Benchmark for Hierarchical Holistic Video Understanding | - | 0
DANTE-AD: Dual-Vision Attention Network for Long-Term Audio Description | - | 0
CA^2ST: Cross-Attention in Audio, Space, and Time for Holistic Video Recognition | - | 0
OmniMMI: A Comprehensive Multi-modal Interaction Benchmark in Streaming Video Contexts | - | 0
BOLT: Boost Large Vision-Language Model Without Training for Long-form Video Understanding | Code | 1
Mobile-VideoGPT: Fast and Accurate Video Understanding Language Model | Code | 2
From Trial to Triumph: Advancing Long Video Understanding via Visual Context Sample Scaling and Self-reward Alignment | - | 0
Self-ReS: Self-Reflection in Large Vision-Language Models for Long Video Understanding | - | 0
Bootstrap Your Own Views: Masked Ego-Exo Modeling for Fine-grained View-invariant Video Representations | Code | 0
Page 3 of 23

No leaderboard results yet.