SOTAVerified

Video Understanding

A central task in video understanding is to recognise and localise, in both space and time, the actions or events that appear in a video.

Source: Action Detection from a Robot-Car Perspective
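The task description above amounts to producing, for each action instance, a class label, a temporal span, and spatial bounding boxes. A minimal sketch of that output structure, plus the temporal intersection-over-union commonly used to match predictions to ground truth, might look as follows (the `ActionDetection` type and field names are illustrative, not from any specific benchmark):

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class ActionDetection:
    """One localised action instance: a class label, a temporal span in
    seconds, and optional per-frame spatial boxes (x1, y1, x2, y2)."""
    label: str
    t_start: float
    t_end: float
    boxes: List[Tuple[float, float, float, float]]

def temporal_iou(a: ActionDetection, b: ActionDetection) -> float:
    """Intersection-over-union of two temporal spans, a standard
    criterion for scoring temporal localisation."""
    inter = max(0.0, min(a.t_end, b.t_end) - max(a.t_start, b.t_start))
    union = (a.t_end - a.t_start) + (b.t_end - b.t_start) - inter
    return inter / union if union > 0 else 0.0

# A prediction overlapping the ground truth by 3 s out of a 5 s union:
pred = ActionDetection("turn-left", 2.0, 6.0, [])
gt = ActionDetection("turn-left", 3.0, 7.0, [])
print(temporal_iou(pred, gt))  # 0.6
```

A prediction is typically counted correct when its label matches and the temporal IoU exceeds a threshold such as 0.5.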

Papers

Showing 101–110 of 1149 papers

Title | Status | Hype
VidEgoThink: Assessing Egocentric Video Understanding Capabilities for Embodied AI | Code | 2
Free Video-LLM: Prompt-guided Visual Perception for Efficient Training-free Video LLMs | Code | 2
TRACE: Temporal Grounding Video LLM via Causal Event Modeling | Code | 2
Grounded-VideoLLM: Sharpening Fine-grained Temporal Grounding in Video Large Language Models | Code | 2
E.T. Bench: Towards Open-Ended Event-Level Video-Language Understanding | Code | 2
Video-CCAM: Enhancing Video-Language Understanding with Causal Cross-Attention Masks for Short and Long Videos | Code | 2
LongVideoBench: A Benchmark for Long-context Interleaved Video-Language Understanding | Code | 2
Video-STaR: Self-Training Enables Video Instruction Tuning with Any Supervision | Code | 2
OmAgent: A Multi-modal Agent Framework for Complex Video Understanding with Task Divide-and-Conquer | Code | 2
Needle In A Video Haystack: A Scalable Synthetic Evaluator for Video MLLMs | Code | 2
Page 11 of 115

No leaderboard results yet.