SOTAVerified

Video Understanding

A crucial task of Video Understanding is to recognise and localise (in space and time) different actions or events appearing in the video.

Source: Action Detection from a Robot-Car Perspective
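Spatio-temporal action detection output is typically a labelled "tube": an action class plus a bounding box for each frame in which the action occurs, and predictions are matched to ground truth by overlap in time and space. A minimal sketch of that data structure with a temporal overlap score (the `ActionTube` class and example labels are illustrative, not taken from any specific benchmark):

```python
from dataclasses import dataclass, field

@dataclass
class ActionTube:
    """A spatio-temporal detection: an action label plus one box per frame."""
    label: str
    # boxes maps frame index -> (x1, y1, x2, y2)
    boxes: dict = field(default_factory=dict)

    @property
    def start(self) -> int:
        return min(self.boxes)

    @property
    def end(self) -> int:
        return max(self.boxes)

def temporal_iou(a: ActionTube, b: ActionTube) -> float:
    """Intersection-over-union of the two tubes' frame spans."""
    inter = min(a.end, b.end) - max(a.start, b.start) + 1
    union = max(a.end, b.end) - min(a.start, b.start) + 1
    return max(inter, 0) / union

# Prediction covers frames 0-29, ground truth frames 10-39:
pred = ActionTube("wave", {i: (10, 10, 50, 90) for i in range(0, 30)})
gt   = ActionTube("wave", {i: (12, 11, 52, 88) for i in range(10, 40)})
print(temporal_iou(pred, gt))  # → 0.5 (overlap 20 frames / span 40 frames)
```

Full evaluation would also require spatial IoU between the per-frame boxes; the temporal term above is just the simplest component of that matching.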

Papers

Showing 171–180 of 1149 papers

| Title | Status | Hype |
| --- | --- | --- |
| Slow-Fast Architecture for Video Multi-Modal Large Language Models | Code | 1 |
| BOLT: Boost Large Vision-Language Model Without Training for Long-form Video Understanding | Code | 1 |
| PAVE: Patching and Adapting Video Large Language Models | Code | 1 |
| Exploring Hallucination of Large Multimodal Models in Video Understanding: Benchmark, Analysis and Mitigation | Code | 1 |
| MammAlps: A multi-view video behavior monitoring dataset of wild mammals in the Swiss Alps | Code | 1 |
| V2P-Bench: Evaluating Video-Language Understanding with Visual Prompts for Better Human-Model Interaction | Code | 1 |
| Agentic Keyframe Search for Video Question Answering | Code | 1 |
| STOP: Integrated Spatial-Temporal Dynamic Prompting for Video Understanding | Code | 1 |
| Hybrid-Level Instruction Injection for Video Token Compression in Multi-modal Large Language Models | Code | 1 |
| Does Your Vision-Language Model Get Lost in the Long Video Sampling Dilemma? | Code | 1 |
Page 18 of 115

No leaderboard results yet.