
Video Understanding

A crucial task in Video Understanding is to recognise and localise, in both space and time, the different actions or events appearing in a video.

Source: Action Detection from a Robot-Car Perspective
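To make the definition above concrete: one common output format for spatio-temporal action detection is an "action tube", i.e. an action label together with a time interval and a per-frame bounding box track. A minimal Python sketch follows; the class name and fields are illustrative conventions, not taken from any of the papers listed here.

```python
from dataclasses import dataclass, field

@dataclass
class ActionTube:
    """One detected action: a class label plus its extent in time and space."""
    label: str          # action class, e.g. "crossing road" (hypothetical example)
    t_start: float      # start time in seconds
    t_end: float        # end time in seconds
    # frame index -> (x1, y1, x2, y2) box, localising the action spatially
    boxes: dict = field(default_factory=dict)

    def duration(self) -> float:
        """Temporal extent of the detected action."""
        return self.t_end - self.t_start

# A toy detection: one action instance spanning 3.0 s to 4.2 s,
# with a single spatial box given for frame 90.
tube = ActionTube("crossing road", 3.0, 4.2, {90: (10, 40, 60, 180)})
print(round(tube.duration(), 3))  # 1.2
```

A full detector would emit a list of such tubes per video; benchmarks then score them by temporal and spatial overlap with ground truth.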

Papers

Showing 351–375 of 1149 papers

| Title | Status | Hype |
| --- | --- | --- |
| TS-LLaVA: Constructing Visual Tokens through Thumbnail-and-Sampling for Training-Free Video Large Language Models | Code | 1 |
| ViBe: A Text-to-Video Benchmark for Evaluating Hallucination in Large Multimodal Models | | 0 |
| Can MLLMs Guide Weakly-Supervised Temporal Action Localization Tasks? | | 0 |
| EVQAScore: Efficient Video Question Answering Data Evaluation | | 0 |
| Video RWKV: Video Action Recognition Based RWKV | | 0 |
| StreamingBench: Assessing the Gap for MLLMs to Achieve Streaming Video Understanding | Code | 2 |
| Personalized Video Summarization by Multimodal Video Understanding | | 0 |
| PPLLaVA: Varied Video Sequence Understanding With Prompt Guidance | Code | 2 |
| Language-Assisted Skeleton Action Understanding for Skeleton-Based Temporal Action Segmentation | Code | 1 |
| Video Token Merging for Long-form Video Understanding | | 0 |
| Situational Scene Graph for Structured Human-centric Situation Understanding | Code | 0 |
| TOMATO: Assessing Visual Temporal Reasoning Capabilities in Multimodal Foundation Models | Code | 1 |
| Zero-Shot Action Recognition in Surveillance Videos | | 0 |
| Egocentric and Exocentric Methods: A Short Survey | | 0 |
| Adaptive Video Understanding Agent: Enhancing efficiency with dynamic frame sampling and feedback-driven reasoning | | 0 |
| TimeSuite: Improving MLLMs for Long Video Understanding via Grounded Tuning | Code | 2 |
| VideoWebArena: Evaluating Long Context Multimodal Agents with Video Understanding Web Tasks | Code | 1 |
| CAMEL-Bench: A Comprehensive Arabic LMM Benchmark | Code | 1 |
| LongVU: Spatiotemporal Adaptive Compression for Long Video-Language Understanding | Code | 3 |
| ContextDet: Temporal Action Detection with Adaptive Context Aggregation | | 0 |
| EVA: An Embodied World Model for Future Video Anticipation | | 0 |
| FIOVA: A Multi-Annotator Benchmark for Human-Aligned Video Captioning | | 0 |
| Making Every Frame Matter: Continuous Video Understanding for Large Models via Adaptive State Modeling | | 0 |
| Zero-shot Action Localization via the Confidence of Large Vision-Language Models | | 0 |
| VidEgoThink: Assessing Egocentric Video Understanding Capabilities for Embodied AI | Code | 2 |
Page 15 of 46

No leaderboard results yet.