SOTAVerified

Video Understanding

A crucial task in Video Understanding is to recognise and localise, in both space and time, the different actions or events that appear in a video.

Source: Action Detection from a Robot-Car Perspective

Papers

Showing 381–390 of 1149 papers

Title | Status | Hype
VERIFIED: A Video Corpus Moment Retrieval Benchmark for Fine-Grained Video Understanding | Code | 1
TVBench: Redesigning Video-Language Evaluation | — | 0
Enhancing Multimodal LLM for Detailed and Accurate Video Captioning using Multi-Round Preference Optimization | — | 0
MM-Ego: Towards Building Egocentric Multimodal LLMs | — | 0
TRACE: Temporal Grounding Video LLM via Causal Event Modeling | Code | 2
Enhancing Temporal Modeling of Video LLMs via Time Gating | Code | 0
SparseVLM: Visual Token Sparsification for Efficient Vision-Language Model Inference | Code | 3
AuroraCap: Efficient, Performant Video Detailed Captioning and a New Benchmark | — | 0
Grounded-VideoLLM: Sharpening Fine-grained Temporal Grounding in Video Large Language Models | Code | 2
Frame-Voyager: Learning to Query Frames for Video Large Language Models | — | 0
Page 39 of 115

No leaderboard results yet.