
Video Understanding

A crucial task in Video Understanding is to recognise and localise, in space and time, the different actions or events that appear in a video.

Source: Action Detection from a Robot-Car Perspective
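To make the localisation aspect concrete, spatio-temporal action detectors commonly report an "action tube": a class label with a confidence score, a temporal extent, and a bounding box per frame. The sketch below (Python 3.9+) illustrates that output structure; the names `ActionTube` and `BoxAtFrame` and the example values are purely illustrative and are not taken from any paper on this list.

```python
from dataclasses import dataclass

@dataclass
class BoxAtFrame:
    frame: int                                # frame index within the video
    box: tuple[float, float, float, float]    # (x1, y1, x2, y2) in pixels

@dataclass
class ActionTube:
    label: str              # action/event class, e.g. "pedestrian crossing"
    score: float            # detector confidence in [0, 1]
    boxes: list[BoxAtFrame] # one spatial box for each frame the action spans

    @property
    def start_frame(self) -> int:
        # Temporal localisation: first frame covered by the tube.
        return min(b.frame for b in self.boxes)

    @property
    def end_frame(self) -> int:
        # Temporal localisation: last frame covered by the tube.
        return max(b.frame for b in self.boxes)

# Hypothetical detection: an action localised in time (frames 120-122)
# and in space (one box per frame).
tube = ActionTube(
    label="pedestrian crossing",
    score=0.91,
    boxes=[BoxAtFrame(f, (40.0, 60.0, 120.0, 220.0)) for f in range(120, 123)],
)
print(tube.label, tube.start_frame, tube.end_frame)
```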

Papers

Showing 201–250 of 1,149 papers

Title | Status | Hype
VideoICL: Confidence-based Iterative In-context Learning for Out-of-Distribution Video Understanding | Code | 1
PhysGame: Uncovering Physical Commonsense Violations in Gameplay Videos | Code | 1
T2Vid: Translating Long Text into Multi-Image is the Catalyst for Video-LLMs | Code | 1
Teaching VLMs to Localize Specific Objects from In-context Examples | Code | 1
TS-LLaVA: Constructing Visual Tokens through Thumbnail-and-Sampling for Training-Free Video Large Language Models | Code | 1
Language-Assisted Skeleton Action Understanding for Skeleton-Based Temporal Action Segmentation | Code | 1
TOMATO: Assessing Visual Temporal Reasoning Capabilities in Multimodal Foundation Models | Code | 1
VideoWebArena: Evaluating Long Context Multimodal Agents with Video Understanding Web Tasks | Code | 1
CAMEL-Bench: A Comprehensive Arabic LMM Benchmark | Code | 1
TemporalBench: Benchmarking Fine-grained Temporal Understanding for Multimodal Video Models | Code | 1
VERIFIED: A Video Corpus Moment Retrieval Benchmark for Fine-Grained Video Understanding | Code | 1
VideoINSTA: Zero-shot Long Video Understanding via Informative Spatial-Temporal Reasoning with LLMs | Code | 1
From Seconds to Hours: Reviewing MultiModal Large Language Models on Comprehensive Long Video Understanding | Code | 1
HAT: History-Augmented Anchor Transformer for Online Temporal Action Localization | Code | 1
COM Kitchens: An Unedited Overhead-view Video Dataset as a Vision-Language Benchmark | Code | 1
Learning Video Context as Interleaved Multimodal Sequences | Code | 1
EgoCVR: An Egocentric Benchmark for Fine-Grained Composed Video Retrieval | Code | 1
VideoMamba: Spatio-Temporal Selective State Space Model | Code | 1
Hypergraph Multi-modal Large Language Model: Exploiting EEG and Eye-tracking Modalities to Evaluate Heterogeneous Responses for Video Understanding | Code | 1
MMAD: Multi-label Micro-Action Detection in Videos | Code | 1
InfiniBench: A Comprehensive Benchmark for Large Multimodal Models in Very Long Video Understanding | Code | 1
Snakes and Ladders: Two Steps Up for VideoMamba | Code | 1
Fibottention: Inceptive Visual Representation Learning with Diverse Attention Across Heads | Code | 1
Towards Event-oriented Long Video Understanding | Code | 1
AlanaVLM: A Multimodal Embodied AI Foundation Model for Egocentric Video Understanding | Code | 1
Slot State Space Models | Code | 1
VideoVista: A Versatile Benchmark for Video Understanding and Reasoning | Code | 1
MMWorld: Towards Multi-discipline Multi-faceted World Model Evaluation in Videos | Code | 1
Differentiable Task Graph Learning: Procedural Activity Representation and Online Mistake Detection from Egocentric Videos | Code | 1
EgoSurgery-Phase: A Dataset of Surgical Phase Recognition from Egocentric Open Surgery Videos | Code | 1
TOPA: Extending Large Language Models for Video Understanding via Text-Only Pre-Alignment | Code | 1
No Time to Waste: Squeeze Time into Channel for Mobile Video Understanding | Code | 1
SFMViT: SlowFast Meet ViT in Chaotic World | Code | 1
Task-Driven Exploration: Decoupling and Inter-Task Feedback for Joint Moment Retrieval and Highlight Detection | Code | 1
Enhancing Traffic Safety with Parallel Dense Video Captioning for End-to-End Event Analysis | Code | 1
SportsHHI: A Dataset for Human-Human Interaction Detection in Sports Videos | Code | 1
Language Repository for Long Video Understanding | Code | 1
Exploring Pre-trained Text-to-Video Diffusion Models for Referring Video Object Segmentation | Code | 1
Towards Neuro-Symbolic Video Understanding | Code | 1
Spatio-temporal Prompting Network for Robust Video Feature Extraction | Code | 1
BehAVE: Behaviour Alignment of Video Game Encodings | Code | 1
Compositional Video Understanding with Spatiotemporal Structure-based Transformers | Code | 1
A Simple LLM Framework for Long-Range Video Question-Answering | Code | 1
Open-Vocabulary Video Relation Extraction | Code | 1
Shot2Story20K: A New Benchmark for Comprehensive Understanding of Multi-shot Videos | Code | 1
SMILE: Multimodal Dataset for Understanding Laughter in Video with Language Models | Code | 1
How Well Does GPT-4V(ision) Adapt to Distribution Shifts? A Preliminary Investigation | Code | 1
Grounded Question-Answering in Long Egocentric Videos | Code | 1
Action Scene Graphs for Long-Form Understanding of Egocentric Videos | Code | 1
DEVIAS: Learning Disentangled Video Representations of Action and Scene | Code | 1
Page 5 of 23

No leaderboard results yet.