SOTAVerified

Video Understanding

A crucial task in Video Understanding is to recognise and localise, in space and time, the different actions or events that appear in a video.

Source: Action Detection from a Robot-Car Perspective

Papers

Showing 501–550 of 1149 papers

Title | Status | Hype
Temporal Grounding of Activities using Multimodal Large Language Models | - | 0
DeMamba: AI-Generated Video Detection on Million-Scale GenVideo Benchmark | Code | 2
EgoSurgery-Phase: A Dataset of Surgical Phase Recognition from Egocentric Open Surgery Videos | Code | 1
VideoTree: Adaptive Tree-based Video Representation for LLM Reasoning on Long Videos | Code | 2
MMCTAgent: Multi-modal Critical Thinking Agent Framework for Complex Visual Reasoning | - | 0
Hierarchical Action Recognition: A Contrastive Video-Language Approach with Hierarchical Interactions | - | 0
Hawk: Learning to Understand Open-World Video Anomalies | Code | 3
Streaming Long Video Understanding with Large Language Models | - | 0
MAMBA4D: Efficient Long-Sequence Point Cloud Video Understanding with Disentangled Spatial-Temporal State Space Models | - | 0
Dense Connector for MLLMs | Code | 2
TOPA: Extending Large Language Models for Video Understanding via Text-Only Pre-Alignment | Code | 1
Anticipating Object State Changes in Long Procedural Videos | - | 0
Open-Vocabulary Spatio-Temporal Action Detection | - | 0
CinePile: A Long Video Question Answering Dataset and Benchmark | - | 0
Challenges in Deploying Long-Context Transformers: A Theoretical Peak Performance Analysis | - | 0
No Time to Waste: Squeeze Time into Channel for Mobile Video Understanding | Code | 1
Global Motion Understanding in Large-Scale Video Object Segmentation | - | 0
RETTA: Retrieval-Enhanced Test-Time Adaptation for Zero-Shot Video Captioning | - | 0
A Survey on Backbones for Deep Video Action Recognition | - | 0
Video-of-Thought: Step-by-Step Video Reasoning from Perception to Cognition | - | 0
Vision Mamba: A Comprehensive Survey and Taxonomy | Code | 2
Snippet-Aware Transformer With Multiple Action Elements for Skeleton-Based Action Segmentation | Code | 0
Foundation Models for Video Understanding: A Survey | Code | 2
WorldQA: Multimodal World Knowledge in Videos through Long-Chain Reasoning | - | 0
How Good is my Video LMM? Complex Video Reasoning and Robustness Evaluation Suite for Video-LMMs | - | 0
Learning text-to-video retrieval from image captioning | - | 0
Open-Set Video-based Facial Expression Recognition with Human Expression-sensitive Prompting | - | 0
MovieChat+: Question-aware Sparse Memory for Long Video Question Answering | Code | 4
PLLaVA: Parameter-free LLaVA Extension from Images to Videos for Video Dense Captioning | Code | 4
SFMViT: SlowFast Meet ViT in Chaotic World | Code | 1
IPAD: Industrial Process Anomaly Detection Dataset | - | 0
From Image to Video, what do we need in multimodal LLMs? | - | 0
Leveraging Temporal Contextualization for Video Action Recognition | Code | 2
In My Perspective, In My Hands: Accurate Egocentric 2D Hand Pose and Action Recognition | Code | 0
Task-Driven Exploration: Decoupling and Inter-Task Feedback for Joint Moment Retrieval and Highlight Detection | Code | 1
Enhancing Traffic Safety with Parallel Dense Video Captioning for End-to-End Event Analysis | Code | 1
Gaze-Guided Graph Neural Network for Action Anticipation Conditioned on Intention | - | 0
A Transformer-Based Model for the Prediction of Human Gaze Behavior on Videos | - | 0
MA-LMM: Memory-Augmented Large Multimodal Model for Long-Term Video Understanding | Code | 3
SportsHHI: A Dataset for Human-Human Interaction Detection in Sports Videos | Code | 1
Koala: Key frame-conditioned long video-LLM | - | 0
BioVL-QR: Egocentric Biochemical Vision-and-Language Dataset Using Micro QR Codes | - | 0
OW-VISCapTor: Abstractors for Open-World Video Instance Segmentation and Captioning | - | 0
LongVLM: Efficient Long Video Understanding via Large Language Models | Code | 2
MiniGPT4-Video: Advancing Multimodal LLMs for Video Understanding with Interleaved Visual-Textual Tokens | Code | 4
SnAG: Scalable and Accurate Video Grounding | Code | 4
R^2-Tuning: Efficient Image-to-Video Transfer Learning for Video Temporal Grounding | - | 0
Instrument-tissue Interaction Detection Framework for Surgical Video Understanding | - | 0
ST-LLM: Large Language Models Are Effective Temporal Learners | Code | 2
Page 11 of 23

No leaderboard results yet.