
Video Understanding

A crucial task in Video Understanding is to recognise and localise, in space and time, the different actions or events appearing in a video.

Source: Action Detection from a Robot-Car Perspective
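
To make "localise in space and time" concrete: a common output representation for spatio-temporal action detection is an action tube, i.e. a class label with a per-frame track of bounding boxes. The minimal Python sketch below illustrates that representation; the ActionTube class, its field names, and the "wave" label are illustrative assumptions, not drawn from any paper listed on this page.

```python
from dataclasses import dataclass, field

@dataclass
class ActionTube:
    """One detected action, localised in space (boxes) and time (frame range)."""
    label: str   # action class (hypothetical name used below)
    score: float # detector confidence in [0, 1]
    # frame index -> (x1, y1, x2, y2) bounding box in pixel coordinates
    boxes: dict[int, tuple[float, float, float, float]] = field(default_factory=dict)

    def temporal_extent(self) -> tuple[int, int]:
        """First and last frame in which the action is detected."""
        frames = sorted(self.boxes)
        return frames[0], frames[-1]

# A toy detection: an action spanning frames 10-12, drifting slightly each frame.
tube = ActionTube(
    label="wave",  # hypothetical class name
    score=0.87,
    boxes={10: (40, 60, 120, 200), 11: (42, 61, 122, 201), 12: (45, 62, 125, 203)},
)
print(tube.temporal_extent())  # (10, 12)
```

Benchmarks for this task typically score a predicted tube against a ground-truth tube by spatio-temporal overlap, e.g. spatial IoU averaged over the frames the two tubes share.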

Papers

Showing 551–600 of 1149 papers (page 12 of 23)

Columns: Title, Status, Hype. Every entry on this page has an empty Status and a Hype score of 0, so only the titles are listed.

Apollo: An Exploration of Video Understanding in Large Multimodal Models
APVR: Hour-Level Long Video Understanding with Adaptive Pivot Visual Information Retrieval
Artificial intelligence optical hardware empowers high-resolution hyperspectral video understanding at 1.2 Tb/s
A Spiking Sequential Model: Recurrent Leaky Integrate-and-Fire
A Structured Model For Action Detection
A Study On the Effects of Pre-processing On Spatio-temporal Action Recognition Using Spiking Neural Networks Trained with STDP
A Survey on Backbones for Deep Video Action Recognition
A Survey on Generative AI and LLM for Video Generation, Understanding, and Streaming
A Survey on Mamba Architecture for Vision Applications
A Survey on Video Analytics in Cloud-Edge-Terminal Collaborative Systems
A Transformer-Based Model for the Prediction of Human Gaze Behavior on Videos
Attend and Interact: Higher-Order Object Interactions for Video Understanding
Attend-Fusion: Efficient Audio-Visual Fusion for Video Classification
Attention Is Not Enough: Mitigating the Distribution Discrepancy in Asynchronous Multimodal Sequence Fusion
Audio-Visual Glance Network for Efficient Video Recognition
Audio-Visual LLM for Video Understanding
Audio Visual Scene-Aware Dialog Generation with Transformer-based Video Representations
Audio-visual training for improved grounding in video-text LLMs
Augmented Transformer with Adaptive Graph for Temporal Action Proposal Generation
A Unified Framework for Human-centric Point Cloud Video Understanding
A Unified Model for Video Understanding and Knowledge Embedding with Heterogeneous Knowledge Graph Dataset
AuroraCap: Efficient, Performant Video Detailed Captioning and a New Benchmark
Auto-captions on GIF: A Large-scale Video-sentence Dataset for Vision-language Pre-training
Auto-X3D: Ultra-Efficient Video Understanding via Finer-Grained Neural Architecture Search
AVD2: Accident Video Diffusion for Accident Video Description
Empowering LLMs with Pseudo-Untrimmed Videos for Audio-Visual Temporal Understanding
AV-Reasoner: Improving and Benchmarking Clue-Grounded Audio-Visual Counting for MLLMs
AVT: Audio-Video Transformer for Multimodal Action Recognition
BATMAN: Bilateral Attention Transformer in Motion-Appearance Neighboring Space for Video Object Segmentation
BEARCUBS: A benchmark for computer-using web agents
BERT for Large-scale Video Segment Classification with Test-time Augmentation
Beyond Appearance: Geometric Cues for Robust Video Instance Segmentation
Beyond Boxes: Mask-Guided Spatio-Temporal Feature Aggregation for Video Object Detection
Beyond still images: Temporal features and input variance resilience
Beyond the Camera: Neural Networks in World Coordinates
Beyond Training: Dynamic Token Merging for Zero-Shot Video Understanding
BioVL-QR: Egocentric Biochemical Vision-and-Language Dataset Using Micro QR Codes
Breaking Down Video LLM Benchmarks: Knowledge, Spatial Perception, or True Temporal Understanding?
Breaking the Encoder Barrier for Seamless Video-Language Understanding
Bridging Audio and Vision: Zero-Shot Audiovisual Segmentation by Connecting Pretrained Models
Bringing Image Scene Structure to Video via Frame-Clip Consistency of Object Tokens
Building a Mind Palace: Structuring Environment-Grounded Semantic Graphs for Effective Long Video Analysis with LLMs
Building Scalable Video Understanding Benchmarks through Sports
C^3: Compositional Counterfactual Contrastive Learning for Video-grounded Dialogues
CA^2ST: Cross-Attention in Audio, Space, and Time for Holistic Video Recognition
CAG-QIL: Context-Aware Actionness Grouping via Q Imitation Learning for Online Temporal Action Localization
Camera Calibration and Player Localization in SoccerNet-v2 and Investigation of their Representations for Action Spotting
Can CLIP Count Stars? An Empirical Study on Quantity Bias in CLIP
FIOVA: A Multi-Annotator Benchmark for Human-Aligned Video Captioning
Can MLLMs Guide Weakly-Supervised Temporal Action Localization Tasks?

Leaderboard

No leaderboard results yet.