SOTAVerified

Video Understanding

A crucial task in Video Understanding is to recognise and localise (in both space and time) the different actions or events that appear in a video.

Source: Action Detection from a Robot-Car Perspective
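The recognise-and-localise task described above is often represented as an "action tube": a class label paired with a per-frame sequence of bounding boxes. The sketch below is illustrative only (the names `ActionTube` and `temporal_iou` are assumptions, not taken from any listed paper); it shows the spatial part (boxes per frame) and a minimal temporal-overlap measure of the kind used when matching detections to ground truth.

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple


@dataclass
class ActionTube:
    """A hypothetical spatio-temporal action detection: one label plus
    one axis-aligned box (x1, y1, x2, y2) per frame it spans."""
    label: str
    boxes: Dict[int, Tuple[float, float, float, float]] = field(default_factory=dict)

    @property
    def span(self) -> Tuple[int, int]:
        # First and last frame index covered by this tube.
        frames = sorted(self.boxes)
        return frames[0], frames[-1]


def temporal_iou(a: ActionTube, b: ActionTube) -> float:
    """Intersection-over-union of the two tubes' frame spans (time only)."""
    (a0, a1), (b0, b1) = a.span, b.span
    inter = max(0, min(a1, b1) - max(a0, b0) + 1)
    union = (a1 - a0 + 1) + (b1 - b0 + 1) - inter
    return inter / union


# Two toy tubes: frames 10-19 and 15-24 overlap on 5 of 15 distinct frames.
run = ActionTube("running", {f: (0.0, 0.0, 1.0, 1.0) for f in range(10, 20)})
jump = ActionTube("jumping", {f: (0.0, 0.0, 1.0, 1.0) for f in range(15, 25)})
print(round(temporal_iou(run, jump), 3))  # → 0.333
```

A full spatio-temporal metric would additionally intersect the per-frame boxes; the temporal term alone is shown here to keep the sketch self-contained.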

Papers

Showing 801–850 of 1149 papers

Title (Hype)

Large Language Models for Crash Detection in Video: A Survey of Methods, Datasets, and Challenges (0)
Large-Scale Video Classification with Feature Space Augmentation coupled with Learned Label Relations and Ensembling (0)
Large Scale Video Representation Learning via Relational Graph Clustering (0)
Large-Scale YouTube-8M Video Understanding with Deep Neural Networks (0)
LASER: A Neuro-Symbolic Framework for Learning Spatial-Temporal Scene Graphs with Weak Supervision (0)
Learning an Augmented RGB Representation with Cross-Modal Knowledge Distillation for Action Detection (0)
Learning Audio-guided Video Representation with Gated Attention for Video-Text Retrieval (0)
Learning Dynamic MRI Reconstruction with Convolutional Network Assisted Reconstruction Swin Transformer (0)
Learning Dynamics via Graph Neural Networks for Human Pose Estimation and Tracking (0)
Learning Fine-grained View-Invariant Representations from Unpaired Ego-Exo Videos via Temporal Alignment (0)
Learning from Multiple Sources for Video Summarisation (0)
Learning Higher-order Object Interactions for Keypoint-based Video Understanding (0)
Learning Object State Changes in Videos: An Open-World Perspective (0)
Learning reusable concepts across different egocentric video understanding tasks (0)
Learning Space-Time Semantic Correspondences (0)
Learning text-to-video retrieval from image captioning (0)
Learning to Focus on the Foreground for Temporal Sentence Grounding (0)
Learning to Visually Connect Actions and their Effects (0)
Learning without Prejudice: Avoiding Bias in Webly-Supervised Action Recognition (0)
Less than Few: Self-Shot Video Instance Segmentation (0)
Leveraging Foundation Models for Multimodal Graph-Based Action Recognition (0)
Leveraging Local Temporal Information for Multimodal Scene Classification (0)
LIGAR: Lightweight General-purpose Action Recognition (0)
LiveVLM: Efficient Online Video Understanding via Streaming-Oriented KV Cache and Retrieval (0)
LiVLR: A Lightweight Visual-Linguistic Reasoning Framework for Video Question Answering (0)
LLaVA-MLB: Mitigating and Leveraging Attention Bias for Training-Free Video LLMs (0)
LLaVA-Octopus: Unlocking Instruction-Driven Adaptive Projector Fusion for Video Understanding (0)
LLAVIDAL: A Large LAnguage VIsion Model for Daily Activities of Living (0)
LLM4Brain: Training a Large Language Model for Brain Video Understanding (0)
LLMs Meet Long Video: Advancing Long Video Question Answering with An Interactive Visual Adapter in LLMs (0)
Localizing Events in Videos with Multimodal Queries (0)
Localizing Unseen Activities in Video via Image Query (0)
Logic-in-Frames: Dynamic Keyframe Search via Visual Semantic-Logical Verification for Long Video Understanding (0)
Long Activity Video Understanding using Functional Object-Oriented Network (0)
LongCaptioning: Unlocking the Power of Long Caption Generation in Large Multimodal Models (0)
Long-Short Temporal Contrastive Learning of Video Transformers (0)
LongVILA: Scaling Long-Context Visual Language Models for Long Videos (0)
LongViTU: Instruction Tuning for Long-Form Video Understanding (0)
Long-VMNet: Accelerating Long-Form Video Understanding via Fixed Memory (0)
Look Every Frame All at Once: Video-Ma^2mba for Efficient Long-form Video Understanding with Multi-Axis Gradient Checkpointing (0)
Low-Fidelity End-to-End Video Encoder Pre-training for Temporal Action Localization (0)
LVAgent: Long Video Understanding by Multi-Round Dynamical Collaboration of MLLM Agents (0)
LV-XAttn: Distributed Cross-Attention for Long Visual Inputs in Multimodal Large Language Models (0)
M^33D: Learning 3D priors using Multi-Modal Masked Autoencoders for 2D image and video understanding (0)
M^3Net: Multi-view Encoding, Matching, and Fusion for Few-shot Fine-grained Action Recognition (0)
MaCP: Minimal yet Mighty Adaptation via Hierarchical Cosine Projection (0)
Making Every Frame Matter: Continuous Video Understanding for Large Models via Adaptive State Modeling (0)
MAMBA4D: Efficient Long-Sequence Point Cloud Video Understanding with Disentangled Spatial-Temporal State Space Models (0)
MambaMia: A State-Space-Model-Based Compression for Efficient Video Understanding in Large Multimodal Models (0)
MASH-VLM: Mitigating Action-Scene Hallucination in Video-LLMs through Disentangled Spatial-Temporal Representations (0)
Page 17 of 23

No leaderboard results yet.