SOTAVerified

Video Understanding

A crucial task in Video Understanding is to recognise and localise (in space and time) the different actions or events appearing in a video.

Source: Action Detection from a Robot-Car Perspective
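The temporal half of this task — deciding *when* an action occurs — is often reduced to thresholding per-frame action scores and merging contiguous runs into segments. The sketch below is illustrative only (not from any paper on this page) and assumes per-frame confidence scores have already been produced by some action classifier:

```python
def localize_segments(scores, threshold=0.5, min_len=2):
    """Group contiguous above-threshold frames into (start, end) segments.

    scores    -- per-frame action confidences (assumed given)
    threshold -- minimum score for a frame to count as "action"
    min_len   -- discard segments shorter than this many frames
    """
    segments, start = [], None
    for i, s in enumerate(scores):
        if s >= threshold and start is None:
            start = i                      # segment opens
        elif s < threshold and start is not None:
            if i - start >= min_len:
                segments.append((start, i))  # segment closes at frame i
            start = None
    if start is not None and len(scores) - start >= min_len:
        segments.append((start, len(scores)))  # segment runs to the end
    return segments

frame_scores = [0.1, 0.2, 0.8, 0.9, 0.7, 0.3, 0.1, 0.6, 0.9, 0.9]
print(localize_segments(frame_scores))  # → [(2, 5), (7, 10)]
```

Real detectors (e.g. the temporal action detection papers listed below) replace the thresholding with learned proposal or boundary-regression modules, but the input/output contract is the same: per-frame evidence in, (start, end) segments out.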

Papers

Showing 151–200 of 1,149 papers

Title | Status | Hype
Exploring Hallucination of Large Multimodal Models in Video Understanding: Benchmark, Analysis and Mitigation | Code | 1
PAVE: Patching and Adapting Video Large Language Models | Code | 1
ACVUBench: Audio-Centric Video Understanding Benchmark | Code | 0
CRCL: Causal Representation Consistency Learning for Anomaly Detection in Surveillance Videos | — | 0
SlowFast-LLaVA-1.5: A Family of Token-Efficient Video Large Language Models for Long-Form Video Understanding | — | 0
Video-XL-Pro: Reconstructive Token Compression for Extremely Long Video Understanding | — | 0
Breaking the Encoder Barrier for Seamless Video-Language Understanding | — | 0
Unbiasing through Textual Descriptions: Mitigating Representation Bias in Video Benchmarks | — | 0
MammAlps: A multi-view video behavior monitoring dataset of wild mammals in the Swiss Alps | Code | 1
V2P-Bench: Evaluating Video-Language Understanding with Visual Prompts for Better Human-Model Interaction | Code | 1
4D-Bench: Benchmarking Multi-modal Large Language Models for 4D Object Understanding | Code | 0
Collaborative Temporal Consistency Learning for Point-supervised Natural Language Video Localization | — | 0
Temporal Action Detection Model Compression by Progressive Block Drop | — | 0
PVChat: Personalized Video Chat with One-Shot Learning | — | 0
What can Off-the-Shelves Large Multi-Modal Models do for Dynamic Scene Graph Generation? | — | 0
Agentic Keyframe Search for Video Question Answering | Code | 1
Hybrid-Level Instruction Injection for Video Token Compression in Multi-modal Large Language Models | Code | 1
XAttention: Block Sparse Attention with Antidiagonal Scoring | Code | 3
DocVideoQA: Towards Comprehensive Understanding of Document-Centric Videos through Question Answering | — | 0
MASH-VLM: Mitigating Action-Scene Hallucination in Video-LLMs through Disentangled Spatial-Temporal Representations | — | 0
STOP: Integrated Spatial-Temporal Dynamic Prompting for Video Understanding | Code | 1
FAVOR-Bench: A Comprehensive Benchmark for Fine-Grained Video Motion Understanding | — | 0
SpaceVLLM: Endowing Multimodal Large Language Model with Spatio-Temporal Video Grounding Capability | — | 0
Improving LLM Video Understanding with 16 Frames Per Second | — | 0
Impossible Videos | — | 0
ViSpeak: Visual Instruction Feedback in Streaming Videos | Code | 2
VideoMind: A Chain-of-LoRA Agent for Long Video Reasoning | Code | 3
Long-VMNet: Accelerating Long-Form Video Understanding via Fixed Memory | — | 0
Towards Scalable Modeling of Compressed Videos for Efficient Action Recognition | — | 0
Logic-in-Frames: Dynamic Keyframe Search via Visual Semantic-Logical Verification for Long Video Understanding | — | 0
Does Your Vision-Language Model Get Lost in the Long Video Sampling Dilemma? | Code | 1
AdaReTaKe: Adaptive Redundancy Reduction to Perceive Longer for Video-language Understanding | Code | 2
Watch and Learn: Leveraging Expert Knowledge and Language for Surgical Video Understanding | — | 0
LLaVA-MLB: Mitigating and Leveraging Attention Bias for Training-Free Video LLMs | — | 0
Vamba: Understanding Hour-Long Videos with Hybrid Mamba-Transformers | — | 0
V-STaR: Benchmarking Video-LLMs on Video Spatio-Temporal Reasoning | — | 0
TIME: Temporal-sensitive Multi-dimensional Instruction Tuning and Benchmarking for Video-LLMs | — | 0
Keyframe-oriented Vision Token Pruning: Enhancing Efficiency of Large Vision Language Models on Long-Form Video Processing | Code | 0
LVAgent: Long Video Understanding by Multi-Round Dynamical Collaboration of MLLM Agents | — | 0
Reasoning is All You Need for Video Generalization: A Counterfactual Benchmark with Sub-question Evaluation | — | 0
On the Limitations of Vision-Language Models in Understanding Image Transforms | — | 0
Measure Twice, Cut Once: Grasping Video Structures and Event Semantics with LLMs for Video Temporal Localization | — | 0
FaVChat: Unlocking Fine-Grained Facial Video Understanding with Multimodal Large Language Models | — | 0
Everything Can Be Described in Words: A Simple Unified Multi-Modal Framework with Semantic and Temporal Alignment | — | 0
VideoScan: Enabling Efficient Streaming Video Understanding via Frame-level Semantic Carriers | — | 0
VLog: Video-Language Models by Generative Retrieval of Narration Vocabulary | Code | 4
Exo2Ego: Exocentric Knowledge Guided MLLM for Egocentric Video Understanding | — | 0
Generative Frame Sampler for Long Video Understanding | — | 0
Memory-enhanced Retrieval Augmentation for Long Video Understanding | — | 0
QuoTA: Query-oriented Token Assignment via CoT Query Decouple for Long Video Comprehension | Code | 2
Page 4 of 23

No leaderboard results yet.