SOTAVerified

Video Understanding

A core task in Video Understanding is to recognise and localise, in both space and time, the different actions or events appearing in a video.

Source: Action Detection from a Robot-Car Perspective

Papers

Showing 301-350 of 1149 papers

Title | Status | Hype
FocusChat: Text-guided Long Video Understanding via Spatiotemporal Information Filtering | - | 0
ShotVL: Human-Centric Highlight Frame Retrieval via Language Queries | - | 0
CG-Bench: Clue-grounded Question Answering Benchmark for Long Video Understanding | - | 0
Uni-AdaFocus: Spatial-temporal Dynamic Computation for Video Recognition | Code | 2
Overview of TREC 2024 Medical Video Question Answering (MedVidQA) Track | - | 0
Apollo: An Exploration of Video Understanding in Large Multimodal Models | - | 0
IQViC: In-context, Question Adaptive Vision Compressor for Long-term Video Understanding LMMs | - | 0
B-VLLM: A Vision Large Language Model with Balanced Spatio-Temporal Tokens | Code | 0
VCA: Video Curious Agent for Long Video Understanding | - | 0
ViCaS: A Dataset for Combining Holistic and Pixel-level Video Understanding using Captions with Grounded Segmentation | - | 0
PVC: Progressive Visual Token Compression for Unified Image and Video Processing in Large Vision-Language Models | - | 0
Neptune: The Long Orbit to Benchmarking Long Video Understanding | Code | 2
COEF-VQ: Cost-Efficient Video Quality Understanding through a Cascaded Multimodal LLM Framework | - | 0
3DSRBench: A Comprehensive 3D Spatial Reasoning Benchmark | - | 0
Multi-Scale Contrastive Learning for Video Temporal Grounding | - | 0
GEXIA: Granularity Expansion and Iterative Approximation for Scalable Multi-grained Video-language Learning | - | 0
Towards Long Video Understanding via Fine-detailed Video Story Generation | - | 0
Beyond Boxes: Mask-Guided Spatio-Temporal Feature Aggregation for Video Object Detection | - | 0
LinVT: Empower Your Image-level Large Language Model to Understand Videos | Code | 2
Expanding Performance Boundaries of Open-Source Multimodal Models with Model, Data, and Test-Time Scaling | - | 0
Espresso: High Compression For Rich Extraction From Videos for Your Vision-Language Model | - | 0
VisionZip: Longer is Better but Not Necessary in Vision Language Models | Code | 3
AIM: Adaptive Inference of Multi-Modal LLMs via Token Merging and Pruning | Code | 2
Inst-IT: Boosting Multimodal Instance Understanding via Explicit Visual Prompt Instruction Tuning | Code | 1
Streaming Detection of Queried Event Start | Code | 0
VidHalluc: Evaluating Temporal Hallucinations in Multimodal Large Language Models for Video Understanding | - | 0
Progress-Aware Video Frame Captioning | - | 0
VideoICL: Confidence-based Iterative In-context Learning for Out-of-Distribution Video Understanding | Code | 1
PhysGame: Uncovering Physical Commonsense Violations in Gameplay Videos | Code | 1
SEAL: Semantic Attention Learning for Long Video Representation | - | 0
Towards Universal Soccer Video Understanding | Code | 3
VideoSAVi: Self-Aligned Video Language Models without Human Supervision | - | 0
VISTA: Enhancing Long-Duration and High-Resolution Video Understanding by Video Spatiotemporal Augmentation | - | 0
STEP: Enhancing Video-LLMs' Compositional Reasoning by Spatio-Temporal Graph-guided Self-Training | - | 0
Perception Test 2024: Challenge Summary and a Novel Hour-Long VideoQA Benchmark | - | 0
T2Vid: Translating Long Text into Multi-Image is the Catalyst for Video-LLMs | Code | 1
LongVALE: Vision-Audio-Language-Event Benchmark Towards Time-Aware Omni-Modal Perception of Long Videos | Code | 2
Look Every Frame All at Once: Video-Ma^2mba for Efficient Long-form Video Understanding with Multi-Axis Gradient Checkpointing | - | 0
TimeMarker: A Versatile Video-LLM for Long and Short Video Understanding with Superior Temporal Localization Ability | Code | 2
SAVEn-Vid: Synergistic Audio-Visual Integration for Enhanced Understanding in Long Video Context | - | 0
OccludeNet: A Causal Journey into Mixed-View Actor-Centric Video Action Recognition under Occlusions | Code | 0
ReWind: Understanding Long Videos with Instructed Learnable Memory | - | 0
Beyond Training: Dynamic Token Merging for Zero-Shot Video Understanding | - | 0
Principles of Visual Tokens for Efficient Video Understanding | - | 0
Extending Video Masked Autoencoders to 128 frames | - | 0
Teaching VLMs to Localize Specific Objects from In-context Examples | Code | 1
VideoAutoArena: An Automated Arena for Evaluating Large Multimodal Models in Video Analysis through User Simulation | - | 0
Video-RAG: Visually-aligned Retrieval-Augmented Long Video Comprehension | Code | 3
AdaCM^2: On Understanding Extremely Long-Term Video with Adaptive Cross-Modality Memory Reduction | - | 0
DynFocus: Dynamic Cooperative Network Empowers LLMs with Video Understanding | - | 0
Page 7 of 23

No leaderboard results yet.