SOTAVerified

Video Understanding

A central task in Video Understanding is to recognise and localise, in both space and time, the different actions or events appearing in a video.

Source: Action Detection from a Robot-Car Perspective
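Spatio-temporal action detection results are commonly represented as "action tubes": a class label paired with a time interval and a per-frame bounding box. The sketch below (an illustration, not taken from any paper listed on this page; all names and values are hypothetical) shows one minimal way to model such a detection:

```python
from dataclasses import dataclass

@dataclass
class ActionTube:
    """One detected action, localised in space (boxes) and time (frames).

    This is an illustrative data structure, not an API from any listed paper.
    """
    label: str        # action class, e.g. "crossing"
    start_frame: int  # first frame the action is visible
    end_frame: int    # last frame the action is visible (inclusive)
    boxes: list       # one (x1, y1, x2, y2) box per frame in the interval

    def duration(self) -> int:
        """Number of frames the action spans."""
        return self.end_frame - self.start_frame + 1


# Hypothetical detection spanning frames 10..12, with one box per frame.
tube = ActionTube(
    label="crossing",
    start_frame=10,
    end_frame=12,
    boxes=[(40, 60, 120, 200), (42, 61, 122, 201), (45, 62, 125, 203)],
)
assert tube.duration() == len(tube.boxes)  # temporal extent matches box count
```

A full detector would emit many such tubes per video, typically with a confidence score per tube; evaluation then matches predicted tubes against ground truth by spatio-temporal overlap.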

Papers

Showing 25 of 1149 papers

Title | Status | Hype
Video-ChatGPT: Towards Detailed Video Understanding via Large Vision and Language Models | Code | 3
Hawk: Learning to Understand Open-World Video Anomalies | Code | 3
TimeChat-Online: 80% Visual Tokens are Naturally Redundant in Streaming Videos | Code | 3
XAttention: Block Sparse Attention with Antidiagonal Scoring | Code | 3
Valley2: Exploring Multimodal Models with Scalable Vision-Language Design | Code | 3
VideoChat-R1: Enhancing Spatio-Temporal Perception via Reinforcement Fine-Tuning | Code | 3
EgoLife: Towards Egocentric Life Assistant | Code | 3
SparseVLM: Visual Token Sparsification for Efficient Vision-Language Model Inference | Code | 3
Flash-VStream: Memory-Based Real-Time Understanding for Long Video Streams | Code | 3
Flash-VStream: Efficient Real-Time Understanding for Long Video Streams | Code | 3
MLVU: Benchmarking Multi-task Long Video Understanding | Code | 3
Video-RAG: Visually-aligned Retrieval-Augmented Long Video Comprehension | Code | 3
PG-Video-LLaVA: Pixel Grounding Large Video-Language Models | Code | 2
OVO-Bench: How Far is Your Video-LLMs from Real-World Online Video Understanding? | Code | 2
PPLLaVA: Varied Video Sequence Understanding With Prompt Guidance | Code | 2
One Trajectory, One Token: Grounded Video Tokenization via Panoptic Sub-object Trajectory | Code | 2
Online Video Understanding: OVBench and VideoChat-Online | Code | 2
OmniVid: A Generative Framework for Universal Video Understanding | Code | 2
Omni-Video: Democratizing Unified Video Understanding and Generation | Code | 2
PruneVid: Visual Token Pruning for Efficient Video Large Language Models | Code | 2
Chat-UniVi: Unified Visual Representation Empowers Large Language Models with Image and Video Understanding | Code | 2
MVBench: A Comprehensive Multi-modal Video Understanding Benchmark | Code | 2
Needle In A Video Haystack: A Scalable Synthetic Evaluator for Video MLLMs | Code | 2
AIN: The Arabic INclusive Large Multimodal Model | Code | 2
Multi-granularity Correspondence Learning from Long-term Noisy Videos | Code | 2
Page 3 of 46

No leaderboard results yet.