SOTAVerified

Video Understanding

A crucial task in video understanding is to recognise and localise, in space and time, the different actions or events that appear in a video.

Source: Action Detection from a Robot-Car Perspective
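Localisation "in space and time" is typically scored by overlap between predicted and ground-truth action extents: temporal IoU for when the action happens, and spatial (box) IoU for where. The sketch below is a minimal, stdlib-only illustration of those two standard metrics; the function names and thresholds are illustrative, not taken from any paper listed here.

```python
# Minimal sketch of the overlap metrics used to evaluate action localisation.
# Names here (temporal_iou, box_iou) are illustrative, not from a specific library.

def temporal_iou(pred, gt):
    """IoU of two time intervals (start, end), e.g. in seconds."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0

def box_iou(a, b):
    """IoU of two axis-aligned boxes (x1, y1, x2, y2) for the spatial part."""
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

# A prediction counts as correct when both overlaps clear a chosen threshold.
print(temporal_iou((2.0, 6.0), (4.0, 8.0)))    # 2 s overlap over a 6 s span: 1/3
print(box_iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 px² overlap over 175 px²: 1/7
```

Benchmarks usually sweep the threshold (e.g. temporal IoU from 0.5 to 0.95) and report the mean average precision across thresholds.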

Papers

Showing 1–25 of 1149 papers

Title | Status | Hype
CogVLM2: Visual Language Models for Image and Video Understanding | Code | 9
World Model on Million-Length Video And Language With Blockwise RingAttention | Code | 9
Perception Encoder: The best visual embeddings are not at the output of the network | Code | 8
PerceptionLM: Open-Access Data and Models for Detailed Visual Understanding | Code | 7
VideoRAG: Retrieval-Augmented Generation with Extreme Long-Context Videos | Code | 7
GLM-4.1V-Thinking: Towards Versatile Multimodal Reasoning with Scalable Reinforcement Learning | Code | 7
InternVideo2: Scaling Foundation Models for Multimodal Video Understanding | Code | 7
CVNets: High Performance Library for Computer Vision | Code | 6
ShareGPT4Video: Improving Video Understanding and Generation with Better Captions | Code | 5
OMG-LLaVA: Bridging Image-level, Object-level, Pixel-level Reasoning and Understanding | Code | 5
VideoLLaMA 3: Frontier Multimodal Foundation Models for Image and Video Understanding | Code | 5
Segment Anything for Videos: A Systematic Survey | Code | 5
VideoMamba: State Space Model for Efficient Video Understanding | Code | 5
MovieChat+: Question-aware Sparse Memory for Long Video Question Answering | Code | 4
MiniGPT4-Video: Advancing Multimodal LLMs for Video Understanding with Interleaved Visual-Textual Tokens | Code | 4
Tarsier: Recipes for Training and Evaluating Large Video Description Models | Code | 4
SnAG: Scalable and Accurate Video Grounding | Code | 4
Flamingo: a Visual Language Model for Few-Shot Learning | Code | 4
Tarsier2: Advancing Large Vision-Language Models from Detailed Video Description to Comprehensive Video Understanding | Code | 4
Unified Reward Model for Multimodal Understanding and Generation | Code | 4
InternVideo: General Video Foundation Models via Generative and Discriminative Learning | Code | 4
Kwai Keye-VL Technical Report | Code | 4
An Image is Worth 1/2 Tokens After Layer 2: Plug-and-Play Inference Acceleration for Large Vision-Language Models | Code | 4
Eagle 2.5: Boosting Long-Context Post-Training for Frontier Vision-Language Models | Code | 4
PVUW 2024 Challenge on Complex Video Understanding: Methods and Results | Code | 4
Page 1 of 46

No leaderboard results yet.