SOTAVerified

Video Understanding

A crucial task in Video Understanding is to recognise and localise (in space and time) the different actions or events appearing in a video.

Source: Action Detection from a Robot-Car Perspective

Papers

Showing 301–325 of 1149 papers

Title | Status | Hype
FocusChat: Text-guided Long Video Understanding via Spatiotemporal Information Filtering | - | 0
ShotVL: Human-Centric Highlight Frame Retrieval via Language Queries | - | 0
CG-Bench: Clue-grounded Question Answering Benchmark for Long Video Understanding | - | 0
Uni-AdaFocus: Spatial-temporal Dynamic Computation for Video Recognition | Code | 2
Overview of TREC 2024 Medical Video Question Answering (MedVidQA) Track | - | 0
IQViC: In-context, Question Adaptive Vision Compressor for Long-term Video Understanding LMMs | - | 0
Apollo: An Exploration of Video Understanding in Large Multimodal Models | - | 0
B-VLLM: A Vision Large Language Model with Balanced Spatio-Temporal Tokens | Code | 0
VCA: Video Curious Agent for Long Video Understanding | - | 0
ViCaS: A Dataset for Combining Holistic and Pixel-level Video Understanding using Captions with Grounded Segmentation | - | 0
PVC: Progressive Visual Token Compression for Unified Image and Video Processing in Large Vision-Language Models | - | 0
Neptune: The Long Orbit to Benchmarking Long Video Understanding | Code | 2
COEF-VQ: Cost-Efficient Video Quality Understanding through a Cascaded Multimodal LLM Framework | - | 0
3DSRBench: A Comprehensive 3D Spatial Reasoning Benchmark | - | 0
Multi-Scale Contrastive Learning for Video Temporal Grounding | - | 0
GEXIA: Granularity Expansion and Iterative Approximation for Scalable Multi-grained Video-language Learning | - | 0
Towards Long Video Understanding via Fine-detailed Video Story Generation | - | 0
Beyond Boxes: Mask-Guided Spatio-Temporal Feature Aggregation for Video Object Detection | - | 0
LinVT: Empower Your Image-level Large Language Model to Understand Videos | Code | 2
Expanding Performance Boundaries of Open-Source Multimodal Models with Model, Data, and Test-Time Scaling | - | 0
Espresso: High Compression For Rich Extraction From Videos for Your Vision-Language Model | - | 0
VisionZip: Longer is Better but Not Necessary in Vision Language Models | Code | 3
VidHalluc: Evaluating Temporal Hallucinations in Multimodal Large Language Models for Video Understanding | - | 0
AIM: Adaptive Inference of Multi-Modal LLMs via Token Merging and Pruning | Code | 2
Streaming Detection of Queried Event Start | Code | 0
Page 13 of 46

No leaderboard results yet.