
Video Understanding

A crucial task in Video Understanding is to recognise and localise, in both space and time, the different actions or events appearing in a video.

Source: Action Detection from a Robot-Car Perspective

Papers

Showing 141–150 of 1149 papers

Title | Status | Hype
OmAgent: A Multi-modal Agent Framework for Complex Video Understanding with Task Divide-and-Conquer | Code | 2
DeMamba: AI-Generated Video Detection on Million-Scale GenVideo Benchmark | Code | 2
LongVALE: Vision-Audio-Language-Event Benchmark Towards Time-Aware Omni-Modal Perception of Long Videos | Code | 2
LongVideoBench: A Benchmark for Long-context Interleaved Video-Language Understanding | Code | 2
Ego-Exo4D: Understanding Skilled Human Activity from First- and Third-Person Perspectives | Code | 2
Online Video Understanding: OVBench and VideoChat-Online | Code | 2
PG-Video-LLaVA: Pixel Grounding Large Video-Language Models | Code | 2
TRACE: Temporal Grounding Video LLM via Causal Event Modeling | Code | 2
Boosting Single Image Super-Resolution via Partial Channel Shifting | Code | 1
Leveraging triplet loss for unsupervised action segmentation | Code | 1
Page 15 of 115

No leaderboard results yet.