
Video Understanding

A core task in video understanding is to recognise and localise, in both space and time, the actions or events that appear in a video.

Source: Action Detection from a Robot-Car Perspective

Papers

Showing 701–750 of 1149 papers

| Title | Status | Hype |
| --- | --- | --- |
| DOAD: Decoupled One Stage Action Detection Network | | 0 |
| Procedure-Aware Pretraining for Instructional Video Understanding | Code | 1 |
| Whether and When does Endoscopy Domain Pretraining Make Sense? | Code | 1 |
| Streaming Video Model | Code | 1 |
| TimeBalance: Temporally-Invariant and Temporally-Distinctive Video Representations for Semi-Supervised Action Recognition | Code | 1 |
| System-status-aware Adaptive Network for Online Streaming Video Understanding | | 0 |
| Selective Structured State-Spaces for Long-Form Video Understanding | | 0 |
| Query-Dependent Video Representation for Moment Retrieval and Highlight Detection | Code | 2 |
| Weakly Supervised Video Representation Learning with Unaligned Text for Sequential Videos | Code | 1 |
| Leaping Into Memories: Space-Time Deep Feature Synthesis | Code | 0 |
| Dual-path Adaptation from Image to Video Transformers | Code | 1 |
| TemporalMaxer: Maximize Temporal Context with only Max Pooling for Temporal Action Localization | Code | 1 |
| Localizing Moments in Long Video Via Multimodal Guidance | Code | 1 |
| Video4MRI: An Empirical Study on Brain Magnetic Resonance Image Analytics with CNN-based Video Classification Frameworks | | 0 |
| MINOTAUR: Multi-task Video Grounding From Multimodal Queries | Code | 0 |
| AIM: Adapting Image Models for Efficient Video Action Recognition | Code | 2 |
| Semi-Parametric Video-Grounded Text Generation | | 0 |
| Building Scalable Video Understanding Benchmarks through Sports | | 0 |
| STPrivacy: Spatio-Temporal Privacy-Preserving Action Recognition | | 0 |
| Test of Time: Instilling Video-Language Models with a Sense of Time | Code | 1 |
| EgoDistill: Egocentric Head Motion Distillation for Efficient Video Understanding | | 0 |
| Multimodal High-order Relation Transformer for Scene Boundary Detection | | 0 |
| PIDRo: Parallel Isomeric Attention with Dynamic Routing for Text-Video Retrieval | | 0 |
| UniFormerV2: Unlocking the Potential of Image ViTs for Video Understanding | | 0 |
| Boosting Single Image Super-Resolution via Partial Channel Shifting | Code | 1 |
| Inverse Compositional Learning for Weakly-supervised Relation Grounding | | 0 |
| Self-Supervised Object Detection from Egocentric Videos | | 0 |
| Relational Space-Time Query in Long-Form Videos | | 0 |
| Modeling Video As Stochastic Processes for Fine-Grained Video Representation Learning | Code | 1 |
| Few-Shot Referring Relationships in Videos | Code | 0 |
| Joint Engagement Classification using Video Augmentation Techniques for Multi-person Human-robot Interaction | | 0 |
| Inductive Attention for Video Action Anticipation | | 0 |
| Towards Smooth Video Composition | Code | 1 |
| Egocentric Video Task Translation | | 0 |
| Contextual Explainable Video Representation: Human Perception-based Understanding | Code | 0 |
| PromptonomyViT: Multi-Task Prompt Learning Improves Video Transformers using Synthetic Scene Data | | 0 |
| Transition Is a Process: Pair-to-Video Change Detection Networks for Very High Resolution Remote Sensing Images | | 0 |
| InternVideo: General Video Foundation Models via Generative and Discriminative Learning | Code | 4 |
| Spatio-Temporal Crop Aggregation for Video Representation Learning | | 0 |
| MOMA-LRG: Language-Refined Graphs for Multi-Object Multi-Actor Activity Parsing | Code | 1 |
| Dynamic Appearance: A Video Representation for Action Recognition with Joint Training | | 0 |
| Contrastive Masked Autoencoders for Self-Supervised Video Hashing | Code | 1 |
| A Unified Model for Video Understanding and Knowledge Embedding with Heterogeneous Knowledge Graph Dataset | | 0 |
| EVEREST: Efficient Masked Video Autoencoder by Removing Redundant Spatiotemporal Tokens | Code | 1 |
| Masked Autoencoders for Egocentric Video Understanding @ Ego4D Challenge 2022 | Code | 0 |
| InternVideo-Ego4D: A Pack of Champion Solutions to Ego4D Challenges | Code | 1 |
| UniFormerV2: Spatiotemporal Learning by Arming Image ViTs with Video UniFormer | Code | 2 |
| Exploring State Change Capture of Heterogeneous Backbones @ Ego4D Hands and Objects Challenge 2022 | | 0 |
| Grounded Video Situation Recognition | | 0 |
| VTC: Improving Video-Text Retrieval with User Comments | Code | 1 |
Page 15 of 23

No leaderboard results yet.