SOTAVerified

Video Grounding

Video grounding is the task of linking natural language descriptions to specific video segments. Given a video and a textual query, such as a sentence or a caption, the model must identify the segment of the video that corresponds to the description. Depending on the variant, this can mean localizing the objects or actions mentioned in the query within the frames (spatial or spatio-temporal grounding), or associating a specific time interval with the query (temporal grounding).
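For temporal grounding, predictions and ground truth are typically compared with temporal intersection-over-union (IoU) between two (start, end) intervals. A minimal sketch (the query and the timestamps below are illustrative, not from any benchmark):

```python
def temporal_iou(pred, gt):
    """Temporal IoU of two (start, end) intervals given in seconds."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = max(pred[1], gt[1]) - min(pred[0], gt[0])
    return inter / union if union > 0 else 0.0

# A model grounding the query "person opens the fridge" might predict
# the segment 12.0-18.5 s; with ground truth 13.0-19.0 s:
temporal_iou((12.0, 18.5), (13.0, 19.0))  # ≈ 0.786 (5.5 / 7.0)
```

A prediction is usually counted as correct when this IoU exceeds a threshold such as 0.5 or 0.7.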

Papers

Showing 1–50 of 114 papers

Title | Status | Hype
InternVideo2: Scaling Foundation Models for Multimodal Video Understanding | Code | 7
Tarsier2: Advancing Large Vision-Language Models from Detailed Video Description to Comprehensive Video Understanding | Code | 4
SnAG: Scalable and Accurate Video Grounding | Code | 4
PG-Video-LLaVA: Pixel Grounding Large Video-Language Models | Code | 2
Reinforcement Learning Tuning for VideoLLMs: Reward Design and Data Efficiency | Code | 2
Prior Knowledge Integration via LLM Encoding and Pseudo Event Regulation for Video Moment Retrieval | Code | 2
UMT: Unified Multi-modal Transformers for Joint Video Moment Retrieval and Highlight Detection | Code | 2
VTimeLLM: Empower LLM to Grasp Video Moments | Code | 2
Query-Dependent Video Representation for Moment Retrieval and Highlight Detection | Code | 2
TimeZero: Temporal Video Grounding with Reasoning-Guided LVLM | Code | 2
LLaVA-ST: A Multimodal Large Language Model for Fine-Grained Spatial-Temporal Understanding | Code | 2
Context-Guided Spatio-Temporal Video Grounding | Code | 2
TubeDETR: Spatio-Temporal Video Grounding with Transformers | Code | 1
HawkEye: Training Video-Text LLMs for Grounding Text in Videos | Code | 1
TimeLoc: A Unified End-to-End Framework for Precise Timestamp Localization in Long Videos | Code | 1
Text-Visual Prompting for Efficient 2D Temporal Video Grounding | Code | 1
VidChain: Chain-of-Tasks with Metric-based Direct Preference Optimization for Dense Video Captioning | Code | 1
Detecting Moments and Highlights in Videos via Natural Language Queries | Code | 1
Human-centric Spatio-Temporal Video Grounding With Visual Transformers | Code | 1
Grounded Question-Answering in Long Egocentric Videos | Code | 1
Dense Regression Network for Video Grounding | Code | 1
Where Does It Exist: Spatio-Temporal Video Grounding for Multi-Form Sentences | Code | 1
Weakly-Supervised Temporal Article Grounding | Code | 1
Knowing Your Target: Target-Aware Transformer Makes Better Spatio-Temporal Video Grounding | Code | 1
Embracing Consistency: A One-Stage Approach for Spatio-Temporal Video Grounding | Code | 1
CONE: An Efficient COarse-to-fiNE Alignment Framework for Long Video Temporal Grounding | Code | 1
Knowing Where to Focus: Event-aware Transformer for Video Grounding | Code | 1
VidLanKD: Improving Language Understanding via Video-Distilled Knowledge Transfer | Code | 1
Negative Sample Matters: A Renaissance of Metric Learning for Temporal Grounding | Code | 1
VLG-Net: Video-Language Graph Matching Network for Video Grounding | Code | 1
Object-Shot Enhanced Grounding Network for Egocentric Video | Code | 1
Animal Kingdom: A Large and Diverse Dataset for Animal Behavior Understanding | Code | 1
Explore-And-Match: Bridging Proposal-Based and Proposal-Free With Transformer for Sentence Grounding in Videos | Code | 1
Bridging the Gap: A Unified Video Comprehension Framework for Moment Retrieval and Highlight Detection | Code | 1
Can I Trust Your Answer? Visually Grounded Video Question Answering | Code | 1
Localizing Moments in Long Video Via Multimodal Guidance | Code | 1
Gaussian Mixture Proposals with Pull-Push Learning Scheme to Capture Diverse Events for Weakly Supervised Temporal Video Grounding | Code | 1
DeCafNet: Delegate and Conquer for Efficient Temporal Grounding in Long Videos | Code | 1
VideoLLM Knows When to Speak: Enhancing Time-Sensitive Video Comprehension with Video-Text Duet Interaction Format | Code | 1
OmniSTVG: Toward Spatio-Temporal Omni-Object Video Grounding | Code | 1
Boundary-Denoising for Video Activity Localization | Code | 0
Consistency of Compositional Generalization across Multiple Levels | Code | 0
Towards Parameter-Efficient Integration of Pre-Trained Language Models In Temporal Video Grounding | Code | 0
Unified Static and Dynamic Network: Efficient Temporal Filtering for Video Grounding | Code | 0
Dual-Path Temporal Map Optimization for Make-up Temporal Video Grounding | Code | 0
Artemis: Towards Referential Understanding in Complex Videos | Code | 0
Interventional Video Grounding with Dual Contrastive Learning | Code | 0
Read, Watch, and Move: Reinforcement Learning for Temporally Grounding Natural Language Descriptions in Videos | Code | 0
Dense Video Object Captioning from Disjoint Supervision | Code | 0
A Simple Transformer-Based Model for Ego4D Natural Language Queries Challenge | Code | 0
Page 1 of 3

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | InternVideo2-6B | R@1, IoU=0.7 | 56.45 | — | Unverified
2 | InternVideo2-1B | R@1, IoU=0.7 | 54.45 | — | Unverified
3 | LLMEPET | R@1, IoU=0.7 | 49.94 | — | Unverified
4 | QD-DETR | R@1, IoU=0.7 | 44.98 | — | Unverified
5 | DiffusionVMR | R@1, IoU=0.7 | 44.49 | — | Unverified
6 | UMT | R@1, IoU=0.7 | 41.18 | — | Unverified
7 | Moment-DETR | R@1, IoU=0.7 | 33.02 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | DeCafNet | R@1, IoU=0.1 | 13.25 | — | Unverified
2 | DenoiseLoc | R@1, IoU=0.1 | 11.59 | — | Unverified
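The R@1, IoU=τ metric used in these tables counts a query as correct when the model's single top-ranked segment overlaps the ground-truth segment with temporal IoU of at least τ, averaged over all queries. A minimal sketch (the segments below are made up for illustration, not benchmark data):

```python
def recall_at_1(predictions, ground_truths, iou_threshold=0.7):
    """Percentage of queries whose top-1 predicted (start, end) segment
    matches the ground truth with temporal IoU >= iou_threshold."""
    def tiou(a, b):
        inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
        union = max(a[1], b[1]) - min(a[0], b[0])
        return inter / union if union > 0 else 0.0
    hits = sum(tiou(p, g) >= iou_threshold
               for p, g in zip(predictions, ground_truths))
    return 100.0 * hits / len(predictions)

preds = [(2.0, 8.0), (10.0, 15.0), (30.0, 40.0)]  # top-1 segment per query
gts   = [(2.5, 8.5), (11.0, 20.0), (30.0, 41.0)]  # ground-truth segments
recall_at_1(preds, gts, iou_threshold=0.7)  # ≈ 66.67 (2 of 3 queries hit)
```

Lower thresholds such as IoU=0.1 (second table) are common on long-video benchmarks, where even coarse localization is difficult.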