SOTAVerified

Video Grounding

Video grounding is the task of linking natural language descriptions to specific video segments. Given a video and a description, such as a sentence or a caption, the model must identify the segment of the video that corresponds to the description. This can involve localizing the objects or actions mentioned in the description within the video, or associating a specific time interval with the description.
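
For temporal grounding specifically, a prediction is a (start, end) interval, and it is conventionally scored against the ground-truth interval by temporal IoU (intersection over union), the overlap measure underlying the metrics in the benchmark tables below. A minimal sketch in Python, with made-up segment values purely for illustration:

def temporal_iou(pred, gt):
    """Temporal IoU between two (start, end) intervals in seconds."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0

# Illustrative query: "the dog jumps over the fence"
predicted = (12.0, 18.5)     # model's predicted segment, in seconds
ground_truth = (13.0, 19.0)  # annotated segment
print(temporal_iou(predicted, ground_truth))  # ~0.79, counts as correct at IoU >= 0.7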

Papers

Showing 26–50 of 114 papers

Title | Status | Hype
Text-Visual Prompting for Efficient 2D Temporal Video Grounding | Code | 1
Localizing Moments in Long Video Via Multimodal Guidance | Code | 1
Weakly-Supervised Temporal Article Grounding | Code | 1
Embracing Consistency: A One-Stage Approach for Spatio-Temporal Video Grounding | Code | 1
CONE: An Efficient COarse-to-fiNE Alignment Framework for Long Video Temporal Grounding | Code | 1
Animal Kingdom: A Large and Diverse Dataset for Animal Behavior Understanding | Code | 1
TubeDETR: Spatio-Temporal Video Grounding with Transformers | Code | 1
Explore-And-Match: Bridging Proposal-Based and Proposal-Free With Transformer for Sentence Grounding in Videos | Code | 1
Detecting Moments and Highlights in Videos via Natural Language Queries | Code | 1
Negative Sample Matters: A Renaissance of Metric Learning for Temporal Grounding | Code | 1
VidLanKD: Improving Language Understanding via Video-Distilled Knowledge Transfer | Code | 1
VLG-Net: Video-Language Graph Matching Network for Video Grounding | Code | 1
Human-centric Spatio-Temporal Video Grounding With Visual Transformers | Code | 1
Dense Regression Network for Video Grounding | Code | 1
Where Does It Exist: Spatio-Temporal Video Grounding for Multi-Form Sentences | Code | 1
VideoITG: Multimodal Video Understanding with Instructed Temporal Grounding | - | 0
SAMA: Towards Multi-Turn Referential Grounded Video Chat with Large Language Models | - | 0
Enhancing Weakly Supervised Video Grounding via Diverse Inference Strategies for Boundary and Prediction Selection | - | 0
VideoGEM: Training-free Action Grounding in Videos | - | 0
SpaceVLLM: Endowing Multimodal Large Language Model with Spatio-Temporal Video Grounding Capability | - | 0
Contextual Self-paced Learning for Weakly Supervised Spatio-Temporal Video Grounding | - | 0
STPro: Spatial and Temporal Progressive Learning for Weakly Supervised Spatio-Temporal Grounding | - | 0
Consistency of Compositional Generalization across Multiple Levels | Code | 0
Multi-Scale Contrastive Learning for Video Temporal Grounding | - | 0
Video LLMs for Temporal Reasoning in Long Videos | - | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | InternVideo2-6B | R@1, IoU=0.7 | 56.45 | - | Unverified
2 | InternVideo2-1B | R@1, IoU=0.7 | 54.45 | - | Unverified
3 | LLMEPET | R@1, IoU=0.7 | 49.94 | - | Unverified
4 | QD-DETR | R@1, IoU=0.7 | 44.98 | - | Unverified
5 | DiffusionVMR | R@1, IoU=0.7 | 44.49 | - | Unverified
6 | UMT | R@1, IoU=0.7 | 41.18 | - | Unverified
7 | Moment-DETR | R@1, IoU=0.7 | 33.02 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | DeCafNet | R@1, IoU=0.1 | 13.25 | - | Unverified
2 | DenoiseLoc | R@1, IoU=0.1 | 11.59 | - | Unverified
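
The "R@1, IoU=0.7" metric above is Recall@1 at an IoU threshold: the percentage of queries whose single top-ranked predicted segment overlaps the ground-truth segment with temporal IoU of at least 0.7 (the second table uses a 0.1 threshold). A minimal sketch of the computation, with made-up predictions and ground truths for illustration:

def temporal_iou(pred, gt):
    # Same overlap measure as sketched in the task description above.
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0

def recall_at_1(top1_preds, ground_truths, iou_threshold=0.7):
    """Percentage of queries whose top-ranked segment reaches the IoU threshold."""
    hits = sum(
        temporal_iou(pred, gt) >= iou_threshold
        for pred, gt in zip(top1_preds, ground_truths)
    )
    return 100.0 * hits / len(ground_truths)

# Illustrative values only, not taken from any benchmark
preds = [(12.0, 18.5), (40.0, 45.0), (3.0, 9.0)]
gts   = [(13.0, 19.0), (52.0, 60.0), (2.5, 9.5)]
print(recall_at_1(preds, gts, iou_threshold=0.7))  # ~66.7 (2 of 3 queries hit)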