SOTAVerified

Video Grounding

Video grounding is the task of linking natural language descriptions to specific video segments. Given a video and a description, such as a sentence or a caption, the model must identify the segment of the video that corresponds to the description. This can involve localizing the objects or actions mentioned in the description within the frames (spatial grounding), or associating a specific time interval with the description (temporal grounding).
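For temporal grounding, predictions and ground truth are time intervals, and quality is usually measured by temporal intersection-over-union (IoU) between them. A minimal sketch of that computation (illustrative only, not any particular benchmark's evaluation code):

```python
def temporal_iou(pred, gt):
    """Temporal IoU between two (start, end) intervals in seconds.

    pred, gt: tuples of floats with start <= end.
    Returns 0.0 for non-overlapping intervals.
    """
    # Overlap length (clamped at zero when the intervals are disjoint).
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    # Union = sum of lengths minus the overlap counted twice.
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0
```

For example, a prediction of (0 s, 10 s) against a ground truth of (5 s, 15 s) overlaps for 5 s over a 15 s union, giving an IoU of 1/3.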

Papers

Showing 26–50 of 114 papers

| Title | Status | Hype |
| --- | --- | --- |
| CONE: An Efficient COarse-to-fiNE Alignment Framework for Long Video Temporal Grounding | Code | 1 |
| Dense Regression Network for Video Grounding | Code | 1 |
| TimeLoc: A Unified End-to-End Framework for Precise Timestamp Localization in Long Videos | Code | 1 |
| VideoLLM Knows When to Speak: Enhancing Time-Sensitive Video Comprehension with Video-Text Duet Interaction Format | Code | 1 |
| Gaussian Mixture Proposals with Pull-Push Learning Scheme to Capture Diverse Events for Weakly Supervised Temporal Video Grounding | Code | 1 |
| OmniSTVG: Toward Spatio-Temporal Omni-Object Video Grounding | Code | 1 |
| Can I Trust Your Answer? Visually Grounded Video Question Answering | Code | 1 |
| Localizing Moments in Long Video Via Multimodal Guidance | Code | 1 |
| Animal Kingdom: A Large and Diverse Dataset for Animal Behavior Understanding | Code | 1 |
| Negative Sample Matters: A Renaissance of Metric Learning for Temporal Grounding | Code | 1 |
| Explore-And-Match: Bridging Proposal-Based and Proposal-Free With Transformer for Sentence Grounding in Videos | Code | 1 |
| Bridging the Gap: A Unified Video Comprehension Framework for Moment Retrieval and Highlight Detection | Code | 1 |
| DeCafNet: Delegate and Conquer for Efficient Temporal Grounding in Long Videos | Code | 1 |
| Object-Shot Enhanced Grounding Network for Egocentric Video | Code | 1 |
| VidLanKD: Improving Language Understanding via Video-Distilled Knowledge Transfer | Code | 1 |
| EVOQUER: Enhancing Temporal Grounding with Video-Pivoted BackQuery Generation | | 0 |
| EtC: Temporal Boundary Expand then Clarify for Weakly Supervised Video Grounding with Multimodal Large Language Model | | 0 |
| Enhancing Weakly Supervised Video Grounding via Diverse Inference Strategies for Boundary and Prediction Selection | | 0 |
| End-to-End Modeling via Information Tree for One-Shot Natural Language Spatial Video Grounding | | 0 |
| SynopGround: A Large-Scale Dataset for Multi-Paragraph Video Grounding from TV Dramas and Synopses | | 0 |
| Multi-Scale Self-Contrastive Learning with Hard Negative Mining for Weakly-Supervised Query-based Video Grounding | | 0 |
| End-to-End Dense Video Grounding via Parallel Regression | | 0 |
| Collaborative Static and Dynamic Vision-Language Streams for Spatio-Temporal Video Grounding | | 0 |
| Iterative Proposal Refinement for Weakly-Supervised Video Grounding | | 0 |
| DiffusionVMR: Diffusion Model for Joint Video Moment Retrieval and Highlight Detection | | 0 |
Page 2 of 5

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | InternVideo2-6B | R@1, IoU=0.7 | 56.45 | | Unverified |
| 2 | InternVideo2-1B | R@1, IoU=0.7 | 54.45 | | Unverified |
| 3 | LLMEPET | R@1, IoU=0.7 | 49.94 | | Unverified |
| 4 | QD-DETR | R@1, IoU=0.7 | 44.98 | | Unverified |
| 5 | DiffusionVMR | R@1, IoU=0.7 | 44.49 | | Unverified |
| 6 | UMT | R@1, IoU=0.7 | 41.18 | | Unverified |
| 7 | Moment-DETR | R@1, IoU=0.7 | 33.02 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | DeCafNet | R@1, IoU=0.1 | 13.25 | | Unverified |
| 2 | DenoiseLoc | R@1, IoU=0.1 | 11.59 | | Unverified |
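The R@1 metrics above count a query as correctly answered when the model's top-1 predicted segment overlaps the ground-truth segment with temporal IoU at or above the stated threshold (0.7 or 0.1), reported as a percentage over all queries. A minimal sketch of this metric, assuming intervals are (start, end) pairs in seconds (illustrative only, not the site's verification code):

```python
def temporal_iou(pred, gt):
    """Temporal IoU between two (start, end) intervals."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0

def recall_at_1(preds, gts, thresh=0.7):
    """R@1 at a given IoU threshold, as a percentage.

    preds: top-1 predicted (start, end) interval per query.
    gts:   ground-truth (start, end) interval per query.
    """
    hits = sum(temporal_iou(p, g) >= thresh for p, g in zip(preds, gts))
    return 100.0 * hits / len(preds)
```

For instance, with two queries where one prediction matches exactly (IoU = 1.0) and the other overlaps with IoU = 0.25, R@1 at IoU=0.7 is 50.0 while R@1 at IoU=0.1 is 100.0, which is why the looser threshold always yields higher numbers.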