
Video Grounding

Video grounding is the task of linking natural language descriptions to specific video segments. Given a video and a description, such as a sentence or a caption, the model must identify the segment of the video that corresponds to the description. This can involve localizing the objects or actions mentioned in the description within the video frames (spatial grounding), or associating a specific time interval with the description (temporal grounding).
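
The sketch below illustrates one simple formulation of temporal grounding: score every candidate span by the similarity between a query embedding and the mean of the clip embeddings it covers, then return the best-scoring span. It assumes precomputed features from some video-text encoder; the function names and parameters are illustrative, not taken from any particular paper.

```python
# Illustrative proposal-based temporal grounding: exhaustively score
# candidate spans against the query and return the best one.
# Assumes clip_feats and query_feat come from a shared video-text
# embedding space (e.g., a CLIP-style encoder).
import numpy as np

def ground_query(clip_feats: np.ndarray, query_feat: np.ndarray,
                 max_len: int = 16) -> tuple[int, int]:
    """clip_feats: (T, D) per-clip features; query_feat: (D,).
    Returns (start, end) clip indices (inclusive) of the best span."""
    # L2-normalize so dot products are cosine similarities.
    clips = clip_feats / np.linalg.norm(clip_feats, axis=1, keepdims=True)
    query = query_feat / np.linalg.norm(query_feat)

    T = clips.shape[0]
    best_score, best_span = -np.inf, (0, 0)
    for start in range(T):
        for end in range(start, min(start + max_len, T)):
            # Mean-pool the clips in the span and compare to the query.
            span_feat = clips[start:end + 1].mean(axis=0)
            score = float(span_feat @ query)
            if score > best_score:
                best_score, best_span = score, (start, end)
    return best_span
```

The exhaustive O(T · max_len) span enumeration is shown only for clarity; models in the list below typically replace it with learned proposal generation or direct boundary regression (e.g., DETR-style span prediction).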

Papers

Showing 76–100 of 114 papers

Title | Status | Hype
Multi-Scale Contrastive Learning for Video Temporal Grounding | — | 0
Multi-Scale Self-Contrastive Learning with Hard Negative Mining for Weakly-Supervised Query-based Video Grounding | — | 0
Multi-sentence Video Grounding for Long Video Generation | — | 0
No-frills Temporal Video Grounding: Multi-Scale Neighboring Attention and Zoom-in Boundary Detection | — | 0
Not All Frames Are Equal: Weakly-Supervised Video Grounding With Contextual Similarity and Visual Clustering Losses | — | 0
Object-Aware Multi-Branch Relation Networks for Spatio-Temporal Video Grounding | — | 0
On Pursuit of Designing Multi-modal Transformer for Video Grounding | — | 0
On the Effects of Video Grounding on Language Models | — | 0
Parallel Attention Network with Sequence Matching for Video Grounding | — | 0
Position-aware Location Regression Network for Temporal Video Grounding | — | 0
SAMA: Towards Multi-Turn Referential Grounded Video Chat with Large Language Models | — | 0
Semi-Supervised Video Paragraph Grounding With Contrastive Encoder | — | 0
Seq2Time: Sequential Knowledge Transfer for Video LLM Temporal Grounding | — | 0
Team PKU-WICT-MIPL PIC Makeup Temporal Video Grounding Challenge 2022 Technical Report | — | 0
Unsupervised Temporal Video Grounding with Deep Semantic Clustering | — | 0
VideoGEM: Training-free Action Grounding in Videos | — | 0
Video-GroundingDINO: Towards Open-Vocabulary Spatio-Temporal Video Grounding | — | 0
VideoGrounding-DINO: Towards Open-Vocabulary Spatio-Temporal Video Grounding | — | 0
VideoITG: Multimodal Video Understanding with Instructed Temporal Grounding | — | 0
Video LLMs for Temporal Reasoning in Long Videos | — | 0
Video-of-Thought: Step-by-Step Video Reasoning from Perception to Cognition | — | 0
ViGT: Proposal-free Video Grounding with Learnable Token in Transformer | — | 0
SynopGround: A Large-Scale Dataset for Multi-Paragraph Video Grounding from TV Dramas and Synopses | — | 0
Dense Video Object Captioning from Disjoint Supervision | Code | 0
A Simple Transformer-Based Model for Ego4D Natural Language Queries Challenge | Code | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | InternVideo2-6B | R@1, IoU=0.7 | 56.45 | — | Unverified
2 | InternVideo2-1B | R@1, IoU=0.7 | 54.45 | — | Unverified
3 | LLMEPET | R@1, IoU=0.7 | 49.94 | — | Unverified
4 | QD-DETR | R@1, IoU=0.7 | 44.98 | — | Unverified
5 | DiffusionVMR | R@1, IoU=0.7 | 44.49 | — | Unverified
6 | UMT | R@1, IoU=0.7 | 41.18 | — | Unverified
7 | Moment-DETR | R@1, IoU=0.7 | 33.02 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | DeCafNet | R@1, IoU=0.1 | 13.25 | — | Unverified
2 | DenoiseLoc | R@1, IoU=0.1 | 11.59 | — | Unverified
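
Both tables report R@1 at a temporal IoU threshold: a query counts as correct when the model's top-1 predicted segment overlaps the ground-truth segment with IoU at or above the threshold, and R@1 is the percentage of queries answered correctly. The snippet below is a minimal reference implementation of that standard definition; the function names are illustrative.

```python
# R@1, IoU=theta: fraction of queries whose top-1 predicted segment
# overlaps the ground truth with temporal IoU >= theta.

def temporal_iou(pred: tuple[float, float], gt: tuple[float, float]) -> float:
    """Intersection-over-union of two [start, end] time intervals (seconds)."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = max(pred[1], gt[1]) - min(pred[0], gt[0])
    return inter / union if union > 0 else 0.0

def recall_at_1(preds: list[tuple[float, float]],
                gts: list[tuple[float, float]],
                iou_threshold: float = 0.7) -> float:
    """preds[i] is the top-1 predicted segment for query i; gts[i] its ground truth."""
    hits = sum(temporal_iou(p, g) >= iou_threshold for p, g in zip(preds, gts))
    return 100.0 * hits / len(preds)

# Example: one of two predictions clears the 0.7 IoU bar -> R@1 = 50.0
print(recall_at_1([(10.0, 20.0), (0.0, 5.0)], [(11.0, 20.0), (30.0, 40.0)]))
```

Stricter thresholds (0.7 in the first table) demand tight boundary localization, while looser ones (0.1 in the second) are typical for long-video benchmarks where even coarse localization is hard.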