SOTAVerified

Video Grounding

Video grounding is the task of linking natural language descriptions to specific video segments. Given a video and a description, such as a sentence or caption, the model must identify the segment of the video that corresponds to the description. This can involve localizing the objects or actions mentioned in the description within the video, or associating a specific time interval (a start and end timestamp) with the description.
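For the temporal variant of the task, the model's output is typically a (start, end) interval that is scored against the ground-truth moment by temporal intersection-over-union. A minimal sketch (the function name is illustrative, not from any specific paper):

```python
def temporal_iou(pred, gt):
    """IoU of two (start, end) intervals, e.g. timestamps in seconds."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0

# A predicted segment of 10-25s against a ground-truth moment of 12-30s:
print(temporal_iou((10.0, 25.0), (12.0, 30.0)))  # → 0.65
```

A higher IoU threshold demands a tighter match between the predicted and ground-truth segments.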

Papers

Showing 51–60 of 114 papers

| Title | Status | Hype |
| --- | --- | --- |
| Can I Trust Your Answer? Visually Grounded Video Question Answering | Code | 1 |
| DiffusionVMR: Diffusion Model for Joint Video Moment Retrieval and Highlight Detection | — | 0 |
| Knowing Where to Focus: Event-aware Transformer for Video Grounding | Code | 1 |
| ViGT: Proposal-free Video Grounding with Learnable Token in Transformer | — | 0 |
| G2L: Semantically Aligned and Uniform Video Grounding via Geodesic and Game Theory | — | 0 |
| No-frills Temporal Video Grounding: Multi-Scale Neighboring Attention and Zoom-in Boundary Detection | — | 0 |
| Dense Video Object Captioning from Disjoint Supervision | — | 0 |
| Boundary-Denoising for Video Activity Localization | Code | 0 |
| Query-Dependent Video Representation for Moment Retrieval and Highlight Detection | Code | 2 |
| Generation-Guided Multi-Level Unified Network for Video Grounding | — | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | InternVideo2-6B | R@1, IoU=0.7 | 56.45 | — | Unverified |
| 2 | InternVideo2-1B | R@1, IoU=0.7 | 54.45 | — | Unverified |
| 3 | LLMEPET | R@1, IoU=0.7 | 49.94 | — | Unverified |
| 4 | QD-DETR | R@1, IoU=0.7 | 44.98 | — | Unverified |
| 5 | DiffusionVMR | R@1, IoU=0.7 | 44.49 | — | Unverified |
| 6 | UMT | R@1, IoU=0.7 | 41.18 | — | Unverified |
| 7 | Moment-DETR | R@1, IoU=0.7 | 33.02 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | DeCafNet | R@1, IoU=0.1 | 13.25 | — | Unverified |
| 2 | DenoiseLoc | R@1, IoU=0.1 | 11.59 | — | Unverified |
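The metric used above, R@1 at a given IoU threshold, counts a query as correct when the model's top-ranked segment overlaps the ground-truth moment with temporal IoU at or above that threshold, then reports the percentage of correct queries. A sketch of how such a score could be computed (data and function names are illustrative, not from any listed system):

```python
def temporal_iou(pred, gt):
    """IoU of two (start, end) intervals."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0

def recall_at_1(top1_preds, ground_truths, iou_threshold=0.7):
    """Percentage of queries whose top-ranked segment reaches the IoU threshold."""
    hits = sum(temporal_iou(p, g) >= iou_threshold
               for p, g in zip(top1_preds, ground_truths))
    return 100.0 * hits / len(ground_truths)

# Toy example with three queries (times in seconds):
preds = [(10.0, 20.0), (5.0, 9.0), (30.0, 40.0)]
gts   = [(11.0, 20.0), (0.0, 4.0), (30.0, 41.0)]
print(recall_at_1(preds, gts))  # 2 of 3 queries hit → ≈66.67
```

Lowering the threshold (e.g. IoU=0.1, as in the second leaderboard) accepts much looser localizations as hits.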