SOTAVerified

Video Grounding

Video grounding is the task of linking natural language descriptions to specific video segments. The model is given a video and a description, such as a sentence or a caption, and its goal is to identify the segment of the video that corresponds to the description. This can involve localizing the objects or actions mentioned in the description within the frames (spatial grounding), or associating a specific time interval with the description (temporal grounding).
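In the common temporal variant, a model scores candidate segments of the video against an embedding of the query and returns the best-scoring interval. The sketch below illustrates one simple way to do this, a brute-force sliding window over precomputed clip features; the embeddings, shapes, and the `ground_query` helper are all hypothetical, not taken from any paper listed on this page.

```python
# Minimal sketch of temporal video grounding, assuming clip and query
# embeddings were precomputed by some video-text encoder. All names and
# shapes here are illustrative, not from a specific method.
import numpy as np

def ground_query(clip_embs: np.ndarray, query_emb: np.ndarray,
                 min_len: int = 2, max_len: int = 8) -> tuple[int, int]:
    """Return (start, end) clip indices of the window whose mean-pooled
    embedding has the highest cosine similarity with the query."""
    def l2norm(x: np.ndarray) -> np.ndarray:
        return x / (np.linalg.norm(x, axis=-1, keepdims=True) + 1e-8)

    clips = l2norm(clip_embs)   # (T, D): one embedding per fixed-length clip
    query = l2norm(query_emb)   # (D,): sentence embedding of the description
    best, best_score = (0, min_len), float("-inf")
    for start in range(clips.shape[0]):
        for end in range(start + min_len,
                         min(start + max_len, clips.shape[0]) + 1):
            window = l2norm(clips[start:end].mean(axis=0))  # pool the segment
            score = float(window @ query)                   # cosine similarity
            if score > best_score:
                best, best_score = (start, end), score
    return best  # multiply by the clip duration to get seconds

# Toy run: 16 clips with 128-dim features and a random query.
rng = np.random.default_rng(0)
print(ground_query(rng.normal(size=(16, 128)), rng.normal(size=128)))
```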

Papers

Showing 76–100 of 114 papers

| Title | Status | Hype |
|---|---|---|
| Towards Parameter-Efficient Integration of Pre-Trained Language Models In Temporal Video Grounding | Code | 0 |
| CONE: An Efficient COarse-to-fiNE Alignment Framework for Long Video Temporal Grounding | Code | 1 |
| Video-Guided Curriculum Learning for Spoken Video Grounding | Code | 0 |
| Exploiting Feature Diversity for Make-up Temporal Video Grounding | — | 0 |
| Team PKU-WICT-MIPL PIC Makeup Temporal Video Grounding Challenge 2022 Technical Report | — | 0 |
| STVGFormer: Spatio-Temporal Video Grounding with Static-Dynamic Cross-Modal Understanding | — | 0 |
| Gaussian Kernel-based Cross Modal Network for Spatio-Temporal Video Grounding | — | 0 |
| Animal Kingdom: A Large and Diverse Dataset for Animal Behavior Understanding | Code | 1 |
| Position-aware Location Regression Network for Temporal Video Grounding | — | 0 |
| TubeDETR: Spatio-Temporal Video Grounding with Transformers | Code | 1 |
| UMT: Unified Multi-modal Transformers for Joint Video Moment Retrieval and Highlight Detection | Code | 2 |
| End-to-End Modeling via Information Tree for One-Shot Natural Language Spatial Video Grounding | — | 0 |
| Multi-Scale Self-Contrastive Learning with Hard Negative Mining for Weakly-Supervised Query-based Video Grounding | — | 0 |
| Explore-And-Match: Bridging Proposal-Based and Proposal-Free With Transformer for Sentence Grounding in Videos | Code | 1 |
| Unsupervised Temporal Video Grounding with Deep Semantic Clustering | — | 0 |
| Semi-Supervised Video Paragraph Grounding With Contrastive Encoder | — | 0 |
| Multi-Level Representation Learning With Semantic Alignment for Referring Video Object Segmentation | — | 0 |
| LocFormer: Enabling Transformers to Perform Temporal Moment Localization on Long Untrimmed Videos With a Feature Sampling Approach | — | 0 |
| Detecting Moments and Highlights in Videos via Natural Language Queries | Code | 1 |
| End-to-End Dense Video Grounding via Parallel Regression | — | 0 |
| On Pursuit of Designing Multi-modal Transformer for Video Grounding | — | 0 |
| Negative Sample Matters: A Renaissance of Metric Learning for Temporal Grounding | Code | 1 |
| EVOQUER: Enhancing Temporal Grounding with Video-Pivoted BackQuery Generation | — | 0 |
| Support-Set Based Cross-Supervision for Video Grounding | — | 0 |
| VidLanKD: Improving Language Understanding via Video-Distilled Knowledge Transfer | Code | 1 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | InternVideo2-6B | R@1, IoU=0.7 | 56.45 | — | Unverified |
| 2 | InternVideo2-1B | R@1, IoU=0.7 | 54.45 | — | Unverified |
| 3 | LLMEPET | R@1, IoU=0.7 | 49.94 | — | Unverified |
| 4 | QD-DETR | R@1, IoU=0.7 | 44.98 | — | Unverified |
| 5 | DiffusionVMR | R@1, IoU=0.7 | 44.49 | — | Unverified |
| 6 | UMT | R@1, IoU=0.7 | 41.18 | — | Unverified |
| 7 | Moment-DETR | R@1, IoU=0.7 | 33.02 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | DeCafNet | R@1, IoU=0.1 | 13.25 | — | Unverified |
| 2 | DenoiseLoc | R@1, IoU=0.1 | 11.59 | — | Unverified |
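
The metric in both tables, R@1 at IoU=τ, is the standard moment-retrieval score: a model's top-ranked segment counts as correct when its temporal intersection-over-union with the ground-truth segment is at least τ, and the reported number is the percentage of queries for which this holds. A minimal sketch of the computation (function names are illustrative):

```python
# How the R@1, IoU=tau numbers above are computed (illustrative names).
def temporal_iou(pred: tuple[float, float], gt: tuple[float, float]) -> float:
    """IoU of two [start, end] intervals in seconds."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0

def recall_at_1(top1_preds, gts, tau=0.7):
    """Percentage of queries whose top-1 segment reaches IoU >= tau."""
    hits = sum(temporal_iou(p, g) >= tau for p, g in zip(top1_preds, gts))
    return 100.0 * hits / len(gts)

# Two queries: the first prediction overlaps enough (IoU ~ 0.86),
# the second does not (IoU = 0.25), so R@1, IoU=0.7 is 50.0.
print(recall_at_1([(3.0, 9.0), (10.0, 20.0)],
                  [(2.5, 9.5), (15.0, 30.0)]))
```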