SOTAVerified

Video Grounding

Video grounding is the task of linking natural language descriptions to specific video segments. Given a video and a query, such as a sentence or a caption, the model must identify the segment of the video that corresponds to the description. This can involve localizing the objects or actions mentioned in the query within the frames (spatial grounding) or associating a specific time interval with the query (temporal grounding).
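The standard way to score a temporal grounding prediction is the temporal IoU between the predicted and ground-truth time intervals. A minimal sketch (the function name and example timestamps are illustrative, not from any specific benchmark):

```python
def temporal_iou(pred, gt):
    """Temporal IoU between two (start, end) segments, in seconds."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = max(pred[1], gt[1]) - min(pred[0], gt[0])
    return inter / union if union > 0 else 0.0

# A prediction of [12.0, 30.0] s against a ground truth of [10.0, 25.0] s:
temporal_iou((12.0, 30.0), (10.0, 25.0))  # 13 / 20 = 0.65
```

A prediction counts as correct on most benchmarks when this value exceeds a fixed threshold (e.g. 0.5 or 0.7).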

Papers

Showing 51–100 of 114 papers

| Title | Status | Hype |
| --- | --- | --- |
| Interventional Video Grounding with Dual Contrastive Learning | Code | 0 |
| Towards Parameter-Efficient Integration of Pre-Trained Language Models In Temporal Video Grounding | Code | 0 |
| Unified Static and Dynamic Network: Efficient Temporal Filtering for Video Grounding | Code | 0 |
| Video-Guided Curriculum Learning for Spoken Video Grounding | Code | 0 |
| Dual-Path Temporal Map Optimization for Make-up Temporal Video Grounding | Code | 0 |
| MINOTAUR: Multi-task Video Grounding From Multimodal Queries | Code | 0 |
| ViGT: Proposal-free Video Grounding with Learnable Token in Transformer | | 0 |
| SynopGround: A Large-Scale Dataset for Multi-Paragraph Video Grounding from TV Dramas and Synopses | | 0 |
| WINNER: Weakly-Supervised hIerarchical decompositioN and aligNment for Spatio-tEmporal Video gRounding | | 0 |
| Augmented 2D-TAN: A Two-stage Approach for Human-centric Spatio-Temporal Video Grounding | | 0 |
| AutoTVG: A New Vision-language Pre-training Paradigm for Temporal Video Grounding | | 0 |
| Cascaded Prediction Network via Segment Tree for Temporal Video Grounding | | 0 |
| Co-Grounding Networks with Semantic Attention for Referring Expression Comprehension in Videos | | 0 |
| Collaborative Static and Dynamic Vision-Language Streams for Spatio-Temporal Video Grounding | | 0 |
| Contextual Self-paced Learning for Weakly Supervised Spatio-Temporal Video Grounding | | 0 |
| Described Spatial-Temporal Video Detection | | 0 |
| DiffusionVMR: Diffusion Model for Joint Video Moment Retrieval and Highlight Detection | | 0 |
| End-to-End Dense Video Grounding via Parallel Regression | | 0 |
| End-to-End Modeling via Information Tree for One-Shot Natural Language Spatial Video Grounding | | 0 |
| Enhancing Weakly Supervised Video Grounding via Diverse Inference Strategies for Boundary and Prediction Selection | | 0 |
| EtC: Temporal Boundary Expand then Clarify for Weakly Supervised Video Grounding with Multimodal Large Language Model | | 0 |
| EVOQUER: Enhancing Temporal Grounding with Video-Pivoted BackQuery Generation | | 0 |
| Exploiting Feature Diversity for Make-up Temporal Video Grounding | | 0 |
| G2L: Semantically Aligned and Uniform Video Grounding via Geodesic and Game Theory | | 0 |
| Gaussian Kernel-based Cross Modal Network for Spatio-Temporal Video Grounding | | 0 |
| Exploiting Auxiliary Caption for Video Grounding | | 0 |
| Generation-Guided Multi-Level Unified Network for Video Grounding | | 0 |
| Graph2Vid: Flow graph to Video Grounding for Weakly-supervised Multi-Step Localization | | 0 |
| Hierarchical Semantic Correspondence Networks for Video Paragraph Grounding | | 0 |
| Iterative Proposal Refinement for Weakly-Supervised Video Grounding | | 0 |
| Language-free Training for Zero-shot Video Grounding | | 0 |
| LLM4VG: Large Language Models Evaluation for Video Grounding | | 0 |
| LocFormer: Enabling Transformers to Perform Temporal Moment Localization on Long Untrimmed Videos With a Feature Sampling Approach | | 0 |
| Multi-Level Representation Learning With Semantic Alignment for Referring Video Object Segmentation | | 0 |
| Multi-Modal Domain Adaptation Across Video Scenes for Temporal Video Grounding | | 0 |
| Multi-Scale Contrastive Learning for Video Temporal Grounding | | 0 |
| Multi-Scale Self-Contrastive Learning with Hard Negative Mining for Weakly-Supervised Query-based Video Grounding | | 0 |
| Multi-sentence Video Grounding for Long Video Generation | | 0 |
| No-frills Temporal Video Grounding: Multi-Scale Neighboring Attention and Zoom-in Boundary Detection | | 0 |
| Not All Frames Are Equal: Weakly-Supervised Video Grounding With Contextual Similarity and Visual Clustering Losses | | 0 |
| Object-Aware Multi-Branch Relation Networks for Spatio-Temporal Video Grounding | | 0 |
| On Pursuit of Designing Multi-modal Transformer for Video Grounding | | 0 |
| On the Effects of Video Grounding on Language Models | | 0 |
| Parallel Attention Network with Sequence Matching for Video Grounding | | 0 |
| Position-aware Location Regression Network for Temporal Video Grounding | | 0 |
| SAMA: Towards Multi-Turn Referential Grounded Video Chat with Large Language Models | | 0 |
| Semi-Supervised Video Paragraph Grounding With Contrastive Encoder | | 0 |
| Seq2Time: Sequential Knowledge Transfer for Video LLM Temporal Grounding | | 0 |
| SimBase: A Simple Baseline for Temporal Video Grounding | | 0 |
| Simplify Implant Depth Prediction as Video Grounding: A Texture Perceive Implant Depth Prediction Network | | 0 |
Page 2 of 3

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | InternVideo2-6B | R@1, IoU=0.7 | 56.45 | | Unverified |
| 2 | InternVideo2-1B | R@1, IoU=0.7 | 54.45 | | Unverified |
| 3 | LLMEPET | R@1, IoU=0.7 | 49.94 | | Unverified |
| 4 | QD-DETR | R@1, IoU=0.7 | 44.98 | | Unverified |
| 5 | DiffusionVMR | R@1, IoU=0.7 | 44.49 | | Unverified |
| 6 | UMT | R@1, IoU=0.7 | 41.18 | | Unverified |
| 7 | Moment-DETR | R@1, IoU=0.7 | 33.02 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | DeCafNet | R@1, IoU=0.1 | 13.25 | | Unverified |
| 2 | DenoiseLoc | R@1, IoU=0.1 | 11.59 | | Unverified |
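The R@1, IoU=t metric reported above is the percentage of queries whose top-1 predicted segment overlaps the ground-truth segment with temporal IoU at or above the threshold t. A minimal sketch, assuming one (start, end) prediction per query (the function name and example intervals are illustrative):

```python
def recall_at_1(preds, gts, iou_thresh=0.7):
    """Percentage of queries whose top-1 segment reaches IoU >= iou_thresh."""
    def tiou(a, b):
        inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
        union = max(a[1], b[1]) - min(a[0], b[0])
        return inter / union if union > 0 else 0.0
    hits = sum(tiou(p, g) >= iou_thresh for p, g in zip(preds, gts))
    return 100.0 * hits / len(preds)

preds = [(10.0, 24.0), (3.0, 9.0), (40.0, 55.0)]
gts   = [(10.0, 25.0), (0.0, 5.0), (41.0, 56.0)]
recall_at_1(preds, gts)  # two of three queries pass IoU >= 0.7
```

Lowering the threshold (e.g. IoU=0.1 in the second table) makes the criterion far looser, so scores across the two tables are not directly comparable.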