SOTAVerified

Video Grounding

Video grounding is the task of linking natural language descriptions to specific video segments. Given a video and a natural language query, such as a sentence or a caption, the model must identify the segment of the video that corresponds to the description. Depending on the variant, this involves localizing the objects or actions mentioned in the query within the frames (spatial or spatio-temporal grounding), or associating a specific time interval with the query (temporal grounding).
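Temporal grounding predictions are usually scored by temporal Intersection-over-Union (tIoU) between the predicted and ground-truth intervals, which is also the basis of the R@1,IoU metrics reported below. A minimal sketch (function and variable names are illustrative, not taken from any listed paper):

```python
def temporal_iou(pred, gt):
    """tIoU between two (start, end) intervals, e.g. in seconds."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = max(pred[1], gt[1]) - min(pred[0], gt[0])
    return inter / union if union > 0 else 0.0

# A prediction counts as correct at a threshold t if tIoU >= t.
print(temporal_iou((10.0, 20.0), (12.0, 22.0)))  # 8 / 12 ≈ 0.667
```

At the common IoU=0.7 threshold, the example prediction above would not count as a hit.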

Papers

Showing 1–50 of 114 papers

Title | Status | Hype
VideoITG: Multimodal Video Understanding with Instructed Temporal Grounding | — | 0
Reinforcement Learning Tuning for VideoLLMs: Reward Design and Data Efficiency | Code | 2
SAMA: Towards Multi-Turn Referential Grounded Video Chat with Large Language Models | — | 0
DeCafNet: Delegate and Conquer for Efficient Temporal Grounding in Long Videos | Code | 1
Object-Shot Enhanced Grounding Network for Egocentric Video | Code | 1
Enhancing Weakly Supervised Video Grounding via Diverse Inference Strategies for Boundary and Prediction Selection | — | 0
VideoGEM: Training-free Action Grounding in Videos | — | 0
SpaceVLLM: Endowing Multimodal Large Language Model with Spatio-Temporal Video Grounding Capability | — | 0
TimeZero: Temporal Video Grounding with Reasoning-Guided LVLM | Code | 2
OmniSTVG: Toward Spatio-Temporal Omni-Object Video Grounding | Code | 1
TimeLoc: A Unified End-to-End Framework for Precise Timestamp Localization in Long Videos | Code | 1
Knowing Your Target: Target-Aware Transformer Makes Better Spatio-Temporal Video Grounding | Code | 1
Contextual Self-paced Learning for Weakly Supervised Spatio-Temporal Video Grounding | — | 0
LLaVA-ST: A Multimodal Large Language Model for Fine-Grained Spatial-Temporal Understanding | Code | 2
Tarsier2: Advancing Large Vision-Language Models from Detailed Video Description to Comprehensive Video Understanding | Code | 4
VidChain: Chain-of-Tasks with Metric-based Direct Preference Optimization for Dense Video Captioning | Code | 1
STPro: Spatial and Temporal Progressive Learning for Weakly Supervised Spatio-Temporal Grounding | — | 0
Consistency of Compositional Generalization across Multiple Levels | Code | 0
Multi-Scale Contrastive Learning for Video Temporal Grounding | — | 0
Video LLMs for Temporal Reasoning in Long Videos | — | 0
VideoLLM Knows When to Speak: Enhancing Time-Sensitive Video Comprehension with Video-Text Duet Interaction Format | Code | 1
Seq2Time: Sequential Knowledge Transfer for Video LLM Temporal Grounding | — | 0
SimBase: A Simple Baseline for Temporal Video Grounding | — | 0
SynopGround: A Large-Scale Dataset for Multi-Paragraph Video Grounding from TV Dramas and Synopses | — | 0
Prior Knowledge Integration via LLM Encoding and Pseudo Event Regulation for Video Moment Retrieval | Code | 2
Multi-sentence Video Grounding for Long Video Generation | — | 0
Described Spatial-Temporal Video Detection | — | 0
AutoTVG: A New Vision-language Pre-training Paradigm for Temporal Video Grounding | — | 0
Simplify Implant Depth Prediction as Video Grounding: A Texture Perceive Implant Depth Prediction Network | — | 0
Artemis: Towards Referential Understanding in Complex Videos | Code | 0
Video-of-Thought: Step-by-Step Video Reasoning from Perception to Cognition | — | 0
SnAG: Scalable and Accurate Video Grounding | Code | 4
SpikeMba: Multi-Modal Spiking Saliency Mamba for Temporal Video Grounding | — | 0
InternVideo2: Scaling Foundation Models for Multimodal Video Understanding | Code | 7
Unified Static and Dynamic Network: Efficient Temporal Filtering for Video Grounding | Code | 0
HawkEye: Training Video-Text LLMs for Grounding Text in Videos | Code | 1
Context-Guided Spatio-Temporal Video Grounding | Code | 2
VideoGrounding-DINO: Towards Open-Vocabulary Spatio-Temporal Video Grounding | — | 0
Video-GroundingDINO: Towards Open-Vocabulary Spatio-Temporal Video Grounding | — | 0
Gaussian Mixture Proposals with Pull-Push Learning Scheme to Capture Diverse Events for Weakly Supervised Temporal Video Grounding | Code | 1
Multi-Modal Domain Adaptation Across Video Scenes for Temporal Video Grounding | — | 0
LLM4VG: Large Language Models Evaluation for Video Grounding | — | 0
Cross-modal Contrastive Learning with Asymmetric Co-attention Network for Video Moment Retrieval | Code | 0
Grounded Question-Answering in Long Egocentric Videos | Code | 1
EtC: Temporal Boundary Expand then Clarify for Weakly Supervised Video Grounding with Multimodal Large Language Model | — | 0
VTimeLLM: Empower LLM to Grasp Video Moments | Code | 2
Bridging the Gap: A Unified Video Comprehension Framework for Moment Retrieval and Highlight Detection | Code | 1
PG-Video-LLaVA: Pixel Grounding Large Video-Language Models | Code | 2
Exploring Iterative Refinement with Diffusion Models for Video Grounding | Code | 0
Dual-Path Temporal Map Optimization for Make-up Temporal Video Grounding | Code | 0
Page 1 of 3

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | InternVideo2-6B | R@1,IoU=0.7 | 56.45 | — | Unverified
2 | InternVideo2-1B | R@1,IoU=0.7 | 54.45 | — | Unverified
3 | LLMEPET | R@1,IoU=0.7 | 49.94 | — | Unverified
4 | QD-DETR | R@1,IoU=0.7 | 44.98 | — | Unverified
5 | DiffusionVMR | R@1,IoU=0.7 | 44.49 | — | Unverified
6 | UMT | R@1,IoU=0.7 | 41.18 | — | Unverified
7 | Moment-DETR | R@1,IoU=0.7 | 33.02 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | DeCafNet | R@1,IoU=0.1 | 13.25 | — | Unverified
2 | DenoiseLoc | R@1,IoU=0.1 | 11.59 | — | Unverified
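The R@1,IoU metric reported in the tables above is the percentage of queries whose top-1 predicted segment overlaps the ground truth with temporal IoU at or above the threshold. A hedged sketch of that computation (interval format and names are assumptions, not the benchmarks' official evaluation code):

```python
def recall_at_1(preds, gts, thresh=0.7):
    """R@1 at an IoU threshold: preds and gts are parallel lists of
    (start, end) intervals, one top-1 prediction per query."""
    def iou(a, b):
        inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
        union = max(a[1], b[1]) - min(a[0], b[0])
        return inter / union if union > 0 else 0.0
    hits = sum(iou(p, g) >= thresh for p, g in zip(preds, gts))
    return 100.0 * hits / len(preds)

preds = [(5.0, 15.0), (30.0, 40.0)]
gts   = [(6.0, 15.0), (10.0, 20.0)]
print(recall_at_1(preds, gts))  # first pair IoU = 0.9 (hit), second = 0 (miss) -> 50.0
```

Stricter thresholds (e.g. 0.7 vs. 0.1) demand tighter localization, which is why scores at IoU=0.7 are much lower than at IoU=0.1 on hard long-video benchmarks.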