SOTAVerified

Video Grounding

Video grounding is the task of linking natural language descriptions to specific video segments. Given a video and a query such as a sentence or a caption, the model must identify the segment of the video that corresponds to the description. This can involve localizing the objects or actions mentioned in the description within the video frames, or associating a specific time interval with the description (temporal grounding).
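Temporal grounding systems are typically scored with Recall@1 at a temporal IoU threshold, which is the metric (R@1, IoU=0.7) reported in the benchmark results on this page. A minimal sketch of how these numbers are computed, with hypothetical function names and intervals given as (start, end) pairs in seconds:

```python
def temporal_iou(pred, gt):
    """Intersection-over-union of two time intervals (start, end)."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0

def recall_at_1(preds, gts, thresh=0.7):
    """Fraction of queries whose top-1 predicted segment overlaps the
    ground-truth segment with IoU >= thresh."""
    hits = sum(temporal_iou(p, g) >= thresh for p, g in zip(preds, gts))
    return hits / len(preds)

# Example: one exact match, one miss -> R@1 of 0.5.
preds = [(0.0, 10.0), (0.0, 5.0)]
gts = [(0.0, 10.0), (6.0, 8.0)]
print(recall_at_1(preds, gts, thresh=0.7))
```

Leaderboard entries report this score as a percentage; a lower IoU threshold (e.g. 0.1, as in the second table) credits much looser localizations, which is why those numbers are not comparable across thresholds.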

Papers

Showing 1–10 of 114 papers

| Title | Status | Hype |
|---|---|---|
| VideoITG: Multimodal Video Understanding with Instructed Temporal Grounding | — | 0 |
| Reinforcement Learning Tuning for VideoLLMs: Reward Design and Data Efficiency | Code | 2 |
| SAMA: Towards Multi-Turn Referential Grounded Video Chat with Large Language Models | — | 0 |
| DeCafNet: Delegate and Conquer for Efficient Temporal Grounding in Long Videos | Code | 1 |
| Object-Shot Enhanced Grounding Network for Egocentric Video | Code | 1 |
| Enhancing Weakly Supervised Video Grounding via Diverse Inference Strategies for Boundary and Prediction Selection | — | 0 |
| VideoGEM: Training-free Action Grounding in Videos | — | 0 |
| SpaceVLLM: Endowing Multimodal Large Language Model with Spatio-Temporal Video Grounding Capability | — | 0 |
| TimeZero: Temporal Video Grounding with Reasoning-Guided LVLM | Code | 2 |
| OmniSTVG: Toward Spatio-Temporal Omni-Object Video Grounding | Code | 1 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | InternVideo2-6B | R@1, IoU=0.7 | 56.45 | — | Unverified |
| 2 | InternVideo2-1B | R@1, IoU=0.7 | 54.45 | — | Unverified |
| 3 | LLMEPET | R@1, IoU=0.7 | 49.94 | — | Unverified |
| 4 | QD-DETR | R@1, IoU=0.7 | 44.98 | — | Unverified |
| 5 | DiffusionVMR | R@1, IoU=0.7 | 44.49 | — | Unverified |
| 6 | UMT | R@1, IoU=0.7 | 41.18 | — | Unverified |
| 7 | Moment-DETR | R@1, IoU=0.7 | 33.02 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | DeCafNet | R@1, IoU=0.1 | 13.25 | — | Unverified |
| 2 | DenoiseLoc | R@1, IoU=0.1 | 11.59 | — | Unverified |