Visual Grounding

Visual Grounding (VG) aims to locate the object or region in an image that is most relevant to a natural language query. The query can be a phrase, a sentence, or even a multi-round dialogue. There are three main challenges in VG:

  • What is the main focus of the query?
  • How should the image be understood?
  • How can the target object be located?

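Concretely, the task contract is: one image plus one referring expression in, one region (usually a bounding box) out. The sketch below illustrates that contract with OWL-ViT through the Hugging Face transformers API, used here purely as a convenient stand-in grounding model; the image URL is the standard COCO demo image from the transformers documentation, and the query string is an invented example.

```python
# Minimal sketch of the VG task interface: (image, text query) -> best box.
import requests
import torch
from PIL import Image
from transformers import OwlViTForObjectDetection, OwlViTProcessor

processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32")
model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32")

# Standard COCO demo image; the referring expression is an assumed example.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
query = "the cat on the left"

inputs = processor(text=[[query]], images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Map model outputs to scored boxes in pixel coordinates.
target_sizes = torch.tensor([image.size[::-1]])  # PIL gives (w, h); we need (h, w)
detections = processor.post_process_object_detection(
    outputs=outputs, target_sizes=target_sizes, threshold=0.0
)[0]

# VG returns the single most relevant region, so keep only the top-scoring box.
best = detections["scores"].argmax()
print("box (xmin, ymin, xmax, ymax):", detections["boxes"][best].tolist())
print("score:", detections["scores"][best].item())
```

Keeping only the argmax box is what separates grounding from open-vocabulary detection: every candidate region is ranked against the query, and exactly one is returned.
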
Papers

Showing papers 126–150 of 571 (page 6 of 23)

Title | Status | Hype
Paint Outside the Box: Synthesizing and Selecting Training Data for Visual Grounding | - | 0
3D Scene Graph Guided Vision-Language Pre-training | - | 0
Interpreting Object-level Foundation Models via Visual Precision Search | Code | 2
BIP3D: Bridging 2D Images and 3D Perception for Embodied Intelligence | Code | 3
Solving Zero-Shot 3D Visual Grounding as Constraint Satisfaction Problems | Code | 1
Visual Contexts Clarify Ambiguous Expressions: A Benchmark Dataset | Code | 0
GeoGround: A Unified Large Vision-Language Model for Remote Sensing Visual Grounding | Code | 2
Motion-Grounded Video Reasoning: Understanding and Perceiving Motion at Pixel Level | - | 0
VideoGLaMM: A Large Multimodal Model for Pixel-Level Visual Grounding in Videos | - | 0
LidaRefer: Outdoor 3D Visual Grounding for Autonomous Driving with Transformers | - | 0
Fine-Grained Spatial and Verbal Losses for 3D Visual Grounding | - | 0
Phrase Decoupling Cross-Modal Hierarchical Matching and Progressive Position Correction for Visual Grounding | Code | 0
Parameter-Efficient Fine-Tuning Medical Multimodal Large Language Models for Medical Visual Grounding | - | 0
Few-Shot Multimodal Explanation for Visual Question Answering | Code | 0
Griffon-G: Bridging Vision-Language and Vision-Centric Tasks via Large Multimodal Models | - | 0
Joint Top-Down and Bottom-Up Frameworks for 3D Visual Grounding | - | 0
VLM-Grounder: A VLM Agent for Zero-Shot 3D Visual Grounding | Code | 2
VividMed: Vision Language Model with Versatile Visual Grounding for Medicine | Code | 1
MC-Bench: A Benchmark for Multi-Context Visual Grounding in the Era of MLLMs | Code | 0
Context-Infused Visual Grounding for Art | Code | 0
VidEgoThink: Assessing Egocentric Video Understanding Capabilities for Embodied AI | Code | 2
Learning to Ground VLMs without Forgetting | - | 0
Neural Material Adaptor for Visual Grounding of Intrinsic Dynamics | - | 0
GRAPPA: Generalizing and Adapting Robot Policies via Online Agentic Guidance | - | 0
Context-Aware Command Understanding for Tabletop Scenarios | - | 0

Benchmark Results

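In VG benchmarks, Accuracy (%) typically means acc@0.5: the share of test queries whose predicted box overlaps the ground-truth box with IoU of at least 0.5, while IoU rows report the mean intersection-over-union itself. A minimal sketch of both metrics, assuming axis-aligned boxes in (xmin, ymin, xmax, ymax) order:

```python
# Sketch of the two metrics used in the tables below (assumed conventions:
# axis-aligned boxes, (xmin, ymin, xmax, ymax) pixel coordinates).

def box_iou(a, b):
    """Intersection-over-union of two boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def grounding_accuracy(pred_boxes, gt_boxes, thresh=0.5):
    """Acc@0.5 as a percentage: a prediction counts as correct when its
    IoU with the ground-truth box reaches the threshold."""
    hits = sum(box_iou(p, g) >= thresh for p, g in zip(pred_boxes, gt_boxes))
    return 100.0 * hits / len(pred_boxes)
```
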
# | Model | Metric | Claimed | Verified | Status
1 | Florence-2-large-ft | Accuracy (%) | 95.3 | - | Unverified
2 | mPLUG-2 | Accuracy (%) | 92.8 | - | Unverified
3 | X2-VLM (large) | Accuracy (%) | 92.1 | - | Unverified
4 | XFM (base) | Accuracy (%) | 90.4 | - | Unverified
5 | X2-VLM (base) | Accuracy (%) | 90.3 | - | Unverified
6 | X-VLM (base) | Accuracy (%) | 89 | - | Unverified
7 | HYDRA | IoU | 61.7 | - | Unverified
8 | HYDRA | IoU | 61.1 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Florence-2-large-ft | Accuracy (%) | 92 | - | Unverified
2 | mPLUG-2 | Accuracy (%) | 86.05 | - | Unverified
3 | X2-VLM (large) | Accuracy (%) | 81.8 | - | Unverified
4 | XFM (base) | Accuracy (%) | 79.8 | - | Unverified
5 | X2-VLM (base) | Accuracy (%) | 78.4 | - | Unverified
6 | X-VLM (base) | Accuracy (%) | 76.91 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Florence-2-large-ft | Accuracy (%) | 93.4 | - | Unverified
2 | mPLUG-2 | Accuracy (%) | 90.33 | - | Unverified
3 | X2-VLM (large) | Accuracy (%) | 87.6 | - | Unverified
4 | XFM (base) | Accuracy (%) | 86.1 | - | Unverified
5 | X2-VLM (base) | Accuracy (%) | 85.2 | - | Unverified
6 | X-VLM (base) | Accuracy (%) | 84.51 | - | Unverified