
Visual Grounding

Visual Grounding (VG) aims to locate the most relevant object or region in an image based on a natural language query. The query can be a phrase, a sentence, or even a multi-round dialogue. There are three main challenges in VG (a minimal grounding call is sketched after the list below):

  • What is the main focus of the query?
  • How should the image be understood?
  • How can the referred object be localized?
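
To make the task concrete, here is a minimal grounding call, sketched with OWL-ViT (a zero-shot, text-conditioned detector) as a stand-in model. The checkpoint name, image URL, and query text are illustrative assumptions; dedicated grounding systems such as those listed below would replace this off-the-shelf pipeline.

```python
# Minimal visual-grounding sketch using OWL-ViT as a stand-in model.
# The checkpoint, image URL, and query are illustrative assumptions.
import requests
import torch
from PIL import Image
from transformers import OwlViTProcessor, OwlViTForObjectDetection

processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32")
model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32")

image = Image.open(requests.get(
    "http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)
query = [["the cat on the left"]]  # one natural-language query per image

inputs = processor(text=query, images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Map logits to boxes in pixel coordinates, then keep the best-scoring box:
# VG benchmarks evaluate exactly one predicted region per query.
target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
result = processor.post_process_object_detection(
    outputs, threshold=0.0, target_sizes=target_sizes)[0]
best = result["scores"].argmax()
print(result["boxes"][best].tolist(), result["scores"][best].item())
```

The model scores candidate boxes against the text query; taking the top-scoring box yields the single region that VG benchmarks evaluate.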

Papers

Showing 301–325 of 571 papers

Title | Status | Hype
Fine-Grained Spatial and Verbal Losses for 3D Visual Grounding | - | 0
Phrase Decoupling Cross-Modal Hierarchical Matching and Progressive Position Correction for Visual Grounding | Code | 0
Parameter-Efficient Fine-Tuning Medical Multimodal Large Language Models for Medical Visual Grounding | - | 0
Few-Shot Multimodal Explanation for Visual Question Answering | Code | 0
Joint Top-Down and Bottom-Up Frameworks for 3D Visual Grounding | - | 0
Griffon-G: Bridging Vision-Language and Vision-Centric Tasks via Large Multimodal Models | - | 0
Context-Infused Visual Grounding for Art | Code | 0
MC-Bench: A Benchmark for Multi-Context Visual Grounding in the Era of MLLMs | Code | 0
Learning to Ground VLMs without Forgetting | - | 0
Neural Material Adaptor for Visual Grounding of Intrinsic Dynamics | - | 0
GRAPPA: Generalizing and Adapting Robot Policies via Online Agentic Guidance | - | 0
Context-Aware Command Understanding for Tabletop Scenarios | - | 0
VLM2Vec: Training Vision-Language Models for Massive Multimodal Embedding Tasks | - | 0
Adaptive Masking Enhances Visual Grounding | Code | 0
World to Code: Multi-modal Data Generation via Self-Instructed Compositional Captioning and Filtering | Code | 0
Individuation in Neural Models with and without Visual Grounding | - | 0
ZALM3: Zero-Shot Enhancement of Vision-Language Alignment via In-Context Information in Multi-Turn Multimodal Medical Dialogue | - | 0
HiFi-CS: Towards Open Vocabulary Visual Grounding For Robotic Grasping Using Vision-Language Models | Code | 0
Bayesian Self-Training for Semi-Supervised 3D Segmentation | - | 0
Shaking Up VLMs: Comparing Transformers and Structured State Space Models for Vision & Language Modeling | Code | 0
Visual Prompting in Multimodal Large Language Models: A Survey | - | 0
NanoMVG: USV-Centric Low-Power Multi-Task Visual Grounding based on Prompt-Guided Camera and 4D mmWave Radar | - | 0
ResVG: Enhancing Relation and Semantic Understanding in Multiple Instances for Visual Grounding | Code | 0
M4CXR: Exploring Multi-task Potentials of Multi-modal Large Language Models for Chest X-ray Interpretation | - | 0
MMR: Evaluating Reading Ability of Large Multimodal Models | - | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Florence-2-large-ft | Accuracy (%) | 95.3 | - | Unverified
2 | mPLUG-2 | Accuracy (%) | 92.8 | - | Unverified
3 | X2-VLM (large) | Accuracy (%) | 92.1 | - | Unverified
4 | XFM (base) | Accuracy (%) | 90.4 | - | Unverified
5 | X2-VLM (base) | Accuracy (%) | 90.3 | - | Unverified
6 | X-VLM (base) | Accuracy (%) | 89 | - | Unverified
7 | HYDRA | IoU (%) | 61.7 | - | Unverified
8 | HYDRA | IoU (%) | 61.1 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Florence-2-large-ft | Accuracy (%) | 92 | - | Unverified
2 | mPLUG-2 | Accuracy (%) | 86.05 | - | Unverified
3 | X2-VLM (large) | Accuracy (%) | 81.8 | - | Unverified
4 | XFM (base) | Accuracy (%) | 79.8 | - | Unverified
5 | X2-VLM (base) | Accuracy (%) | 78.4 | - | Unverified
6 | X-VLM (base) | Accuracy (%) | 76.91 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Florence-2-large-ft | Accuracy (%) | 93.4 | - | Unverified
2 | mPLUG-2 | Accuracy (%) | 90.33 | - | Unverified
3 | X2-VLM (large) | Accuracy (%) | 87.6 | - | Unverified
4 | XFM (base) | Accuracy (%) | 86.1 | - | Unverified
5 | X2-VLM (base) | Accuracy (%) | 85.2 | - | Unverified
6 | X-VLM (base) | Accuracy (%) | 84.51 | - | Unverified
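
For reference, "Accuracy (%)" on grounding benchmarks is commonly Acc@0.5: a prediction counts as correct when the IoU between the predicted and ground-truth boxes is at least 0.5. Below is a minimal sketch of both metrics, assuming boxes in (x1, y1, x2, y2) pixel coordinates; the helper names are illustrative.

```python
# Minimal sketch of standard visual-grounding metrics, assuming boxes
# given as (x1, y1, x2, y2) pixel coordinates. Helper names are
# illustrative, not tied to any benchmark's evaluation code.

def box_iou(a, b):
    """Intersection-over-Union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def grounding_metrics(preds, gts, thresh=0.5):
    """Return (Acc@thresh in %, mean IoU in %) over paired box lists."""
    ious = [box_iou(p, g) for p, g in zip(preds, gts)]
    acc = 100.0 * sum(iou >= thresh for iou in ious) / len(ious)
    mean_iou = 100.0 * sum(ious) / len(ious)
    return acc, mean_iou

# Toy example: one hit (IoU ~0.72), one complete miss (IoU 0).
preds = [(10, 10, 50, 50), (0, 0, 20, 20)]
gts = [(12, 12, 55, 55), (40, 40, 80, 80)]
print(grounding_metrics(preds, gts))  # -> (50.0, ~36.0)
```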