SOTAVerified

Visual Grounding

Visual Grounding (VG) aims to locate the most relevant object or region in an image based on a natural language query. The query can be a phrase, a sentence, or even a multi-round dialogue. There are three main challenges in VG:

  • How to identify the main focus of a query?
  • How to understand the image?
  • How to locate the referred object?
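
To make the task interface concrete, below is a minimal sketch of the query-to-box loop, using Hugging Face's zero-shot object detection pipeline with OWL-ViT as a stand-in for a dedicated grounding model; the checkpoint, image URL, and query are illustrative choices, not taken from any paper listed on this page.

```python
# Minimal VG-style interface: natural language query in, bounding box out.
# OWL-ViT via the zero-shot object detection pipeline is used here only as
# a stand-in; dedicated grounding models expose a similar query -> box API.
from transformers import pipeline

detector = pipeline("zero-shot-object-detection",
                    model="google/owlvit-base-patch32")

image_url = "http://images.cocodataset.org/val2017/000000039769.jpg"
query = "the cat on the left"  # a phrase query; VG also accepts full sentences

# The pipeline scores the query against image regions and returns boxes
# as {"xmin", "ymin", "xmax", "ymax"} dicts with a confidence score.
predictions = detector(image_url, candidate_labels=[query])

if predictions:
    best = max(predictions, key=lambda p: p["score"])
    print(f"{best['label']!r} -> {best['box']} (score {best['score']:.2f})")
else:
    print("no region matched the query")
```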

Papers

Showing 501–525 of 571 papers

Title | Status | Hype
ScanERU: Interactive 3D Visual Grounding based on Embodied Reference Understanding | Code | 0
You Only Look & Listen Once: Towards Fast and Accurate Visual Grounding | Code | 0
AS3D: 2D-Assisted Cross-Modal Understanding with Semantic-Spatial Scene Graphs for 3D Visual Grounding | Code | 0
Enhancing Interpretability and Interactivity in Robot Manipulation: A Neurosymbolic Approach | Code | 0
SeCG: Semantic-Enhanced 3D Visual Grounding via Cross-modal Graph Attention | Code | 0
Finding beans in burgers: Deep semantic-visual embedding with localization | Code | 0
Few-Shot Multimodal Explanation for Visual Question Answering | Code | 0
Multi-Attribute Interactions Matter for 3D Visual Grounding | Code | 0
Unveiling the Compositional Ability Gap in Vision-Language Reasoning Model | Code | 0
Composing Pick-and-Place Tasks By Grounding Language | Code | 0
Exploring Phrase-Level Grounding with Text-to-Image Diffusion Model | Code | 0
World to Code: Multi-modal Data Generation via Self-Instructed Compositional Captioning and Filtering | Code | 0
Modularized Textual Grounding for Counterfactual Resilience | Code | 0
Mismatch Quest: Visual and Textual Feedback for Image-Text Misalignment | Code | 0
Measuring Faithful and Plausible Visual Grounding in VQA | Code | 0
MC-Bench: A Benchmark for Multi-Context Visual Grounding in the Era of MLLMs | Code | 0
Self-view Grounding Given a Narrated 360° Video | Code | 0
Dual Attention Networks for Visual Reference Resolution in Visual Dialog | Code | 0
Semantic query-by-example speech search using visual grounding | Code | 0
DrishtiKon: Multi-Granular Visual Grounding for Text-Rich Document Images | Code | 0
MB-ORES: A Multi-Branch Object Reasoner for Visual Grounding in Remote Sensing | Code | 0
M^3D: A Multimodal, Multilingual and Multitask Dataset for Grounded Document-level Information Extraction | Code | 0
Leveraging Vision-Language Models for Visual Grounding and Analysis of Automotive UI | Code | 0
Shaking Up VLMs: Comparing Transformers and Structured State Space Models for Vision & Language Modeling | Code | 0
Leverage Points in Modality Shifts: Comparing Language-only and Multimodal Word Representations | Code | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Florence-2-large-ft | Accuracy (%) | 95.3 | – | Unverified
2 | mPLUG-2 | Accuracy (%) | 92.8 | – | Unverified
3 | X2-VLM (large) | Accuracy (%) | 92.1 | – | Unverified
4 | XFM (base) | Accuracy (%) | 90.4 | – | Unverified
5 | X2-VLM (base) | Accuracy (%) | 90.3 | – | Unverified
6 | X-VLM (base) | Accuracy (%) | 89 | – | Unverified
7 | HYDRA | IoU | 61.7 | – | Unverified
8 | HYDRA | IoU | 61.1 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Florence-2-large-ft | Accuracy (%) | 92 | – | Unverified
2 | mPLUG-2 | Accuracy (%) | 86.05 | – | Unverified
3 | X2-VLM (large) | Accuracy (%) | 81.8 | – | Unverified
4 | XFM (base) | Accuracy (%) | 79.8 | – | Unverified
5 | X2-VLM (base) | Accuracy (%) | 78.4 | – | Unverified
6 | X-VLM (base) | Accuracy (%) | 76.91 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Florence-2-large-ft | Accuracy (%) | 93.4 | – | Unverified
2 | mPLUG-2 | Accuracy (%) | 90.33 | – | Unverified
3 | X2-VLM (large) | Accuracy (%) | 87.6 | – | Unverified
4 | XFM (base) | Accuracy (%) | 86.1 | – | Unverified
5 | X2-VLM (base) | Accuracy (%) | 85.2 | – | Unverified
6 | X-VLM (base) | Accuracy (%) | 84.51 | – | Unverified
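
For reference, "Accuracy (%)" on visual grounding benchmarks conventionally means Acc@0.5: the share of queries whose predicted box overlaps the ground-truth box with an intersection over union (IoU) of at least 0.5. Below is a minimal sketch of both metrics, assuming axis-aligned boxes in (x1, y1, x2, y2) pixel coordinates; the function names and example boxes are illustrative.

```python
# Sketch of the two metrics in the tables above: IoU and Acc@0.5.

def iou(box_a, box_b):
    """Intersection over union of two axis-aligned (x1, y1, x2, y2) boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle; width/height clamp to zero when disjoint.
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1)
             - inter)
    return inter / union if union > 0 else 0.0

def grounding_accuracy(predictions, ground_truths, threshold=0.5):
    """Percentage of predictions whose IoU with the ground truth meets
    the threshold (the usual Acc@0.5 convention in visual grounding)."""
    hits = sum(iou(p, g) >= threshold
               for p, g in zip(predictions, ground_truths))
    return 100.0 * hits / len(ground_truths)

# Example: one exact hit (IoU = 1.0) and one complete miss (IoU = 0.0).
preds = [(10, 10, 50, 50), (0, 0, 20, 20)]
gts   = [(10, 10, 50, 50), (60, 60, 90, 90)]
print(grounding_accuracy(preds, gts))  # 50.0
```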