SOTAVerified

Visual Grounding

Visual Grounding (VG) aims to locate the most relevant object or region in an image based on a natural language query. The query can be a phrase, a sentence, or even a multi-round dialogue. There are three main challenges in VG (a toy sketch follows the list):

  • What is the main focus of a query?
  • How to understand an image?
  • How to locate an object?
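
The following toy, self-contained sketch shows how these three challenges compose into a single grounding call. Every name here is an illustrative stand-in, not any model's real API: real systems use learned detectors and cross-modal encoders, not label matching.

```python
# Toy sketch mirroring the three VG challenges above.

def parse_query(query: str) -> str:
    # Challenge 1 (query focus): crude stand-in -- take the last word,
    # which is often the head noun in short phrases like "the small dog".
    return query.lower().split()[-1]

def encode_image(image):
    # Challenge 2 (image understanding): stand-in for a detector producing
    # candidate regions; here the "image" is already a list of
    # (box, label) pairs, with boxes as (x1, y1, x2, y2).
    return image

def localize(regions, focus: str):
    # Challenge 3 (localization): score each region against the query focus
    # and return the best box (exact label match in this toy).
    matches = [box for box, label in regions if label == focus]
    return matches[0] if matches else None

image = [((10, 20, 80, 90), "dog"), ((100, 40, 160, 120), "bench")]
print(localize(encode_image(image), parse_query("the small dog")))
# -> (10, 20, 80, 90)
```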

Papers

Showing 251–275 of 571 papers

Title | Status | Hype
DenseGrounding: Improving Dense Language-Vision Semantics for Ego-Centric 3D Visual Grounding | — | 0
3DWG: 3D Weakly Supervised Visual Grounding via Category and Instance-Level Alignment | — | 0
Learning Language Structures through Grounding | — | 0
EAGLE: Enhanced Visual Grounding Minimizes Hallucinations in Instructional Multimodal Models | — | 0
Improved Visual Grounding through Self-Consistent Explanations | — | 0
Image-Grounded Conversations: Multimodal Context for Natural Question and Response Generation | — | 0
Image Difference Grounding with Natural Language | — | 0
Illustrative Language Understanding: Large-Scale Visual Grounding with Image Search | — | 0
Bear the Query in Mind: Visual Grounding with Query-conditioned Convolution | — | 0
3D Scene Graph Guided Vision-Language Pre-training | — | 0
Multimodal Reference Visual Grounding | — | 0
Decoupled Spatial Temporal Graphs for Generic Visual Grounding | — | 0
Bayesian Self-Training for Semi-Supervised 3D Segmentation | — | 0
HPE-CogVLM: Advancing Vision Language Models with a Head Pose Grounding Task | — | 0
Barking Up The Syntactic Tree: Enhancing VLM Training with Syntactic Losses | — | 0
D3Net: A Unified Speaker-Listener Architecture for 3D Dense Captioning and Visual Grounding | — | 0
Multi-Granularity Modularized Network for Abstract Visual Reasoning | — | 0
HENASY: Learning to Assemble Scene-Entities for Egocentric Video-Language Model | — | 0
D2AF: A Dual-Driven Annotation and Filtering Framework for Visual Grounding | — | 0
HalluSegBench: Counterfactual Visual Reasoning for Segmentation Hallucination Evaluation | — | 0
Cycle-Consistency Learning for Captioning and Grounding | — | 0
A Visual Tour Of Current Challenges In Multimodal Language Models | — | 0
Guiding Visual Question Answering with Attention Priors | — | 0
A Vision Centric Remote Sensing Benchmark | — | 0
Multimodal Unified Attention Networks for Vision-and-Language Interactions | — | 0
Page 11 of 23

Benchmark Results

All values below are claimed (paper-reported) numbers; the Verified column is empty because no result has been independently verified yet.

# | Model | Metric | Claimed | Verified | Status
1 | Florence-2-large-ft | Accuracy (%) | 95.3 | — | Unverified
2 | mPLUG-2 | Accuracy (%) | 92.8 | — | Unverified
3 | X2-VLM (large) | Accuracy (%) | 92.1 | — | Unverified
4 | XFM (base) | Accuracy (%) | 90.4 | — | Unverified
5 | X2-VLM (base) | Accuracy (%) | 90.3 | — | Unverified
6 | X-VLM (base) | Accuracy (%) | 89 | — | Unverified
7 | HYDRA | IoU | 61.7 | — | Unverified
8 | HYDRA | IoU | 61.1 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Florence-2-large-ft | Accuracy (%) | 92 | — | Unverified
2 | mPLUG-2 | Accuracy (%) | 86.05 | — | Unverified
3 | X2-VLM (large) | Accuracy (%) | 81.8 | — | Unverified
4 | XFM (base) | Accuracy (%) | 79.8 | — | Unverified
5 | X2-VLM (base) | Accuracy (%) | 78.4 | — | Unverified
6 | X-VLM (base) | Accuracy (%) | 76.91 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Florence-2-large-ft | Accuracy (%) | 93.4 | — | Unverified
2 | mPLUG-2 | Accuracy (%) | 90.33 | — | Unverified
3 | X2-VLM (large) | Accuracy (%) | 87.6 | — | Unverified
4 | XFM (base) | Accuracy (%) | 86.1 | — | Unverified
5 | X2-VLM (base) | Accuracy (%) | 85.2 | — | Unverified
6 | X-VLM (base) | Accuracy (%) | 84.51 | — | Unverified
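
For reference, Accuracy (%) on grounding leaderboards is conventionally Acc@0.5: a prediction counts as correct when its box overlaps the ground-truth box with IoU of at least 0.5. Below is a minimal sketch of both metrics used in the tables above; the box format and function names are illustrative, not the leaderboard's own code.

```python
# Minimal sketch of the two metrics above: IoU and Acc@0.5.
# Boxes are (x1, y1, x2, y2) in pixels.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def grounding_accuracy(preds, gts, thresh=0.5):
    """Percentage of predictions whose IoU with ground truth >= thresh."""
    hits = sum(iou(p, g) >= thresh for p, g in zip(preds, gts))
    return 100.0 * hits / len(gts)

# Example: one exact hit (IoU = 1.0) and one miss -> 50% accuracy.
preds = [(10, 10, 50, 50), (0, 0, 20, 20)]
gts   = [(10, 10, 50, 50), (60, 60, 100, 100)]
print(grounding_accuracy(preds, gts))  # 50.0
```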