SOTAVerified

Visual Grounding

Visual Grounding (VG) aims to locate the most relevant object or region in an image based on a natural language query. The query can be a phrase, a sentence, or even a multi-round dialogue. There are three main challenges in VG (a minimal sketch of the task interface follows the list):

  • What is the main focus of the query?
  • How should the image be understood?
  • How can the target object be located?
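
At its core, a VG model maps an (image, query) pair to a bounding box or region mask. The sketch below illustrates that interface only; `load_grounding_model` and `model.predict` are hypothetical placeholders, not any specific library's API.

```python
# Minimal sketch of the visual-grounding task interface.
# `load_grounding_model` and `model.predict` are hypothetical placeholders;
# any real VG model (referring-expression detector, etc.) will differ in detail.
from PIL import Image


def ground(model, image: Image.Image, query: str):
    """Return the (x_min, y_min, x_max, y_max) box best matching the query."""
    boxes, scores = model.predict(image, query)  # hypothetical call
    best = max(range(len(boxes)), key=lambda i: scores[i])
    return boxes[best]


# Usage (hypothetical checkpoint name):
# model = load_grounding_model("some-vg-checkpoint")
# box = ground(model, Image.open("street.jpg"), "the red car next to the bus stop")
```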

Papers

Showing 376–400 of 571 papers

| Title | Status | Hype |
| --- | --- | --- |
| Towards Truly Zero-shot Compositional Visual Reasoning with LLMs as Programmers | - | 0 |
| LQMFormer: Language-aware Query Mask Transformer for Referring Image Segmentation | - | 0 |
| When Visual Grounding Meets Gigapixel-level Large-scale Scenes: Benchmark and Approach | - | 0 |
| Viewpoint-Aware Visual Grounding in 3D Scenes | - | 0 |
| Investigating Compositional Challenges in Vision-Language Models for Visual Grounding | Code | 0 |
| Multi-Attribute Interactions Matter for 3D Visual Grounding | Code | 0 |
| Towards CLIP-driven Language-free 3D Visual Grounding via 2D-3D Relational Enhancement and Consistency | Code | 0 |
| Omni-Q: Omni-Directional Scene Understanding for Unsupervised Visual Grounding | - | 0 |
| G^3-LQ: Marrying Hyperbolic Alignment with Explicit Semantic-Geometric Modeling for 3D Visual Grounding | - | 0 |
| Bridging Modality Gap for Visual Grounding with Effecitve Cross-modal Distillation | - | 0 |
| Cycle-Consistency Learning for Captioning and Grounding | - | 0 |
| Weakly-Supervised 3D Visual Grounding based on Visual Linguistic Alignment | - | 0 |
| Visual Grounding of Whole Radiology Reports for 3D CT Images | - | 0 |
| Improved Visual Grounding through Self-Consistent Explanations | - | 0 |
| Mismatch Quest: Visual and Textual Feedback for Image-Text Misalignment | Code | 0 |
| Uni3DL: Unified Model for 3D and Language Understanding | - | 0 |
| Expand BERT Representation with Visual Information via Grounded Language Learning with Multimodal Partial Alignment | - | 0 |
| G2D: From Global to Dense Radiography Representation Learning via Vision-Language Pre-training | Code | 0 |
| Behind the Magic, MERLIM: Multi-modal Evaluation Benchmark for Large Image-Language Models | Code | 0 |
| Context-Aware Indoor Point Cloud Object Generation through User Instructions | - | 0 |
| Enhancing Visual Grounding and Generalization: A Multi-Task Cycle Training Approach for Vision-Language Models | Code | 0 |
| A Systematic Evaluation of GPT-4V's Multimodal Capability for Medical Image Analysis | - | 0 |
| GROOViST: A Metric for Grounding Objects in Visual Storytelling | Code | 0 |
| Context Does Matter: End-to-end Panoptic Narrative Grounding with Deformable Attention Refined Matching Network | Code | 0 |
| InViG: Benchmarking Interactive Visual Grounding with 500K Human-Robot Interactions | Code | 0 |
Page 16 of 23

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Florence-2-large-ft | Accuracy (%) | 95.3 | - | Unverified |
| 2 | mPLUG-2 | Accuracy (%) | 92.8 | - | Unverified |
| 3 | X2-VLM (large) | Accuracy (%) | 92.1 | - | Unverified |
| 4 | XFM (base) | Accuracy (%) | 90.4 | - | Unverified |
| 5 | X2-VLM (base) | Accuracy (%) | 90.3 | - | Unverified |
| 6 | X-VLM (base) | Accuracy (%) | 89 | - | Unverified |
| 7 | HYDRA | IoU | 61.7 | - | Unverified |
| 8 | HYDRA | IoU | 61.1 | - | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Florence-2-large-ft | Accuracy (%) | 92 | - | Unverified |
| 2 | mPLUG-2 | Accuracy (%) | 86.05 | - | Unverified |
| 3 | X2-VLM (large) | Accuracy (%) | 81.8 | - | Unverified |
| 4 | XFM (base) | Accuracy (%) | 79.8 | - | Unverified |
| 5 | X2-VLM (base) | Accuracy (%) | 78.4 | - | Unverified |
| 6 | X-VLM (base) | Accuracy (%) | 76.91 | - | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Florence-2-large-ft | Accuracy (%) | 93.4 | - | Unverified |
| 2 | mPLUG-2 | Accuracy (%) | 90.33 | - | Unverified |
| 3 | X2-VLM (large) | Accuracy (%) | 87.6 | - | Unverified |
| 4 | XFM (base) | Accuracy (%) | 86.1 | - | Unverified |
| 5 | X2-VLM (base) | Accuracy (%) | 85.2 | - | Unverified |
| 6 | X-VLM (base) | Accuracy (%) | 84.51 | - | Unverified |
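
"Accuracy (%)" in visual-grounding leaderboards is conventionally Acc@0.5: a predicted box counts as correct when its intersection-over-union (IoU) with the ground-truth box is at least 0.5. Below is a minimal, self-contained sketch of both metrics; the box coordinates in the usage example are illustrative, not taken from any benchmark.

```python
def iou(a, b):
    """Intersection-over-union of two (x_min, y_min, x_max, y_max) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0


def grounding_accuracy(preds, gts, thresh=0.5):
    """Percentage of predictions whose IoU with the ground truth meets the threshold."""
    hits = sum(iou(p, g) >= thresh for p, g in zip(preds, gts))
    return 100.0 * hits / len(preds)


# One hit (IoU = 1.0) and one miss (IoU = 0.0) -> prints 50.0
print(grounding_accuracy([(0, 0, 10, 10), (0, 0, 2, 2)],
                         [(0, 0, 10, 10), (5, 5, 10, 10)]))
```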