
Visual Grounding

Visual Grounding (VG) aims to locate the most relevant object or region in an image based on a natural language query. The query can be a phrase, a sentence, or even a multi-round dialogue. There are three main challenges in VG (a minimal inference sketch follows the list):

  • What is the main focus of a query?
  • How to understand an image?
  • How to locate an object?
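
To make the task concrete, the sketch below runs a single phrase query through an off-the-shelf open-vocabulary detector and keeps its highest-scoring box. This is a naive baseline, not the method of any paper listed here; it assumes the Hugging Face transformers OWL-ViT API, and the image path and query string are placeholders.

```python
from PIL import Image
import torch
from transformers import OwlViTProcessor, OwlViTForObjectDetection

# Placeholder image and referring expression.
image = Image.open("street.jpg")
query = "the red car on the left"

processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32")
model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32")

inputs = processor(text=[[query]], images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Rescale predicted boxes to the original image size (height, width).
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(
    outputs, threshold=0.0, target_sizes=target_sizes
)[0]

# Naive grounding: take the single highest-scoring box for the query.
if len(results["scores"]) > 0:
    best = results["scores"].argmax()
    print("box [x_min, y_min, x_max, y_max]:", results["boxes"][best].tolist())
```

Dedicated VG models instead fuse the query and image features and regress or rank regions directly; the top-scoring-box heuristic above only illustrates the input/output contract of the task.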

Papers

Showing 426–450 of 571 papers

Title | Status | Hype
Sample-Specific Debiasing for Better Image-Text Models | — | 0
Movie Box Office Prediction With Self-Supervised and Visually Grounded Pretraining | — | 0
WildRefer: 3D Object Localization in Large-scale Dynamic Scenes with Multi-modal Visual Data and Natural Language | Code | 0
ScanERU: Interactive 3D Visual Grounding based on Embodied Reference Understanding | Code | 0
Medical Phrase Grounding with Region-Phrase Context Contrastive Alignment | — | 0
Parallel Vertex Diffusion for Unified Visual Grounding | — | 0
Focusing On Targets For Improving Weakly Supervised Visual Grounding | — | 0
Toward Building General Foundation Models for Language, Vision, and Vision-Language Understanding Tasks | Code | 0
ViewRefer: Grasp the Multi-view Knowledge for 3D Visual Grounding | — | 0
CoSign: Exploring Co-occurrence Signals in Skeleton-based Continuous Sign Language Recognition | — | 0
Dynamic Inference With Grounding Based Vision and Language Models | — | 0
GAFNet: A Global Fourier Self Attention Based Novel Network for multi-modal downstream tasks | — | 0
Using Multiple Instance Learning to Build Multimodal Representations | — | 0
UniT3D: A Unified Transformer for 3D Dense Captioning and Visual Grounding | — | 0
MNER-QG: An End-to-End MRC framework for Multimodal Named Entity Recognition with Query Grounding | — | 0
A survey on knowledge-enhanced multimodal learning | — | 0
Visually Grounded VQA by Lattice-based Retrieval | Code | 0
Are Current Decoding Strategies Capable of Facing the Challenges of Visual Dialogue? | — | 0
RSVG: Exploring Data and Models for Visual Grounding on Remote Sensing Data | — | 0
A Visual Tour Of Current Challenges In Multimodal Language Models | — | 0
Like a bilingual baby: The advantage of visually grounding a bilingual language model | — | 0
YFACC: A Yorùbá speech-image dataset for cross-lingual keyword localisation through visual grounding | — | 0
MAMO: Masked Multimodal Modeling for Fine-Grained Vision-Language Representation Learning | — | 0
Enhancing Interpretability and Interactivity in Robot Manipulation: A Neurosymbolic Approach | Code | 0
Differentiable Parsing and Visual Grounding of Natural Language Instructions for Object Placement | — | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Florence-2-large-ft | Accuracy (%) | 95.3 | — | Unverified
2 | mPLUG-2 | Accuracy (%) | 92.8 | — | Unverified
3 | X2-VLM (large) | Accuracy (%) | 92.1 | — | Unverified
4 | XFM (base) | Accuracy (%) | 90.4 | — | Unverified
5 | X2-VLM (base) | Accuracy (%) | 90.3 | — | Unverified
6 | X-VLM (base) | Accuracy (%) | 89 | — | Unverified
7 | HYDRA | IoU | 61.7 | — | Unverified
8 | HYDRA | IoU | 61.1 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Florence-2-large-ft | Accuracy (%) | 92 | — | Unverified
2 | mPLUG-2 | Accuracy (%) | 86.05 | — | Unverified
3 | X2-VLM (large) | Accuracy (%) | 81.8 | — | Unverified
4 | XFM (base) | Accuracy (%) | 79.8 | — | Unverified
5 | X2-VLM (base) | Accuracy (%) | 78.4 | — | Unverified
6 | X-VLM (base) | Accuracy (%) | 76.91 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Florence-2-large-ft | Accuracy (%) | 93.4 | — | Unverified
2 | mPLUG-2 | Accuracy (%) | 90.33 | — | Unverified
3 | X2-VLM (large) | Accuracy (%) | 87.6 | — | Unverified
4 | XFM (base) | Accuracy (%) | 86.1 | — | Unverified
5 | X2-VLM (base) | Accuracy (%) | 85.2 | — | Unverified
6 | X-VLM (base) | Accuracy (%) | 84.51 | — | Unverified
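
For context on the metrics above: referring-expression benchmarks typically score a prediction as correct when its box overlaps the ground-truth box with an IoU of at least 0.5, and "Accuracy (%)" is then the fraction of queries that pass this test. The sketch below shows that computation; the function names and the [x_min, y_min, x_max, y_max] box format are illustrative choices, not taken from this page.

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes in [x_min, y_min, x_max, y_max] format."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def grounding_accuracy(predictions, ground_truths, threshold=0.5):
    """Acc@threshold: percentage of predicted boxes with IoU >= threshold."""
    hits = sum(iou(p, g) >= threshold for p, g in zip(predictions, ground_truths))
    return 100.0 * hits / len(predictions)

# Example: one exact match (IoU = 1.0) and one miss out of two queries -> 50.0.
print(grounding_accuracy([[0, 0, 10, 10], [0, 0, 10, 10]],
                         [[0, 0, 10, 10], [20, 20, 30, 30]]))
```

Rows that report a raw "IoU" value instead of an accuracy are averaging the overlap itself rather than thresholding it, which is why those numbers sit on a different scale.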