Visual Grounding

Visual Grounding (VG) aims to locate the most relevant object or region in an image based on a natural language query. The query can be a phrase, a sentence, or even a multi-round dialogue. VG poses three main challenges (see the sketch after this list for a concrete view of the task interface):

  • What is the main focus of the query?
  • How should the image be understood?
  • How should the referred object be localized?
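
As a concrete view of the task interface, the sketch below runs zero-shot grounding with OWL-ViT, an open-vocabulary detector available through Hugging Face transformers. This is a minimal sketch, not any of the methods listed on this page: OWL-ViT is a detector rather than a referring-expression model, so taking its single top-scoring box for the query is only a rough baseline, and the image path, query string, and score threshold below are placeholders.

```python
# Minimal zero-shot grounding sketch using OWL-ViT via Hugging Face
# transformers. Assumes torch, transformers, and Pillow are installed.
import torch
from PIL import Image
from transformers import OwlViTProcessor, OwlViTForObjectDetection

processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32")
model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32")
model.eval()

def ground(image: Image.Image, query: str):
    """Return (box, score) for the region best matching the query.

    box is (x1, y1, x2, y2) in pixel coordinates; returns None if no
    detection clears the (arbitrarily chosen) score threshold.
    """
    inputs = processor(text=[[query]], images=image, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
    results = processor.post_process_object_detection(
        outputs, threshold=0.05, target_sizes=target_sizes
    )[0]
    if results["scores"].numel() == 0:
        return None
    best = results["scores"].argmax()
    return results["boxes"][best].tolist(), results["scores"][best].item()

image = Image.open("example.jpg")  # hypothetical local image
print(ground(image, "the dog on the left"))
```

Relational queries such as "the dog on the left" are exactly where a detector-only baseline tends to fail, which is the gap most of the grounding papers listed below address.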

Papers

Showing 451–500 of 571 papers

Title | Status | Hype
Cost-Effective Language Driven Image Editing with LX-DRIM | Code | 0
Dynamic MDETR: A Dynamic Multimodal Transformer Decoder for Visual Grounding | - | 0
Introspective Learning: A Two-Stage Approach for Inference in Neural Networks | Code | 0
Visual Grounding of Inter-lingual Word-Embeddings | - | 0
VLMAE: Vision-Language Masked Autoencoder | - | 0
SiRi: A Simple Selective Retraining Mechanism for Transformer-based Visual Grounding | Code | 0
Toward Explainable and Fine-Grained 3D Grounding through Referring Textual Phrases | - | 0
RoViST: Learning Robust Metrics for Visual Storytelling | Code | 0
How direct is the link between words and images? | - | 0
Tell Me the Evidence? Dual Visual-Linguistic Interaction for Answer Grounding | - | 0
Bear the Query in Mind: Visual Grounding with Query-conditioned Convolution | - | 0
Language with Vision: a Study on Grounded Word and Sentence Embeddings | Code | 0
Guiding Visual Question Answering with Attention Priors | - | 0
Sim-To-Real Transfer of Visual Grounding for Human-Aided Ambiguity Resolution | - | 0
Weakly-supervised segmentation of referring expressions | - | 0
Flexible Visual Grounding | Code | 0
Attention as Grounding: Exploring Textual and Cross-Modal Attention on Entities and Relations in Language-and-Vision Transformer | Code | 0
To Find Waldo You Need Contextual Cues: Debiasing Who's Waldo | Code | 0
FindIt: Generalized Localization with Natural Language Queries | - | 0
Suspected Object Matters: Rethinking Model's Prediction for One-stage Visual Grounding | - | 0
Seeing the advantage: visually grounding word embeddings to better capture human semantic knowledge | - | 0
OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence Learning Framework | Code | 0
3DJCG: A Unified Framework for Joint Dense Captioning and Visual Grounding on 3D Point Clouds | - | 0
Deconfounded Visual Grounding | Code | 0
D3Net: A Unified Speaker-Listener Architecture for 3D Dense Captioning and Visual Grounding | - | 0
Less is More: Generating Grounded Navigation Instructions from Landmarks | - | 0
Zero-Shot Visual Grounding of Referring Utterances in Dialogue | - | 0
Efficient Multi-Modal Embeddings from Structured Data | - | 0
Discovering the Unknown Knowns: Turning Implicit Knowledge in the Dataset into Explicit Training Examples for Visual Question Answering | Code | 0
Retrieve, Caption, Generate: Visual Grounding for Enhancing Commonsense in Text Generation Models | - | 0
INVIGORATE: Interactive Visual Grounding and Grasping in Clutter | - | 0
A Better Loss for Visual-Textual Grounding | Code | 0
TransRefer3D: Entity-and-Relation Aware Transformer for Fine-Grained 3D Visual Grounding | - | 0
Attending Self-Attention: A Case Study of Visually Grounded Supervision in Vision-and-Language Transformers | - | 0
Word2Pix: Word to Pixel Cross Attention Transformer in Visual Grounding | - | 0
LanguageRefer: Spatial-Language Model for 3D Visual Grounding | - | 0
Adventurer's Treasure Hunt: A Transparent System for Visually Grounded Compositional Visual Question Answering based on Scene Graphs | - | 0
AIFit: Automatic 3D Human-Interpretable Feedback Models for Fitness Training | - | 0
Attention-Based Keyword Localisation in Speech using Visual Grounding | - | 0
Semantic sentence similarity: size does not always matter | - | 0
Learning Better Visual Dialog Agents with Pretrained Visual-Linguistic Representation | Code | 0
Visual Grounding Strategies for Text-Only Natural Language Processing | - | 0
Scene-Intuitive Agent for Remote Embodied Visual Grounding | - | 0
Decoupled Spatial Temporal Graphs for Generic Visual Grounding | - | 0
Few-Shot Visual Grounding for Natural Human-Robot Interaction | - | 0
Composing Pick-and-Place Tasks By Grounding Language | Code | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Florence-2-large-ft | Accuracy (%) | 95.3 | - | Unverified
2 | mPLUG-2 | Accuracy (%) | 92.8 | - | Unverified
3 | X2-VLM (large) | Accuracy (%) | 92.1 | - | Unverified
4 | XFM (base) | Accuracy (%) | 90.4 | - | Unverified
5 | X2-VLM (base) | Accuracy (%) | 90.3 | - | Unverified
6 | X-VLM (base) | Accuracy (%) | 89 | - | Unverified
7 | HYDRA | IoU | 61.7 | - | Unverified
8 | HYDRA | IoU | 61.1 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Florence-2-large-ft | Accuracy (%) | 92 | - | Unverified
2 | mPLUG-2 | Accuracy (%) | 86.05 | - | Unverified
3 | X2-VLM (large) | Accuracy (%) | 81.8 | - | Unverified
4 | XFM (base) | Accuracy (%) | 79.8 | - | Unverified
5 | X2-VLM (base) | Accuracy (%) | 78.4 | - | Unverified
6 | X-VLM (base) | Accuracy (%) | 76.91 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Florence-2-large-ft | Accuracy (%) | 93.4 | - | Unverified
2 | mPLUG-2 | Accuracy (%) | 90.33 | - | Unverified
3 | X2-VLM (large) | Accuracy (%) | 87.6 | - | Unverified
4 | XFM (base) | Accuracy (%) | 86.1 | - | Unverified
5 | X2-VLM (base) | Accuracy (%) | 85.2 | - | Unverified
6 | X-VLM (base) | Accuracy (%) | 84.51 | - | Unverified
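
The tables above report Accuracy (%) and IoU without spelling out the convention. In the visual grounding literature, accuracy is typically Acc@0.5: a predicted box counts as correct when its intersection-over-union (IoU) with the ground-truth box is at least 0.5. Below is a minimal sketch of both quantities, assuming axis-aligned (x1, y1, x2, y2) boxes; the toy boxes in the usage example are made up.

```python
# Minimal IoU and Acc@0.5 sketch. Assumes axis-aligned boxes in
# (x1, y1, x2, y2) form with x1 < x2 and y1 < y2.

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def grounding_accuracy(preds, gts, thresh=0.5):
    """Percentage of predictions whose IoU with ground truth >= thresh."""
    hits = sum(iou(p, g) >= thresh for p, g in zip(preds, gts))
    return 100.0 * hits / len(gts)

# Toy usage: one exact hit (IoU = 1.0) and one non-overlapping miss.
preds = [(10, 10, 50, 50), (0, 0, 10, 10)]
gts   = [(10, 10, 50, 50), (40, 40, 90, 90)]
print(grounding_accuracy(preds, gts))  # 50.0
```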