SOTAVerified

Visual Grounding

Visual Grounding (VG) aims to locate the most relevant object or region in an image based on a natural language query. The query can be a phrase, a sentence, or even a multi-round dialogue. There are three main challenges in VG:

  • How to identify the main focus of a query?
  • How to understand the image?
  • How to locate the target object?
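
To make the task concrete, here is a minimal inference sketch, not the method of any paper listed below: a zero-shot text-conditioned detector scores candidate boxes against the query and returns the best one. It assumes the Hugging Face transformers library and the public google/owlvit-base-patch32 checkpoint.

```python
# Minimal visual-grounding sketch: given an image and a text query,
# return the single highest-scoring bounding box for that query.
# Assumes Hugging Face `transformers` and the public OwlViT checkpoint;
# illustrative only, not a leaderboard method.
import torch
from PIL import Image
from transformers import OwlViTProcessor, OwlViTForObjectDetection

processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32")
model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32")

def ground(image: Image.Image, query: str):
    # Encode the image together with a single text query.
    inputs = processor(text=[[query]], images=image, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    # Convert raw outputs to (x0, y0, x1, y1) boxes in image coordinates.
    target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
    results = processor.post_process_object_detection(
        outputs=outputs, target_sizes=target_sizes, threshold=0.0
    )[0]
    # Keep only the box the model scores highest for the query.
    best = results["scores"].argmax()
    return results["boxes"][best].tolist(), results["scores"][best].item()

# Example: box, score = ground(Image.open("kitchen.jpg"), "the red mug on the table")
```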

Papers

Showing 451–475 of 571 papers

| Title | Status | Hype |
|---|---|---|
| CPT: Colorful Prompt Tuning for Pre-trained Vision-Language Models | Code | 1 |
| Multimodal Incremental Transformer with Visual Grounding for Visual Dialogue Generation | Code | 1 |
| Discovering the Unknown Knowns: Turning Implicit Knowledge in the Dataset into Explicit Training Examples for Visual Question Answering | Code | 0 |
| Panoptic Narrative Grounding | Code | 1 |
| Retrieve, Caption, Generate: Visual Grounding for Enhancing Commonsense in Text Generation Models | | 0 |
| INVIGORATE: Interactive Visual Grounding and Grasping in Clutter | | 0 |
| A Better Loss for Visual-Textual Grounding | Code | 0 |
| TransRefer3D: Entity-and-Relation Aware Transformer for Fine-Grained 3D Visual Grounding | | 0 |
| Attending Self-Attention: A Case Study of Visually Grounded Supervision in Vision-and-Language Transformers | | 0 |
| Word2Pix: Word to Pixel Cross Attention Transformer in Visual Grounding | | 0 |
| LanguageRefer: Spatial-Language Model for 3D Visual Grounding | | 0 |
| VidLanKD: Improving Language Understanding via Video-Distilled Knowledge Transfer | Code | 1 |
| Adventurer's Treasure Hunt: A Transparent System for Visually Grounded Compositional Visual Question Answering based on Scene Graphs | | 0 |
| AIFit: Automatic 3D Human-Interpretable Feedback Models for Fitness Training | | 0 |
| Semantic sentence similarity: size does not always matter | | 0 |
| Attention-Based Keyword Localisation in Speech using Visual Grounding | | 0 |
| Referring Transformer: A One-step Approach to Multi-task Visual Grounding | Code | 1 |
| Learning Better Visual Dialog Agents with Pretrained Visual-Linguistic Representation | Code | 0 |
| SAT: 2D Semantics Assisted Training for 3D Visual Grounding | Code | 1 |
| Connecting What to Say With Where to Look by Modeling Human Attention Traces | Code | 1 |
| MDETR -- Modulated Detection for End-to-End Multi-Modal Understanding | Code | 1 |
| TransVG: End-to-End Visual Grounding with Transformers | Code | 1 |
| Look Before You Leap: Learning Landmark Features for One-Stage Visual Grounding | Code | 1 |
| Cyclic Co-Learning of Sounding Object Visual Grounding and Sound Separation | Code | 1 |
| Visual Grounding Strategies for Text-Only Natural Language Processing | | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Florence-2-large-ft | Accuracy (%) | 95.3 | | Unverified |
| 2 | mPLUG-2 | Accuracy (%) | 92.8 | | Unverified |
| 3 | X2-VLM (large) | Accuracy (%) | 92.1 | | Unverified |
| 4 | XFM (base) | Accuracy (%) | 90.4 | | Unverified |
| 5 | X2-VLM (base) | Accuracy (%) | 90.3 | | Unverified |
| 6 | X-VLM (base) | Accuracy (%) | 89 | | Unverified |
| 7 | HYDRA | IoU | 61.7 | | Unverified |
| 8 | HYDRA | IoU | 61.1 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Florence-2-large-ft | Accuracy (%) | 92 | | Unverified |
| 2 | mPLUG-2 | Accuracy (%) | 86.05 | | Unverified |
| 3 | X2-VLM (large) | Accuracy (%) | 81.8 | | Unverified |
| 4 | XFM (base) | Accuracy (%) | 79.8 | | Unverified |
| 5 | X2-VLM (base) | Accuracy (%) | 78.4 | | Unverified |
| 6 | X-VLM (base) | Accuracy (%) | 76.91 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Florence-2-large-ft | Accuracy (%) | 93.4 | | Unverified |
| 2 | mPLUG-2 | Accuracy (%) | 90.33 | | Unverified |
| 3 | X2-VLM (large) | Accuracy (%) | 87.6 | | Unverified |
| 4 | XFM (base) | Accuracy (%) | 86.1 | | Unverified |
| 5 | X2-VLM (base) | Accuracy (%) | 85.2 | | Unverified |
| 6 | X-VLM (base) | Accuracy (%) | 84.51 | | Unverified |
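
A note on the metrics above: in visual grounding benchmarks, "Accuracy (%)" conventionally counts a prediction as correct when its box overlaps the ground-truth box with IoU of at least 0.5; the tables do not state the threshold, so 0.5 is an assumption here. The HYDRA rows instead report IoU directly, presumably averaged over the test set. A minimal sketch of both computations:

```python
# Minimal sketch of standard grounding metrics: box IoU and accuracy
# at an IoU threshold. Boxes are (x0, y0, x1, y1). The 0.5 threshold
# is the common convention, not something stated on this page.
def iou(a, b):
    # Intersection rectangle between the two boxes.
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def grounding_accuracy(preds, gts, thresh=0.5):
    # Percentage of predictions whose IoU with ground truth meets the threshold.
    hits = sum(iou(p, g) >= thresh for p, g in zip(preds, gts))
    return 100.0 * hits / len(preds)

# Example: grounding_accuracy([(10, 10, 50, 50)], [(12, 8, 48, 52)]) -> 100.0
# (the pair has IoU ~0.83, which clears the 0.5 threshold)
```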