SOTAVerified

Visual Grounding

Visual Grounding (VG) aims to locate the most relevant object or region in an image based on a natural language query. The query can be a phrase, a sentence, or even a multi-round dialogue. There are three main challenges in VG:

  • What is the main focus of a query?
  • How to understand an image?
  • How to locate an object? (the standard IoU-based evaluation is sketched below)
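
In practice, a VG model answers each query with a bounding box, and predictions are scored against a ground-truth box either by raw IoU or by accuracy, which conventionally counts a prediction as correct when the IoU reaches 0.5 (Acc@0.5); both metrics appear in the benchmark tables below. Here is a minimal evaluation sketch; the function names and toy boxes are illustrative, not taken from any listed paper.

```python
def box_iou(box_a, box_b):
    """Intersection-over-Union of two boxes in (x1, y1, x2, y2) format."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0


def grounding_accuracy(predictions, ground_truths, threshold=0.5):
    """Percentage of queries whose predicted box matches the ground-truth
    box with IoU >= threshold (the common Acc@0.5 convention)."""
    hits = sum(
        box_iou(pred, gt) >= threshold
        for pred, gt in zip(predictions, ground_truths)
    )
    return 100.0 * hits / len(predictions)


# Toy example (hypothetical boxes): one hit and one miss at IoU 0.5.
preds = [(10, 10, 50, 50), (0, 0, 20, 20)]
gts = [(12, 12, 52, 52), (30, 30, 60, 60)]
print(grounding_accuracy(preds, gts))  # -> 50.0
```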

Papers

Showing 301–325 of 571 papers

| Title | Status | Hype |
| --- | --- | --- |
| Context Does Matter: End-to-end Panoptic Narrative Grounding with Deformable Attention Refined Matching Network | Code | 0 |
| OV-VG: A Benchmark for Open-Vocabulary Visual Grounding | Code | 1 |
| Visual Grounding Helps Learn Word Meanings in Low-Data Regimes | Code | 1 |
| InViG: Benchmarking Interactive Visual Grounding with 500K Human-Robot Interactions | Code | 0 |
| NICE: Improving Panoptic Narrative Detection and Segmentation with Cascading Collaborative Learning | Code | 0 |
| Set-of-Mark Prompting Unleashes Extraordinary Visual Grounding in GPT-4V | Code | 4 |
| MiniGPT-v2: large language model as a unified interface for vision-language multi-task learning | Code | 7 |
| From CLIP to DINO: Visual Encoders Shout in Multi-modal Large Language Models | Code | 2 |
| CoT3DRef: Chain-of-Thoughts Data-Efficient 3D Visual Grounding | Code | 1 |
| Rephrase, Augment, Reason: Visual Grounding of Questions for Vision-Language Models | Code | 1 |
| Lightweight In-Context Tuning for Multimodal Unified Models | — | 0 |
| LLM-Grounder: Open-Vocabulary 3D Visual Grounding with Large Language Model as an Agent | Code | 2 |
| Object2Scene: Putting Objects in Context for Open-Vocabulary 3D Detection | — | 0 |
| PROGrasp: Pragmatic Human-Robot Communication for Object Grasping | Code | 1 |
| Multi3DRefer: Grounding Text Description to Multiple 3D Objects | Code | 1 |
| Collecting Visually-Grounded Dialogue with A Game Of Sorts | Code | 0 |
| Four Ways to Improve Verbo-visual Fusion for Dense 3D Visual Grounding | — | 0 |
| Interpretable Visual Question Answering via Reasoning Supervision | — | 0 |
| DetermiNet: A Large-Scale Diagnostic Dataset for Complex Visually-Grounded Referencing using Determiners | Code | 0 |
| VGDiffZero: Text-to-image Diffusion Models Can Be Zero-shot Visual Grounders | Code | 1 |
| FACET: Fairness in Computer Vision Evaluation Benchmark | — | 0 |
| WALL-E: Embodied Robotic WAiter Load Lifting with Large Language Model | — | 0 |
| UniPT: Universal Parallel Tuning for Transfer Learning with Efficient Parameter and Memory | Code | 1 |
| HuBo-VLM: Unified Vision-Language Model designed for HUman roBOt interaction tasks | Code | 0 |
| Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond | Code | 5 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Florence-2-large-ft | Accuracy (%) | 95.3 | — | Unverified |
| 2 | mPLUG-2 | Accuracy (%) | 92.8 | — | Unverified |
| 3 | X2-VLM (large) | Accuracy (%) | 92.1 | — | Unverified |
| 4 | XFM (base) | Accuracy (%) | 90.4 | — | Unverified |
| 5 | X2-VLM (base) | Accuracy (%) | 90.3 | — | Unverified |
| 6 | X-VLM (base) | Accuracy (%) | 89 | — | Unverified |
| 7 | HYDRA | IoU | 61.7 | — | Unverified |
| 8 | HYDRA | IoU | 61.1 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Florence-2-large-ft | Accuracy (%) | 92 | — | Unverified |
| 2 | mPLUG-2 | Accuracy (%) | 86.05 | — | Unverified |
| 3 | X2-VLM (large) | Accuracy (%) | 81.8 | — | Unverified |
| 4 | XFM (base) | Accuracy (%) | 79.8 | — | Unverified |
| 5 | X2-VLM (base) | Accuracy (%) | 78.4 | — | Unverified |
| 6 | X-VLM (base) | Accuracy (%) | 76.91 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Florence-2-large-ft | Accuracy (%) | 93.4 | — | Unverified |
| 2 | mPLUG-2 | Accuracy (%) | 90.33 | — | Unverified |
| 3 | X2-VLM (large) | Accuracy (%) | 87.6 | — | Unverified |
| 4 | XFM (base) | Accuracy (%) | 86.1 | — | Unverified |
| 5 | X2-VLM (base) | Accuracy (%) | 85.2 | — | Unverified |
| 6 | X-VLM (base) | Accuracy (%) | 84.51 | — | Unverified |