
Visual Grounding

Visual Grounding (VG) aims to locate the most relevant object or region in an image based on a natural language query. The query can be a phrase, a sentence, or even a multi-round dialogue. There are three main challenges in VG (a minimal pipeline sketch follows the list):

  • What is the main focus of the query?
  • How should the image be understood?
  • How should the target object be localized?
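
These three challenges map roughly onto the stages of a grounding pipeline: parse the query, encode the image, and score candidate regions against the query. The sketch below is a hypothetical, minimal Python interface for such a pipeline; `GroundingModel` and `GroundingResult` are illustrative names only, not the API of any system listed on this page.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class GroundingResult:
    box_xyxy: List[float]  # predicted region as [x1, y1, x2, y2] in pixel coordinates
    score: float           # model confidence for the predicted region


class GroundingModel:
    """Hypothetical interface: image + natural-language query -> grounded region."""

    def predict(self, image_path: str, query: str) -> GroundingResult:
        # 1. Parse the query to identify its main focus (e.g. "the red mug on the left").
        # 2. Encode the image into region or patch features.
        # 3. Score each candidate region against the query and return the best match.
        raise NotImplementedError  # concrete models implement these steps differently


if __name__ == "__main__":
    model = GroundingModel()
    # Illustrative usage (requires a concrete implementation):
    # result = model.predict("kitchen.jpg", "the red mug on the left")
    # print(result.box_xyxy, result.score)
```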

Papers

Showing 101–125 of 571 papers

Title | Status | Hype
Are VLMs Ready for Autonomous Driving? An Empirical Study from the Reliability, Data, and Metric Perspectives | Code | 5
EAGLE: Enhanced Visual Grounding Minimizes Hallucinations in Instructional Multimodal Models | — | 0
ViGiL3D: A Linguistically Diverse Dataset for 3D Visual Grounding | — | 0
Seeing Speech and Sound: Distinguishing and Locating Audio Sources in Visual Scenes | — | 0
Mitigating Object Hallucinations in Large Vision-Language Models with Assembly of Global and Local Attention | Code | 2
Beyond Human Perception: Understanding Multi-Object World from Monocular View | Code | 0
VideoGLaMM: A Large Multimodal Model for Pixel-Level Visual Grounding in Videos | — | 0
Ges3ViG: Incorporating Pointing Gestures into Language-Based 3D Visual Grounding for Embodied Reference Understanding | Code | 0
Task-aware Cross-modal Feature Refinement Transformer with Large Language Models for Visual Grounding | — | 0
Towards Visual Grounding: A Survey | Code | 3
Referencing Where to Focus: Improving Visual Grounding with Referential Query | — | 0
Reasoning to Attend: Try to Understand How <SEG> Token Works | Code | 2
CoF: Coarse to Fine-Grained Image Understanding for Multi-modal Large Language Models | Code | 0
Aria-UI: Visual Grounding for GUI Instructions | Code | 3
EarthDial: Turning Multi-sensory Earth Observations to Interactive Dialogues | — | 0
FiVL: A Framework for Improved Vision-Language Alignment | Code | 0
GAGS: Granularity-Aware Feature Distillation for Language Gaussian Splatting | — | 0
DeepSeek-VL2: Mixture-of-Experts Vision-Language Models for Advanced Multimodal Understanding | Code | 9
Progressive Multi-granular Alignments for Grounded Reasoning in Large Vision-Language Models | Code | 0
Barking Up The Syntactic Tree: Enhancing VLM Training with Syntactic Losses | — | 0
3D Spatial Understanding in MLLMs: Disambiguation and Evaluation | — | 0
TACO: Learning Multi-modal Action Models with Synthetic Chains-of-Thought-and-Action | Code | 2
Expanding Performance Boundaries of Open-Source Multimodal Models with Model, Data, and Test-Time Scaling | Code | 0
M^3D: A Multimodal, Multilingual and Multitask Dataset for Grounded Document-level Information Extraction | Code | 0
SeeGround: See and Ground for Zero-Shot Open-Vocabulary 3D Visual Grounding | — | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Florence-2-large-ft | Accuracy (%) | 95.3 | — | Unverified
2 | mPLUG-2 | Accuracy (%) | 92.8 | — | Unverified
3 | X2-VLM (large) | Accuracy (%) | 92.1 | — | Unverified
4 | XFM (base) | Accuracy (%) | 90.4 | — | Unverified
5 | X2-VLM (base) | Accuracy (%) | 90.3 | — | Unverified
6 | X-VLM (base) | Accuracy (%) | 89 | — | Unverified
7 | HYDRA | IoU | 61.7 | — | Unverified
8 | HYDRA | IoU | 61.1 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Florence-2-large-ft | Accuracy (%) | 92 | — | Unverified
2 | mPLUG-2 | Accuracy (%) | 86.05 | — | Unverified
3 | X2-VLM (large) | Accuracy (%) | 81.8 | — | Unverified
4 | XFM (base) | Accuracy (%) | 79.8 | — | Unverified
5 | X2-VLM (base) | Accuracy (%) | 78.4 | — | Unverified
6 | X-VLM (base) | Accuracy (%) | 76.91 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Florence-2-large-ft | Accuracy (%) | 93.4 | — | Unverified
2 | mPLUG-2 | Accuracy (%) | 90.33 | — | Unverified
3 | X2-VLM (large) | Accuracy (%) | 87.6 | — | Unverified
4 | XFM (base) | Accuracy (%) | 86.1 | — | Unverified
5 | X2-VLM (base) | Accuracy (%) | 85.2 | — | Unverified
6 | X-VLM (base) | Accuracy (%) | 84.51 | — | Unverified
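
A note on the metrics above: in visual grounding benchmarks, "Accuracy (%)" usually means the percentage of queries whose predicted box overlaps the ground-truth box with an Intersection-over-Union (IoU) of at least 0.5, while "IoU" typically reports the mean overlap itself. The sketch below illustrates both computations for axis-aligned [x1, y1, x2, y2] boxes; it is an illustrative assumption, and the exact protocol behind each number is defined by the corresponding benchmark and paper.

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes given as [x1, y1, x2, y2]."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0


def accuracy_at_iou(preds, gts, threshold=0.5):
    """Percentage of predictions whose IoU with the ground truth meets the threshold (Acc@0.5)."""
    hits = sum(iou(p, g) >= threshold for p, g in zip(preds, gts))
    return 100.0 * hits / len(gts)


def mean_iou(preds, gts):
    """Mean IoU (as a percentage) over all query-prediction pairs."""
    return 100.0 * sum(iou(p, g) for p, g in zip(preds, gts)) / len(gts)


# Toy example: one of two predictions overlaps its ground truth enough, the other not at all.
preds = [[10, 10, 50, 50], [0, 0, 20, 20]]
gts = [[12, 12, 52, 52], [40, 40, 80, 80]]
print(accuracy_at_iou(preds, gts))       # 50.0
print(round(mean_iou(preds, gts), 1))    # 41.1
```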