Visual Grounding

Visual Grounding (VG) aims to locate the object or region in an image that is most relevant to a natural language query. The query can be a phrase, a sentence, or even a multi-round dialogue. There are three main challenges in VG (a minimal sketch of the task interface follows the list):

  • identifying the main focus of the query;
  • understanding the visual content of the image;
  • localizing the object the query refers to.
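
The sketch below is purely illustrative: the `GroundingResult` type and the `ground` function are hypothetical names introduced here, not the API of any particular model. It only pins down the shape of the task: an image and a query go in, a scored box comes out.

```python
from dataclasses import dataclass

from PIL import Image


@dataclass
class GroundingResult:
    """One predicted region for a query: a box plus the model's confidence."""
    box: tuple[float, float, float, float]  # (x1, y1, x2, y2) in pixels
    score: float


def ground(image: Image.Image, query: str) -> GroundingResult:
    """Hypothetical grounding call: map (image, query) to the best-matching box.

    A real model would address the three challenges above: parse the query to
    find its focus, encode the image, and regress or rank candidate boxes.
    """
    raise NotImplementedError("plug a trained grounding model in here")


if __name__ == "__main__":
    img = Image.open("example.jpg")  # any local image
    result = ground(img, "the red mug to the left of the kettle")
    print(result.box, result.score)
```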

Papers

Showing 226-250 of 571 papers

| Title | Status | Hype |
|---|---|---|
| MedRG: Medical Report Grounding with Multi-modal Large Language Model | — | 0 |
| VHM: Versatile and Honest Vision Language Model for Remote Sensing Image Analysis | Code | 2 |
| AgentStudio: A Toolkit for Building General Virtual Agents | Code | 3 |
| Data-Efficient 3D Visual Grounding via Order-Aware Referring | — | 0 |
| Surgical-LVLM: Learning to Adapt Large Vision-Language Model for Grounded Visual Question Answering in Robotic Surgery | — | 0 |
| MedPromptX: Grounded Multimodal Prompting for Chest X-ray Diagnosis | Code | 2 |
| VidLA: Video-Language Alignment at Scale | — | 0 |
| Lexicon-Level Contrastive Visual-Grounding Improves Language Modeling | Code | 1 |
| Learning from Synthetic Data for Visual Grounding | — | 0 |
| Boosting Transferability in Vision-Language Attacks via Diversification along the Intersection Region of Adversarial Trajectory | Code | 1 |
| HYDRA: A Hyper Agent for Dynamic Compositional Visual Reasoning | Code | 1 |
| WaterVG: Waterway Visual Grounding based on Text-Guided Vision and mmWave Radar | — | 0 |
| Right Place, Right Time! Dynamizing Topological Graphs for Embodied Navigation | — | 0 |
| SeCG: Semantic-Enhanced 3D Visual Grounding via Cross-modal Graph Attention | Code | 0 |
| Detecting Concrete Visual Tokens for Multimodal Machine Translation | — | 0 |
| MiKASA: Multi-Key-Anchor & Scene-Aware Transformer for 3D Visual Grounding | Code | 1 |
| Adversarial Testing for Visual Grounding via Image-Aware Property Reduction | — | 0 |
| ShapeLLM: Universal 3D Object Understanding for Embodied Interaction | Code | 3 |
| OmniACT: A Dataset and Benchmark for Enabling Multimodal Generalist Autonomous Agents for Desktop and Web | — | 0 |
| Seeing is Believing: Mitigating Hallucination in Large Vision-Language Models via CLIP-Guided Decoding | Code | 1 |
| The Revolution of Multimodal Large Language Models: A Survey | Code | 2 |
| Beyond Literal Descriptions: Understanding and Locating Open-World Objects Aligned with Human Intentions | Code | 1 |
| LLMs as Bridges: Reformulating Grounded Multimodal Named Entity Recognition | Code | 1 |
| ViGoR: Improving Visual Grounding of Large Vision Language Models with Fine-Grained Reward Modeling | Code | 0 |
| Neural Slot Interpreters: Grounding Object Semantics in Emergent Slot Representations | — | 0 |
Page 10 of 23

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Florence-2-large-ft | Accuracy (%) | 95.3 | — | Unverified |
| 2 | mPLUG-2 | Accuracy (%) | 92.8 | — | Unverified |
| 3 | X2-VLM (large) | Accuracy (%) | 92.1 | — | Unverified |
| 4 | XFM (base) | Accuracy (%) | 90.4 | — | Unverified |
| 5 | X2-VLM (base) | Accuracy (%) | 90.3 | — | Unverified |
| 6 | X-VLM (base) | Accuracy (%) | 89 | — | Unverified |
| 7 | HYDRA | IoU | 61.7 | — | Unverified |
| 8 | HYDRA | IoU | 61.1 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Florence-2-large-ft | Accuracy (%) | 92 | — | Unverified |
| 2 | mPLUG-2 | Accuracy (%) | 86.05 | — | Unverified |
| 3 | X2-VLM (large) | Accuracy (%) | 81.8 | — | Unverified |
| 4 | XFM (base) | Accuracy (%) | 79.8 | — | Unverified |
| 5 | X2-VLM (base) | Accuracy (%) | 78.4 | — | Unverified |
| 6 | X-VLM (base) | Accuracy (%) | 76.91 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Florence-2-large-ft | Accuracy (%) | 93.4 | — | Unverified |
| 2 | mPLUG-2 | Accuracy (%) | 90.33 | — | Unverified |
| 3 | X2-VLM (large) | Accuracy (%) | 87.6 | — | Unverified |
| 4 | XFM (base) | Accuracy (%) | 86.1 | — | Unverified |
| 5 | X2-VLM (base) | Accuracy (%) | 85.2 | — | Unverified |
| 6 | X-VLM (base) | Accuracy (%) | 84.51 | — | Unverified |
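
The tables report two metrics. IoU is the intersection-over-union between the predicted and ground-truth boxes; Accuracy (%) in grounding benchmarks conventionally counts a prediction as correct when its IoU with the ground truth reaches 0.5. The page does not state the threshold, so 0.5 is an assumption here. A minimal sketch of both computations, assuming corner-format (x1, y1, x2, y2) boxes:

```python
Box = tuple[float, float, float, float]  # (x1, y1, x2, y2), corner format


def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0


def accuracy(preds: list[Box], gts: list[Box], thresh: float = 0.5) -> float:
    """Percentage of predictions whose IoU with the ground truth meets thresh."""
    hits = sum(iou(p, g) >= thresh for p, g in zip(preds, gts))
    return 100.0 * hits / len(gts)


# Example: one prediction whose overlap (IoU ~0.83) clears the 0.5 threshold.
print(accuracy([(10, 10, 50, 50)], [(12, 8, 48, 52)]))  # 100.0
```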