
Visual Grounding

Visual Grounding (VG) aims to locate the most relevant object or region in an image based on a natural language query. The query can be a phrase, a sentence, or even a multi-round dialogue. There are three main challenges in VG (a minimal sketch of the task's input/output contract follows this list):

  • What is the main focus of the query?
  • How should the image be understood?
  • How can the referred object be localized?
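
To make that contract concrete, here is a minimal sketch in Python. The `ground` function, the `Box` alias, and the placeholder behavior are illustrative assumptions, not taken from any listed paper: the input is an image plus a free-form query, and the output is a single bounding box.

```python
from typing import Tuple

# Illustrative convention: (x1, y1, x2, y2) in pixel coordinates.
Box = Tuple[float, float, float, float]

def ground(image_size: Tuple[int, int], query: str) -> Box:
    """Placeholder grounder: always predicts the full-image box.

    A real VG model would encode the image and the query jointly and
    either regress a box directly or rank region proposals; this stub
    only pins down the input/output contract.
    """
    width, height = image_size
    return (0.0, 0.0, float(width), float(height))

# Example: a phrase query over a 640x480 image.
print(ground((640, 480), "the dog on the left"))  # (0.0, 0.0, 640.0, 480.0)
```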

Papers

Showing 201–250 of 571 papers

| Title | Status | Hype |
|-------|--------|------|
| Visual Grounding Helps Learn Word Meanings in Low-Data Regimes | Code | 1 |
| How Do Multimodal Large Language Models Handle Complex Multimodal Reasoning? Placing Them in An Extensible Escape Game | Code | 1 |
| Pseudo-Q: Generating Pseudo Language Queries for Visual Grounding | Code | 1 |
| Shifting More Attention to Visual Backbone: Query-modulated Refinement Networks for End-to-End Visual Grounding | Code | 1 |
| Learning Better Visual Dialog Agents with Pretrained Visual-Linguistic Representation | Code | 0 |
| Smart Vision-Language Reasoners | Code | 0 |
| SOrT-ing VQA Models: Contrastive Gradient Learning for Improved Consistency | Code | 0 |
| Dual Attention Networks for Visual Reference Resolution in Visual Dialog | Code | 0 |
| Language learning using Speech to Image retrieval | Code | 0 |
| Language-Guided Diffusion Model for Visual Grounding | Code | 0 |
| DrishtiKon: Multi-Granular Visual Grounding for Text-Rich Document Images | Code | 0 |
| Language Adaptive Weight Generation for Multi-task Visual Grounding | Code | 0 |
| SiRi: A Simple Selective Retraining Mechanism for Transformer-based Visual Grounding | Code | 0 |
| Semantic sentence similarity: size does not always matter | Code | 0 |
| Beyond task success: A closer look at jointly learning to see, ask, and GuessWhat | Code | 0 |
| Semantic query-by-example speech search using visual grounding | Code | 0 |
| InViG: Benchmarking Interactive Visual Grounding with 500K Human-Robot Interactions | Code | 0 |
| Self-view Grounding Given a Narrated 360° Video | Code | 0 |
| Introspective Learning: A Two-Stage Approach for Inference in Neural Networks | Code | 0 |
| Discovering the Unknown Knowns: Turning Implicit Knowledge in the Dataset into Explicit Training Examples for Visual Question Answering | Code | 0 |
| SeCG: Semantic-Enhanced 3D Visual Grounding via Cross-modal Graph Attention | Code | 0 |
| ScanERU: Interactive 3D Visual Grounding based on Embodied Reference Understanding | Code | 0 |
| RSVG: Exploring Data and Models for Visual Grounding on Remote Sensing Data | Code | 0 |
| Beyond Human Perception: Understanding Multi-Object World from Monocular View | Code | 0 |
| Seeing the advantage: visually grounding word embeddings to better capture human semantic knowledge | Code | 0 |
| Shaking Up VLMs: Comparing Transformers and Structured State Space Models for Vision & Language Modeling | Code | 0 |
| DetermiNet: A Large-Scale Diagnostic Dataset for Complex Visually-Grounded Referencing using Determiners | Code | 0 |
| A Better Loss for Visual-Textual Grounding | Code | 0 |
| Rethinking Diversified and Discriminative Proposal Generation for Visual Grounding | Code | 0 |
| Investigating Compositional Challenges in Vision-Language Models for Visual Grounding | Code | 0 |
| Revisiting Visual Question Answering Baselines | Code | 0 |
| Behind the Magic, MERLIM: Multi-modal Evaluation Benchmark for Large Image-Language Models | Code | 0 |
| ResVG: Enhancing Relation and Semantic Understanding in Multiple Instances for Visual Grounding | Code | 0 |
| Enhancing Interpretability and Interactivity in Robot Manipulation: A Neurosymbolic Approach | Code | 0 |
| Rethinking 3D Dense Caption and Visual Grounding in A Unified Framework through Prompt-based Localization | Code | 0 |
| RoViST: Learning Robust Metrics for Visual Storytelling | Code | 0 |
| HuBo-VLM: Unified Vision-Language Model designed for HUman roBOt interaction tasks | Code | 0 |
| Deconfounded Visual Grounding | Code | 0 |
| Progressive Multi-granular Alignments for Grounded Reasoning in Large Vision-Language Models | Code | 0 |
| HiFi-CS: Towards Open Vocabulary Visual Grounding For Robotic Grasping Using Vision-Language Models | Code | 0 |
| Phrase Decoupling Cross-Modal Hierarchical Matching and Progressive Position Correction for Visual Grounding | Code | 0 |
| GVCCI: Lifelong Learning of Visual Grounding for Language-Guided Robotic Manipulation | Code | 0 |
| CXReasonBench: A Benchmark for Evaluating Structured Diagnostic Reasoning in Chest X-rays | Code | 0 |
| Not (yet) the whole story: Evaluating Visual Storytelling Requires More than Measuring Coherence, Grounding, and Repetition | Code | 0 |
| Language with Vision: a Study on Grounded Word and Sentence Embeddings | Code | 0 |
| Grounding of Textual Phrases in Images by Reconstruction | Code | 0 |
| NICE: Improving Panoptic Narrative Detection and Segmentation with Cascading Collaborative Learning | Code | 0 |
| GROOViST: A Metric for Grounding Objects in Visual Storytelling | Code | 0 |
| Griffon-G: Bridging Vision-Language and Vision-Centric Tasks via Large Multimodal Models | Code | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | Florence-2-large-ft | Accuracy (%) | 95.3 | | Unverified |
| 2 | mPLUG-2 | Accuracy (%) | 92.8 | | Unverified |
| 3 | X2-VLM (large) | Accuracy (%) | 92.1 | | Unverified |
| 4 | XFM (base) | Accuracy (%) | 90.4 | | Unverified |
| 5 | X2-VLM (base) | Accuracy (%) | 90.3 | | Unverified |
| 6 | X-VLM (base) | Accuracy (%) | 89 | | Unverified |
| 7 | HYDRA | IoU | 61.7 | | Unverified |
| 8 | HYDRA | IoU | 61.1 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | Florence-2-large-ft | Accuracy (%) | 92 | | Unverified |
| 2 | mPLUG-2 | Accuracy (%) | 86.05 | | Unverified |
| 3 | X2-VLM (large) | Accuracy (%) | 81.8 | | Unverified |
| 4 | XFM (base) | Accuracy (%) | 79.8 | | Unverified |
| 5 | X2-VLM (base) | Accuracy (%) | 78.4 | | Unverified |
| 6 | X-VLM (base) | Accuracy (%) | 76.91 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | Florence-2-large-ft | Accuracy (%) | 93.4 | | Unverified |
| 2 | mPLUG-2 | Accuracy (%) | 90.33 | | Unverified |
| 3 | X2-VLM (large) | Accuracy (%) | 87.6 | | Unverified |
| 4 | XFM (base) | Accuracy (%) | 86.1 | | Unverified |
| 5 | X2-VLM (base) | Accuracy (%) | 85.2 | | Unverified |
| 6 | X-VLM (base) | Accuracy (%) | 84.51 | | Unverified |
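
For context on the metrics above: on visual grounding leaderboards, Accuracy (%) conventionally means the percentage of queries whose predicted box overlaps the ground-truth box with an intersection-over-union (IoU) of at least some threshold, commonly 0.5 (Acc@0.5). The 0.5 threshold is an assumption here, since the tables do not state it, and the corner-format boxes below are likewise only a common convention. A minimal sketch:

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two boxes given as (x1, y1, x2, y2)."""
    # Coordinates of the intersection rectangle.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def grounding_accuracy(predicted, ground_truth, threshold=0.5):
    """Percentage of predictions whose IoU with the ground truth meets the threshold."""
    hits = sum(iou(p, g) >= threshold for p, g in zip(predicted, ground_truth))
    return 100.0 * hits / len(predicted)

# Toy example: one exact hit (IoU = 1.0) and one complete miss (IoU = 0.0).
preds = [(10, 10, 50, 50), (0, 0, 20, 20)]
gts   = [(10, 10, 50, 50), (30, 30, 60, 60)]
print(grounding_accuracy(preds, gts))  # 50.0
```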