SOTAVerified

Visual Grounding

Visual Grounding (VG) aims to locate the most relevant object or region in an image based on a natural language query. The query can be a phrase, a sentence, or even a multi-round dialogue. VG poses three main challenges (a minimal inference sketch follows the list):

  • What is the main focus of the query?
  • How should the model understand the image?
  • How should the model localize the referred object?
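
As a concrete example, the sketch below runs phrase grounding with Florence-2-large-ft, the top-ranked model in the benchmark tables further down. It follows the usage pattern published on the model's Hugging Face card; the image path and query string are placeholders, and the task token and post-processing helper come from the checkpoint's own remote code, so details may vary between revisions.

    import torch
    from PIL import Image
    from transformers import AutoModelForCausalLM, AutoProcessor

    # Florence-2 ships its modeling code with the checkpoint, hence
    # trust_remote_code=True.
    model_id = "microsoft/Florence-2-large-ft"
    model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
    processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

    # Placeholder image and query.
    image = Image.open("example.jpg").convert("RGB")
    prompt = "<CAPTION_TO_PHRASE_GROUNDING>the dog on the left"

    inputs = processor(text=prompt, images=image, return_tensors="pt")
    with torch.no_grad():
        generated_ids = model.generate(
            input_ids=inputs["input_ids"],
            pixel_values=inputs["pixel_values"],
            max_new_tokens=256,
        )

    # Decode the generated sequence and parse it into phrase/box pairs.
    raw = processor.batch_decode(generated_ids, skip_special_tokens=False)[0]
    result = processor.post_process_generation(
        raw, task="<CAPTION_TO_PHRASE_GROUNDING>", image_size=image.size
    )
    print(result)  # {'<CAPTION_TO_PHRASE_GROUNDING>': {'bboxes': [...], 'labels': [...]}}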

Papers

Showing 276–300 of 571 papers

Title | Status | Hype
RoViST: Learning Robust Metrics for Visual Storytelling | Code | 0
ScanERU: Interactive 3D Visual Grounding based on Embodied Reference Understanding | Code | 0
SeCG: Semantic-Enhanced 3D Visual Grounding via Cross-modal Graph Attention | Code | 0
Self-view Grounding Given a Narrated 360° Video | Code | 0
Semantic query-by-example speech search using visual grounding | Code | 0
Shaking Up VLMs: Comparing Transformers and Structured State Space Models for Vision & Language Modeling | Code | 0
SiRi: A Simple Selective Retraining Mechanism for Transformer-based Visual Grounding | Code | 0
Smart Vision-Language Reasoners | Code | 0
SOrT-ing VQA Models: Contrastive Gradient Learning for Improved Consistency | Code | 0
To Find Waldo You Need Contextual Cues: Debiasing Who's Waldo | Code | 0
Toward Building General Foundation Models for Language, Vision, and Vision-Language Understanding Tasks | Code | 0
Towards CLIP-driven Language-free 3D Visual Grounding via 2D-3D Relational Enhancement and Consistency | Code | 0
Towards Unified Referring Expression Segmentation Across Omni-Level Visual Target Granularities | Code | 0
Uncovering the Full Potential of Visual Grounding Methods in VQA | Code | 0
OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence Learning Framework | Code | 0
UniMoCo: Unified Modality Completion for Robust Multi-Modal Embeddings | Code | 0
Unveiling the Compositional Ability Gap in Vision-Language Reasoning Model | Code | 0
ViGoR: Improving Visual Grounding of Large Vision Language Models with Fine-Grained Reward Modeling | Code | 0
Enhancing Visual Grounding and Generalization: A Multi-Task Cycle Training Approach for Vision-Language Models | Code | 0
Visual Contexts Clarify Ambiguous Expressions: A Benchmark Dataset | Code | 0
Visual Coreference Resolution in Visual Dialog using Neural Module Networks | Code | 0
Visually Grounded VQA by Lattice-based Retrieval | Code | 0
Visual Word2Vec (vis-w2v): Learning Visually Grounded Word Embeddings Using Abstract Scenes | Code | 0
WildRefer: 3D Object Localization in Large-scale Dynamic Scenes with Multi-modal Visual Data and Natural Language | Code | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Florence-2-large-ft | Accuracy (%) | 95.3 | | Unverified
2 | mPLUG-2 | Accuracy (%) | 92.8 | | Unverified
3 | X2-VLM (large) | Accuracy (%) | 92.1 | | Unverified
4 | XFM (base) | Accuracy (%) | 90.4 | | Unverified
5 | X2-VLM (base) | Accuracy (%) | 90.3 | | Unverified
6 | X-VLM (base) | Accuracy (%) | 89 | | Unverified
7 | HYDRA | IoU | 61.7 | | Unverified
8 | HYDRA | IoU | 61.1 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Florence-2-large-ft | Accuracy (%) | 92 | | Unverified
2 | mPLUG-2 | Accuracy (%) | 86.05 | | Unverified
3 | X2-VLM (large) | Accuracy (%) | 81.8 | | Unverified
4 | XFM (base) | Accuracy (%) | 79.8 | | Unverified
5 | X2-VLM (base) | Accuracy (%) | 78.4 | | Unverified
6 | X-VLM (base) | Accuracy (%) | 76.91 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Florence-2-large-ft | Accuracy (%) | 93.4 | | Unverified
2 | mPLUG-2 | Accuracy (%) | 90.33 | | Unverified
3 | X2-VLM (large) | Accuracy (%) | 87.6 | | Unverified
4 | XFM (base) | Accuracy (%) | 86.1 | | Unverified
5 | X2-VLM (base) | Accuracy (%) | 85.2 | | Unverified
6 | X-VLM (base) | Accuracy (%) | 84.51 | | Unverified
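
On referring-expression grounding benchmarks, an "Accuracy (%)" entry is conventionally Acc@0.5: a prediction counts as correct when its box overlaps the ground-truth box with an IoU of at least 0.5 (the HYDRA rows report IoU itself instead). A minimal, self-contained sketch of both quantities, using made-up box coordinates:

    from typing import List, Tuple

    Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2)

    def box_iou(a: Box, b: Box) -> float:
        """Intersection-over-Union of two axis-aligned boxes."""
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        union = area_a + area_b - inter
        return inter / union if union > 0 else 0.0

    def grounding_accuracy(preds: List[Box], golds: List[Box],
                           thresh: float = 0.5) -> float:
        """Percentage of predictions whose IoU with the gold box meets the threshold."""
        hits = sum(box_iou(p, g) >= thresh for p, g in zip(preds, golds))
        return 100.0 * hits / len(golds)

    # Toy example: two queries, one hit and one miss at IoU >= 0.5.
    preds = [(10, 10, 50, 50), (0, 0, 20, 20)]
    golds = [(12, 12, 52, 52), (40, 40, 80, 80)]
    print(grounding_accuracy(preds, golds))  # 50.0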