SOTAVerified

Referring Expression

Referring expression comprehension takes an image and a natural-language description and places a bounding box around the instance the description refers to.
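As a toy illustration of that input/output contract (not any benchmarked system's API; `locate` and the keyword-overlap scoring below are invented for this sketch), a referring-expression model consumes an image plus an expression and emits one box:

```python
def locate(image, expression, candidates):
    """Toy referring-expression 'model': given per-candidate region
    descriptions, return the box of the candidate whose description
    shares the most words with the query. Real systems learn this
    matching end-to-end; `image` is unused in this placeholder."""
    query = set(expression.lower().split())

    def score(c):
        return len(query & set(c["description"].lower().split()))

    best = max(candidates, key=score)
    return best["box"]

regions = [
    {"description": "a black dog on the left", "box": (12, 40, 118, 150)},
    {"description": "a red car parked near the curb", "box": (200, 60, 360, 180)},
]
locate(None, "the dog on the left", regions)  # → (12, 40, 118, 150)
```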

Papers

Showing 150 of 364 papers

Title | Status | Hype
4th PVUW MeViS 3rd Place Report: Sa2VA | Code | 5
Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection | Code | 5
Set-of-Mark Prompting Unleashes Extraordinary Visual Grounding in GPT-4V | Code | 4
RemoteSAM: Towards Segment Anything for Earth Observation | Code | 3
Towards Visual Grounding: A Survey | Code | 3
EVF-SAM: Early Vision-Language Fusion for Text-Prompted Segment Anything Model | Code | 3
PSALM: Pixelwise SegmentAtion with Large Multi-Modal Model | Code | 3
Universal Instance Perception as Object Discovery and Retrieval | Code | 3
TextRegion: Text-Aligned Region Tokens from Frozen Image-Text Models | Code | 2
GroundingSuite: Measuring Complex Multi-Granular Pixel Grounding | Code | 2
Text4Seg: Reimagining Image Segmentation as Text Generation | Code | 2
SAM4MLLM: Enhance Multi-Modal Large Language Model for Referring Expression Segmentation | Code | 2
Revisiting Referring Expression Comprehension Evaluation in the Era of Large Multimodal Models | Code | 2
F-LMM: Grounding Frozen Large Multimodal Models | Code | 2
Decoupling Static and Hierarchical Motion Perception for Referring Video Segmentation | Code | 2
Elysium: Exploring Object-level Perception in Videos via MLLM | Code | 2
Unveiling Parts Beyond Objects: Towards Finer-Granularity Referring Expression Segmentation | Code | 2
NExT-Chat: An LMM for Chat, Detection and Segmentation | Code | 2
GLaMM: Pixel Grounding Large Multimodal Model | Code | 2
GREC: Generalized Referring Expression Comprehension | Code | 2
GRES: Generalized Referring Expression Segmentation | Code | 2
MDETR - Modulated Detection for End-to-End Multi-Modal Understanding | Code | 2
Exploring Contextual Attribute Density in Referring Expression Counting | Code | 1
IteRPrimE: Zero-shot Referring Image Segmentation with Iterative Grad-CAM Refinement and Primary Word Emphasis | Code | 1
New Dataset and Methods for Fine-Grained Compositional Referring Expression Comprehension via Specialist-MLLM Collaboration | Code | 1
PixFoundation: Are We Heading in the Right Direction with Pixel-level Vision Foundation Models? | Code | 1
RefDrone: A Challenging Benchmark for Referring Expression Comprehension in Drone Scenes | Code | 1
NAVER: A Neuro-Symbolic Compositional Automaton for Visual Grounding with Explicit Logic Reasoning | Code | 1
Multi-task Visual Grounding with Coarse-to-Fine Consistency Constraints | Code | 1
IPDN: Image-enhanced Prompt Decoding Network for 3D Referring Expression Segmentation | Code | 1
RG-SAN: Rule-Guided Spatial Awareness Network for End-to-End 3D Referring Expression Segmentation | Code | 1
Cross-Modal Bidirectional Interaction Model for Referring Remote Sensing Image Segmentation | Code | 1
Uni-Med: A Unified Medical Generalist Foundation Model For Multi-Task Learning Via Connector-MoE | Code | 1
FineCops-Ref: A new Dataset and Task for Fine-Grained Compositional Referring Expression Comprehension | Code | 1
Exploring Fine-Grained Image-Text Alignment for Referring Remote Sensing Image Segmentation | Code | 1
MaPPER: Multimodal Prior-guided Parameter Efficient Tuning for Referring Expression Comprehension | Code | 1
LLM-wrapper: Black-Box Semantic-Aware Adaptation of Vision-Language Models for Referring Expression Comprehension | Code | 1
3D-GRES: Generalized 3D Referring Expression Segmentation | Code | 1
Multi-branch Collaborative Learning Network for 3D Visual Grounding | Code | 1
Referring Atomic Video Action Recognition | Code | 1
SAM as the Guide: Mastering Pseudo-Label Refinement in Semi-Supervised Referring Expression Segmentation | Code | 1
CoHD: A Counting-Aware Hierarchical Decoding Framework for Generalized Referring Expression Segmentation | Code | 1
Talk2Radar: Bridging Natural Language with 4D mmWave Radar for 3D Referring Expression Comprehension | Code | 1
DetToolChain: A New Prompting Paradigm to Unleash Detection Ability of MLLM | Code | 1
Multi-modal Instruction Tuned LLMs with Fine-grained Visual Perception | Code | 1
LLMs as Bridges: Reformulating Grounded Multimodal Named Entity Recognition | Code | 1
An Open and Comprehensive Pipeline for Unified Object Grounding and Detection | Code | 1
Referring Expression Counting | Code | 1
Tune-An-Ellipse: CLIP Has Potential to Find What You Want | Code | 1
Page 1 of 8

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Random | Acc@0.5m | 14.6 | | Unverified
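Accuracy metrics for grounding are typically computed by thresholding the overlap between predicted and ground-truth boxes. For the common IoU-based 2D variant, Acc@0.5 (the table's Acc@0.5m may instead denote a distance threshold in meters), a minimal generic sketch, not this site's evaluation code:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)

    def area(r):
        return max(0.0, r[2] - r[0]) * max(0.0, r[3] - r[1])

    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def acc_at_iou(preds, gts, thresh=0.5):
    """Fraction of predicted boxes whose IoU with the paired ground-truth
    box meets the threshold (each prediction matched to one ground truth)."""
    hits = sum(iou(p, g) >= thresh for p, g in zip(preds, gts))
    return hits / len(gts)
```

A random baseline scores low under this metric because an arbitrary box rarely overlaps the referred instance by 50%.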