SOTAVerified

Referring Expression Segmentation

The task aims at labeling the pixels of an image or video that represent the object instance referred to by a linguistic expression. The referring expression (RE) must unambiguously identify an individual object in the discourse or scene (the referent), so each expression picks out exactly one target instance.
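To make the input/output contract concrete, here is a minimal sketch of what any system on this benchmark consumes and produces. The ReferringSegmenter class, the segment signature, and the example expression are illustrative assumptions, not an API from any listed paper; only the contract itself (one image plus one unambiguous RE in, one binary pixel mask out) comes from the task definition above.

import numpy as np

class ReferringSegmenter:
    """Hypothetical interface for a referring-expression segmentation model."""

    def segment(self, image: np.ndarray, expression: str) -> np.ndarray:
        """Map an (H, W, 3) image and one referring expression to an
        (H, W) boolean mask selecting the pixels of the referent."""
        raise NotImplementedError  # stand-in for an actual model

# Usage sketch: the RE must pick out exactly one instance in the scene.
# model = SomeConcreteModel()        # e.g. any entry in the tables below
# mask = model.segment(image, "the dog to the left of the bench")
# assert mask.shape == image.shape[:2] and mask.dtype == np.bool_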

Papers

Showing 1–10 of 145 papers

Title | Status | Hype
DeRIS: Decoupling Perception and Cognition for Enhanced Referring Image Segmentation through Loopback Synergy | Code | 1
Mask-aware Text-to-Image Retrieval: Referring Expression Segmentation Meets Cross-modal Retrieval | – | 0
Refer to Anything with Vision-Language Prompts | – | 0
RemoteSAM: Towards Segment Anything for Earth Observation | Code | 3
VisionReasoner: Unified Visual Perception and Reasoning via Reinforcement Learning | Code | 4
RESAnything: Attribute Prompting for Arbitrary Referring Segmentation | – | 0
3DResT: A Strong Baseline for Semi-Supervised 3D Referring Expression Segmentation | – | 0
Towards Unified Referring Expression Segmentation Across Omni-Level Visual Target Granularities | Code | 0
GroundingSuite: Measuring Complex Multi-Granular Pixel Grounding | Code | 2
SegAgent: Exploring Pixel Understanding Capabilities in MLLMs by Imitating Human Annotator Trajectories | Code | 2

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | DeRIS-L | Mean IoU | 78.59 | – | Unverified
2 | MLCD-Seg-7B | Overall IoU | 75.6 | – | Unverified
3 | HyperSeg | Overall IoU | 75.2 | – | Unverified
4 | EVF-SAM | Overall IoU | 71.9 | – | Unverified
5 | DETRIS | Overall IoU | 70.2 | – | Unverified
6 | C3VG | Overall IoU | 68.95 | – | Unverified
7 | UniLSeg-100 | Overall IoU | 68.15 | – | Unverified
8 | UniLSeg-20 | Overall IoU | 66.99 | – | Unverified
9 | UNINEXT-H | Overall IoU | 66.22 | – | Unverified
10 | GROUNDHOG | Overall IoU | 64.9 | – | Unverified
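Note that the table mixes two metrics that are not directly comparable: Overall IoU (oIoU) accumulates intersections and unions over the whole test set before dividing, so large objects dominate the score, while Mean IoU (mIoU), as reported for DeRIS-L, averages per-example IoU so every referent counts equally. A minimal sketch of both, assuming each prediction and ground truth is a binary numpy mask of matching shape:

import numpy as np

def mean_iou(preds, gts):
    """mIoU: average of per-example IoU; every referent counts equally."""
    ious = []
    for p, g in zip(preds, gts):
        p, g = p.astype(bool), g.astype(bool)
        inter = np.logical_and(p, g).sum()
        union = np.logical_or(p, g).sum()
        ious.append(inter / union if union else 1.0)  # both empty: count as perfect
    return float(np.mean(ious))

def overall_iou(preds, gts):
    """oIoU: cumulative intersection over cumulative union across the
    test set; large objects contribute proportionally more pixels."""
    inter = union = 0
    for p, g in zip(preds, gts):
        p, g = p.astype(bool), g.astype(bool)
        inter += np.logical_and(p, g).sum()
        union += np.logical_or(p, g).sum()
    return float(inter / union) if union else 1.0

Because of this difference, DeRIS-L's 78.59 mIoU should not be read as strictly higher than MLCD-Seg-7B's 75.6 oIoU on the same scale.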