SOTAVerified

Referring Expression Segmentation

The task is to label the pixels of an image or video that belong to the object instance referred to by a linguistic expression. The referring expression (RE) must single out an individual object in the discourse or scene (the referent); that is, an RE unambiguously identifies the target instance.
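The following is a minimal sketch of the task's input/output contract, assuming a hypothetical `segment_referent` function (not any specific model listed below): given an RGB image and a referring expression, the output is a per-pixel binary mask covering the referred instance.

```python
import numpy as np

def segment_referent(image: np.ndarray, expression: str) -> np.ndarray:
    """Stub illustrating the task interface: a real model would ground
    `expression` in `image`; here we only return an all-False mask of the
    same spatial size to show the expected shapes."""
    h, w = image.shape[:2]
    return np.zeros((h, w), dtype=bool)

image = np.zeros((480, 640, 3), dtype=np.uint8)        # H x W x 3 RGB image
mask = segment_referent(image, "the dog on the left")  # H x W boolean mask
assert mask.shape == image.shape[:2]
```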

Papers

Showing 1–10 of 145 papers

Title | Status | Hype
DeRIS: Decoupling Perception and Cognition for Enhanced Referring Image Segmentation through Loopback Synergy | Code | 1
Mask-aware Text-to-Image Retrieval: Referring Expression Segmentation Meets Cross-modal Retrieval | — | 0
Refer to Anything with Vision-Language Prompts | — | 0
RemoteSAM: Towards Segment Anything for Earth Observation | Code | 3
VisionReasoner: Unified Visual Perception and Reasoning via Reinforcement Learning | Code | 4
RESAnything: Attribute Prompting for Arbitrary Referring Segmentation | — | 0
3DResT: A Strong Baseline for Semi-Supervised 3D Referring Expression Segmentation | — | 0
Towards Unified Referring Expression Segmentation Across Omni-Level Visual Target Granularities | Code | 0
GroundingSuite: Measuring Complex Multi-Granular Pixel Grounding | Code | 2
SegAgent: Exploring Pixel Understanding Capabilities in MLLMs by Imitating Human Annotator Trajectories | Code | 2

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | DeRIS-L | Mean IoU | 81.32 | — | Unverified
2 | UniLSeg-100 | Overall IoU | 80.54 | — | Unverified
3 | MLCD-Seg-7B | Overall IoU | 80.5 | — | Unverified
4 | UniLSeg-20 | Overall IoU | 79.47 | — | Unverified
5 | HyperSeg | Overall IoU | 78.9 | — | Unverified
6 | EVF-SAM | Overall IoU | 78.3 | — | Unverified
7 | C3VG | Overall IoU | 76.39 | — | Unverified
8 | DETRIS | Overall IoU | 75.3 | — | Unverified
9 | GROUNDHOG | Overall IoU | 74.6 | — | Unverified
10 | MaskRIS (Swin-B, combined DB) | Overall IoU | 71.09 | — | Unverified
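For reference, the two metrics in the table differ in how they aggregate over the test set: Overall IoU accumulates intersection and union pixel counts across all samples before dividing, while Mean IoU averages the per-sample IoU. The sketch below illustrates both with hypothetical helper names (`overall_iou`, `mean_iou`) on binary NumPy masks; it is not code from any listed paper, and the leaderboard values correspond to these quantities expressed as percentages.

```python
import numpy as np

def overall_iou(preds, gts):
    """Cumulative IoU: total intersection / total union over all samples."""
    inter = sum(np.logical_and(p, g).sum() for p, g in zip(preds, gts))
    union = sum(np.logical_or(p, g).sum() for p, g in zip(preds, gts))
    return inter / union if union > 0 else 0.0

def mean_iou(preds, gts):
    """Average of per-sample IoU (empty-union samples scored as 1.0 here; a convention choice)."""
    scores = []
    for p, g in zip(preds, gts):
        inter = np.logical_and(p, g).sum()
        union = np.logical_or(p, g).sum()
        scores.append(inter / union if union > 0 else 1.0)
    return float(np.mean(scores))

# Toy usage: ground truth covers a 2x2 square, prediction a 2x3 rectangle.
gt = np.zeros((4, 4), dtype=bool); gt[1:3, 1:3] = True
pred = np.zeros((4, 4), dtype=bool); pred[1:3, 1:4] = True
print(overall_iou([pred], [gt]), mean_iou([pred], [gt]))  # both ~0.667 for a single sample
```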