SOTAVerified

Vision-Aware Text Features in Referring Image Segmentation: From Object Understanding to Context Understanding

2024-04-12 · Code Available

Hai Nguyen-Truong, E-Ro Nguyen, Tuan-Anh Vu, Minh-Triet Tran, Binh-Son Hua, Sai-Kit Yeung

Abstract

Referring image segmentation is a challenging task that involves generating pixel-wise segmentation masks based on natural language descriptions. The complexity of this task increases with the intricacy of the sentences provided. Existing methods have relied mostly on visual features to generate the segmentation masks, treating text features as supporting components. However, this under-utilization of text understanding limits the model's capability to fully comprehend the given expressions. In this work, we propose a novel framework that specifically emphasizes object and context comprehension, inspired by human cognitive processes, through Vision-Aware Text Features. First, we introduce a CLIP Prior module to localize the main object of interest and embed the object heatmap into the query initialization process. Second, we propose a combination of two components, a Contextual Multimodal Decoder and a Meaning Consistency Constraint, to further enhance the coherent and consistent interpretation of language cues with the contextual understanding obtained from the image. Our method achieves significant performance improvements on three benchmark datasets: RefCOCO, RefCOCO+, and G-Ref. Project page: https://vatex.hkustvgd.com/.
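The CLIP Prior idea described in the abstract can be sketched as follows: score each visual patch embedding against the sentence embedding to obtain a soft localization heatmap, then use that heatmap to weight the query initialization. This is a minimal illustrative sketch, not the paper's implementation; the array shapes, random stand-in features, and the `clip_prior_heatmap` helper are all assumptions.

```python
import numpy as np

def clip_prior_heatmap(patch_feats, text_feat):
    """Cosine similarity between each image patch and the sentence
    embedding, rescaled to [0, 1] as a soft object-localization prior."""
    # L2-normalize both sides so the dot product is a cosine similarity.
    p = patch_feats / np.linalg.norm(patch_feats, axis=-1, keepdims=True)
    t = text_feat / np.linalg.norm(text_feat)
    sim = p @ t  # one similarity score per patch
    # Min-max rescale so the heatmap lies in [0, 1].
    return (sim - sim.min()) / (sim.max() - sim.min() + 1e-6)

# Random features stand in for real CLIP encoder outputs (hypothetical dims).
rng = np.random.default_rng(0)
patches = rng.standard_normal((49, 512))   # 7x7 grid of patch embeddings
sentence = rng.standard_normal(512)        # sentence embedding

heatmap = clip_prior_heatmap(patches, sentence)
# One plausible use of the prior: heatmap-weighted query initialization.
queries = heatmap[:, None] * patches
```

In the actual model the patch and sentence embeddings would come from a pretrained CLIP image/text encoder pair, and the heatmap would be injected into the segmentation decoder's query initialization rather than used as a bare elementwise weight.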

Benchmark Results

Dataset          | Model | Metric    | Claimed | Verified | Status
---------------- | ----- | --------- | ------- | -------- | ----------
DAVIS 2017 (val) | VATEX | J&F score | 65.4    |          | Unverified
RefCOCOg test    | VATEX | mIoU      | 70.58   |          | Unverified
RefCOCOg val     | VATEX | IoU       | 0.76    |          | Unverified
RefCOCO testA    | VATEX | mIoU      | 74.41   |          | Unverified
RefCOCO testA    | VATEX | mIoU      | 79.64   |          | Unverified
RefCOCO testB    | VATEX | mIoU      | 75.64   |          | Unverified
RefCOCO+ testB   | VATEX | mIoU      | 62.52   |          | Unverified
RefCOCO val      | VATEX | Mean IoU  | 70.02   |          | Unverified
RefCOCO val      | VATEX | mIoU      | 78.16   |          | Unverified
