SOTAVerified

Textual Query-Driven Mask Transformer for Domain Generalized Segmentation

2024-07-12

Byeonghyun Pak, Byeongju Woo, Sunghwan Kim, Dae-hwan Kim, Hoseong Kim

Code Available — Be the first to reproduce this paper.


Abstract

In this paper, we introduce a method to tackle Domain Generalized Semantic Segmentation (DGSS) by utilizing domain-invariant semantic knowledge from the text embeddings of vision-language models. We employ the text embeddings as object queries within a transformer-based segmentation framework (textual object queries). These queries are regarded as a domain-invariant basis for pixel grouping in DGSS. To leverage the power of textual object queries, we introduce a novel framework named the textual query-driven mask transformer (tqdm). Our tqdm aims to (1) generate textual object queries that maximally encode domain-invariant semantics and (2) enhance the semantic clarity of dense visual features. Additionally, we suggest three regularization losses to improve the efficacy of tqdm by aligning visual and textual features. With our method, the model can comprehend inherent semantic information for the classes of interest, enabling it to generalize to extreme domains (e.g., sketch style). Our tqdm achieves 68.9 mIoU on GTA5 → Cityscapes, outperforming the prior state-of-the-art method by 2.5 mIoU. The project page is available at https://byeonghyunpak.github.io/tqdm.
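The core idea of textual object queries can be sketched in a few lines: class-name text embeddings act as queries, and each pixel is grouped by its similarity to those queries. The snippet below is a minimal illustration, not the authors' implementation — random vectors stand in for the CLIP text embeddings and dense visual features, and the cross-attention refinement and regularization losses of tqdm are omitted.

```python
import numpy as np

def textual_query_masks(text_embeds, visual_feats):
    """Group pixels by cosine similarity to textual object queries.

    text_embeds: (C, D), one embedding per class name (stand-in for a
        frozen CLIP text encoder's output; random here).
    visual_feats: (H*W, D), dense per-pixel visual features.
    Returns mask logits of shape (C, H*W).
    """
    # L2-normalize both sides so the dot product is a cosine similarity
    q = text_embeds / np.linalg.norm(text_embeds, axis=1, keepdims=True)
    v = visual_feats / np.linalg.norm(visual_feats, axis=1, keepdims=True)
    return q @ v.T  # (C, H*W) per-class mask logits

rng = np.random.default_rng(0)
C, D, HW = 19, 512, 64                # e.g. 19 Cityscapes classes
queries = rng.standard_normal((C, D))  # hypothetical text embeddings
feats = rng.standard_normal((HW, D))   # hypothetical pixel features
logits = textual_query_masks(queries, feats)
pred = logits.argmax(axis=0)           # per-pixel class assignment
print(logits.shape, pred.shape)        # (19, 64) (64,)
```

Because the queries come from text rather than from learned image-conditioned embeddings, the grouping basis does not shift with image style, which is the intuition behind their domain invariance.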

Benchmark Results

Dataset                                 Model                 Metric  Claimed  Verified  Status
GTA5 → Cityscapes                       tqdm (EVA02-CLIP-L)   mIoU    68.88    —         Unverified
GTA → Avg(Cityscapes, BDD, Mapillary)   tqdm (EVA02-CLIP-L)   mIoU    66.05    —         Unverified

Reproductions