CAT-Seg: Cost Aggregation for Open-Vocabulary Semantic Segmentation

2023-03-21 · CVPR 2024 · Code Available

Seokju Cho, Heeseong Shin, Sunghwan Hong, Anurag Arnab, Paul Hongsuck Seo, Seungryong Kim


Abstract

Open-vocabulary semantic segmentation presents the challenge of labeling each pixel within an image based on a wide range of text descriptions. In this work, we introduce a novel cost-based approach to adapting vision-language foundation models, notably CLIP, for the intricate task of semantic segmentation. By aggregating the cosine similarity scores, i.e., the cost volume between image and text embeddings, our method effectively adapts CLIP for segmenting both seen and unseen classes by fine-tuning its encoders, addressing the difficulty existing methods face with unseen classes. Building on this, we explore methods to effectively aggregate the cost volume, considering its multi-modal nature as a structure established between image and text embeddings. Furthermore, we examine various methods for efficiently fine-tuning CLIP.
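The cost volume the abstract refers to is, at its core, a grid of cosine similarities between dense image embeddings and per-class text embeddings. A minimal sketch of constructing one (function name, shapes, and the toy data are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def cost_volume(image_embeds: np.ndarray, text_embeds: np.ndarray) -> np.ndarray:
    """Cosine-similarity cost volume between dense image embeddings and
    per-class text embeddings (shapes are illustrative).

    image_embeds: (H, W, D) dense embeddings, e.g. from a CLIP image encoder
    text_embeds:  (N, D)    one embedding per class prompt, from a text encoder
    returns:      (H, W, N) cost volume of cosine similarities
    """
    # L2-normalize both sides so a plain dot product equals cosine similarity
    img = image_embeds / np.linalg.norm(image_embeds, axis=-1, keepdims=True)
    txt = text_embeds / np.linalg.norm(text_embeds, axis=-1, keepdims=True)
    return np.einsum("hwd,nd->hwn", img, txt)

# Toy example: a 4x4 "image", 3 class prompts, 8-dim features
rng = np.random.default_rng(0)
vol = cost_volume(rng.standard_normal((4, 4, 8)), rng.standard_normal((3, 8)))
labels = vol.argmax(axis=-1)  # per-pixel class = highest similarity
```

CAT-Seg then aggregates this raw volume (rather than taking the argmax directly) before decoding it into a segmentation map; the sketch only shows the volume's construction.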

Benchmark Results

| Dataset            | Model   | Metric | Claimed | Verified | Status     |
|--------------------|---------|--------|---------|----------|------------|
| ADE20K-150         | CAT-Seg | mIoU   | 37.9    | —        | Unverified |
| ADE20K-847         | CAT-Seg | mIoU   | 16      | —        | Unverified |
| PASCAL Context-459 | CAT-Seg | mIoU   | 23.8    | —        | Unverified |
| PASCAL Context-59  | CAT-Seg | mIoU   | 63.3    | —        | Unverified |
| PascalVOC-20       | CAT-Seg | mIoU   | 97      | —        | Unverified |
| PascalVOC-20b      | CAT-Seg | mIoU   | 82.5    | —        | Unverified |
