
CMT-DeepLab: Clustering Mask Transformers for Panoptic Segmentation

2022-06-17 · CVPR 2022 · Code Available

Qihang Yu, Huiyu Wang, Dahun Kim, Siyuan Qiao, Maxwell Collins, Yukun Zhu, Hartwig Adam, Alan Yuille, Liang-Chieh Chen


Abstract

We propose Clustering Mask Transformer (CMT-DeepLab), a transformer-based framework for panoptic segmentation designed around clustering. Rethinking the transformer architectures used in existing segmentation and detection models, CMT-DeepLab treats the object queries as cluster centers, which take on the role of grouping pixels when applied to segmentation. The clustering is computed with an alternating procedure: pixels are first assigned to clusters by their feature affinity, and the cluster centers and pixel features are then updated. Together, these operations comprise the Clustering Mask Transformer (CMT) layer, which produces cross-attention that is denser and more consistent with the final segmentation task. CMT-DeepLab significantly improves performance over prior art by 4.4% PQ, achieving a new state-of-the-art of 55.7% PQ on the COCO test-dev set.
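The alternating procedure in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's exact formulation: the function name, the softmax temperature, and the residual update of pixel features are all assumptions made for the sketch.

```python
import numpy as np

def cmt_clustering_step(pixel_feats, centers, temperature=1.0):
    """One alternating clustering update in the spirit of a CMT layer.

    pixel_feats: (N, D) array of pixel features.
    centers:     (K, D) array of cluster-center (object-query) features.
    Returns updated pixel features, updated centers, and the (N, K) assignment.
    """
    # 1) Assign pixels to clusters by feature affinity,
    #    normalized with a softmax over the cluster axis.
    affinity = pixel_feats @ centers.T / temperature              # (N, K)
    assign = np.exp(affinity - affinity.max(axis=1, keepdims=True))
    assign /= assign.sum(axis=1, keepdims=True)

    # 2) Update cluster centers as the assignment-weighted mean
    #    of the pixel features (one weight column per cluster).
    weights = assign / (assign.sum(axis=0, keepdims=True) + 1e-6)
    new_centers = weights.T @ pixel_feats                         # (K, D)

    # 3) Update pixel features by mixing in their assigned centers
    #    (residual update, an illustrative choice for this sketch).
    new_pixel_feats = pixel_feats + assign @ new_centers          # (N, D)
    return new_pixel_feats, new_centers, assign
```

Stacking several such steps mirrors the alternating assign/update dynamic the abstract describes, with the soft assignment map playing the role of the dense cross-attention.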

Tasks

Panoptic Segmentation

Benchmark Results

Dataset        | Model                                    | Metric | Claimed | Verified | Status
Cityscapes val | CMT-DeepLab (MaX-S, single-scale, IN-1K) | PQ     | 64.6    |          | Unverified
COCO minival   | CMT-DeepLab (single-scale)               | PQ     | 55.3    |          | Unverified
COCO test-dev  | CMT-DeepLab (single-scale)               | PQ     | 55.7    |          | Unverified

Reproductions