SOTAVerified

Cost Aggregation Is All You Need for Few-Shot Segmentation

2021-12-22 · Code Available

Sunghwan Hong, Seokju Cho, Jisu Nam, Seungryong Kim

Abstract

We introduce a novel cost aggregation network, dubbed Volumetric Aggregation with Transformers (VAT), to tackle the few-shot segmentation task, using both convolutions and transformers to efficiently handle the high-dimensional correlation maps between query and support. Specifically, our encoder consists of a volume embedding module, which not only transforms the correlation maps into a more tractable size but also injects convolutional inductive bias, and a volumetric transformer module for cost aggregation. The encoder has a pyramidal structure so that coarser-level aggregation guides the finer levels and enforces the learning of complementary matching scores. We then feed the output into our affinity-aware decoder, along with the projected feature maps, to guide the segmentation process. Combining these components, we conduct experiments demonstrating the effectiveness of the proposed method, which sets a new state of the art on all standard few-shot segmentation benchmarks. Furthermore, the proposed method attains state-of-the-art performance on standard semantic correspondence benchmarks as well, although it was not specifically designed for that task. We also provide an extensive ablation study to validate our architectural choices. The trained weights and code are available at: https://seokju-cho.github.io/VAT/.
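The correlation maps that VAT's encoder aggregates are built by comparing every query location against every support location. A minimal sketch of such a 4D cost volume (plain NumPy, cosine similarity between L2-normalized features; this illustrates the general construction, not the authors' implementation):

```python
import numpy as np

def correlation_volume(query_feats, support_feats, eps=1e-8):
    """Build a 4D cost volume between query and support feature maps.

    query_feats, support_feats: arrays of shape (C, H, W).
    Returns an (H, W, H, W) volume where entry [i, j, k, l] is the
    cosine similarity between query position (i, j) and support
    position (k, l). (Illustrative sketch, not the paper's code.)
    """
    C, H, W = query_feats.shape
    q = query_feats.reshape(C, -1)            # (C, H*W)
    s = support_feats.reshape(C, -1)          # (C, H*W)
    # L2-normalize each spatial location's feature vector.
    q = q / (np.linalg.norm(q, axis=0, keepdims=True) + eps)
    s = s / (np.linalg.norm(s, axis=0, keepdims=True) + eps)
    corr = q.T @ s                             # (H*W, H*W) similarities
    return corr.reshape(H, W, H, W)
```

Even for modest feature resolutions this volume is high-dimensional (H·W × H·W entries), which is what motivates the volume embedding module that compresses it to a tractable size before transformer-based aggregation.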

Tasks

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| COCO-20i (1-shot) | VAT (ResNet-50) | Mean IoU | 41.3 | | Unverified |
| COCO-20i (5-shot) | VAT (ResNet-50) | Mean IoU | 47.9 | | Unverified |
| FSS-1000 (1-shot) | VAT | Mean IoU | 90.0 | | Unverified |
| FSS-1000 (5-shot) | VAT | Mean IoU | 90.6 | | Unverified |
| PASCAL-5i (1-shot) | VAT | Mean IoU | 67.5 | | Unverified |
| PASCAL-5i (5-shot) | VAT | Mean IoU | 71.6 | | Unverified |

Reproductions