Applying ViT in Generalized Few-shot Semantic Segmentation

2024-08-27

Liyuan Geng, Jinhong Xia, Yuanhe Guo


Abstract

This paper explores the capability of ViT-based models under the generalized few-shot semantic segmentation (GFSS) framework. We experiment with various combinations of backbones, including ResNets and pretrained Vision Transformer (ViT)-based models, and decoders featuring a linear classifier, UPerNet, and Mask Transformer. The combination of a DINOv2 backbone and a linear classifier takes the lead on the popular few-shot segmentation benchmark PASCAL-5^i, substantially outperforming the best ResNet-based structure by 116% in the one-shot scenario. We demonstrate the great potential of large pretrained ViT-based models for the GFSS task and expect further improvement on testing benchmarks. A caveat, however, is that a pure ViT-based model paired with a large-scale ViT decoder is prone to overfitting.
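The winning configuration pairs a frozen ViT backbone with a plain per-patch linear classifier. A minimal sketch of that decoding step is below; the backbone is replaced by random patch features, and all shapes, names, and the class count are illustrative stand-ins, not the authors' code (the feature dimension 384 matches DINOv2 ViT-S/14, and the coarse patch-grid prediction would normally be upsampled to full resolution).

```python
import numpy as np

def linear_seg_head(patch_feats, weight, bias, grid_hw):
    """Apply a per-patch linear classifier and reshape to a coarse seg map.

    patch_feats: (B, N, D) frozen ViT patch embeddings, N = H * W patches
    weight:      (D, C) classifier weights, C = number of classes
    bias:        (C,) classifier bias
    grid_hw:     (H, W) patch grid shape
    """
    B, N, D = patch_feats.shape
    H, W = grid_hw
    assert N == H * W, "patch count must match the grid"
    logits = patch_feats @ weight + bias            # (B, N, C)
    seg = logits.argmax(-1).reshape(B, H, W)        # per-patch class labels
    return logits.reshape(B, H, W, -1), seg

# Illustrative sizes: D=384 as in DINOv2 ViT-S/14; C=21 as a stand-in
# for a PASCAL-style base + novel class set.
rng = np.random.default_rng(0)
feats = rng.standard_normal((2, 16 * 16, 384))
W_cls = rng.standard_normal((384, 21)) * 0.01
b_cls = np.zeros(21)
logits, seg = linear_seg_head(feats, W_cls, b_cls, (16, 16))
print(logits.shape, seg.shape)  # (2, 16, 16, 21) (2, 16, 16)
```

The appeal of this head in the few-shot setting is its tiny parameter count (D × C weights), which limits the overfitting the abstract warns about for large ViT decoders.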
