Finding an Unsupervised Image Segmenter in Each of Your Deep Generative Models

2021-05-17 · ICLR 2022 · Code Available

Luke Melas-Kyriazi, Christian Rupprecht, Iro Laina, Andrea Vedaldi


Abstract

Recent research has shown that numerous human-interpretable directions exist in the latent space of GANs. In this paper, we develop an automatic procedure for finding directions that lead to foreground-background image separation, and we use these directions to train an image segmentation model without human supervision. Our method is generator-agnostic, producing strong segmentation results with a wide range of different GAN architectures. Furthermore, by leveraging GANs pretrained on large datasets such as ImageNet, we are able to segment images from a range of domains without further training or finetuning. Evaluating our method on image segmentation benchmarks, we compare favorably to prior work while using neither human supervision nor access to the training data. Broadly, our results demonstrate that automatically extracting foreground-background structure from pretrained deep generative models can serve as a remarkably effective substitute for human supervision.
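The core idea of the abstract — shift a latent code along a direction that moves the foreground, then threshold the per-pixel change between the two generated images to obtain a mask — can be illustrated with a minimal sketch. The generator here is a hypothetical stand-in (the paper uses pretrained GANs such as BigGAN/StyleGAN, and finds the direction automatically rather than by hand):

```python
import numpy as np

def toy_generator(z):
    # Hypothetical stand-in for a pretrained GAN generator: maps a
    # latent code to an 8x8 grayscale "image" whose central patch
    # brightness is driven by the last latent coordinate.
    img = np.full((8, 8), 0.2)
    img[2:6, 2:6] += z[-1]  # "foreground" region
    return img

def segment_from_direction(generator, z, direction, alpha=1.0, tau=0.1):
    """Sketch of the foreground-shift idea: generate at z and at
    z + alpha * direction, and threshold the per-pixel change."""
    img0 = generator(z)
    img1 = generator(z + alpha * direction)
    change = np.abs(img1 - img0)
    return (change > tau).astype(np.uint8)  # binary foreground mask

z = np.zeros(4)
direction = np.array([0.0, 0.0, 0.0, 1.0])  # hypothetical fg direction
mask = segment_from_direction(toy_generator, z, direction)
```

In the paper these masks, extracted automatically from the generative model, serve as pseudo-labels for training a standalone segmentation network without human supervision.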
