
Self-Supervised GAN to Counter Forgetting

2018-10-27

Ting Chen, Xiaohua Zhai, Neil Houlsby


Abstract

GANs involve training two networks in an adversarial game, where each network's task depends on its adversary. Recently, several works have framed GAN training as an online or continual learning problem. We focus on the discriminator, which must perform classification under an (adversarially) shifting data distribution. When trained on sequential tasks, neural networks exhibit forgetting. For GANs, discriminator forgetting leads to training instability. To counter forgetting, we encourage the discriminator to maintain useful representations by adding a self-supervision task. Conditional GANs have a similar effect using labels. However, our self-supervised GAN does not require labels, and closes the performance gap between conditional and unconditional models. We show that, in doing so, the self-supervised discriminator learns better representations than regular GANs.
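The abstract does not spell out the self-supervision task here; a common choice in this line of work is rotation prediction, where the discriminator's auxiliary head must classify which of four rotations was applied to an image. Below is a minimal sketch (assumed setup, not the paper's verbatim pipeline) of how such an auxiliary batch can be constructed: each image is replicated under 0°, 90°, 180°, and 270° rotations, with the rotation index serving as the label.

```python
import numpy as np

def make_rotation_task(images):
    """Build an auxiliary self-supervision batch for the discriminator.

    images: float array of shape (N, H, W, C).
    Returns (rotated, labels):
      rotated -- shape (4N, H, W, C), all four rotations of every image,
                 grouped by rotation (all k=0 first, then k=1, ...).
      labels  -- shape (4N,), the rotation index (0..3) the auxiliary
                 classification head is trained to predict.
    """
    rotated = np.concatenate(
        [np.rot90(images, k=k, axes=(1, 2)) for k in range(4)], axis=0
    )
    labels = np.repeat(np.arange(4), len(images))
    return rotated, labels
```

In training, this auxiliary cross-entropy loss would be added to the usual real/fake objective, giving the discriminator a stable classification task that persists as the generator's distribution shifts.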
