
Reducing Training Sample Memorization in GANs by Training with Memorization Rejection

2022-10-21

Andrew Bai, Cho-Jui Hsieh, Wendy Kan, Hsuan-Tien Lin


Abstract

Generative adversarial networks (GANs) remain a popular research direction due to their high generation quality. It has been observed that many state-of-the-art GANs generate samples that are more similar to the training set than to a holdout test set drawn from the same distribution, hinting that some training samples are implicitly memorized in these models. This memorization behavior is unfavorable in many applications that demand generated samples sufficiently distinct from known samples. Nevertheless, it is unclear whether memorization can be reduced without compromising generation quality. In this paper, we propose memorization rejection, a training scheme that rejects generated samples that are near-duplicates of training samples during training. Our scheme is simple, generic, and can be applied directly to any GAN architecture. Experiments on multiple datasets and GAN models validate that memorization rejection effectively reduces training sample memorization, and in many cases does so without sacrificing generation quality. Code to reproduce the experiment results can be found at https://github.com/jybai/MRGAN.
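The core idea of the abstract, rejecting generated samples that lie too close to the training set, can be sketched as a nearest-neighbor filter. The code below is a minimal illustration, not the authors' implementation: the function name `memorization_rejection`, the use of raw Euclidean distance, and the threshold value are all assumptions for the sake of the example (the paper's actual criterion and feature space may differ; see the linked repository).

```python
import numpy as np

def memorization_rejection(generated, train, threshold):
    """Keep only generated samples whose nearest-neighbor distance to the
    training set exceeds `threshold`; near-duplicates are rejected.
    (Illustrative sketch only; names and distance metric are assumptions.)"""
    # Pairwise Euclidean distances: shape (n_generated, n_train).
    dists = np.linalg.norm(generated[:, None, :] - train[None, :, :], axis=-1)
    nearest = dists.min(axis=1)       # distance to the closest training sample
    mask = nearest > threshold        # True -> sufficiently novel, keep
    return generated[mask], mask

# Toy 2-D example: one generated point near-duplicates a training point.
train = np.array([[0.0, 0.0], [1.0, 1.0]])
generated = np.array([[0.01, 0.0],   # near-duplicate of train[0] -> rejected
                      [5.0, 5.0]])   # far from the training set -> kept
kept, mask = memorization_rejection(generated, train, threshold=0.1)
```

In an actual training loop, rejected samples would simply be resampled from the generator before the batch reaches the discriminator, which is what makes the scheme architecture-agnostic.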
