
On Predicting Generalization using GANs

2021-11-28 · ICLR 2022

Yi Zhang, Arushi Gupta, Nikunj Saunshi, Sanjeev Arora


Abstract

Research on generalization bounds for deep networks seeks to give ways to predict test error using just the training dataset and the network parameters. While generalization bounds can give many insights about architecture design, training algorithms, etc., what they do not currently do is yield good predictions for actual test error. A recently introduced Predicting Generalization in Deep Learning competition (Jiang et al., 2020) aims to encourage discovery of methods to better predict test error. The current paper investigates a simple idea: can test error be predicted using synthetic data, produced using a Generative Adversarial Network (GAN) that was trained on the same training dataset? Upon investigating several GAN models and architectures, we find that this turns out to be the case. In fact, using GANs pre-trained on standard datasets, the test error can be predicted without requiring any additional hyper-parameter tuning. This result is surprising because GANs have well-known limitations (e.g. mode collapse) and are known to not learn the data distribution accurately. Yet the generated samples are good enough to substitute for test data. Several additional experiments are presented to explore reasons why GANs do well at this task. In addition to a new approach for predicting generalization, the counter-intuitive phenomena presented in our work may also call for a better understanding of GANs' strengths and limitations.
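The core procedure the abstract describes can be sketched in a few lines: draw labeled samples from a class-conditional generator trained on the same training set, and measure the classifier's error on those synthetic samples as a proxy for its true test error. The sketch below is a minimal toy illustration of that idea, not the paper's implementation; the Gaussian "generator" and nearest-mean "classifier" are stand-ins for an actual GAN and deep network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy setup: two classes of 2-D points clustered around fixed means.
# In the paper, generator() would be a class-conditional GAN trained on the
# training set, and classifier() a deep network whose test error we predict.
CLASS_MEANS = np.array([[0.0, 0.0], [4.0, 4.0]])

def generator(labels):
    """Class-conditional sampler standing in for a GAN generator."""
    return CLASS_MEANS[labels] + rng.normal(scale=1.0, size=(len(labels), 2))

def classifier(x, means):
    """Nearest-mean rule standing in for a trained deep network."""
    dists = np.linalg.norm(x[:, None, :] - means[None, :, :], axis=2)
    return dists.argmin(axis=1)

def predicted_error(classifier_means, n=5000):
    """Estimate test error from synthetic data only: sample labeled points
    from the generator and measure disagreement with the conditioning
    labels -- no held-out test set is touched."""
    labels = rng.integers(0, 2, size=n)
    x = generator(labels)
    return float((classifier(x, classifier_means) != labels).mean())

# A slightly mis-calibrated classifier; its synthetic-data error serves
# as the prediction of its (unseen) test error.
err = predicted_error(CLASS_MEANS + 0.3)
```

The estimate is only as good as the generator: the paper's surprising finding is that even imperfect GANs, which demonstrably do not learn the data distribution exactly, yield synthetic samples accurate enough for this purpose.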
