A Uniform Generalization Error Bound for Generative Adversarial Networks

2019-09-25

Hao Chen, Zhanfeng Mo, Qingyi Gao, Zhouwang Yang, Xiao Wang

Abstract

This paper presents a theoretical investigation of the unsupervised generalization theory of generative adversarial networks (GANs). We first formulate a more reasonable definition of the generalization error and generalization bounds for GANs. Building on that, we establish a bound on the generalization error with a fixed generator in a general weight-normalization setting. We then obtain a width-independent bound by applying ℓ_{p,q} and spectral-norm weight normalization. To better understand GANs as an unsupervised model, we establish a generalization bound that holds uniformly over the choice of generator. This allows us to explain how the complexities of the discriminator and the generator contribute to the generalization error. For ℓ_{p,q} and spectral weight normalization, we provide explicit guidance on how to choose parameters so as to train robust generators. Our numerical simulations also verify that our generalization bound is reasonable.
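The abstract refers to two concrete weight-normalization schemes: ℓ_{p,q} norms and the spectral norm. As a hedged illustration only (this is not the paper's code; the function names and the power-iteration routine are our own assumptions), the sketch below computes an ℓ_{p,q} norm of a weight matrix and rescales a matrix to unit spectral norm, which is the usual form of spectral weight normalization:

```python
import numpy as np

def lpq_norm(W, p=2, q=1):
    """l_{p,q} norm: take the l_p norm of each column, then the l_q
    norm of the resulting vector of column norms."""
    col_norms = np.sum(np.abs(W) ** p, axis=0) ** (1.0 / p)
    return float(np.sum(col_norms ** q) ** (1.0 / q))

def spectral_norm(W, n_iters=100, seed=0):
    """Estimate the largest singular value of W by power iteration."""
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(W.shape[0])
    for _ in range(n_iters):
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    return float(u @ W @ v)

def spectrally_normalize(W):
    """Rescale W so that its spectral norm (largest singular value) is 1."""
    return W / spectral_norm(W)

rng = np.random.default_rng(1)
W = rng.standard_normal((4, 3))
W_hat = spectrally_normalize(W)
```

After normalization, `W_hat` has largest singular value 1, so the layer's Lipschitz constant with respect to the Euclidean norm is controlled; bounds built from such norms do not grow with layer width, which is the sense in which the paper's bound is width-independent.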
