High-Resolution Deep Convolutional Generative Adversarial Networks
Joachim D. Curtó, Irene C. Zarza, Fernando de la Torre, Irwin King, Michael R. Lyu
- github.com/curto2/c (official)
Abstract
Generative Adversarial Network (GAN) convergence in a high-resolution setting, under a computational constraint of GPU memory capacity (from 12 GB to 24 GB), has been beset with difficulty due to the known instability of the convergence rate. To boost the convergence of DCGAN (Deep Convolutional Generative Adversarial Networks) and achieve good-looking high-resolution results, we propose a new layered network structure, HDCGAN, that incorporates current state-of-the-art techniques to this end. A novel dataset, Curtó & Zarza, containing human faces from different ethnic groups under a wide variety of illumination conditions and image resolutions, is introduced. Curtó is enhanced with HDCGAN synthetic images, thus being the first GAN-augmented face dataset. We conduct extensive experiments on CelebA (MS-SSIM 0.1978 and Fréchet Inception Distance 8.77) and Curtó.
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| CelebA 128x128 | HDCGAN | MS-SSIM | 0.2 | — | Unverified |
| CelebA 64x64 | HDCGAN | FID | 8.44 | — | Unverified |
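The FID values above are Fréchet distances between Gaussian fits of Inception-v3 feature statistics for real and generated images. The closed-form distance itself can be sketched as below; this is a minimal illustration of the standard formula (not the paper's evaluation code), and the statistics passed in would in practice come from an Inception feature extractor:

```python
import numpy as np
from scipy import linalg


def frechet_distance(mu1, sigma1, mu2, sigma2):
    """Fréchet distance between Gaussians N(mu1, sigma1) and N(mu2, sigma2).

    d^2 = ||mu1 - mu2||^2 + Tr(sigma1 + sigma2 - 2 (sigma1 sigma2)^{1/2}).
    FID applies this to feature-space statistics of real vs. generated images.
    """
    diff = mu1 - mu2
    covmean = linalg.sqrtm(sigma1 @ sigma2)
    # sqrtm can return tiny imaginary components from numerical noise.
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))


# Identical distributions: distance 0. Shifting the mean by a unit vector
# in 4 dimensions contributes ||diff||^2 = 4 with unchanged covariance.
mu, sigma = np.zeros(4), np.eye(4)
print(frechet_distance(mu, sigma, mu, sigma))            # ≈ 0.0
print(frechet_distance(mu, sigma, mu + 1.0, sigma))      # ≈ 4.0
```

Lower values indicate that the generated distribution is statistically closer to the real one, which is why the claimed FID of 8.44 on CelebA 64x64 is read as a quality score.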