
AugGAN: Cross Domain Adaptation with GAN-based Data Augmentation

2018-09-01 · ECCV 2018

Sheng-Wei Huang, Che-Tsung Lin, Shu-Ping Chen, Yen-Yi Wu, Po-Hao Hsu, Shang-Hong Lai



Abstract

Deep learning based image-to-image translation methods aim to learn the joint distribution of two domains and to find transformations between them. Although recent GAN (Generative Adversarial Network) based methods have shown compelling visual results, they are prone to fail at preserving image-objects and maintaining translation consistency when faced with large and complex domain shifts, which limits their practicality on tasks such as generating large-scale training data for different domains. To address this problem, we propose a weakly supervised structure-aware image-to-image translation network, which combines encoders, generators, discriminators, and parsing nets for the two domains in a unified framework. The proposed network generates more visually plausible images in the target domain than competing methods across different image-translation tasks. In addition, we quantitatively evaluate the methods by training Faster R-CNN and YOLO on datasets generated from the image-translation results, and demonstrate significant improvements in detection accuracy with the proposed image-object-preserving network.
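The abstract describes a framework built from per-domain encoders, generators, discriminators, and parsing (segmentation) nets. A minimal sketch of one translation direction, written in PyTorch, is shown below; all module names, layer sizes, and the number of segmentation classes are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of an AugGAN-style setup: encode an image from domain A,
# translate it into domain B, and parse the shared features so a
# segmentation-based loss can keep object structure intact.
# Shapes and architectures here are assumptions for illustration only.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, ch=8):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(3, ch, 3, padding=1), nn.ReLU())
    def forward(self, x):
        return self.net(x)

class Generator(nn.Module):
    def __init__(self, ch=8):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(ch, 3, 3, padding=1), nn.Tanh())
    def forward(self, z):
        return self.net(z)

class ParsingNet(nn.Module):
    """Predicts per-pixel class logits from the shared features; the
    structure-aware supervision ties translation to these predictions."""
    def __init__(self, ch=8, n_classes=5):
        super().__init__()
        self.net = nn.Conv2d(ch, n_classes, 1)
    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        # A single strided conv standing in for a patch-based critic.
        self.net = nn.Conv2d(3, 1, 4, stride=2, padding=1)
    def forward(self, x):
        return self.net(x)

# One direction A -> B: encode in A, decode with B's generator,
# and parse the shared features for the weak structure supervision.
enc_a, gen_b = Encoder(), Generator()
parser, disc_b = ParsingNet(), Discriminator()
x_a = torch.randn(1, 3, 32, 32)
z = enc_a(x_a)
fake_b = gen_b(z)     # translated image, same spatial size as input
seg = parser(z)       # per-pixel segmentation logits
score = disc_b(fake_b)
print(fake_b.shape, seg.shape, score.shape)
```

The full framework mirrors these components for the second domain (B → A) and trains everything jointly, but the sketch shows the data flow the abstract refers to.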
