Learning Unsupervised Cross-domain Image-to-Image Translation Using a Shared Discriminator
Rajiv Kumar, Rishabh Dabral, G. Sivakumar
Abstract
Unsupervised image-to-image translation transforms images from a source domain into a target domain without using source-target image pairs. Promising results have been obtained for this problem in an adversarial setting using two independent GANs and attention mechanisms. We propose a new method that uses a single discriminator shared between the two GANs, which improves the overall efficacy. We assess the qualitative and quantitative results on image transfiguration, a cross-domain translation task, in a setting where the target domain shares similar semantics with the source domain. Our results indicate that even without attention mechanisms, our method performs on par with attention-based methods and generates images of comparable quality.
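The core architectural idea, a single discriminator serving both translation directions, can be sketched in a few lines. The following is a hypothetical, minimal numpy illustration under stated assumptions: the "generators" and "discriminator" are toy linear maps, and all names, shapes, and the non-saturating loss form are illustrative choices, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 4  # toy feature dimension (assumption; real models operate on images)

# Two "generators" translate between domains A and B; ONE discriminator
# scores samples from both domains, replacing the per-GAN discriminator
# pair used in CycleGAN-style setups.
W_ab = rng.standard_normal((DIM, DIM)) * 0.1  # generator A -> B
W_ba = rng.standard_normal((DIM, DIM)) * 0.1  # generator B -> A
w_d = rng.standard_normal(DIM) * 0.1          # shared discriminator weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def discriminate(x):
    """Shared discriminator: probability that each row of x is real."""
    return sigmoid(x @ w_d)

def adversarial_losses(real_a, real_b, eps=1e-8):
    """Non-saturating GAN losses computed with the one shared discriminator."""
    fake_b = real_a @ W_ab  # translate A -> B
    fake_a = real_b @ W_ba  # translate B -> A
    reals = np.vstack([real_a, real_b])
    fakes = np.vstack([fake_a, fake_b])
    # The discriminator sees reals and fakes from BOTH domains.
    d_loss = -np.mean(np.log(discriminate(reals) + eps)
                      + np.log(1.0 - discriminate(fakes) + eps))
    # Both generators try to fool the same shared discriminator.
    g_loss = -np.mean(np.log(discriminate(fakes) + eps))
    return d_loss, g_loss

real_a = rng.standard_normal((8, DIM))
real_b = rng.standard_normal((8, DIM))
d_loss, g_loss = adversarial_losses(real_a, real_b)
```

Sharing one discriminator roughly halves the discriminator parameter count relative to two independent GANs, which is the source of the efficiency claim in the abstract.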
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| Apples and Oranges | Shared discriminator GAN | Kernel Inception Distance | 4.4 | — | Unverified |
| Zebras and Horses | Shared discriminator GAN | Kernel Inception Distance | 5.8 | — | Unverified |