SOTAVerified

Unsupervised Cross-Domain Image Generation

2016-11-07

Yaniv Taigman, Adam Polyak, Lior Wolf


Abstract

We study the problem of transferring a sample in one domain to an analog sample in another domain. Given two related domains, S and T, we would like to learn a generative function G that maps an input sample from S to the domain T, such that the output of a given function f, which accepts inputs in either domain, remains unchanged. Other than the function f, the training data is unsupervised and consists of a set of samples from each domain. The Domain Transfer Network (DTN) we present employs a compound loss function that includes a multiclass GAN loss, an f-constancy component, and a regularizing component that encourages G to map samples from T to themselves. We apply our method to visual domains including digits and face images and demonstrate its ability to generate convincing novel images of previously unseen entities, while preserving their identity.
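The compound loss described in the abstract combines three terms: a GAN term, f-constancy (the fixed function f should give the same output before and after transfer), and an identity regularizer on the target domain. The following is a minimal sketch of how those terms combine, with f and G stood in by toy linear maps and the weights `alpha`/`beta` assumed for illustration (they are not values taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8  # toy sample dimensionality

W_f = rng.normal(size=(4, D))   # stand-in for the fixed function f
W_g = rng.normal(size=(D, D))   # stand-in for the learned generator G

def f(x):
    """Fixed function accepting inputs from either domain (toy linear map)."""
    return W_f @ x

def G(x):
    """Generator mapping a sample into domain T (toy linear map)."""
    return W_g @ x

def f_constancy(x_s):
    """||f(x) - f(G(x))||^2 for a source-domain sample x."""
    d = f(x_s) - f(G(x_s))
    return float(d @ d)

def t_identity(x_t):
    """||x - G(x)||^2: encourages G to map T samples to themselves."""
    d = x_t - G(x_t)
    return float(d @ d)

def dtn_loss(x_s, x_t, gan_loss, alpha=1.0, beta=1.0):
    """Compound loss: GAN term + f-constancy + T-identity regularizer.

    `gan_loss` stands in for the multiclass GAN term, which requires a
    discriminator and is out of scope for this sketch.
    """
    return gan_loss + alpha * f_constancy(x_s) + beta * t_identity(x_t)

x_s = rng.normal(size=D)  # sample from source domain S
x_t = rng.normal(size=D)  # sample from target domain T
total = dtn_loss(x_s, x_t, gan_loss=0.5)
```

In training, the GAN term would come from a discriminator distinguishing real T samples from generated ones, while the two squared-error terms are minimized with respect to G only.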

Benchmark Results

Dataset         Model   Metric                    Claimed   Verified   Status
SVHN-to-MNIST   DTN     Classification Accuracy   84.4                 Unverified
