
Variational Domain Adaptation

2019-05-01 · ICLR 2019

Hirono Okamoto, Shohei Ohsawa, Itto Higuchi, Haruka Murakami, Mizuki Sango, Zhenghang Cui, Masahiro Suzuki, Hiroshi Kajino, Yutaka Matsuo


Abstract

This paper proposes variational domain adaptation, a unified, scalable, and simple framework for learning multiple distributions through variational inference. Unlike existing methods for domain transfer with deep generative models, such as StarGAN (Choi et al., 2017) and UFDN (Liu et al., 2018), variational domain adaptation has three advantages. First, samples from the target are not required. Instead, the framework requires one known source as a prior p(x) and binary discriminators p(D_i|x) that discriminate each target domain D_i from the others. Consequently, the framework regards a target as a posterior that can be formulated explicitly through Bayesian inference, p(x|D_i) ∝ p(D_i|x)p(x), as exhibited by the further proposed model, the dual variational autoencoder (DualVAE). Second, the framework is scalable to a large number of domains. Just as a VAE encodes a sample x as a mode in a latent space, φ(x) ∈ Z, DualVAE encodes a domain D_i as a mode in the dual latent space, φ*(D_i) ∈ Z*, called a domain embedding. It reformulates the posterior through a natural pairing ⟨·, ·⟩: Z × Z* → ℝ, which extends to uncountably infinite domains, such as continuous domains, as well as to interpolation between domains. Third, DualVAE converges quickly without sophisticated automatic or manual hyperparameter search, in contrast to GANs, since it requires only one parameter in addition to a VAE. In numerical experiments, we demonstrate these three benefits on a multi-domain image generation task on CelebA with up to 60 domains, and show that DualVAE achieves state-of-the-art performance, outperforming StarGAN and UFDN.
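The Bayesian reformulation in the abstract can be illustrated numerically. The sketch below shows how per-domain discriminator scores obtained from a bilinear pairing of sample and domain embeddings can be combined with a shared prior via p(x|D_i) ∝ p(D_i|x)p(x). All dimensions, the random stand-ins for encoder outputs, and the uniform prior are illustrative assumptions, not the paper's actual DualVAE implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: latent space Z and its dual Z* share dimension d.
d, n_domains, n_samples = 8, 3, 5

# Stand-ins for encoder outputs: each sample x is a mode phi(x) in Z,
# and each domain D_i is a mode phi*(D_i) in Z* (a "domain embedding").
z = rng.normal(size=(n_samples, d))        # phi(x) in Z
z_star = rng.normal(size=(n_domains, d))   # phi*(D_i) in Z*

# Binary discriminators p(D_i|x) from the pairing <phi(x), phi*(D_i)>,
# squashed through a sigmoid so each score lies in (0, 1).
logits = z @ z_star.T                      # shape (n_samples, n_domains)
p_d_given_x = 1.0 / (1.0 + np.exp(-logits))

# Bayes: p(x|D_i) ∝ p(D_i|x) p(x). With a uniform prior over the samples,
# each sample's unnormalized posterior weight under D_i is its discriminator
# score times the prior; normalizing over samples gives a proper posterior.
p_x = np.full(n_samples, 1.0 / n_samples)  # prior p(x)
unnorm = p_d_given_x * p_x[:, None]
p_x_given_d = unnorm / unnorm.sum(axis=0, keepdims=True)
```

Because the domain enters only through its embedding φ*(D_i), the same pairing evaluates unseen points of Z*, which is what makes interpolation between domains possible in this formulation.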
