
Variational Bayesian Framework for Advanced Image Generation with Domain-Related Variables

2023-05-23

Yuxiao Li, Santiago Mazuelas, Yuan Shen


Abstract

Deep generative models (DGMs) and their conditional counterparts are powerful tools for general-purpose modeling of data distributions. However, existing methods still struggle with advanced conditional generation problems in the absence of annotations, problems whose solution would enable applications such as image-to-image translation and image editing. We present a unified Bayesian framework for such problems that introduces an inference stage over latent variables within the learning process. In particular, we propose a variational Bayesian image translation network (VBITN) that supports multiple image translation and editing tasks. Comprehensive experiments show the effectiveness of our method on unsupervised image-to-image translation, and demonstrate novel capabilities for semantic editing and mixed-domain translation.
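The paper's actual objective is not reproduced on this page. Purely as an illustrative sketch of the variational-inference ingredient the abstract mentions (an inference stage over latent variables), the standard evidence lower bound (ELBO) for a Gaussian approximate posterior can be written as follows; all function names are hypothetical and this is not the VBITN objective itself:

```python
import numpy as np

def gaussian_kl(mu, logvar):
    # KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latent dimensions.
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)

def elbo(x, x_recon, mu, logvar):
    # Generic ELBO: reconstruction log-likelihood minus KL regularizer.
    # A Gaussian likelihood is assumed, so the reconstruction term is a
    # negative squared error (up to an additive constant).
    recon = -np.sum((x - x_recon) ** 2)
    return recon - gaussian_kl(mu, logvar)

# A perfect reconstruction with a posterior equal to the prior gives ELBO = 0.
x = np.array([0.5, -1.0, 2.0])
print(elbo(x, x, np.zeros(3), np.zeros(3)))
```

Maximizing such a bound jointly trains the generative model and the inference network; conditional variants additionally condition both on domain-related variables.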
