
Towards Controllable and Photorealistic Region-wise Image Manipulation

2021-08-19

Ansheng You, Chenglin Zhou, Qixuan Zhang, Lan Xu

Abstract

Adaptive and flexible image editing is a desirable capability of modern generative models. In this work, we present a generative model with an auto-encoder architecture for per-region style manipulation. We apply a code consistency loss to enforce an explicit disentanglement between content and style latent representations, making the content and style of generated samples consistent with their corresponding content and style references. The model is also constrained by a content alignment loss to ensure that foreground editing does not interfere with background content. As a result, given region masks of interest provided by users, our model supports foreground region-wise style transfer. Notably, our model requires no extra annotations such as semantic labels and is trained purely by self-supervision. Extensive experiments show the effectiveness of the proposed method and exhibit the flexibility of the proposed model for various applications, including region-wise style editing, latent space interpolation, and cross-domain style transfer.
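The two constraints described above can be sketched as simple reconstruction terms. The following is a minimal NumPy illustration, not the paper's implementation: all function names and signatures are hypothetical, the code consistency loss is rendered as a mean-squared error between re-encoded codes and their references, and the content alignment loss as an L1 penalty restricted to the background (unmasked) region.

```python
import numpy as np

def code_consistency_loss(gen_content, gen_style, ref_content, ref_style):
    """Codes re-encoded from a generated image should match the
    content/style references that produced it (hypothetical MSE form)."""
    content_term = np.mean((gen_content - ref_content) ** 2)
    style_term = np.mean((gen_style - ref_style) ** 2)
    return content_term + style_term

def content_alignment_loss(edited, original, fg_mask):
    """Penalize any change outside the user-provided foreground mask,
    so editing the foreground leaves the background untouched."""
    bg_mask = 1.0 - fg_mask  # background = complement of foreground
    diff = bg_mask * np.abs(edited - original)
    return diff.sum() / max(bg_mask.sum(), 1.0)

# Toy check: an edit confined to the masked region incurs no alignment loss.
original = np.zeros((4, 4))
fg_mask = np.zeros((4, 4))
fg_mask[1:3, 1:3] = 1.0          # user-selected foreground region
edited = original.copy()
edited[1:3, 1:3] = 0.5           # style change inside the mask only
```

In training, both terms would be added to the usual auto-encoder reconstruction and adversarial objectives; the alignment term is what lets a user's mask localize the edit.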
