
A Style-aware Discriminator for Controllable Image Translation

2022-03-29 · CVPR 2022 · Code Available

Kunhee Kim, Sanghun Park, Eunyeong Jeon, Taehun Kim, Daijin Kim


Abstract

Current image-to-image translation methods do not control the output domain beyond the classes used during training, nor do they interpolate well between different domains, which leads to implausible results. This limitation arises largely because their labels do not account for semantic distance. To mitigate these problems, we propose a style-aware discriminator that acts both as a critic and as a style encoder that provides conditions. The style-aware discriminator learns a controllable style space using prototype-based self-supervised learning while simultaneously guiding the generator. Experiments on multiple datasets verify that the proposed model outperforms current state-of-the-art image-to-image translation methods. Unlike those methods, the proposed approach supports various applications, including style interpolation, content transplantation, and local image translation.
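The abstract's core idea — one network serving both as a critic and as a style encoder whose embeddings are organized by prototype-based self-supervised learning — can be illustrated with a minimal sketch. Everything below is an illustrative assumption, not the paper's implementation: the dimensions, the toy linear/tanh "networks", and the `interpolate_styles` helper are all hypothetical stand-ins for the actual architecture and training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes, not taken from the paper: feature dim D, K style prototypes.
D, K = 8, 4


class StyleAwareDiscriminator:
    """Toy sketch of a discriminator that doubles as a style encoder.

    It exposes a scalar critic score (real vs. fake) and a unit-norm style
    embedding that is softly assigned to learnable prototypes, loosely in
    the spirit of prototype-based self-supervised learning.
    """

    def __init__(self, dim, n_prototypes):
        # Stand-ins for learned weights; a real model would be a deep network.
        self.w_enc = rng.standard_normal((dim, dim)) / np.sqrt(dim)
        self.w_critic = rng.standard_normal(dim) / np.sqrt(dim)
        # Learnable style prototypes (cluster centers in style space),
        # kept unit-normalized.
        self.prototypes = rng.standard_normal((n_prototypes, dim))
        self.prototypes /= np.linalg.norm(self.prototypes, axis=1, keepdims=True)

    def encode_style(self, x):
        z = np.tanh(x @ self.w_enc)      # style embedding
        return z / np.linalg.norm(z)     # unit-normalize

    def critic(self, x):
        return float(x @ self.w_critic)  # scalar "realness" score

    def prototype_assignment(self, x, temperature=0.1):
        """Soft assignment of the style embedding over the prototypes."""
        z = self.encode_style(x)
        logits = self.prototypes @ z / temperature
        e = np.exp(logits - logits.max())
        return e / e.sum()               # probabilities over K prototypes


def interpolate_styles(s_a, s_b, t):
    """Naive style interpolation: linear blend in style space, renormalized."""
    s = (1.0 - t) * s_a + t * s_b
    return s / np.linalg.norm(s)


# Demo: two inputs, their style codes, and an interpolated style.
disc = StyleAwareDiscriminator(D, K)
x1, x2 = rng.standard_normal(D), rng.standard_normal(D)
p = disc.prototype_assignment(x1)        # soft prototype assignment, sums to 1
mid = interpolate_styles(disc.encode_style(x1), disc.encode_style(x2), 0.5)
print(p.shape, round(float(np.linalg.norm(mid)), 6))
```

In the sketch, one set of encoder weights feeds both heads, mirroring the abstract's claim that the discriminator provides the conditioning style codes itself; the interpolated style remains a valid (unit-norm) condition, which is what makes style interpolation between domains possible in principle.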
