
Exploring Patch-wise Semantic Relation for Contrastive Learning in Image-to-Image Translation Tasks

2022-03-03 · CVPR 2022 · Code Available

Chanyong Jung, Gihyun Kwon, Jong Chul Ye


Abstract

Recently, contrastive learning-based image translation methods have been proposed, which contrast different spatial locations to enhance spatial correspondence. However, these methods often ignore the diverse semantic relations within the images. To address this, we propose a novel semantic relation consistency (SRC) regularization along with decoupled contrastive learning, which exploits diverse semantics by focusing on the heterogeneous semantic relations between the patches of a single image. To further improve performance, we present a hard negative mining strategy that exploits the semantic relation. We verified our method on three tasks: single-modal and multi-modal image translation, and GAN compression for image translation. Experimental results confirm the state-of-the-art performance of our method on all three tasks.
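The abstract's core ingredients can be illustrated with a small sketch: a decoupled InfoNCE loss over patch features (the positive pair is excluded from the denominator) combined with a weighting term that up-weights harder negatives. This is a minimal NumPy illustration, not the paper's implementation; the `beta` hardness knob and the function name are assumptions for exposition.

```python
import numpy as np

def decoupled_patch_nce(query, pos, negs, tau=0.07, hard_weighting=True, beta=1.0):
    """Illustrative decoupled InfoNCE over patch features.

    query: (D,) anchor patch feature from the output image
    pos:   (D,) corresponding patch feature from the input image
    negs:  (N, D) other patch features from the same image (negatives)
    """
    # l2-normalize all features so dot products are cosine similarities
    q = query / np.linalg.norm(query)
    p = pos / np.linalg.norm(pos)
    n = negs / np.linalg.norm(negs, axis=1, keepdims=True)

    s_pos = (q @ p) / tau          # positive similarity
    s_neg = (n @ q) / tau          # negative similarities, shape (N,)

    if hard_weighting:
        # up-weight harder (more similar) negatives; beta is a hypothetical knob
        w = np.exp(beta * s_neg)
        w = w / w.mean()
    else:
        w = np.ones_like(s_neg)

    # "decoupled" contrastive loss: positive term is excluded from the denominator
    denom = np.sum(w * np.exp(s_neg))
    return s_pos * -1.0 + np.log(denom)
```

A quick sanity check: an anchor whose positive is nearly aligned with it should incur a lower loss than one whose positive is orthogonal, with the same set of negatives.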
