
Generative adversarial network-based image super-resolution using perceptual content losses

2018-09-13

Manri Cheon, Jun-Hyuk Kim, Jun-Ho Choi, Jong-Seok Lee


Abstract

In this paper, we propose a deep generative adversarial network for super-resolution that considers the trade-off between perception and distortion. Building on the strong performance of a recently developed super-resolution model, the deep residual network using enhanced upscale modules (EUSR), the proposed model is trained to improve perceptual quality with only a slight increase in distortion. For this purpose, in addition to the conventional content loss (i.e., a reconstruction loss such as L1 or L2), we introduce two losses in the training phase: a discrete cosine transform (DCT) coefficient loss and a differential content loss. These add a perceptual component to the content loss; in particular, properly accounting for high-frequency components helps address the perception-distortion trade-off in super-resolution. Experimental results show that the proposed model performs well in terms of both perception and distortion and is effective for perceptual super-resolution applications.
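The combined content loss described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the loss weights `w_dct` and `w_diff`, the orthonormal DCT construction, and the use of simple finite differences for the differential content loss are all assumptions made for the sketch.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix (n x n).
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    d = np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    d[0] *= 1 / np.sqrt(2)
    return d * np.sqrt(2 / n)

def dct2(img):
    # 2D DCT via separable 1D transforms (square images assumed for brevity).
    d = dct_matrix(img.shape[0])
    return d @ img @ d.T

def perceptual_content_loss(sr, hr, w_dct=0.5, w_diff=0.5):
    # Conventional content loss: L1 reconstruction error.
    rec = np.abs(sr - hr).mean()
    # DCT coefficient loss: penalizes mismatched frequency content,
    # including the high-frequency components the paper highlights.
    dct_loss = np.abs(dct2(sr) - dct2(hr)).mean()
    # Differential content loss: L1 on vertical and horizontal image gradients
    # (finite differences used here as an illustrative choice).
    diff_loss = (np.abs(np.diff(sr, axis=0) - np.diff(hr, axis=0)).mean()
                 + np.abs(np.diff(sr, axis=1) - np.diff(hr, axis=1)).mean())
    return rec + w_dct * dct_loss + w_diff * diff_loss
```

At training time, a term of this form would be added to the adversarial loss of the generator; identical images yield zero loss, and mismatches in either pixel values or frequency content increase it.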
