SOTAVerified

Image Super-Resolution via RL-CSC: When Residual Learning Meets Convolutional Sparse Coding

2018-12-31 · Code Available

Menglei Zhang, Zhou Liu, Lei Yu


Abstract

We propose a simple yet effective model for Single Image Super-Resolution (SISR), by combining the merits of Residual Learning and Convolutional Sparse Coding (RL-CSC). Our model is inspired by the Learned Iterative Shrinkage-Threshold Algorithm (LISTA). We extend LISTA to its convolutional version and build the main part of our model by strictly following the convolutional form, which improves the network's interpretability. Specifically, the convolutional sparse codes of input feature maps are learned in a recursive manner, and high-frequency information can be recovered from these codes. More importantly, residual learning is applied to alleviate the training difficulty when the network goes deeper. Extensive experiments on benchmark datasets demonstrate the effectiveness of our method. RL-CSC (30 layers) outperforms several recent state-of-the-art methods, e.g., DRRN (52 layers) and MemNet (80 layers), in both accuracy and visual quality. Code and more results are available at https://github.com/axzml/RL-CSC.
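The LISTA recursion the abstract builds on can be sketched in a few lines. Below is a minimal, fully-connected LISTA-style iteration (the paper's RL-CSC uses the convolutional form and learns the operators end to end; the dictionary `D`, the number of iterations, and the threshold here are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def soft_threshold(x, theta):
    # Element-wise shrinkage operator h_theta used in ISTA/LISTA.
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def lista_step(z, x, W, S, theta):
    # One (L)ISTA iteration: z_{k+1} = h_theta(W x + S z_k).
    # In LISTA, W and S are learned; in RL-CSC they become convolutions.
    return soft_threshold(W @ x + S @ z, theta)

# Toy setup with a random dictionary (hypothetical, for illustration only).
rng = np.random.default_rng(0)
D = rng.standard_normal((8, 16))            # signal dim 8, code dim 16
x = rng.standard_normal(8)                  # observed signal
L = 1.1 * np.linalg.norm(D, 2) ** 2         # > Lipschitz constant of D^T D
W = D.T / L                                 # ISTA-derived initialization
S = np.eye(16) - (D.T @ D) / L
z = np.zeros(16)
for _ in range(30):                         # unrolled iterations = "layers"
    z = lista_step(z, x, W, S, theta=0.1 / L)
```

In RL-CSC the matrix multiplies `W @ x` and `S @ z` are replaced by convolutions applied recursively over feature maps, which is what makes the 30-layer network interpretable as 30 unrolled sparse-coding iterations.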

Benchmark Results

Dataset                  Model    Metric  Claimed  Verified  Status
BSD100 - 4x upscaling    RL-CSC   PSNR    27.44    —         Unverified
Set14 - 4x upscaling     RL-CSC   PSNR    28.29    —         Unverified
Urban100 - 4x upscaling  RL-CSC   PSNR    25.59    —         Unverified
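The PSNR figures above follow the standard definition, 10·log10(MAX² / MSE). A minimal sketch for verifying such numbers, assuming 8-bit images (note that SR benchmarks conventionally evaluate on the luminance (Y) channel and shave a border of `scale` pixels; those protocol details are assumptions not shown here):

```python
import math
import numpy as np

def psnr(ref, test, data_range=255.0):
    # Peak signal-to-noise ratio between a reference and a test image.
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return math.inf          # identical images
    return 10.0 * math.log10(data_range ** 2 / mse)

# Example: a uniform error of 16 gray levels on an 8-bit image.
ref = np.zeros((4, 4))
test = np.full((4, 4), 16.0)
value = psnr(ref, test)          # 10 * log10(255**2 / 256) ≈ 24.05 dB
```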

Reproductions