
IRConStyle: Image Restoration Framework Using Contrastive Learning and Style Transfer

2024-02-24

Dongqi Fan, Xin Zhao, Liang Chang


Abstract

Recently, the contrastive learning paradigm has achieved remarkable success in high-level tasks such as classification, detection, and segmentation. However, applications of contrastive learning to low-level tasks, such as image restoration, remain limited, and its effectiveness there is uncertain. This raises a question: why does the contrastive learning paradigm not yield satisfactory results in image restoration? In this paper, we conduct in-depth analyses and propose three guidelines to address this question. In addition, inspired by style transfer and based on contrastive learning, we propose a novel module for image restoration called ConStyle, which can be efficiently integrated into any U-Net-style network. Leveraging the flexibility of ConStyle, we develop a general restoration network; together, ConStyle and this network form an image restoration framework, namely IRConStyle. To demonstrate the capability and compatibility of ConStyle, we replace the general restoration network with transformer-based, CNN-based, and MLP-based networks, respectively. We perform extensive experiments on various image restoration tasks, including denoising, deblurring, deraining, and dehazing. The results on 19 benchmarks demonstrate that ConStyle can be integrated with any U-Net-based network and significantly enhances performance. For instance, ConStyle NAFNet significantly outperforms the original NAFNet on the SOTS outdoor (dehazing) and Rain100H (deraining) datasets, with PSNR improvements of 4.16 dB and 3.58 dB, respectively, while using 85% fewer parameters.
