SOTAVerified

SDWNet: A Straight Dilated Network with Wavelet Transformation for Image Deblurring

2021-10-12

Wenbin Zou, Mingchao Jiang, Yunchen Zhang, Liang Chen, Zhiyong Lu, Yi Wu


Abstract

Image deblurring is a classical computer vision problem that aims to recover a sharp image from a blurred one. To solve it, existing methods typically adopt encoder-decoder architectures with complex network designs to achieve good performance. However, most of these methods rely on repeated up-sampling and down-sampling to enlarge the receptive field, which loses texture information during sampling, and some adopt multi-stage designs that are difficult to converge. Our model instead uses dilated convolution to obtain a large receptive field at high spatial resolution; by fully exploiting different receptive fields, our method achieves better performance. On this basis, we reduce the number of up-sampling and down-sampling operations and design a simple network structure. In addition, we propose a novel module based on the wavelet transform that effectively helps the network recover clear high-frequency texture details. Qualitative and quantitative evaluations on real and synthetic datasets show that our deblurring method is comparable to existing algorithms while requiring far less training. The source code and pre-trained models are available at https://github.com/FlyEgle/SDWNet.
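The wavelet-based module described in the abstract builds on the discrete wavelet transform, which splits an image into a low-frequency approximation and high-frequency detail subbands. As a generic illustration of that decomposition (a single-level 2D Haar transform with exact inverse, not the authors' implementation), the idea can be sketched as:

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2D Haar wavelet transform of a 2D array with even
    dimensions. Returns (LL, LH, HL, HH) subbands at half resolution:
    LL is the low-frequency approximation; LH, HL, HH carry the
    high-frequency texture detail a deblurring network must restore."""
    a = img[0::2, 0::2]  # top-left pixel of each 2x2 block
    b = img[0::2, 1::2]  # top-right
    c = img[1::2, 0::2]  # bottom-left
    d = img[1::2, 1::2]  # bottom-right
    LL = (a + b + c + d) / 2.0  # low-frequency approximation
    LH = (a - b + c - d) / 2.0  # horizontal detail
    HL = (a + b - c - d) / 2.0  # vertical detail
    HH = (a - b - c + d) / 2.0  # diagonal detail
    return LL, LH, HL, HH

def haar_idwt2(LL, LH, HL, HH):
    """Inverse transform: reconstructs the original image exactly."""
    h, w = LL.shape
    img = np.empty((2 * h, 2 * w), dtype=LL.dtype)
    img[0::2, 0::2] = (LL + LH + HL + HH) / 2.0
    img[0::2, 1::2] = (LL - LH + HL - HH) / 2.0
    img[1::2, 0::2] = (LL + LH - HL - HH) / 2.0
    img[1::2, 1::2] = (LL - LH - HL + HH) / 2.0
    return img
```

Because the transform is invertible, a network can process the subbands separately (e.g. refining the high-frequency ones) and reconstruct a full-resolution image without any information loss, in contrast to plain down-sampling.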

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| GoPro | SDWNet | PSNR | 31.36 | | Unverified |
| RealBlur-R (trained on GoPro) | SDWNet | PSNR | 35.85 | | Unverified |