
DeblurDiNAT: A Compact Model with Exceptional Generalization and Visual Fidelity on Unseen Domains

2024-03-19

Hanzhou Liu, Binghan Li, Chengkai Liu, Mi Lu


Abstract

Recent deblurring networks effectively restore clear images from blurred ones, but they often generalize poorly to unknown domains. Moreover, these models typically optimize distortion metrics such as PSNR and SSIM while neglecting metrics aligned with human perception. To address these limitations, we propose DeblurDiNAT, a deblurring Transformer based on Dilated Neighborhood Attention. First, DeblurDiNAT employs an alternating dilation factor paradigm to capture both local and global blur patterns, enhancing generalization and perceptual clarity. Second, a local cross-channel learner helps the Transformer block model short-range relationships between adjacent channels. Additionally, we present a linear feed-forward network with a simple yet effective design. Finally, a dual-stage feature fusion module is introduced as an alternative to the existing approach; it efficiently processes multi-scale visual information across network levels. Compared with state-of-the-art models, our compact DeblurDiNAT demonstrates superior generalization and achieves remarkable performance on perceptual metrics while maintaining a favorable model size.
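The alternating-dilation idea can be illustrated with a small index-computation sketch (the helper below is our own, not the paper's code): with dilation 1 a token attends to its immediate neighbors, while a larger dilation spreads a window of the same size over a wider span, so stacking attention blocks that alternate between the two mixes local and global context.

```python
def dilated_neighborhood(center, length, kernel_size=3, dilation=1):
    """Indices attended to by token `center` under dilated neighborhood
    attention on a 1-D sequence of `length` tokens.

    Neighbors are spaced `dilation` apart, and the window center is
    clamped so the whole dilated window stays inside the sequence
    (the usual boundary handling for neighborhood attention).
    Illustrative only; not DeblurDiNAT's actual implementation.
    """
    radius = kernel_size // 2
    span = radius * dilation
    c = min(max(center, span), length - 1 - span)
    return [c + (i - radius) * dilation for i in range(kernel_size)]


# Dilation 1 gives a tight local window; dilation 2 covers a wider span
# with the same number of attended tokens.
print(dilated_neighborhood(4, 8, kernel_size=3, dilation=1))  # [3, 4, 5]
print(dilated_neighborhood(4, 8, kernel_size=3, dilation=2))  # [2, 4, 6]
```

Alternating `dilation=1` and `dilation>1` across consecutive blocks lets the network see fine local blur structure and longer-range blur patterns at roughly the same per-block cost.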

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| GoPro | DeblurDiNAT-L | PSNR | 33.63 | | Unverified |
| HIDE (trained on GoPro) | DeblurDiNAT-L | PSNR (sRGB) | 31.47 | | Unverified |
| RealBlur-J (trained on GoPro) | DeblurDiNAT-L | PSNR (sRGB) | 28.98 | | Unverified |
| RealBlur-R (trained on GoPro) | DeblurDiNAT-L | PSNR (sRGB) | 36.09 | | Unverified |
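The PSNR values above follow the standard definition, 10 * log10(MAX^2 / MSE), computed between the restored and ground-truth images. A minimal sketch (function name ours; real evaluations use image arrays, typically via NumPy or scikit-image):

```python
import math

def psnr(reference, distorted, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two equally sized
    images, given here as flat lists of pixel values."""
    assert len(reference) == len(distorted), "images must match in size"
    mse = sum((a - b) ** 2 for a, b in zip(reference, distorted)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)


# A uniform error of 0.1 on a [0, 1] scale gives MSE = 0.01, i.e. 20 dB.
print(psnr([0.0, 0.0], [0.1, 0.1], max_val=1.0))  # 20.0
```

Higher is better; the "(sRGB)" qualifier in the table indicates the metric is computed in the sRGB color space rather than on raw sensor data.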

Reproductions