SOTAVerified

Efficient Visual State Space Model for Image Deblurring

2024-05-23 · CVPR 2025 · Code Available

Lingshun Kong, Jiangxin Dong, Ming-Hsuan Yang, Jinshan Pan


Abstract

Convolutional neural networks (CNNs) and Vision Transformers (ViTs) have achieved excellent performance in image restoration. ViTs typically yield superior results in image restoration compared to CNNs due to their ability to capture long-range dependencies and input-dependent characteristics. However, the computational complexity of Transformer-based models grows quadratically with the image resolution, limiting their practical appeal in high-resolution image restoration tasks. In this paper, we propose a simple yet effective visual state space model (EVSSM) for image deblurring, bringing the benefits of state space models (SSMs) to visual data. In contrast to existing methods that employ several fixed-direction scans for feature extraction, which significantly increases the computational cost, we develop an efficient visual scan block that applies various geometric transformations before each SSM-based module, capturing useful non-local information while maintaining high efficiency. Extensive experimental results show that the proposed EVSSM performs favorably against state-of-the-art image deblurring methods on benchmark datasets and real-captured images.
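The core idea in the abstract — replacing several fixed-direction scans with a single scan preceded by a rotating geometric transformation — can be sketched as follows. This is an illustrative toy, not the authors' implementation: `simple_scan` is a stand-in for the actual SSM module, and the transform cycle and function names are assumptions for demonstration.

```python
import numpy as np

def simple_scan(x):
    """Placeholder for an SSM-based scan: a cumulative sum over the
    flattened feature map, mimicking a single-direction sequential scan."""
    h, w = x.shape
    return np.cumsum(x.reshape(-1)).reshape(h, w)

# Cycle of cheap geometric transforms applied before successive blocks.
# Each block still performs only ONE scan, but the changing orientation
# lets the stack aggregate information along different directions.
TRANSFORMS = [
    lambda x: x,            # identity
    lambda x: x[:, ::-1],   # horizontal flip
    lambda x: x.T,          # transpose (square maps assumed here)
    lambda x: x[::-1, :],   # vertical flip
]

def efficient_visual_scan(x, n_blocks=4):
    """Transform-then-scan for each block, one scan direction per block."""
    for i in range(n_blocks):
        t = TRANSFORMS[i % len(TRANSFORMS)]
        x = simple_scan(t(x))
    return x
```

The point of the sketch: a four-direction scan per block would cost roughly 4x one scan, whereas flips and transposes are nearly free, so varying the orientation across blocks recovers multi-directional context at close to single-scan cost.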

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| GoPro | EVSSM | PSNR | 34.5 | | Unverified |
| HIDE | EVSSM | PSNR | 31.97 | | Unverified |
| RealBlur-J | EVSSM | PSNR | 34.15 | | Unverified |
| RealBlur-R | EVSSM | PSNR | 41.04 | | Unverified |
| Real-world Dataset | EVSSM | PSNR | 48.78 | | Unverified |

Reproductions