LoFormer: Local Frequency Transformer for Image Deblurring
Xintian Mao, Jiansheng Wang, Xingran Xie, Qingli Li, Yan Wang
Code
- github.com/deepmed-lab-ecnu/single-image-deblur (official, in paper, PyTorch, ★ 45)
- github.com/INVOKERer/LoFormer (PyTorch, ★ 60)
Abstract
Due to the computational complexity of self-attention (SA), prevalent image deblurring techniques resort either to localized SA or to coarse-grained global SA, which respectively compromise global modeling or lack fine-grained correlation. To model long-range dependencies without sacrificing fine-grained detail, we introduce the Local Frequency Transformer (LoFormer). Within each unit of LoFormer, we incorporate Local Channel-wise SA in the frequency domain (Freq-LC) to simultaneously capture cross-covariance within low- and high-frequency local windows. This design (1) ensures equitable learning opportunities for both coarse-grained structures and fine-grained details, and (2) explores a broader range of representational properties than coarse-grained global SA. Additionally, we introduce an MLP Gating mechanism complementary to Freq-LC, which filters out irrelevant features while enhancing global learning. Our experiments demonstrate that LoFormer significantly improves image deblurring performance, achieving 34.09 dB PSNR on the GoPro dataset at 126G FLOPs. Code: https://github.com/DeepMed-Lab-ECNU/Single-Image-Deblur
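The core idea of Freq-LC can be sketched as: transform features to the frequency domain, partition the frequency plane into local windows so low- and high-frequency bands are attended to separately, and compute channel-wise (cross-covariance) attention inside each window. The sketch below is a minimal NumPy illustration of that flow, not the paper's implementation; the window size, single-head layout, and magnitude-based attention scores are assumptions for readability.

```python
import numpy as np

def freq_lc_attention(x, window=4):
    """Hypothetical sketch of Local Channel-wise SA in the frequency
    domain (Freq-LC). x: feature map of shape (C, H, W), with H and W
    divisible by `window` (an assumed hyperparameter)."""
    C, H, W = x.shape
    # Per-channel 2D FFT: spatial features -> frequency coefficients.
    X = np.fft.fft2(x, axes=(-2, -1))
    out = np.zeros_like(X)
    # Attend within each local frequency window, so low- and
    # high-frequency regions each get their own attention map.
    for i in range(0, H, window):
        for j in range(0, W, window):
            blk = X[:, i:i + window, j:j + window].reshape(C, -1)
            # Channel-wise cross-covariance: (C, C) scores, not (HW, HW).
            # Magnitude of the complex inner product is an assumed
            # simplification to obtain real-valued attention weights.
            scores = np.abs(blk @ blk.conj().T) / np.sqrt(blk.shape[1])
            attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
            attn /= attn.sum(axis=-1, keepdims=True)
            out[:, i:i + window, j:j + window] = (attn @ blk).reshape(
                C, window, window)
    # Back to the spatial domain.
    return np.fft.ifft2(out, axes=(-2, -1)).real
```

Note the complexity advantage motivating channel-wise attention: the score matrix is C x C per window rather than (HW) x (HW), so cost stays linear in spatial resolution while every window, regardless of frequency band, receives its own fine-grained attention map.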
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| HIDE (trained on GoPro) | LoFormer | PSNR (sRGB) | 31.86 | — | Unverified |
| RealBlur-J | LoFormer | PSNR (sRGB) | 32.90 | — | Unverified |
| RealBlur-R | LoFormer | PSNR (sRGB) | 40.23 | — | Unverified |