ShadowFormer: Global Context Helps Image Shadow Removal
Lanqing Guo, Siyu Huang, Ding Liu, Hao Cheng, Bihan Wen
Code
- github.com/guolanqing/shadowformer (official, in paper; PyTorch, ★ 178)
- github.com/BlackJoke76/OmniSR (PyTorch, ★ 25)
Abstract
Recent deep learning methods have achieved promising results in image shadow removal. However, most existing approaches operate locally within shadow and non-shadow regions, producing severe artifacts around shadow boundaries and inconsistent illumination between the two regions. It remains challenging for deep shadow removal models to exploit the global contextual correlation between shadow and non-shadow regions. In this work, we first propose a Retinex-based shadow model, from which we derive a novel transformer-based network, dubbed ShadowFormer, that exploits non-shadow regions to help restore shadow regions. A multi-scale channel attention framework hierarchically captures global information. On top of it, we propose a Shadow-Interaction Module (SIM) with Shadow-Interaction Attention (SIA) in the bottleneck stage to effectively model the contextual correlation between shadow and non-shadow regions. We conduct extensive experiments on three popular public datasets, ISTD, ISTD+, and SRD, to evaluate the proposed method. Our method achieves state-of-the-art performance while using up to 150× fewer model parameters.
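To make the core idea concrete, below is a minimal, illustrative PyTorch sketch of a shadow-interaction attention layer. It is not the authors' implementation (see the official repository for that); it only shows one plausible reading of the abstract: standard multi-head self-attention whose attention map is reweighted so that query-key pairs crossing the shadow boundary (shadow attending to non-shadow, and vice versa) are emphasized. The class name, the `1 + cross` reweighting, and the renormalization step are all assumptions made for illustration.

```python
import torch
import torch.nn as nn


class ShadowInteractionAttention(nn.Module):
    """Illustrative sketch (not the paper's exact SIA): self-attention over
    spatial tokens, reweighted to boost shadow <-> non-shadow interactions."""

    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        assert dim % heads == 0
        self.heads = heads
        self.scale = (dim // heads) ** -0.5
        self.qkv = nn.Linear(dim, dim * 3, bias=False)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # x: (B, N, C) flattened feature tokens; mask: (B, N), 1 = shadow pixel
        B, N, C = x.shape
        qkv = self.qkv(x).reshape(B, N, 3, self.heads, C // self.heads)
        q, k, v = qkv.permute(2, 0, 3, 1, 4)            # each: (B, heads, N, d)
        attn = (q @ k.transpose(-2, -1)) * self.scale   # (B, heads, N, N)
        attn = attn.softmax(dim=-1)
        # Interaction map (assumed form): 1 where query and key lie on opposite
        # sides of the shadow boundary, 0 otherwise; used to boost those pairs.
        cross = (mask.unsqueeze(2) != mask.unsqueeze(1)).float()  # (B, N, N)
        attn = attn * (1.0 + cross.unsqueeze(1))        # broadcast over heads
        attn = attn / attn.sum(dim=-1, keepdim=True)    # renormalize rows
        out = (attn @ v).transpose(1, 2).reshape(B, N, C)
        return self.proj(out)
```

In the paper the SIM sits in the bottleneck of a multi-scale encoder-decoder, so `N` would be the (small) number of bottleneck tokens and `mask` a downsampled shadow mask; this sketch leaves those surrounding stages out.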
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| ISTD | ShadowFormer (AAAI 2023) | MAE | 4.79 | — | Unverified |
| ISTD | ShadowFormer (512×512) | RMSE | 3.06 | — | Unverified |
| ISTD | ShadowFormer (256×256) | RMSE | 3.45 | — | Unverified |
| SRD | ShadowFormer (512×512) | RMSE | 3.90 | — | Unverified |
| SRD | ShadowFormer (256×256) | RMSE | 4.44 | — | Unverified |