DGMR: Diffusion Guided Masked Reconstruction Framework for Multimodal Cloud Removal
Keywords: coupled and decoupled learning, diffusion guidance, masked reconstruction, noncloudy difference similarity (NDS)
Abstract
Cloudy conditions degrade the quality of data captured by optical satellites. Multimodal techniques rely on synthetic aperture radar (SAR) images to recover cloud-covered pixels in optical images, but they face challenges from sensor noise and from modality and temporal differences between acquisitions. In this work, we propose a diffusion guided masked reconstruction (DGMR) framework for multimodal cloud removal, which consists of a masked reconstruction network (MRNet), a conditional diffusion guidance model (CDGM), and a noncloudy difference similarity (NDS) soft constraint. DGMR effectively extracts local-global relationships and combines complementary information using MRNet with coupled feature fusion and decoupled masked reconstruction. CDGM guides the intermediate features of MRNet to reconstruct more refined cloud-free images, and NDS ensures that the reconstructed output is consistent with temporal changes. DGMR achieves state-of-the-art results on four widely used benchmarks from the SEN12MS-CR, M3R-CR, and SMILE-CR datasets. The code and trained models are available at https://github.com/chouhan-avinash/DGMR/.
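To make the role of the NDS soft constraint concrete, the following is a minimal, hypothetical sketch; the function name, the mask convention (1 = cloudy, 0 = noncloudy), and the mean-absolute-difference penalty are all assumptions for illustration, not the paper's actual formulation:

```python
import numpy as np

def nds_loss(output, cloudy_input, cloud_mask):
    """Hypothetical sketch of a noncloudy difference similarity constraint.

    Assumption: over noncloudy pixels the reconstruction should stay close
    to the observed optical input, so any difference there is penalized.
    """
    noncloudy = cloud_mask == 0      # assumed convention: 1 = cloudy pixel
    if noncloudy.sum() == 0:         # fully cloudy scene: constraint inactive
        return 0.0
    # mean absolute difference restricted to noncloudy pixels
    return float(np.abs(output - cloudy_input)[noncloudy].mean())
```

In training, a term like this would be weighted and added to the main reconstruction loss, softly discouraging the network from altering regions that were never occluded.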