SOTAVerified

SDDNet: Style-guided Dual-layer Disentanglement Network for Shadow Detection

2023-08-17 · Code Available

Runmin Cong, Yuchen Guan, Jinpeng Chen, Wei Zhang, Yao Zhao, Sam Kwong


Abstract

Despite significant progress in shadow detection, current methods still struggle with the adverse impact of background color, which may lead to errors when shadows are present on complex backgrounds. Drawing inspiration from the human visual system, we treat the input shadow image as a composition of a background layer and a shadow layer, and design a Style-guided Dual-layer Disentanglement Network (SDDNet) to model these layers independently. To achieve this, we devise a Feature Separation and Recombination (FSR) module that decomposes multi-level features into shadow-related and background-related components by offering specialized supervision for each component, while preserving information integrity and avoiding redundancy through the reconstruction constraint. Moreover, we propose a Shadow Style Filter (SSF) module to guide the feature disentanglement by focusing on style differentiation and uniformization. With these two modules and our overall pipeline, our model effectively minimizes the detrimental effects of background color, yielding superior performance on three public datasets with a real-time inference speed of 32 FPS.
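The core idea of the FSR module, as described above, is that the shadow-related and background-related components must jointly preserve the original features (a reconstruction constraint). A minimal numpy sketch of that constraint is below; the fixed channel masks stand in for SDDNet's learned convolutional branches, so all names and shapes here are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical multi-level feature map (channels x H x W).
features = rng.standard_normal((8, 16, 16))

# SDDNet learns branches that split features into shadow-related and
# background-related parts; here, complementary channel masks merely
# illustrate the decomposition, not the actual learned mechanism.
mask = rng.random((8, 1, 1)) > 0.5
shadow_part = features * mask
background_part = features * ~mask

# Reconstruction constraint: recombining the two components should
# recover the original features, preserving information integrity
# while avoiding redundancy between the branches.
reconstruction = shadow_part + background_part
rec_loss = np.abs(reconstruction - features).mean()  # 0.0 for exact masks
```

In the actual network this constraint is a training loss rather than an exact identity, penalizing any information lost or duplicated during disentanglement.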

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| CUHK-Shadow | SDDNet (MM 2023), 512x512 | BER | 7.65 | — | Unverified |
| CUHK-Shadow | SDDNet (MM 2023), 256x256 | BER | 8.66 | — | Unverified |
| SBU / SBU-Refine | SDDNet (MM 2023), 512x512 | BER | 4.86 | — | Unverified |
| SBU / SBU-Refine | SDDNet (MM 2023), 256x256 | BER | 5.39 | — | Unverified |
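The BER metric in the table is the Balanced Error Rate, the standard shadow-detection score (lower is better): the average of the false-negative rate over shadow pixels and the false-positive rate over non-shadow pixels, scaled by 100. A minimal sketch of its computation (function name and toy data are illustrative):

```python
import numpy as np

def balanced_error_rate(pred, gt):
    """BER = 0.5 * (FNR + FPR) * 100, where FNR is the error rate on
    shadow pixels and FPR the error rate on non-shadow pixels.

    pred, gt: binary arrays (1 = shadow, 0 = non-shadow).
    """
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()    # shadow pixels correctly detected
    tn = np.logical_and(~pred, ~gt).sum()  # non-shadow pixels correctly kept
    n_pos = gt.sum()
    n_neg = (~gt).sum()
    return 0.5 * ((1 - tp / n_pos) + (1 - tn / n_neg)) * 100

# Toy 2x2 mask: one of two shadow pixels is missed (FNR = 0.5, FPR = 0).
gt = np.array([[1, 1], [0, 0]])
pred = np.array([[1, 0], [0, 0]])
ber = balanced_error_rate(pred, gt)  # -> 25.0
```

Because BER averages the two class-wise error rates, it is insensitive to the heavy class imbalance typical of shadow masks, which is why it is preferred over plain pixel accuracy on these benchmarks.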
