
Embedding Physical Reasoning into Diffusion-Based Shadow Generation

2026-03-18

Shilin Hu, Jingyi Xu, Akshat Dave, Dimitris Samaras, Hieu Le


Abstract

Generating realistic shadows for inserted objects requires reasoning about scene geometry and illumination. However, most existing methods operate purely in image space, leaving the physical relationship between objects, lighting, and shadows to be learned implicitly, which often results in misaligned or implausible shadows. We instead ground shadow generation in the physics of shadow formation. Given a composite image and an object mask, we recover approximate scene geometry and estimate a dominant light direction, from which we derive a physics-grounded shadow estimate via geometric reasoning. While coarse, this estimate provides a spatial anchor for shadow placement. Because illumination cannot always be uniquely inferred from a single image, we predict confidence scores for both lighting and shadow cues and use them to regulate their influence during generation. These cues (shadow mask, light direction, and their confidences) condition a diffusion-based generator that refines the estimate into a realistic shadow. Experiments on DESOBAV2 show that our method improves both shadow realism and localization, achieving 23% lower shadow-region RMSE and 30% lower shadow-region BER than the prior state of the art.
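The abstract's physics-grounded shadow estimate can be illustrated with a minimal sketch: project each object pixel along the image-plane component of the light direction, offset in proportion to its height above a flat ground plane. This is not the paper's implementation; the function `cast_shadow` and its inputs (`obj_mask`, a per-pixel `height` map standing in for the recovered geometry, and a 2D `light_dir`) are hypothetical names chosen for illustration, and the flat-ground assumption is a simplification.

```python
import numpy as np

def cast_shadow(obj_mask: np.ndarray, height: np.ndarray,
                light_dir: tuple[float, float]) -> np.ndarray:
    """Coarse shadow mask: shift each object pixel opposite the light
    by an offset proportional to its height above the ground plane.

    obj_mask  -- binary (H, W) mask of the inserted object
    height    -- (H, W) map of estimated heights above the ground
    light_dir -- image-plane projection (dx, dy) of the light direction
    """
    h, w = obj_mask.shape
    shadow = np.zeros_like(obj_mask)
    dx, dy = light_dir
    ys, xs = np.nonzero(obj_mask)
    for y, x in zip(ys, xs):
        # Taller points cast their shadow farther from the object base.
        sx = int(round(x - dx * height[y, x]))
        sy = int(round(y - dy * height[y, x]))
        if 0 <= sy < h and 0 <= sx < w:
            shadow[sy, sx] = 1
    return shadow
```

In the paper's pipeline, an estimate like this would only anchor shadow placement; the diffusion generator, gated by the predicted lighting and shadow confidences, refines it into a realistic soft shadow.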