Precise Object and Effect Removal with Adaptive Target-Aware Attention
Jixin Zhao, Zhouxia Wang, Peiqing Yang, Shangchen Zhou
- Code: github.com/zjx0101/ObjectClear (Official, ★ 556)
Abstract
Object removal requires eliminating not only the target object but also its associated visual effects, such as shadows and reflections. However, diffusion-based inpainting and removal methods often introduce artifacts, hallucinate content, alter the background, and struggle to remove object effects accurately. To address these challenges, we propose ObjectClear, a novel framework that decouples foreground removal from background reconstruction via an adaptive target-aware attention mechanism. This design enables the model to precisely localize and remove both objects and their effects while maintaining high background fidelity. Moreover, the learned attention maps are leveraged for an attention-guided fusion strategy during inference, further enhancing visual consistency. To facilitate training and evaluation, we construct OBER, a large-scale dataset for OBject-Effect Removal, which provides paired images with and without object effects, along with precise masks for both the objects and their effects. The dataset comprises high-quality captured and simulated data, covering diverse objects, effects, and complex multi-object scenes. Extensive experiments demonstrate that ObjectClear outperforms prior methods, achieving superior object-effect removal quality and background fidelity, especially in challenging scenarios.
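The attention-guided fusion step described above can be sketched as a soft compositing operation. The snippet below is a minimal illustration, not the paper's implementation: it assumes the learned attention map is (or can be resized to) a per-pixel weight in [0, 1], high where the object and its effects were, so the generated pixels are kept there while the original background is copied elsewhere. The function name and array layout are hypothetical.

```python
import numpy as np

def attention_guided_fusion(original: np.ndarray,
                            generated: np.ndarray,
                            attn: np.ndarray) -> np.ndarray:
    """Blend the model output with the input image via a soft attention mask.

    Hypothetical sketch: where attn ~ 1 (object/effect region), take pixels
    from the generated (object-removed) image; where attn ~ 0, keep the
    original input to preserve background fidelity.

    original, generated: (H, W, C) float arrays
    attn: (H, W) float array, values in [0, 1]
    """
    attn = np.clip(attn, 0.0, 1.0)[..., None]  # (H, W) -> (H, W, 1) for broadcast
    return attn * generated + (1.0 - attn) * original

# Toy usage: a 2x2 image where the top-left pixel is fully "object"
original = np.zeros((2, 2, 3))
generated = np.ones((2, 2, 3))
attn = np.array([[1.0, 0.0],
                 [0.5, 0.0]])
fused = attention_guided_fusion(original, generated, attn)
```

Intermediate attention values produce a smooth transition at region boundaries, which is one plausible reason such fusion would reduce visible seams between generated and untouched areas.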