
Mitigate Replication and Copying in Diffusion Models with Generalized Caption and Dual Fusion Enhancement

2023-09-13

Chenghao Li, Dake Chen, Yuke Zhang, Peter A. Beerel


Abstract

While diffusion models demonstrate a remarkable capability for generating high-quality images, their tendency to "replicate" training data raises privacy concerns. Although recent research suggests that this replication may stem from insufficiently generalized training-data captions and duplicated training images, effective mitigation strategies remain elusive. To address this gap, this paper first introduces a generality score that measures caption generality and employs a large language model (LLM) to generalize training captions. We then leverage these generalized captions and propose a novel dual fusion enhancement approach to mitigate replication in diffusion models. Our empirical results demonstrate that the proposed methods reduce replication by 43.5% compared to the original diffusion model while maintaining the diversity and quality of generations. Code is available at https://github.com/HowardLi0816/dual-fusion-diffusion.
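The caption-generalization step described in the abstract might be sketched as follows. This is a toy illustration only: the paper's actual generality score and LLM prompting are not specified on this page, so `generality_score` and the small hypernym map below are hypothetical stand-ins for the LLM-based rewriting.

```python
# Toy sketch of caption generalization: replace overly specific entities
# in a training caption with broader terms before fine-tuning a diffusion
# model. In the paper an LLM performs the rewriting; a hand-built hypernym
# map stands in here so the example is self-contained.
HYPERNYMS = {
    "dalmatian": "dog",
    "golden retriever": "dog",
    "ferrari": "car",
    "eiffel tower": "landmark",
}

def generalize_caption(caption: str) -> str:
    """Replace specific entities with broader terms (toy LLM stand-in)."""
    out = caption.lower()
    for specific, general in HYPERNYMS.items():
        out = out.replace(specific, general)
    return out

def generality_score(caption: str) -> float:
    """Toy proxy for caption generality: the fraction of words that are
    not part of a known specific term. (The paper defines its own score;
    this is purely illustrative.)"""
    words = caption.lower().split()
    if not words:
        return 1.0
    specific_words = {w for term in HYPERNYMS for w in term.split()}
    return sum(w not in specific_words for w in words) / len(words)

caption = "a dalmatian sitting next to a ferrari"
print(generality_score(caption))    # lower score = more specific caption
print(generalize_caption(caption))  # "a dog sitting next to a car"
```

Captions scoring below some generality threshold would be rewritten before training, which is the intuition behind using an LLM to broaden overly specific captions that encourage memorization.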
