Enhancing Privacy-Utility Trade-offs to Mitigate Memorization in Diffusion Models

2025-04-25 · CVPR 2025

Chen Chen, Daochang Liu, Mubarak Shah, Chang Xu

Abstract

Text-to-image diffusion models have demonstrated remarkable capabilities in creating images highly aligned with user prompts. However, their tendency to memorize training images raises concerns about the originality of generated outputs and about privacy, potentially exposing both model owners and users to legal complications, particularly when the memorized images contain proprietary content. Although mitigation methods have been proposed, improving privacy typically causes a significant drop in output utility, as measured by text-alignment scores. To bridge this gap, we introduce PRSS, a novel method that refines classifier-free guidance in diffusion models by integrating prompt re-anchoring (PR) to improve privacy and semantic prompt search (SS) to enhance utility. Extensive experiments across various privacy levels demonstrate that our approach consistently improves the privacy-utility trade-off, establishing a new state of the art.
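Since the abstract describes PRSS as a refinement of classifier-free guidance (CFG), a minimal sketch may help fix intuitions. The sketch below shows standard CFG and a hypothetical re-anchored variant that blends guidance toward an alternative "anchor" prompt's noise prediction; the specific blending scheme, the `anchor_weight` parameter, and all function names are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def cfg_noise(eps_uncond, eps_cond, guidance_scale):
    """Standard classifier-free guidance: extrapolate from the
    unconditional prediction toward the prompt-conditioned one."""
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

def re_anchored_cfg_noise(eps_uncond, eps_cond, eps_anchor,
                          guidance_scale, anchor_weight):
    """Hypothetical prompt-re-anchoring variant (illustrative only):
    mix the guidance direction with the direction toward an anchor
    prompt's prediction, pulling sampling away from a potentially
    memorized trajectory. anchor_weight=0 recovers plain CFG."""
    guided = eps_cond - eps_uncond
    anchored = eps_anchor - eps_uncond
    mixed = (1.0 - anchor_weight) * guided + anchor_weight * anchored
    return eps_uncond + guidance_scale * mixed
```

In a real sampler these `eps_*` arrays would be U-Net noise predictions at each denoising step; here they are plain arrays so the combination logic can be checked in isolation.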
