
Does Training with Synthetic Data Truly Protect Privacy?

2025-02-18

Yunpeng Zhao, Jie Zhang


Abstract

As synthetic data becomes increasingly popular in machine learning tasks, numerous methods without formal differential privacy guarantees use synthetic data for training. These methods often claim, either explicitly or implicitly, to protect the privacy of the original training data. In this work, we explore four different training paradigms: coreset selection, dataset distillation, data-free knowledge distillation, and synthetic data generated from diffusion models. While all of these methods use synthetic data for training, they lead to vastly different conclusions regarding privacy preservation. We caution that empirical approaches to preserving data privacy require careful and rigorous evaluation; otherwise, they risk providing a false sense of privacy.
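The rigorous empirical evaluation the abstract calls for is commonly operationalized with membership inference attacks, which test whether a trained model leaks information about individual training examples. The sketch below is a minimal loss-threshold membership inference attack on a toy logistic regression; the data, model, and threshold choice are all illustrative assumptions, not the paper's evaluation protocol.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_blobs(n):
    """Toy two-class Gaussian data (a hypothetical stand-in for private data)."""
    X = np.vstack([rng.normal(-0.5, 1.0, size=(n // 2, 2)),
                   rng.normal(+0.5, 1.0, size=(n // 2, 2))])
    y = np.repeat([0.0, 1.0], n // 2)
    return X, y

X_mem, y_mem = make_blobs(40)   # members: used to train the model
X_non, y_non = make_blobs(40)   # non-members: same distribution, never seen

# Overfit a small logistic regression on the members via gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-(X_mem @ w + b)))
    grad = p - y_mem
    w -= 0.5 * (X_mem.T @ grad) / len(y_mem)
    b -= 0.5 * grad.mean()

def per_example_loss(X, y):
    """Per-example cross-entropy loss under the trained model."""
    p = np.clip(1.0 / (1.0 + np.exp(-(X @ w + b))), 1e-12, 1 - 1e-12)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

losses = np.concatenate([per_example_loss(X_mem, y_mem),
                         per_example_loss(X_non, y_non)])
is_member = np.repeat([True, False], 40)

# Loss-threshold attack: predict "member" when the loss is below the median.
# Accuracy noticeably above 0.5 suggests the model leaks membership information.
attack_acc = ((losses < np.median(losses)) == is_member).mean()
print(f"membership-inference accuracy: {attack_acc:.2f}")
```

The same attack can be aimed at a model trained only on synthetic data: if membership of the *original* examples is still inferable, the synthetic-data pipeline did not actually protect them.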
