
Membership Inference Attacks and Privacy in Topic Modeling

2024-03-07 · Code Available

Nico Manzonelli, Wanrong Zhang, Salil Vadhan

Abstract

Recent research shows that large language models are susceptible to privacy attacks that infer aspects of the training data. However, it is unclear whether simpler generative models, such as topic models, share similar vulnerabilities. In this work, we propose an attack against topic models that can confidently identify members of the training data in Latent Dirichlet Allocation (LDA). Our results suggest that the privacy risks associated with generative modeling are not restricted to large neural models. Additionally, to mitigate these vulnerabilities, we explore differentially private (DP) topic modeling. We propose a framework for private topic modeling that incorporates DP vocabulary selection as a pre-processing step, and show that it improves privacy while having limited effects on practical utility.
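At a high level, a membership inference attack of this kind scores how well the trained model explains a candidate document and thresholds that score: training members tend to be better explained than non-members. The sketch below illustrates this general loss-based idea on a toy LDA model using scikit-learn; it is not the paper's exact attack, and the data, scoring choice, and thresholding are illustrative assumptions.

```python
# Illustrative loss-based membership inference against an LDA topic
# model. NOT the paper's attack: the data, scoring, and decision rule
# are toy assumptions chosen only to show the general idea.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

member_docs = ["apple banana apple fruit", "banana fruit smoothie apple"] * 10
nonmember_docs = ["rocket launch orbit satellite", "engine thrust orbit launch"] * 10

# Fit the vectorizer on all documents so every word has an index,
# but train the topic model on the members only.
vec = CountVectorizer()
vec.fit(member_docs + nonmember_docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(vec.transform(member_docs))

def doc_score(doc):
    # Approximate log-likelihood of one document under the fitted
    # model; training members should tend to score higher.
    return lda.score(vec.transform([doc]))

mem_scores = [doc_score(d) for d in member_docs]
non_scores = [doc_score(d) for d in nonmember_docs]
# An attacker would threshold doc_score(d) to guess membership.
```

On this toy data the gap between member and non-member scores is large; the paper's point is that even for such simple generative models, this gap can be exploited confidently.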
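For the DP vocabulary selection step, one common building block is noisy count thresholding: bound each document's contribution to the word counts, add Laplace noise, and keep only words whose noisy count clears a threshold. The sketch below is a minimal illustration of that idea, not the paper's mechanism; all names and parameters are assumptions, and a real system would need additional care (e.g. DP set-union-style mechanisms) to release the selected word set itself privately.

```python
# Hypothetical sketch of DP vocabulary selection by noisy-count
# thresholding. Names, parameters, and the exact mechanism are
# illustrative assumptions, not the paper's method.
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)

def dp_vocabulary(docs, epsilon=1.0, threshold=5.0, max_words_per_doc=10):
    # Cap each document's contribution so that adding or removing one
    # document changes the count vector by at most max_words_per_doc
    # (its L1 sensitivity).
    counts = Counter()
    for doc in docs:
        counts.update(sorted(set(doc.split()))[:max_words_per_doc])
    # Laplace noise scaled to sensitivity / epsilon, then threshold.
    # (Enumerating candidate words privately needs extra care in a
    # real system, e.g. a DP set-union mechanism.)
    vocab = [w for w, c in counts.items()
             if c + rng.laplace(scale=max_words_per_doc / epsilon) > threshold]
    return sorted(vocab)
```

Intuitively, frequent words survive the noisy threshold while words idiosyncratic to a single document are filtered out, which is exactly what limits the vocabulary's leakage about any one training document.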
