
1-Diffractor: Efficient and Utility-Preserving Text Obfuscation Leveraging Word-Level Metric Differential Privacy

2024-05-02

Stephen Meisenbacher, Maulik Chevli, Florian Matthes


Abstract

The study of privacy-preserving Natural Language Processing (NLP) has gained increasing attention in recent years. One promising avenue studies the integration of Differential Privacy in NLP, which has brought about innovative methods in a variety of application settings. Of particular note are word-level Metric Local Differential Privacy (MLDP) mechanisms, which obfuscate potentially sensitive input text by performing word-by-word perturbations. Although these methods have shown promising results in empirical tests, they suffer from two major drawbacks: (1) the inevitable loss of utility due to the addition of noise, and (2) the computational expense of running these mechanisms on high-dimensional word embeddings. In this work, we aim to address these challenges by proposing 1-Diffractor, a new mechanism that achieves substantial speedups over previous mechanisms while still demonstrating strong utility- and privacy-preserving capabilities. We evaluate 1-Diffractor for utility on several NLP tasks, for theoretical and task-based privacy, and for efficiency in terms of speed and memory. 1-Diffractor shows significant improvements in efficiency while maintaining competitive utility and privacy scores across all conducted comparative tests against previous MLDP mechanisms. Our code is made available at: https://github.com/sjmeis/Diffractor.
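To make the word-by-word perturbation idea concrete, here is a minimal sketch of a classic word-level metric-DP mechanism (MADLIB-style: add noise with density proportional to exp(-ε·||z||) to a word's embedding, then snap to the nearest vocabulary word). The toy vocabulary, random embeddings, and function names are illustrative assumptions; this is not the paper's 1-Diffractor mechanism, which is designed to avoid exactly this kind of high-dimensional nearest-neighbour search.

```python
import numpy as np

def perturb_word(word, vocab, emb, epsilon, rng):
    """MADLIB-style word-level metric-DP perturbation (illustrative sketch,
    not 1-Diffractor): noise the word's embedding, return the nearest word."""
    v = emb[vocab.index(word)]
    d = v.shape[0]
    # Sample noise with density proportional to exp(-epsilon * ||z||):
    # a uniform direction scaled by a Gamma(d, 1/epsilon) magnitude.
    direction = rng.normal(size=d)
    direction /= np.linalg.norm(direction)
    magnitude = rng.gamma(shape=d, scale=1.0 / epsilon)
    noisy = v + magnitude * direction
    # Snap back to the vocabulary via nearest-neighbour search -- the
    # expensive step that motivates more efficient mechanisms.
    dists = np.linalg.norm(emb - noisy, axis=1)
    return vocab[int(np.argmin(dists))]

rng = np.random.default_rng(0)
vocab = ["cat", "dog", "car", "bus", "tree"]          # toy vocabulary (assumption)
emb = rng.normal(size=(len(vocab), 8))                # toy embeddings (assumption)
out = [perturb_word(w, vocab, emb, epsilon=5.0, rng=rng) for w in ["cat", "bus"]]
```

Smaller ε means larger noise and hence a higher chance of the word being replaced; the per-word nearest-neighbour scan over the full embedding matrix is what makes such mechanisms costly at scale.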
