
Fair NLP Models with Differentially Private Text Encoders

2021-11-16 · ACL ARR November 2021

Anonymous

Abstract

Encoded text representations often capture sensitive attributes about individuals (e.g., gender, race, or age), which can raise privacy concerns and make downstream models unfair to certain groups. In this work, we propose FEDERATE, an approach that combines ideas from differential privacy and adversarial learning to learn private text representations that also induce fairer models. We empirically evaluate the trade-off between the privacy of the representations and the fairness and accuracy of the downstream model on two challenging NLP tasks. Our results show that FEDERATE consistently improves upon previous methods.
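The differential-privacy component of such an approach is typically a randomized mechanism applied to the encoder output. As an illustrative sketch only (the paper's exact mechanism and calibration are not specified here), the following `privatize` function shows the standard Gaussian mechanism: clip a representation to a fixed L2 norm, then add Gaussian noise with scale calibrated to an (ε, δ) privacy budget. All names and parameter choices below are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def privatize(z, clip=1.0, epsilon=1.0, delta=1e-5, rng=None):
    """Gaussian-mechanism sketch (assumed, not the paper's exact method):
    1. Clip the representation z to L2 norm <= `clip` (bounds sensitivity).
    2. Add Gaussian noise with sigma = clip * sqrt(2 ln(1.25/delta)) / epsilon,
       the classical calibration for (epsilon, delta)-differential privacy.
    """
    rng = rng or np.random.default_rng(0)
    z = np.asarray(z, dtype=float)
    norm = np.linalg.norm(z)
    if norm > clip:
        z = z * (clip / norm)  # rescale so sensitivity is bounded by `clip`
    sigma = clip * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return z + rng.normal(0.0, sigma, size=z.shape)
```

With a very large ε the noise vanishes and the function reduces to plain L2 clipping; smaller ε (stronger privacy) injects more noise, which is the source of the privacy/accuracy trade-off the abstract evaluates.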
