
Comparing privacy notions for protection against reconstruction attacks in machine learning

2025-02-06

Sayan Biswas, Mark Dras, Pedro Faustini, Natasha Fernandes, Annabelle McIver, Catuscia Palamidessi, Parastoo Sadeghi


Abstract

Within the machine learning community, reconstruction attacks are a principal concern and have been identified even in federated learning (FL), which was designed with privacy preservation in mind. In response to these threats, the privacy community recommends the use of differential privacy (DP) in the stochastic gradient descent algorithm, termed DP-SGD. However, the proliferation of variants of DP in recent years, such as metric privacy, has made it challenging to conduct a fair comparison between different mechanisms, due to the different meanings of the privacy parameters ε and δ across different variants. Thus, interpreting the practical implications of ε and δ in the FL context and amongst variants of DP remains ambiguous. In this paper, we lay a foundational framework for comparing mechanisms with differing notions of privacy guarantees, namely (ε, δ)-DP and metric privacy. We provide two foundational means of comparison: firstly, via the well-established (ε, δ)-DP guarantees, made possible through the Rényi differential privacy framework; and secondly, via Bayes' capacity, which we identify as an appropriate measure for reconstruction threats.
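The abstract's second comparison route, Bayes capacity, is a standard leakage measure from quantitative information flow: for a finite mechanism viewed as a channel matrix C[x][y] = P(output y | secret x), the multiplicative Bayes capacity is the sum of the column-wise maxima, and it bounds the multiplicative advantage of any Bayesian adversary. The sketch below is illustrative only and is not taken from the paper; the randomized-response mechanism and all function names are our own assumptions, used to show how the quantity is computed.

```python
import math

def bayes_capacity(channel):
    """Multiplicative Bayes capacity of a channel matrix C[x][y] = P(y|x):
    the sum over output columns of the per-column maximum probability."""
    n_outputs = len(channel[0])
    return sum(max(row[y] for row in channel) for y in range(n_outputs))

def randomized_response(k, eps):
    """k-ary randomized response, a textbook eps-DP mechanism (illustrative):
    report the true value with probability e^eps / (k - 1 + e^eps),
    otherwise report one of the k-1 other values uniformly."""
    p_true = math.exp(eps) / (k - 1 + math.exp(eps))
    p_other = 1.0 / (k - 1 + math.exp(eps))
    return [[p_true if x == y else p_other for y in range(k)]
            for x in range(k)]

# For randomized response the capacity has the closed form k*e^eps/(k-1+e^eps),
# so with k=2 and eps=ln(3) the capacity is 2 * (3/4) = 1.5.
C = randomized_response(2, math.log(3))
print(bayes_capacity(C))
```

Because the capacity grows monotonically with ε here, it gives a single scale on which mechanisms with differently parameterised guarantees can be placed, which is the kind of comparison the paper develops.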
