
Correlation inference attacks against machine learning models

2021-12-16 · Code Available

Ana-Maria Creţu, Florent Guépin, Yves-Alexandre de Montjoye


Abstract

Despite machine learning models being widely used today, the relationship between a model and its training dataset is not well understood. We explore correlation inference attacks: whether and when a model leaks information about the correlations between the input variables of its training dataset. We first propose a model-less attack, where an adversary exploits the spherical parametrization of correlation matrices alone to make an informed guess. Second, we propose a model-based attack, where an adversary exploits black-box model access to infer the correlations under minimal and realistic assumptions. Third, we evaluate our attacks against logistic regression and multilayer perceptron models on three tabular datasets and show that the models leak correlations. Finally, we show how the extracted correlations can be used as building blocks for attribute inference attacks, enabling weaker adversaries. Our results raise fundamental questions about what a model does and should remember from its training set.
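The model-less attack mentioned in the abstract rests on a known property of correlation matrices: positive semidefiniteness constrains the feasible values of one pairwise correlation given the others, and the spherical parametrization expresses this constraint through an angle. The sketch below illustrates the idea for three variables (two features and a target); the function names and the Monte Carlo guessing strategy are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def correlation_bounds(r1y, r2y):
    # Feasible range of r12 so that [[1, r12, r1y], [r12, 1, r2y], [r1y, r2y, 1]]
    # remains a valid (positive semidefinite) correlation matrix.
    center = r1y * r2y
    half = np.sqrt((1.0 - r1y**2) * (1.0 - r2y**2))
    return center - half, center + half

def model_less_guess(r1y, r2y, n=100_000, rng=None):
    # Illustrative "informed guess": under the spherical parametrization,
    # r12 = r1y*r2y + sqrt((1-r1y^2)(1-r2y^2)) * cos(theta) for some angle
    # theta; sampling theta uniformly on (0, pi) and taking the median
    # gives a guess near the center of the feasible interval.
    rng = np.random.default_rng(0) if rng is None else rng
    theta = rng.uniform(0.0, np.pi, n)
    lo, hi = correlation_bounds(r1y, r2y)
    samples = (lo + hi) / 2.0 + (hi - lo) / 2.0 * np.cos(theta)
    return float(np.median(samples))

# Example: both features correlate 0.5 with the target.
lo, hi = correlation_bounds(0.5, 0.5)   # feasible interval for r12
guess = model_less_guess(0.5, 0.5)      # guess near the interval's center
```

Note how informative this prior already is: with r1y = r2y = 0.5 the feasible interval for r12 is [-0.5, 1.0] rather than the full [-1, 1], which is the information an adversary can exploit before ever querying a model.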
