
Differentially Private Generalized Linear Models Revisited

2022-05-06

Raman Arora, Raef Bassily, Cristóbal Guzmán, Michael Menart, Enayat Ullah

Abstract

We study the problem of $(\epsilon,\delta)$-differentially private learning of linear predictors with convex losses. We provide results for two subclasses of loss functions. The first case is when the loss is smooth and non-negative but not necessarily Lipschitz (such as the squared loss). For this case, we establish an upper bound on the excess population risk of $\tilde{O}\left(\frac{\Vert w^*\Vert}{\sqrt{n}} + \min\left\{\frac{\Vert w^*\Vert^2}{(n\epsilon)^{2/3}}, \frac{\sqrt{d}\,\Vert w^*\Vert^2}{n\epsilon}\right\}\right)$, where $n$ is the number of samples, $d$ is the dimension of the problem, and $w^*$ is the minimizer of the population risk. Apart from the dependence on $\Vert w^*\Vert$, our bound is essentially tight in all parameters. In particular, we show a lower bound of $\tilde{\Omega}\left(\frac{1}{\sqrt{n}} + \min\left\{\frac{\Vert w^*\Vert^{4/3}}{(n\epsilon)^{2/3}}, \frac{\sqrt{d}\,\Vert w^*\Vert}{n\epsilon}\right\}\right)$. We also revisit the previously studied case of Lipschitz losses [SSTT20]. For this case, we close the gap in the existing work and show that the optimal rate is (up to log factors) $\Theta\left(\frac{\Vert w^*\Vert}{\sqrt{n}} + \min\left\{\frac{\Vert w^*\Vert}{\sqrt{n\epsilon}}, \frac{\sqrt{\mathrm{rank}}\,\Vert w^*\Vert}{n\epsilon}\right\}\right)$, where $\mathrm{rank}$ is the rank of the design matrix. This improves over existing work in the high privacy regime. Finally, our algorithms involve a private model selection approach that we develop to enable attaining the stated rates without a priori knowledge of $\Vert w^*\Vert$.
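
As a rough illustration of how the two terms inside the $\min$ trade off in the smooth, non-negative case, the following Python sketch (not from the paper; the function name, constant factors, and example parameter values are assumptions for illustration only) evaluates the stated upper bound, ignoring constants and log factors, and shows when the dimension-dependent term is preferable.

```python
import math

def smooth_loss_upper_bound(n, d, eps, w_norm):
    """Evaluate the stated excess-risk upper bound (up to constants and log factors)
    for smooth, non-negative, not-necessarily-Lipschitz losses:
        ||w*||/sqrt(n) + min{ ||w*||^2/(n*eps)^(2/3), sqrt(d)*||w*||^2/(n*eps) }.
    Illustrative sketch only; not code from the paper.
    """
    stat_term = w_norm / math.sqrt(n)                  # non-private statistical term
    dim_free = w_norm ** 2 / (n * eps) ** (2 / 3)      # dimension-independent privacy term
    dim_dep = math.sqrt(d) * w_norm ** 2 / (n * eps)   # dimension-dependent privacy term
    return stat_term + min(dim_free, dim_dep)

# Example (hypothetical values): the dimension-dependent term wins only when
# d is small relative to (n*eps)^(2/3).
for d in (10, 10_000):
    bound = smooth_loss_upper_bound(n=50_000, d=d, eps=1.0, w_norm=1.0)
    print(f"d={d}: bound ~ {bound:.4f}")
```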
