Differentially Private Stochastic Optimization: New Results in Convex and Non-Convex Settings

2021-07-12 · NeurIPS 2021

Raef Bassily, Cristóbal Guzmán, Michael Menart

Abstract

We study differentially private stochastic optimization in convex and non-convex settings. For the convex case, we focus on the family of non-smooth generalized linear losses (GLLs). Our algorithm for the $\ell_2$ setting achieves optimal excess population risk in near-linear time, while the best known differentially private algorithms for general convex losses run in super-linear time. Our algorithm for the $\ell_1$ setting has nearly-optimal excess population risk $\tilde{O}\big(\sqrt{\log d}/(n\varepsilon)\big)$, and circumvents the dimension-dependent lower bound of [Asi et al., 2021] for general non-smooth convex losses. In the differentially private non-convex setting, we provide several new algorithms for approximating stationary points of the population risk. For the $\ell_1$-case with smooth losses and polyhedral constraint, we provide the first nearly dimension-independent rate, $\tilde{O}\big(\log^{2/3}(d)/(n\varepsilon)^{1/3}\big)$, in linear time. For the constrained $\ell_2$-case with smooth losses, we obtain a linear-time algorithm with rate $\tilde{O}\big(1/n^{1/3} + d^{1/5}/(n\varepsilon)^{2/5}\big)$. Finally, for the $\ell_2$-case we provide the first method for non-smooth weakly convex stochastic optimization with rate $\tilde{O}\big(1/n^{1/4} + d^{1/6}/(n\varepsilon)^{1/3}\big)$, which matches the best existing non-private algorithm when $d = O(\sqrt{n})$. We also extend all our results above for the non-convex $\ell_2$ setting to the $\ell_p$ setting, where $1 < p \le 2$, with only polylogarithmic (in the dimension) overhead in the rates.
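
To ground the setting, below is a minimal sketch of noisy projected SGD, the standard primitive underlying differentially private stochastic optimization in the $\ell_2$ setting. It is not the paper's algorithm: the squared loss (a simple generalized linear loss), the $1/\sqrt{T}$ step size, and the heuristic noise calibration are illustrative assumptions chosen only to make the sketch runnable.

```python
import numpy as np

def project_l2_ball(w, radius):
    """Euclidean projection onto the l2 ball of the given radius."""
    norm = np.linalg.norm(w)
    return w if norm <= radius else w * (radius / norm)

def dp_sgd(X, y, epsilon, delta, radius=1.0, clip=1.0, seed=0):
    """One-pass noisy projected SGD on a squared loss (illustrative only).

    Per-example gradients are clipped to norm `clip`, and Gaussian noise
    heuristically calibrated to (epsilon, delta)-DP over T compositions
    is added before each update. Not the paper's method.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    T = n                                 # one pass: near-linear total work
    sigma = clip * np.sqrt(2.0 * T * np.log(1.25 / delta)) / epsilon
    eta = radius / np.sqrt(T)             # standard 1/sqrt(T) step size
    w = np.zeros(d)
    for t in range(T):
        x_t, y_t = X[t], y[t]
        g = (x_t @ w - y_t) * x_t         # gradient of 0.5 * (x.w - y)^2
        g_norm = np.linalg.norm(g)
        if g_norm > clip:
            g *= clip / g_norm            # clipping bounds per-example sensitivity
        g += rng.normal(0.0, sigma, size=d)   # Gaussian mechanism
        w = project_l2_ball(w - eta * g, radius)
    return w

# Toy usage on synthetic linear-regression data.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 10))
w_true = project_l2_ball(rng.normal(size=10), 1.0)
y = X @ w_true + 0.1 * rng.normal(size=500)
w_priv = dp_sgd(X, y, epsilon=1.0, delta=1e-5)
print("distance to true model:", np.linalg.norm(w_priv - w_true))
```

Clipping bounds each example's contribution to the gradient, so Gaussian noise scaled to the clip norm yields privacy under composition; the single pass over the data mirrors the near-linear running time emphasized in the abstract.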
