
Renyi Differentially Private ADMM for Non-Smooth Regularized Optimization

2019-09-18

Chen Chen, Jaewoo Lee

Abstract

In this paper we consider the problem of minimizing composite objective functions consisting of a convex differentiable loss function plus a non-smooth regularization term, such as the L_1 norm or the nuclear norm, under Rényi differential privacy (RDP). To solve the problem, we propose two stochastic alternating direction method of multipliers (ADMM) algorithms: ssADMM, based on gradient perturbation, and mpADMM, based on output perturbation. Both algorithms decompose the original problem into sub-problems that have closed-form solutions. The first algorithm, ssADMM, applies the recent privacy amplification result for RDP to reduce the amount of noise added. The second algorithm, mpADMM, numerically computes the sensitivity of the ADMM variable updates and releases the updated parameter vector at the end of each epoch. We compare the performance of our algorithms with several baseline algorithms on both real and simulated datasets. Experimental results show that, in high privacy regimes (small ε), ssADMM and mpADMM outperform the baseline algorithms in terms of classification and feature selection performance, respectively.
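The abstract's two key ingredients, gradient perturbation for privacy and closed-form sub-problem solutions, can be illustrated with a generic sketch. The code below is not the paper's ssADMM; it is a hypothetical gradient-perturbed stochastic ADMM for an L_1-regularized least-squares problem (min 0.5*||Ax - b||^2 + lam*||z||_1 subject to x = z), where the z-update has the well-known closed-form soft-thresholding solution. All function names and hyperparameters are illustrative assumptions, and the clipping/noise scheme shown is the standard Gaussian-mechanism pattern, not the paper's exact calibration.

```python
import numpy as np

def soft_threshold(v, t):
    """Closed-form solution of the L1 sub-problem (proximal operator of t*||.||_1)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def private_admm_lasso(A, b, lam=0.1, rho=1.0, sigma=0.5, clip=1.0,
                       iters=50, batch_size=16, rng=None):
    """Illustrative gradient-perturbed ADMM (hypothetical, not the paper's ssADMM).

    x-update: linearized gradient step on a clipped, Gaussian-noised
              stochastic gradient (gradient perturbation).
    z-update: closed-form soft thresholding.
    u-update: standard scaled dual ascent.
    """
    rng = rng or np.random.default_rng(0)
    n, d = A.shape
    x, z, u = np.zeros(d), np.zeros(d), np.zeros(d)
    eta = 0.01  # step size for the linearized x-update (arbitrary choice)
    for _ in range(iters):
        idx = rng.choice(n, size=batch_size, replace=False)
        # Stochastic gradient of the smooth loss on the sampled mini-batch
        g = A[idx].T @ (A[idx] @ x - b[idx]) / batch_size
        norm = np.linalg.norm(g)
        if norm > clip:            # clip to bound the gradient's sensitivity
            g = g * (clip / norm)
        g = g + rng.normal(0.0, sigma * clip, size=d)  # Gaussian noise for privacy
        x = x - eta * (g + rho * (x - z + u))          # noisy linearized x-update
        z = soft_threshold(x + u, lam / rho)           # closed-form z-update
        u = u + x - z                                  # dual variable update
    return z
```

The soft-thresholding step is what makes the non-smooth L_1 sub-problem cheap: it is solved exactly, coordinate-wise, with no inner iterations, which is why ADMM-style splitting pairs naturally with non-smooth regularizers.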