SOTAVerified

Backpropagation Clipping for Deep Learning with Differential Privacy

2022-02-10

Timothy Stevens, Ivoline C. Ngong, David Darais, Calvin Hirsch, David Slater, Joseph P. Near


Abstract

We present backpropagation clipping, a novel variant of differentially private stochastic gradient descent (DP-SGD) for privacy-preserving deep learning. Our approach clips each trainable layer's inputs (during the forward pass) and its upstream gradients (during the backward pass) to ensure bounded global sensitivity for the layer's gradient; this combination replaces the gradient clipping step in existing DP-SGD variants. Our approach is simple to implement in existing deep learning frameworks. The results of our empirical evaluation demonstrate that backpropagation clipping provides higher accuracy at lower values of the privacy parameter ε compared to previous work. We achieve 98.7% accuracy for MNIST with ε = 0.07 and 74% accuracy for CIFAR-10 with ε = 3.64.
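The core idea can be illustrated for a single linear layer: its per-example weight gradient is the outer product of the upstream gradient and the layer input, so clipping both factors bounds the gradient's norm automatically. The sketch below (plain NumPy; the bound names `C_IN` and `C_GRAD` are illustrative, not taken from the paper) shows this for one example:

```python
import numpy as np

def clip_norm(v, bound):
    # Scale v so its L2 norm is at most `bound` (no-op if already within).
    norm = np.linalg.norm(v)
    return v * min(1.0, bound / norm) if norm > 0 else v

# Hypothetical clipping bounds for inputs and upstream gradients.
C_IN, C_GRAD = 1.0, 1.0

rng = np.random.default_rng(0)
x = rng.normal(size=8) * 5.0   # layer input for one example (forward pass)
g = rng.normal(size=4) * 5.0   # upstream gradient for that example (backward pass)

x_c = clip_norm(x, C_IN)       # clip the input during the forward pass
g_c = clip_norm(g, C_GRAD)     # clip the upstream gradient during the backward pass

# For a linear layer, the per-example weight gradient is the outer product
# g^T x, so its Frobenius norm is ||g|| * ||x|| <= C_GRAD * C_IN. Sensitivity
# is bounded without ever clipping the gradient itself.
per_example_grad = np.outer(g_c, x_c)
print(np.linalg.norm(per_example_grad) <= C_GRAD * C_IN + 1e-9)  # True
```

In a real framework this would be done with forward and backward hooks on each trainable layer, with noise added to the summed per-example gradients as in standard DP-SGD.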
