Deep Learning with Data Privacy via Residual Perturbation

2024-08-11

Wenqi Tao, Huaming Ling, Zuoqiang Shi, Bao Wang

Abstract

Protecting data privacy in deep learning (DL) is of crucial importance. Several celebrated privacy notions have been established and used for privacy-preserving DL. However, many existing mechanisms achieve privacy at the cost of significant utility degradation and computational overhead. In this paper, we propose a stochastic differential equation-based residual perturbation for privacy-preserving DL, which injects Gaussian noise into each residual mapping of ResNets. Theoretically, we prove that residual perturbation guarantees differential privacy (DP) and reduces the generalization gap of DL. Empirically, we show that residual perturbation is computationally efficient and outperforms the state-of-the-art differentially private stochastic gradient descent (DPSGD) in utility maintenance without sacrificing membership privacy.
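The core mechanism described in the abstract — injecting Gaussian noise into each residual mapping of a ResNet — can be sketched as follows. This is a minimal, hypothetical illustration, not the authors' implementation: the residual mapping `F` is reduced to a single linear layer with ReLU, and the noise scale `sigma` is an arbitrary placeholder rather than a calibrated privacy parameter.

```python
import numpy as np

def residual_block(x, weight, sigma=0.1, rng=None):
    """One perturbed residual mapping: y = x + F(x) + sigma * N(0, I).

    Sketch only. F(x) is a toy residual branch (linear map + ReLU);
    in the paper's setting, sigma would be chosen to satisfy the
    differential-privacy guarantee, which is not modeled here.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    fx = np.maximum(weight @ x, 0.0)               # residual branch F(x)
    noise = sigma * rng.standard_normal(x.shape)   # injected Gaussian noise
    return x + fx + noise

# Example: a 4-dimensional input passed through one noisy residual block.
x = np.ones(4)
W = np.eye(4)
y = residual_block(x, W, sigma=0.1)
```

With `sigma=0.0` the block reduces to an ordinary residual mapping `x + F(x)`; the privacy guarantee in the paper comes from keeping `sigma > 0` at every residual block.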
