Differentially Private Distributed Nonconvex Stochastic Optimization with Quantized Communication
Jialong Chen, Jimin Wang, Ji-Feng Zhang
Abstract
This paper proposes a new distributed nonconvex stochastic optimization algorithm that achieves privacy protection, communication efficiency, and convergence simultaneously. Specifically, each node adds general privacy noises to its local state to avoid information leakage, and then quantizes the noise-perturbed state before transmission to improve communication efficiency. By using a subsampling method controlled through the sample-size parameter, the proposed algorithm reduces the cumulative differential privacy parameters ε and δ, and thus enhances the differential privacy level, which differs significantly from existing works. By using a two-time-scale step-size method, mean-square convergence is established for nonconvex cost functions. Furthermore, when the global cost function satisfies the Polyak-Łojasiewicz condition, the convergence rate and the oracle complexity of the proposed algorithm are given. In addition, the proposed algorithm achieves both mean-square convergence and finite cumulative differential privacy parameters ε and δ over infinite iterations as the sample size goes to infinity. A numerical example of distributed training on the MNIST dataset is given to demonstrate the effectiveness of the algorithm.
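The abstract describes a per-node scheme of noise perturbation, quantization, and a two-time-scale update. Below is a minimal sketch of that pattern, assuming Gaussian privacy noise, a uniform quantizer, and generic step sizes alpha_k (gradient) and beta_k (consensus); the function names, quantizer range, and noise distribution are illustrative placeholders, not the paper's exact mechanism, which allows general privacy noises and does not fix these choices in the abstract.

```python
import numpy as np

def quantize(x, levels=16, x_min=-1.0, x_max=1.0):
    """Uniform quantizer: map each entry of x to the nearest of `levels`
    grid points on [x_min, x_max]. The paper's quantizer may differ."""
    grid = np.linspace(x_min, x_max, levels)
    idx = np.argmin(np.abs(x[:, None] - grid[None, :]), axis=1)
    return grid[idx]

def noisy_quantized_broadcast(state, noise_std, rng):
    """One node's transmission step: perturb the local state with privacy
    noise, then quantize the perturbed state before sending to neighbors."""
    perturbed = state + rng.normal(0.0, noise_std, size=state.shape)
    return quantize(perturbed)

def local_update(state, received, weights, stoch_grad, alpha_k, beta_k):
    """Two-time-scale sketch: a consensus correction with step size beta_k
    on the received (noisy, quantized) neighbor states, plus a stochastic
    gradient step with step size alpha_k."""
    consensus = sum(w * (r - state) for w, r in zip(weights, received))
    return state + beta_k * consensus - alpha_k * stoch_grad(state)

# Hypothetical usage for a single node with a toy quadratic loss.
rng = np.random.default_rng(0)
theta = rng.normal(size=5)
message = noisy_quantized_broadcast(theta, noise_std=0.1, rng=rng)
theta = local_update(theta, [message], [0.5],
                     stoch_grad=lambda x: 2 * x + rng.normal(0, 0.01, x.shape),
                     alpha_k=0.05, beta_k=0.5)
```

In this sketch, shrinking noise_std or coarsening the quantizer trades privacy and communication cost against accuracy; the paper's subsampling and step-size schedules, which govern the cumulative ε and δ, are not reflected here.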