Reveal Your Images: Gradient Leakage Attack against Unbiased Sampling-Based Secure Aggregation
Y Yang, Z Ma, B Xiao, Y Liu, T Li, J Zhang
- Code: github.com/Echotoken/GLAUS (PyTorch)
Abstract
Recently, several Unbiased Gradient Sampling-based (UGS) methods have been proposed to mitigate gradient leakage by introducing sampling and unbiased transformation, such as MinMax Sampling in SIGMOD '22. In this paper, we first propose a novel gradient leakage attack, GLAUS, to show that UGS schemes still expose the privacy of private data points. Specifically, for the secure aggregation scenario above, where the real gradient is not available, we present an approach to approximately infer the real gradient for gradient leakage attacks. Once the gradient can be approximately recovered, the security of the UGS framework degrades to that of original federated learning. The approximate gradient is refined in three steps: 1) narrow the gradient search range to a finite set; 2) approximately obtain the magnitude of each gradient value; 3) revise the gradient signs. Extensive experiments on six datasets show that our attack effectively reconstructs private data points with pixel-wise accuracy across four network sizes and three image resolutions. Finally, we present a secure aggregation method that defends against the proposed attack while largely preserving the unbiasedness and communication efficiency of UGS.
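To make the intuition behind the three refinement steps concrete, here is a minimal toy sketch, not the GLAUS algorithm itself: it assumes a simplified MinMax-style unbiased rounding in which each gradient value is stochastically rounded to one of two endpoints. The `minmax_sample` helper, the endpoint choice, and the averaging over rounds are all illustrative assumptions; they only show why a finite value set plus unbiasedness lets an observer approximate magnitudes and then recover most signs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the real gradient, which the attacker never sees directly.
true_grad = rng.normal(size=1000)
lo, hi = true_grad.min(), true_grad.max()

def minmax_sample(g, lo, hi, rng):
    """Simplified MinMax-style unbiased rounding (illustrative assumption):
    each value is rounded to lo or hi with probabilities chosen so that
    the expectation equals the original value."""
    p_hi = (g - lo) / (hi - lo)
    return np.where(rng.random(g.shape) < p_hi, hi, lo)

# Step 1: every observed value lies in the finite set {lo, hi},
# which narrows the gradient search range.
# Step 2: averaging several unbiased sampled rounds approximates
# the magnitude of each gradient value.
rounds = [minmax_sample(true_grad, lo, hi, rng) for _ in range(200)]
approx = np.mean(rounds, axis=0)

# Step 3: sign revision -- in this toy setting, most signs already
# emerge correctly from the averaged estimate.
sign_match = float(np.mean(np.sign(approx) == np.sign(true_grad)))
mean_abs_err = float(np.mean(np.abs(approx - true_grad)))
```

In this simplified setting most signs and magnitudes are recovered well; the paper's contribution is doing the analogous refinement under the real UGS protocol, where such repeated clean observations are not directly available.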