CAFE: Catastrophic Data Leakage in Federated Learning

2021-01-01

Xiao Jin, Ruijie Du, Pin-Yu Chen, Tianyi Chen

Abstract

Private training data can be leaked through the gradient sharing mechanism deployed in machine learning systems such as federated learning (FL). Increasing the batch size is often viewed as a promising defense against data leakage. In this paper, we revisit this defense premise and propose an advanced data leakage attack that efficiently recovers batch data from the shared aggregated gradients. We name our method catastrophic data leakage in federated learning (CAFE). Compared to existing data leakage attacks, CAFE performs large-batch data leakage with high data recovery quality. Experimental results on vertical and horizontal FL settings validate the effectiveness of CAFE in recovering private data from shared aggregated gradients. Our results suggest that data participating in FL, especially in the vertical case, are at high risk of being leaked from the training gradients, implying unprecedented and practical data leakage risks in these learning settings.
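To illustrate the underlying risk the abstract describes (this is a generic gradient leakage example, not the CAFE algorithm itself), consider the well-known analytical attack on a single linear layer: for a one-sample update, the gradient with respect to the weights is an outer product of the logit error and the input, so the private input can be reconstructed exactly from the shared gradients. A minimal NumPy sketch, with all names and dimensions chosen for illustration:

```python
import numpy as np

# Hypothetical illustration (not the CAFE method): for a single-sample
# update on a linear layer with bias, dL/dW = delta * x^T and dL/db = delta,
# so the private input x leaks exactly from the shared gradients.

rng = np.random.default_rng(0)

d, k = 8, 3                       # input dimension, number of classes
W = rng.normal(size=(k, d))       # linear-layer weights
b = np.zeros(k)                   # bias

x = rng.normal(size=d)            # the client's private training sample
y = 1                             # its label

# Forward pass: softmax cross-entropy on logits W x + b.
logits = W @ x + b
p = np.exp(logits - logits.max())
p /= p.sum()

# Backward pass: delta = p - one_hot(y); these are the gradients
# the client would share with the server.
delta = p.copy()
delta[y] -= 1.0
grad_W = np.outer(delta, x)       # shape (k, d)
grad_b = delta                    # shape (k,)

# Attack: any row i with grad_b[i] != 0 reveals x = grad_W[i] / grad_b[i].
i = int(np.argmax(np.abs(grad_b)))
x_rec = grad_W[i] / grad_b[i]

print(np.allclose(x_rec, x))      # exact recovery of the private input
```

Averaging gradients over larger batches entangles these outer products, which is why batch size is often proposed as a defense; CAFE's contribution is showing that aggregation alone does not prevent high-quality recovery.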
