
-Weighted Federated Adversarial Training

2021-09-29

Jianing Zhu, Jiangchao Yao, Tongliang Liu, Kunyang Jia, Jingren Zhou, Bo Han, Hongxia Yang


Abstract

Federated Adversarial Training (FAT) helps address data privacy and governance issues while maintaining model robustness against adversarial attacks. However, the inner-maximization optimization of Adversarial Training can exacerbate the data heterogeneity among local clients, aggravating the pain points of Federated Learning. As a result, the straightforward combination of the two paradigms shows performance deterioration, as observed in previous works. In this paper, we introduce an -Weighted Federated Adversarial Training (-WFAT) method to overcome this problem, which relaxes the inner maximization of Adversarial Training into a lower bound that is friendly to Federated Learning. We present a theoretical analysis of this -weighted mechanism and its effect on the convergence of FAT. Empirically, extensive experiments are conducted to comprehensively understand the characteristics of -WFAT, and the results on three benchmark datasets demonstrate that -WFAT significantly outperforms FAT under different adversarial learning methods and federated optimization methods.
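The abstract does not spell out the relaxation mechanism, but a common way to relax the hard inner maximization of adversarial training is to mix the clean loss and the adversarial loss with a weight. The toy sketch below illustrates that general idea in a federated setting; everything here (the logistic model, the one-step FGSM attack as a stand-in for the inner maximization, the `lam` weight, and the `local_update`/`fed_avg_round` names) is an illustrative assumption, not the paper's actual method.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def grad_w(w, x, y):
    # Gradient of the logistic loss -[y log p + (1-y) log(1-p)], p = sigmoid(w*x),
    # with respect to the scalar weight w.
    return (sigmoid(w * x) - y) * x

def grad_x(w, x, y):
    # Gradient of the same loss with respect to the input x (used by the attack).
    return (sigmoid(w * x) - y) * w

def fgsm(w, x, y, eps):
    # One-step sign attack: a crude, cheap stand-in for the inner maximization.
    g = grad_x(w, x, y)
    return x + eps * (1.0 if g > 0 else -1.0)

def local_update(w, data, lam, eps=0.3, lr=0.5, steps=20):
    # Hypothetical relaxed objective: (1 - lam) * clean loss + lam * adversarial loss.
    # lam = 1 recovers plain adversarial training; lam = 0 recovers clean training.
    for _ in range(steps):
        g = 0.0
        for x, y in data:
            x_adv = fgsm(w, x, y, eps)
            g += (1.0 - lam) * grad_w(w, x, y) + lam * grad_w(w, x_adv, y)
        w -= lr * g / len(data)
    return w

def fed_avg_round(w, client_datasets, lam):
    # One FedAvg communication round: each client trains locally on its own
    # (heterogeneous) data, then the server averages the resulting weights.
    locals_ = [local_update(w, d, lam) for d in client_datasets]
    return sum(locals_) / len(locals_)

clients = [
    [(1.0, 1), (-1.0, 0)],           # client 1: positive x -> label 1
    [(2.0, 1), (-2.0, 0), (1.5, 1)], # client 2: same task, different distribution
]
w = 0.0
for _ in range(5):
    w = fed_avg_round(w, clients, lam=0.5)
print(w > 0.0)  # the averaged model should learn the positive correlation
```

With `lam` between 0 and 1, each client optimizes a lower bound on the full adversarial loss (the mixed objective never exceeds the max over both terms), which is the kind of softening the abstract describes as "friendly to Federated Learning" because it dampens how much the attack amplifies cross-client heterogeneity.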
