Solving a Class of Non-Convex Minimax Optimization in Federated Learning

2023-10-05 · NeurIPS 2023 · Code Available

Xidong Wu, Jianhui Sun, Zhengmian Hu, Aidong Zhang, Heng Huang

Abstract

Minimax problems arise throughout machine learning applications, ranging from adversarial training and policy evaluation in reinforcement learning to AUROC maximization. To address large-scale data challenges across multiple clients with communication-efficient distributed training, federated learning (FL) is gaining popularity. Many optimization algorithms for minimax problems have been developed in the centralized (i.e., single-machine) setting; nonetheless, algorithms for minimax problems under FL remain underexplored. In this paper, we study a class of federated nonconvex minimax optimization problems. We propose two FL algorithms (FedSGDA+ and FedSGDA-M) and improve upon existing complexity results for the most common minimax problems. For nonconvex-concave problems, we propose FedSGDA+ and reduce the communication complexity to O(ε^-6). Under nonconvex-strongly-concave and nonconvex-PL minimax settings, we prove that FedSGDA-M has the best-known sample complexity of O(κ^3 N^-1 ε^-3) and the best-known communication complexity of O(κ^2 ε^-2). FedSGDA-M is the first algorithm to match the best sample complexity O(ε^-3) achieved by the single-machine method under the nonconvex-strongly-concave setting. Extensive experimental results on fair classification and AUROC maximization show the efficiency of our algorithms.
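The federated minimax template the abstract describes can be illustrated with a minimal sketch: each client runs local stochastic gradient descent on the minimization variable x and ascent on the maximization variable y, and the server periodically averages the local iterates. This is only a toy illustration on a quadratic saddle problem of my own choosing; it omits the variance-reduction and momentum machinery that distinguishes FedSGDA+ and FedSGDA-M, and all function names and hyperparameters here are illustrative assumptions, not from the paper.

```python
import random

# Toy client objective (an assumption for illustration, not from the paper):
#   f(x, y) = x^2 + x*y - y^2, a simple saddle problem with solution (0, 0).
def grad_x(x, y):  # partial derivative w.r.t. the min variable x
    return 2 * x + y

def grad_y(x, y):  # partial derivative w.r.t. the max variable y
    return x - 2 * y

def local_sgda(x, y, lr, steps, rng):
    # One client's local rounds: descent on x, ascent on y, with
    # Gaussian noise standing in for stochastic minibatch gradients.
    for _ in range(steps):
        gx = grad_x(x, y) + rng.gauss(0, 0.01)
        gy = grad_y(x, y) + rng.gauss(0, 0.01)
        x = x - lr * gx
        y = y + lr * gy
    return x, y

def fed_sgda(num_clients=4, rounds=200, local_steps=5, lr=0.05, seed=0):
    rng = random.Random(seed)
    x, y = 2.0, -1.0  # server iterates
    for _ in range(rounds):
        updates = [local_sgda(x, y, lr, local_steps, rng)
                   for _ in range(num_clients)]
        # Communication round: server averages the clients' local iterates.
        x = sum(u[0] for u in updates) / num_clients
        y = sum(u[1] for u in updates) / num_clients
    return x, y

x, y = fed_sgda()
print(abs(x) < 0.1 and abs(y) < 0.1)  # iterates settle near the saddle (0, 0)
```

The interplay the paper analyzes lives in the trade-off visible here: more `local_steps` per round lowers communication cost but lets clients drift, which is what the complexity bounds above quantify.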
