
An Improved Algorithm for Clustered Federated Learning

2022-10-20 · Code Available

Harshvardhan, Avishek Ghosh, Arya Mazumdar


Abstract

In this paper, we address the dichotomy between heterogeneous models and simultaneous training in Federated Learning (FL) via a clustering framework. We define a new clustering model for FL based on the (optimal) local models of the users: two users belong to the same cluster if their local models are close; otherwise they belong to different clusters. A standard algorithm for clustered FL, called IFCA, was proposed in (Ghosh et al., 2021); it requires suitable initialization and knowledge of hyper-parameters such as the number of clusters (which is often difficult to obtain in practical applications) in order to converge. We propose an improved algorithm, the Successive Refine Federated Clustering Algorithm (SR-FCA), which removes these restrictive assumptions. SR-FCA initializes each user as a singleton cluster and then successively refines the cluster estimates by exploiting the similarity of users belonging to the same cluster. At each intermediate step, SR-FCA runs a robust federated learning algorithm within each cluster, both to exploit simultaneous training and to correct clustering errors. Moreover, SR-FCA does not require a good initialization (warm start), either in theory or in practice. We show that with a proper choice of learning rate, SR-FCA incurs arbitrarily small clustering error. Finally, we validate the performance of our algorithm on standard FL datasets with non-convex models such as neural networks, demonstrating the benefits of SR-FCA over baselines.
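The refinement loop described in the abstract can be sketched on a toy problem. This is a minimal illustration under stated assumptions, not the paper's implementation: the function names (`refine_clusters`, `sr_fca_sketch`), the distance `threshold`, the quadratic per-user loss, and the learning rate are all illustrative choices, and plain within-cluster averaging stands in for the robust aggregation step the paper actually uses.

```python
import numpy as np

def refine_clusters(models, threshold):
    """Merge users whose current models lie within `threshold` of each
    other (single-linkage merging via union-find)."""
    n = len(models)
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(models[i] - models[j]) <= threshold:
                parent[find(i)] = find(j)

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

def sr_fca_sketch(local_optima, threshold, lr=0.5, rounds=5):
    """Toy SR-FCA-style loop: start from singleton clusters (each user
    keeps its own model), then alternate (a) re-clustering by model
    distance and (b) training within each cluster. Each user's loss is
    the quadratic ||w - opt_i||^2 / 2, so a local step pulls the
    cluster centre toward that user's optimum."""
    models = [np.array(opt, dtype=float) for opt in local_optima]  # singleton init
    clusters = [[i] for i in range(len(models))]
    for _ in range(rounds):
        clusters = refine_clusters(models, threshold)
        for members in clusters:
            # NOTE: plain mean here; the paper uses a robust aggregator.
            centre = np.mean([models[i] for i in members], axis=0)
            for i in members:
                # one local gradient step from the cluster centre
                models[i] = centre - lr * (centre - np.asarray(local_optima[i]))
    return models, clusters
```

On four users whose local optima form two well-separated groups (e.g. near 0 and near 10 with `threshold=1.0`), the loop recovers the two clusters without being told their number, which is the point of removing IFCA's initialization and cluster-count assumptions.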
