
FedPara: Low-Rank Hadamard Product for Communication-Efficient Federated Learning

2021-08-13 · ICLR 2022 · Code Available

Nam Hyeon-Woo, Moon Ye-Bin, Tae-Hyun Oh

Abstract

In this work, we propose FedPara, a communication-efficient parameterization for federated learning (FL), to overcome the burden of frequent model uploads and downloads. Our method re-parameterizes the weights of each layer as low-rank factors combined by the Hadamard product. Unlike conventional low-rank parameterization, FedPara is not restricted by a low-rank constraint and therefore has far larger capacity. This property enables it to achieve performance comparable to the original model while requiring 3 to 10 times lower communication cost, which traditional low-rank methods cannot attain. The efficiency of our method can be further improved by combining it with other efficient FL optimizers. In addition, we extend our method to a personalized FL application, pFedPara, which separates parameters into global and local ones. We show that pFedPara outperforms competing personalized FL methods with more than three times fewer parameters.
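
The re-parameterization described in the abstract, where a weight matrix W is expressed as the Hadamard product of two low-rank matrices, W = (X1·Y1ᵀ) ⊙ (X2·Y2ᵀ), can be sketched in a few lines of PyTorch. This is a minimal illustration, not the authors' reference implementation: the class name `FedParaLinear`, the shared rank choice, and the initialization scheme below are assumptions for demonstration only.

```python
import torch
import torch.nn as nn

class FedParaLinear(nn.Module):
    """Linear layer whose weight is W = (X1 @ Y1.T) * (X2 @ Y2.T).

    Illustrative sketch of a low-rank Hadamard-product parameterization;
    names, rank choice, and init are assumptions, not the paper's code.
    """

    def __init__(self, in_features: int, out_features: int, rank: int, bias: bool = True):
        super().__init__()
        # Two inner low-rank factor pairs. The Hadamard product of two
        # rank-`rank` matrices can realize ranks up to rank ** 2.
        self.x1 = nn.Parameter(torch.empty(out_features, rank))
        self.y1 = nn.Parameter(torch.empty(in_features, rank))
        self.x2 = nn.Parameter(torch.empty(out_features, rank))
        self.y2 = nn.Parameter(torch.empty(in_features, rank))
        self.bias = nn.Parameter(torch.zeros(out_features)) if bias else None
        for p in (self.x1, self.y1, self.x2, self.y2):
            nn.init.kaiming_uniform_(p, a=5 ** 0.5)

    def weight(self) -> torch.Tensor:
        # Elementwise (Hadamard) product of two low-rank matrices.
        return (self.x1 @ self.y1.T) * (self.x2 @ self.y2.T)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return nn.functional.linear(x, self.weight(), self.bias)


# Usage: with rank 32, 32 ** 2 = 1024 >= min(m, n), so full rank is reachable
# while only the four small factors are communicated each round.
layer = FedParaLinear(1024, 1024, rank=32)
dense_params = 1024 * 1024                                   # 1,048,576
factor_params = sum(p.numel() for p in layer.parameters()
                    if p.ndim == 2)                          # 4 * 1024 * 32 = 131,072
```

The point of the construction is visible in the counts above: the four factors use far fewer parameters than the dense weight, yet the Hadamard product is not capped at the factors' individual ranks, which is how the method avoids the rank ceiling of a plain low-rank factorization at the same communication cost.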
