An Operator Splitting View of Federated Learning
Saber Malekmohammadi, Kiarash Shaloudegi, Zeou Hu, YaoLiang Yu
Abstract
Over the past few years, the federated learning (FL) community has witnessed a proliferation of new FL algorithms. However, our understanding of the theory of FL is still fragmented, and a thorough, formal comparison of these algorithms remains elusive. Motivated by this gap, we show that many existing FL algorithms can be understood from an operator splitting point of view. This unification allows us to compare different algorithms with ease, to refine previous convergence results, and to uncover new algorithmic variants. In particular, our analysis reveals the vital role played by the step size in FL algorithms. The unification also leads to a streamlined and economical way to accelerate FL algorithms, without incurring any communication overhead. We perform numerical experiments on both convex and nonconvex models to validate our findings.
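As a brief sketch of what the splitting viewpoint means (the notation here is ours, chosen for illustration, not quoted from the paper): with a global objective averaged over clients, a proximal-style local solve corresponds to a backward (implicit) step of the client's operator, a plain local gradient update corresponds to a forward (explicit) step, and server-side averaging realizes the parallel splitting across clients. The step size \(\eta\) appears in both local updates, which is where its central role in convergence enters.

```latex
% Global objective over N clients (illustrative notation):
\[
\min_{x}\; f(x) \;=\; \frac{1}{N}\sum_{i=1}^{N} f_i(x).
\]
% Proximal-style local update: a backward (implicit) step,
\[
x_i^{t+1} \;=\; \operatorname{prox}_{\eta f_i}\!\big(x^t\big)
          \;=\; \operatorname*{arg\,min}_{z}\; f_i(z) + \frac{1}{2\eta}\,\big\|z - x^t\big\|^2,
\]
% Gradient-style local update: a forward (explicit) step,
\[
x_i^{t+1} \;=\; x^t - \eta\,\nabla f_i(x^t),
\]
% Server aggregation: the parallel part of the splitting,
\[
x^{t+1} \;=\; \frac{1}{N}\sum_{i=1}^{N} x_i^{t+1}.
\]
```

Under this reading, comparing FL algorithms reduces to comparing which operators (forward, backward, or combinations) each one applies locally before the averaging step.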