SOTAVerified

Achieving Dimension-Free Communication in Federated Learning via Zeroth-Order Optimization

2024-05-24 · Code Available

Zhe Li, Bicheng Ying, Zidong Liu, Chaosheng Dong, Haibo Yang


Abstract

Federated Learning (FL) offers a promising framework for collaborative and privacy-preserving machine learning across distributed data sources. However, the substantial communication costs associated with FL significantly challenge its efficiency. Specifically, in each communication round, the communication cost scales linearly with the model's dimension, which presents a formidable obstacle, especially in large-model scenarios. Despite various communication-efficient strategies, this intrinsic dimension-dependent cost remains a major bottleneck for current FL implementations. This paper proposes a novel dimension-free communication algorithm, DeComFL, which leverages zeroth-order optimization techniques to reduce the communication cost from O(d) to O(1) by transmitting only a constant number of scalar values between clients and the server in each round, regardless of the dimension d of the model parameters. Theoretically, for non-convex functions, we prove that our algorithm achieves state-of-the-art convergence rates, exhibiting linear speedup in the number of clients and local steps under standard assumptions. Under an additional low-effective-rank assumption, we further show that the convergence rate is independent of the model dimension d. Empirical evaluations, encompassing both classic deep learning training and large language model fine-tuning, demonstrate significant reductions in communication overhead. Notably, DeComFL transmits only around 1MB of data in total between the server and a client to fine-tune a model with billions of parameters. Our code is available at https://github.com/ZidongLiu/DeComFL.
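The abstract's key mechanism is that a random perturbation direction can be regenerated from a shared seed, so only a seed and a scalar loss difference need to cross the wire. The sketch below illustrates that idea in a single-client toy round; the function names, the one-client setup, and the plain SGD update are illustrative assumptions, not the actual DeComFL algorithm (which involves multiple clients, local steps, and server-side aggregation).

```python
import numpy as np

def zo_grad_scalar(loss_fn, x, seed, eps=1e-3):
    """Client side: estimate the directional derivative of loss_fn at x
    along a seeded random direction, returning only one scalar
    (O(1) communication, independent of the model dimension)."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(x.shape)  # direction regenerated from seed
    return (loss_fn(x + eps * z) - loss_fn(x - eps * z)) / (2 * eps)

def server_update(x, grad_scalar, seed, lr=0.01):
    """Server side: regenerate the same direction from the shared seed
    and step along it, scaled by the received scalar."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(x.shape)
    return x - lr * grad_scalar * z

# Toy round: minimize ||x||^2; only (seed, scalar) are "communicated".
loss = lambda x: float(x @ x)
x = np.ones(50)  # stand-in "model" with d = 50 parameters
for step in range(500):
    seed = step                      # seed shared by client and server
    g = zo_grad_scalar(loss, x, seed)  # client sends back one scalar
    x = server_update(x, g, seed)      # server reconstructs direction
```

Per round, the payload is two numbers rather than a length-d vector, which is the source of the O(d) to O(1) reduction the abstract describes.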

Benchmark Results

| Dataset | Model    | Metric        | Claimed | Verified | Status     |
|---------|----------|---------------|---------|----------|------------|
| BoolQ   | OPT-1.3B | Test Accuracy | 62.5    | —        | Unverified |
| BoolQ   | OPT-125M | Test Accuracy | 61.6    | —        | Unverified |
| CB      | OPT-125M | Test Accuracy | 75.0    | —        | Unverified |
| CB      | OPT-1.3B | Test Accuracy | 75.71   | —        | Unverified |
| RTE     | OPT-125M | Test Accuracy | 57.05   | —        | Unverified |
| RTE     | OPT-1.3B | Test Accuracy | 60.89   | —        | Unverified |
| SST-2   | OPT-1.3B | Test Accuracy | 90.78   | —        | Unverified |
| SST-2   | OPT-125M | Test Accuracy | 85.08   | —        | Unverified |
| WiC     | OPT-125M | Test Accuracy | 53.38   | —        | Unverified |
| WiC     | OPT-1.3B | Test Accuracy | 56.14   | —        | Unverified |
| WSC     | OPT-1.3B | Test Accuracy | 64.16   | —        | Unverified |
| WSC     | OPT-125M | Test Accuracy | 59.59   | —        | Unverified |