Unbounded Gradients in Federated Learning with Buffered Asynchronous Aggregation
2022-10-03
Mohammad Taha Toghani, César A. Uribe
Abstract
Synchronous updates may compromise the efficiency of cross-device federated learning once the number of active clients increases. The FedBuff algorithm (Nguyen et al., 2022) alleviates this problem by allowing asynchronous updates (staleness), which enhances the scalability of training while preserving privacy via secure aggregation. We revisit the FedBuff algorithm for asynchronous federated learning and extend the existing analysis by removing the boundedness assumption on the gradient norm. This paper presents a theoretical analysis of the convergence rate of this algorithm under heterogeneity in data, batch size, and delay.
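To make the buffered-aggregation idea concrete, below is a minimal, hypothetical sketch of a FedBuff-style server loop: clients compute local updates asynchronously (possibly against a stale model), and the server applies an averaged update only once its buffer holds a fixed number of client deltas. All function names, the toy quadratic loss, and the simulation of staleness are illustrative assumptions, not details from the paper.

```python
import numpy as np

def local_sgd(model, data, lr=0.1, steps=5):
    """A client runs a few local SGD steps on a toy quadratic loss
    (x*w - y)^2 and returns only its model delta."""
    w = model.copy()
    x, y = data
    for _ in range(steps):
        grad = 2 * x * (x * w - y)  # gradient of (x*w - y)^2 w.r.t. w
        w -= lr * grad
    return w - model  # the client transmits the delta, not the model

def fedbuff_server(clients, rounds=20, buffer_size=2, server_lr=1.0):
    """Server aggregates only when the buffer holds `buffer_size` deltas.
    In a real deployment each delta may be computed against an older
    (stale) model snapshot; here we simplify and use the current model."""
    model = np.zeros(1)
    buffer = []
    rng = np.random.default_rng(0)
    for _ in range(rounds):
        c = int(rng.integers(len(clients)))  # an arbitrary client finishes
        buffer.append(local_sgd(model, clients[c]))
        if len(buffer) >= buffer_size:
            # Buffered aggregation: average the deltas, then one server step.
            model = model + server_lr * np.mean(buffer, axis=0)
            buffer = []
    return model

# Two clients with heterogeneous data consistent with y = 2x.
clients = [(np.array([1.0]), np.array([2.0])),
           (np.array([2.0]), np.array([4.0]))]
w = fedbuff_server(clients)
print(float(w[0]))  # converges near the shared optimum w = 2
```

The buffer decouples client arrival from server updates: no client waits on the slowest participant, yet each server step still averages several contributions, which is what makes secure aggregation over the buffer possible.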