
Fast Server Learning Rate Tuning for Coded Federated Dropout

2022-01-26

Giacomo Verardo, Daniel Barreira, Marco Chiesa, Dejan Kostic, Gerald Q. Maguire Jr


Abstract

In cross-device Federated Learning (FL), clients with low computational power train a common machine learning model by exchanging parameter updates instead of potentially private data. Federated Dropout (FD) is a technique that improves the communication efficiency of an FL session by selecting a subset of model parameters to be updated in each training round. However, compared to standard FL, FD yields considerably lower accuracy and requires a longer convergence time. In this paper, we leverage coding theory to enhance FD by allowing a different sub-model to be used at each client. We also show that by carefully tuning the server learning rate hyper-parameter, we can achieve higher training speed while matching the final accuracy of the no-dropout case. For the EMNIST dataset, our mechanism achieves 99.6% of the final accuracy of the no-dropout case while requiring 2.43× less bandwidth to reach that level of accuracy.
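The abstract describes two ingredients: per-client sub-models (each client trains only a masked subset of the parameters) and a server learning rate applied when aggregating updates. A minimal NumPy sketch of that aggregation step is below; the random per-client masks and the `server_update` helper are illustrative assumptions, not the paper's actual coding-theory construction, which builds the sub-models in a structured, non-random way.

```python
import numpy as np

def make_submodel_masks(n_params, keep_frac, n_clients, rng):
    # Give each client a different random boolean mask (a different
    # sub-model). This only mimics the "different sub-models per client"
    # idea; the paper derives its masks from coding theory instead.
    k = int(n_params * keep_frac)
    return [np.isin(np.arange(n_params),
                    rng.choice(n_params, size=k, replace=False))
            for _ in range(n_clients)]

def server_update(weights, client_deltas, masks, server_lr):
    # Aggregate masked client updates: each parameter is averaged only
    # over the clients that actually trained it, then the result is
    # scaled by the server learning rate (the hyper-parameter tuned
    # in the paper) before being applied to the global model.
    agg = np.zeros_like(weights)
    counts = np.zeros_like(weights)
    for delta, mask in zip(client_deltas, masks):
        agg[mask] += delta[mask]
        counts[mask] += 1
    trained = counts > 0
    agg[trained] /= counts[trained]
    return weights + server_lr * agg
```

With `keep_frac = 0.5`, each client uploads only half the parameters per round, which is the source of the bandwidth savings; the server learning rate then controls how aggressively the averaged (and partially overlapping) updates move the global model.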
