FedMorph: Communication Efficient Federated Learning via Morphing Neural Network
Guoqing Ma, Chuanting Zhang, Basem Shihada
Abstract
Communication and computation on heterogeneous edge networks are the two fundamental bottlenecks in Federated Learning (FL), restricting both model capacity and user participation. To address these issues, we present FedMorph, an approach that automatically morphs the global neural network into a sub-network to reduce both communication and local computation overheads. At the beginning of each communication round, FedMorph distills a fresh sub-network from the original one while keeping its `knowledge' as close as possible to the model aggregated from local clients in a federated averaging (FedAvg)-like manner. The network-morphing process incorporates constraints, e.g., model size or computational FLOPs, as an extra regularizer in the objective function. To make the objective function solvable, we relax it using the concept of a soft mask. We empirically show that FedMorph, without any other tricks, reduces communication and computation overheads and increases generalization accuracy. For example, it provides an 85% reduction in server-to-client communication and an 18% reduction in local device computation on the MNIST dataset with ResNet8 as the training network. Combined with benchmark compression approaches, e.g., Top-K sparsification, FedMorph provides an 847× reduction in upload communication.
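To illustrate the soft-mask relaxation described in the abstract, the sketch below shows one plausible way a size constraint can enter the objective as a regularizer: each channel gets a differentiable gate in (0, 1), and the expected fraction of kept channels is penalized when it exceeds a budget. This is a minimal, hypothetical sketch, not the paper's actual formulation; the function names, the hinge penalty, and the `size_budget` parameter are all assumptions for illustration.

```python
import numpy as np

def sigmoid(x):
    # Differentiable gate mapping real-valued logits to (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def soft_mask_objective(distill_loss, mask_logits, size_budget, lam=1.0):
    """Relaxed objective (illustrative): distillation loss plus a hinge
    penalty when the expected (soft) sub-network size exceeds the budget.

    distill_loss : scalar loss keeping the sub-network's `knowledge' close
                   to the aggregated global model
    mask_logits  : per-channel logits of the soft mask
    size_budget  : target fraction of channels to keep (the constraint)
    lam          : weight of the size regularizer
    """
    soft_sizes = sigmoid(mask_logits)        # per-channel keep probabilities
    expected_size = soft_sizes.mean()        # expected fraction of kept channels
    penalty = max(0.0, expected_size - size_budget)
    return distill_loss + lam * penalty

# Example: 8 channel gates, half strongly on, half strongly off
logits = np.array([4.0, 4.0, 4.0, 4.0, -4.0, -4.0, -4.0, -4.0])
loss = soft_mask_objective(distill_loss=0.3, mask_logits=logits, size_budget=0.6)
# Expected mask size is 0.5, within the 0.6 budget, so no penalty is added
```

Because the mask is soft, the whole objective remains differentiable, so the sub-network structure can be optimized with gradient methods before discretizing the gates.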