
Modular Distributed Nonconvex Learning with Error Feedback

2025-03-18

Guido Carnevale, Nicola Bastianello


Abstract

In this paper, we design a novel distributed learning algorithm using stochastic compressed communications. In detail, we pursue a modular approach, merging ADMM and a gradient-based approach, benefiting from the robustness of the former and the computational efficiency of the latter. Additionally, we integrate a stochastic integral action (error feedback) enabling almost sure rejection of the compression error. We analyze the resulting method in nonconvex scenarios and guarantee almost sure asymptotic convergence to the set of stationary points of the problem. This result is obtained using system-theoretic tools based on stochastic timescale separation. We corroborate our findings with numerical simulations in nonconvex classification.
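The error-feedback idea described in the abstract can be illustrated with a generic sketch: each node compresses its message before transmission, and the compression error is accumulated in a local memory that is added back in the next round, so the error is rejected over time. The top-k compressor and the function names below are illustrative assumptions, not the paper's actual algorithm (which merges ADMM with a gradient-based method).

```python
import numpy as np

def topk_compress(v, k):
    """Hypothetical lossy compressor: keep the k largest-magnitude
    entries of v and zero out the rest."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def ef_step(grad, memory, k):
    """One error-feedback round (generic sketch, not the paper's method):
    compress the gradient plus the accumulated error, then store the
    residual so it is fed back at the next iteration."""
    corrected = grad + memory
    msg = topk_compress(corrected, k)          # transmitted (compressed) message
    new_memory = corrected - msg               # compression error, fed back later
    return msg, new_memory
```

By construction `msg + new_memory` always equals `grad + memory`, so no information is permanently lost to compression; it is only delayed, which is what enables the asymptotic rejection of the compression error.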
