SOTAVerified

Distributed Optimization

The goal of distributed optimization is to optimize an objective defined over millions or billions of data points that are distributed across many machines, by exploiting the combined computational power of those machines.

Source: Analysis of Distributed Stochastic Dual Coordinate Ascent
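The setup above can be illustrated with a minimal data-parallel sketch: each machine holds a shard of the data, computes the gradient of a shared least-squares objective on its shard, and a coordinator sums the local gradients into one global descent step. The function names and the synchronous-coordinator setup are illustrative assumptions, not the method of any particular listed paper.

```python
import numpy as np

def local_gradient(w, X, y):
    # Gradient of 0.5 * ||Xw - y||^2 on one machine's data shard.
    # In a real system this runs on the machine that stores (X, y).
    return X.T @ (X @ w - y)

def distributed_gd(shards, dim, lr=0.01, steps=300):
    # Synchronous data-parallel gradient descent (illustrative sketch):
    # a coordinator sums the per-shard gradients, which equals the
    # full-data gradient, and applies one global update per round.
    w = np.zeros(dim)
    for _ in range(steps):
        g = sum(local_gradient(w, X, y) for X, y in shards)
        w -= lr * g
    return w
```

Because the least-squares gradient is a sum over data points, the per-shard gradients add up exactly to the centralized gradient; the communication pattern, not the math, is what changes in the distributed setting.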

Papers

Showing 391-400 of 536 papers

| Title | Status | Hype |
|---|---|---|
| Qsparse-local-SGD: Distributed SGD with Quantization, Sparsification and Local Computations | Code | 0 |
| Layer-wise Adaptive Gradient Sparsification for Distributed Deep Learning with Convergence Guarantees | | 0 |
| vqSGD: Vector Quantized Stochastic Gradient Descent | | 0 |
| Learning-Accelerated ADMM for Distributed Optimal Power Flow | | 0 |
| On the Convergence of Local Descent Methods in Federated Learning | | 0 |
| Local SGD with Periodic Averaging: Tighter Analysis and Adaptive Synchronization | Code | 0 |
| Asynchronous Decentralized SGD with Quantized and Local Updates | | 0 |
| Accelerated Primal-Dual Algorithms for Distributed Smooth Convex Optimization over Networks | Code | 0 |
| Sparsification as a Remedy for Staleness in Distributed Asynchronous SGD | | 0 |
| SCAFFOLD: Stochastic Controlled Averaging for Federated Learning | Code | 1 |
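Several of the listed papers study gradient sparsification as a way to cut communication cost. A common building block is top-k sparsification: keep only the k largest-magnitude gradient entries and carry the dropped remainder forward as an error-feedback residual. The sketch below is a generic illustration of that building block, not the specific compressor of any paper above; the function name is an assumption.

```python
import numpy as np

def sparsify_top_k(grad, k):
    # Keep only the k largest-magnitude entries of the gradient and
    # zero out the rest, so only k (index, value) pairs need sending.
    # Also return the residual that was dropped; error-feedback
    # schemes add it back into the gradient at the next step.
    idx = np.argsort(np.abs(grad))[-k:]
    sparse = np.zeros_like(grad)
    sparse[idx] = grad[idx]
    return sparse, grad - sparse
```

Note that the sparse part and the residual always sum back to the original gradient, which is what makes the compression unbiased over time when the residual is accumulated.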
Page 40 of 54

No leaderboard results yet.