
Distributed Optimization

The goal of distributed optimization is to optimize an objective defined over millions or billions of data points that are distributed across many machines, by utilizing the combined computational power of those machines.

Source: Analysis of Distributed Stochastic Dual Coordinate Ascent
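One common pattern in this area (see, e.g., the Local SGD papers listed below) is for each machine to run SGD on its own data shard and periodically average parameters across machines. The following is a minimal single-process sketch of that idea on a synthetic least-squares problem; the function name, data layout, and hyperparameters are illustrative assumptions, not any specific paper's implementation.

```python
import numpy as np

def local_sgd(shards, dim, rounds=100, local_steps=10, lr=0.05):
    """Simulate Local SGD with periodic averaging (illustrative sketch).

    shards: list of (X, y) pairs, one per simulated worker.
    Each worker takes `local_steps` SGD steps on its own shard,
    then all workers average their parameters (one communication round).
    """
    workers = len(shards)
    params = [np.zeros(dim) for _ in range(workers)]
    for _ in range(rounds):
        for k in range(workers):
            X, y = shards[k]
            for _ in range(local_steps):
                i = np.random.randint(len(y))
                # Stochastic gradient of 0.5 * (x_i @ w - y_i)^2
                grad = (X[i] @ params[k] - y[i]) * X[i]
                params[k] -= lr * grad
        avg = np.mean(params, axis=0)  # communication: average models
        params = [avg.copy() for _ in range(workers)]
    return params[0]
```

Increasing `local_steps` reduces communication rounds at the cost of more drift between workers' local models, which is the trade-off that adaptive-synchronization and quantization schemes in the papers below try to manage.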

Papers

Showing 391–400 of 536 papers

- Graph Learning Under Partial Observability
- A Distributed Quasi-Newton Algorithm for Primal and Dual Regularized Empirical Risk Minimization (code available)
- Qsparse-local-SGD: Distributed SGD with Quantization, Sparsification and Local Computations (code available)
- Optimization for Reinforcement Learning: From Single Agent to Cooperative Agents
- Layer-wise Adaptive Gradient Sparsification for Distributed Deep Learning with Convergence Guarantees
- vqSGD: Vector Quantized Stochastic Gradient Descent
- Learning-Accelerated ADMM for Distributed Optimal Power Flow
- On the Convergence of Local Descent Methods in Federated Learning
- Local SGD with Periodic Averaging: Tighter Analysis and Adaptive Synchronization (code available)
- Asynchronous Decentralized SGD with Quantized and Local Updates
