
Distributed Optimization

The goal of Distributed Optimization is to optimize an objective defined over millions or billions of data points that are distributed across many machines, by exploiting the combined computational power of those machines.

Source: Analysis of Distributed Stochastic Dual Coordinate Ascent
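The definition above can be illustrated with a minimal sketch of data-parallel gradient descent: the data is split into shards (one per simulated "machine"), each machine computes the gradient of its local loss, and a coordinator averages the gradients before taking a step. All names here are illustrative, not from any specific paper on this page.

```python
import random

def local_gradient(w, shard):
    # Each simulated machine computes the gradient of its local
    # least-squares loss (1/n) * sum (w*x - y)^2 over its shard.
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

def distributed_gd(shards, w=0.0, lr=0.1, steps=100):
    # Coordinator loop: one communication round per step, averaging
    # the per-machine gradients (valid here because shards are equal-sized).
    for _ in range(steps):
        grads = [local_gradient(w, s) for s in shards]
        w -= lr * sum(grads) / len(grads)
    return w

random.seed(0)
# Synthetic data with true parameter w* = 3.0, split across 4 machines.
data = [(x, 3.0 * x) for x in (random.uniform(-1, 1) for _ in range(1000))]
shards = [data[i::4] for i in range(4)]
w_hat = distributed_gd(shards)
```

With equal-sized shards, the average of the local gradients equals the global gradient, so this recovers centralized gradient descent while each machine only ever touches its own data; communication-efficient variants in the list below (compressed SGD, local methods, ADMM) reduce or restructure exactly this per-step exchange.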

Papers

Showing 251-260 of 536 papers

Title | Status | Hype
--- | --- | ---
Distributed gradient-based optimization in the presence of dependent aperiodic communication | | 0
Distributed Learning of Generalized Linear Causal Networks | | 0
Coordinated Day-ahead Dispatch of Multiple Power Distribution Grids hosting Stochastic Resources: An ADMM-based Framework | | 0
Distributed Random Reshuffling over Networks | | 0
Convergence Rates of Two-Time-Scale Gradient Descent-Ascent Dynamics for Solving Nonconvex Min-Max Problems | | 0
Communication-Efficient Distributed SGD with Compressed Sensing | | 0
Distributed Graph Learning with Smooth Data Priors | | 0
Collaborative Learning over Wireless Networks: An Introductory Overview | | 0
Variance Reduction in Deep Learning: More Momentum is All You Need | | 0
FLIX: A Simple and Communication-Efficient Alternative to Local Methods in Federated Learning | | 0
Page 26 of 54

No leaderboard results yet.