
Distributed Optimization

The goal of distributed optimization is to optimize an objective defined over millions or billions of data points distributed across many machines, by exploiting the combined computational power of those machines.

Source: Analysis of Distributed Stochastic Dual Coordinate Ascent
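To make the setting concrete, here is a minimal sketch of one common pattern, synchronous distributed gradient descent for least squares: each machine computes a gradient on its local data shard, and a coordinator averages the gradients. This is an illustrative simulation (the shards, problem, and function names are assumptions, not from any listed paper), with the machines simulated in a single process rather than run on a real cluster.

```python
import numpy as np

def local_gradient(w, X, y):
    # Gradient of 0.5 * ||Xw - y||^2 on one machine's local shard.
    return X.T @ (X @ w - y)

def distributed_gd(shards, dim, lr=0.1, steps=300):
    # Synchronous data-parallel gradient descent: every "machine"
    # computes a gradient on its shard; the coordinator sums them,
    # normalizes by the global sample count, and takes one step.
    w = np.zeros(dim)
    n_total = sum(X.shape[0] for X, _ in shards)
    for _ in range(steps):
        grad = sum(local_gradient(w, X, y) for X, y in shards) / n_total
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0, 0.5])

# Simulate data split across three machines.
shards = []
for _ in range(3):
    X = rng.normal(size=(100, 3))
    shards.append((X, X @ w_true))

w_hat = distributed_gd(shards, dim=3)
```

In a real deployment the gradient sum would be an all-reduce or parameter-server aggregation step; many of the papers below study how to reduce exactly this communication cost (gossip exchange, sign compression, straggler mitigation).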

Papers

Showing 461–470 of 536 papers

GoSGD: Distributed Optimization for Deep Learning with Gossip Exchange
Fundamental Resource Trade-offs for Encoded Distributed Optimization
A Stochastic Large-scale Machine Learning Algorithm for Distributed Features and Observations
SUCAG: Stochastic Unbiased Curvature-aided Gradient Method for Distributed Optimization
Redundancy Techniques for Straggler Mitigation in Distributed Optimization and Learning
A Distributed Quasi-Newton Algorithm for Empirical Risk Minimization with Nonsmooth Regularization (code available)
Convergence rate of sign stochastic gradient descent for non-convex functions
ZOOpt: Toolbox for Derivative-Free Optimization (code available)
Optimal Algorithms for Distributed Optimization
Accelerated consensus via Min-Sum Splitting
