
Distributed Optimization

The goal of distributed optimization is to optimize an objective defined over millions or billions of data points distributed across many machines, by utilizing the combined computational power of those machines.

Source: Analysis of Distributed Stochastic Dual Coordinate Ascent
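The definition above can be illustrated with a minimal sketch of synchronous data-parallel gradient descent, the pattern underlying many of the listed methods (distributed SGD, SlowMo, etc.): each simulated worker holds a shard of the data, computes a local gradient, and a central step averages the gradients. This is a hedged toy example, not the method of any particular paper; all function names and the least-squares setup are assumptions for illustration.

```python
import numpy as np

def local_gradient(X, y, w):
    """Least-squares gradient computed on one worker's data shard."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

def distributed_gd(shards, dim, lr=0.1, steps=200):
    """Synchronous distributed gradient descent:
    each step, every worker computes a gradient on its shard,
    and the server applies the average of those gradients."""
    w = np.zeros(dim)
    for _ in range(steps):
        grads = [local_gradient(X, y, w) for X, y in shards]
        w -= lr * np.mean(grads, axis=0)  # equal-sized shards: mean = full gradient
    return w

# Simulated setup: one dataset split evenly across 3 "machines".
rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0, 0.5])
X = rng.normal(size=(300, 3))
y = X @ w_true
shards = [(X[i::3], y[i::3]) for i in range(3)]
w_hat = distributed_gd(shards, dim=3)
```

Because the shards are equal-sized, averaging the per-shard gradients reproduces the full-batch gradient exactly; asynchronous and communication-efficient variants (several appear in the paper list below) relax this synchrony to reduce communication cost.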

Papers

Showing 401-410 of 536 papers

Title | Status | Hype
Accelerated Primal-Dual Algorithms for Distributed Smooth Convex Optimization over Networks | Code | 0
Sparsification as a Remedy for Staleness in Distributed Asynchronous SGD | | 0
SlowMo: Improving Communication-Efficient Distributed SGD with Slow Momentum | Code | 0
PopSGD: Decentralized Stochastic Gradient Descent in the Population Model | | 0
Gradient-Consensus: Linearly Convergent Distributed Optimization Algorithm over Directed Graphs | | 0
Convex Set Disjointness, Distributed Learning of Halfspaces, and LP Feasibility | | 0
Proximal gradient flow and Douglas-Rachford splitting dynamics: global exponential stability via integral quadratic constraints | | 0
Federated Learning: Challenges, Methods, and Future Directions | Code | 0
Gradient flows and proximal splitting methods: A unified view on accelerated and stochastic optimization | | 0
Popt4jlib: A Parallel/Distributed Optimization Library for Java | | 0
Page 41 of 54

No leaderboard results yet.