SOTAVerified

Distributed Optimization

The goal of Distributed Optimization is to optimize an objective function defined over millions to billions of data points distributed across many machines, by utilizing the combined computational power of those machines.

Source: Analysis of Distributed Stochastic Dual Coordinate Ascent
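The definition above can be illustrated with a minimal sketch of synchronous, data-parallel gradient descent. This is a hypothetical example (not taken from any listed paper): each "machine" is simulated as a data shard that computes a local gradient of a least-squares objective, and the central update averages the workers' gradients.

```python
import random

def local_gradient(shard, w):
    """Gradient of the mean squared loss over one machine's data shard."""
    g = 0.0
    for x, y in shard:
        g += 2.0 * (x * w - y) * x
    return g / len(shard)

def distributed_gd(shards, w=0.0, lr=0.1, steps=200):
    """Synchronous distributed gradient descent.

    Each shard computes its local gradient (in practice, in parallel on
    separate machines); the server averages them and takes one step.
    """
    for _ in range(steps):
        grads = [local_gradient(s, w) for s in shards]
        w -= lr * sum(grads) / len(grads)
    return w

if __name__ == "__main__":
    random.seed(0)
    # Synthetic 1-D regression data with true weight 3.0, split across
    # 4 equally sized "machines".
    xs = [random.uniform(-1, 1) for _ in range(400)]
    data = [(x, 3.0 * x + random.gauss(0, 0.01)) for x in xs]
    shards = [data[i::4] for i in range(4)]
    print(distributed_gd(shards))  # converges close to 3.0
```

Because the shards are equally sized, averaging the per-shard gradients recovers the full-batch gradient exactly, which is the basic correctness argument behind synchronous data-parallel training.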

Papers

Showing 271-280 of 536 papers

Title | Status | Hype
Variance Reduction in Deep Learning: More Momentum is All You Need | - | 0
vqSGD: Vector Quantized Stochastic Gradient Descent | - | 0
When Evolutionary Computation Meets Privacy | - | 0
Widely-distributed Radar Imaging Based on Consensus ADMM | - | 0
Without-Replacement Sampling for Stochastic Gradient Methods: Convergence Results and Application to Distributed Optimization | - | 0
Without-Replacement Sampling for Stochastic Gradient Methods | - | 0
Asynchronous Message-Passing and Zeroth-Order Optimization Based Distributed Learning with a Use-Case in Resource Allocation in Communication Networks | - | 0
Zeroth-Order Feedback-Based Optimization for Distributed Demand Response | - | 0
Zeroth Order Nonconvex Multi-Agent Optimization over Networks | - | 0
Distributed Optimization via Energy Conservation Laws in Dilated Coordinates | - | 0
Page 28 of 54

No leaderboard results yet.