
Distributed Optimization

The goal of distributed optimization is to optimize an objective defined over millions or billions of data points distributed across many machines, by exploiting the combined computational power of those machines.

Source: Analysis of Distributed Stochastic Dual Coordinate Ascent
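The setting described above can be made concrete with a minimal, purely illustrative sketch of data-parallel gradient descent (this is a generic gradient-averaging scheme, not the method of the cited paper): the data is partitioned into shards, each "machine" computes a gradient on its own shard, and a coordinator averages the local gradients to update the shared parameter.

```python
def local_gradient(w, shard):
    # Gradient of the least-squares loss 0.5 * (w*x - y)^2
    # averaged over one machine's shard of the data.
    return sum((w * x - y) * x for x, y in shard) / len(shard)

def distributed_gd(shards, w=0.0, lr=0.01, steps=200):
    # Each step: every machine computes its local gradient (in a real
    # system these run in parallel), then the coordinator averages them
    # and applies a single update to the shared parameter w.
    for _ in range(steps):
        grads = [local_gradient(w, shard) for shard in shards]
        w -= lr * sum(grads) / len(grads)
    return w

# Toy data generated from y = 2*x, split round-robin across 3 "machines".
data = [(float(x), 2.0 * x) for x in range(1, 13)]
shards = [data[i::3] for i in range(3)]
w = distributed_gd(shards)  # converges toward the true slope 2.0
```

Averaging gradients rather than shipping raw data keeps per-round communication proportional to the model size, which is why much of the literature below focuses on compressing or quantizing exactly these exchanged gradients.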

Papers

Showing 311-320 of 536 papers

Title | Hype
LocalNewton: Reducing Communication Bottleneck for Distributed Learning | 0
Distributed Saddle-Point Problems: Lower Bounds, Near-Optimal and Robust Algorithms | 0
Local SGD Optimizes Overparameterized Neural Networks in Polynomial Time | 0
LoCoDL: Communication-Efficient Distributed Learning with Local Training and Compression | 0
Logarithmically Quantized Distributed Optimization over Dynamic Multi-Agent Networks | 0
Log-Scale Quantization in Distributed First-Order Methods: Gradient-based Learning from Distributed Data | 0
Lower Bounds and Nearly Optimal Algorithms in Distributed Learning with Communication Compression | 0
Machine Learning for Large-Scale Optimization in 6G Wireless Networks | 0
Machine Learning Infused Distributed Optimization for Coordinating Virtual Power Plant Assets | 0
Markov Chain Block Coordinate Descent | 0
