
Distributed Optimization

The goal of Distributed Optimization is to minimize an objective defined over millions or billions of data points that are spread across many machines, by exploiting the combined computational power of those machines.

Source: Analysis of Distributed Stochastic Dual Coordinate Ascent
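To make the setting above concrete, here is a minimal sketch (an illustrative toy, not the method of any paper listed below) of data-parallel gradient descent: each simulated worker holds a shard of the data and computes a local gradient, and a coordinator averages the shards' gradients to take a global step. The problem, learning rate, and shard layout are all assumptions chosen for the example.

```python
def local_gradient(w, shard):
    """Gradient of the least-squares loss (1/n) * sum((w*x - y)**2) on one shard."""
    n = len(shard)
    return sum(2 * (w * x - y) * x for x, y in shard) / n

def distributed_gd(shards, steps=200, lr=0.005):
    """Gradient descent where each step averages per-worker gradients."""
    w = 0.0
    for _ in range(steps):
        # Each machine would compute this on its own shard in parallel;
        # averaging the gradients is the only communication per step.
        grads = [local_gradient(w, shard) for shard in shards]
        w -= lr * sum(grads) / len(grads)
    return w

# Toy data generated from y = 3x, split evenly across 4 simulated workers.
data = [(x, 3.0 * x) for x in range(1, 21)]
shards = [data[i::4] for i in range(4)]
w_star = distributed_gd(shards)  # converges toward the true slope 3.0
```

Because the shards here are equal-sized, averaging the per-shard gradients recovers exactly the full-batch gradient; with unequal shards the average would need to be weighted by shard size.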

Papers

Showing 391–400 of 536 papers

Title | Status | Hype
Linear Convergent Decentralized Optimization with Compression | | 0
Linear Speedup of Incremental Aggregated Gradient Methods on Streaming Data | | 0
Local Methods with Adaptivity via Scaling | | 0
LocalNewton: Reducing Communication Bottleneck for Distributed Learning | | 0
Distributed Saddle-Point Problems: Lower Bounds, Near-Optimal and Robust Algorithms | | 0
Local SGD Optimizes Overparameterized Neural Networks in Polynomial Time | | 0
LoCoDL: Communication-Efficient Distributed Learning with Local Training and Compression | | 0
Logarithmically Quantized Distributed Optimization over Dynamic Multi-Agent Networks | | 0
Log-Scale Quantization in Distributed First-Order Methods: Gradient-based Learning from Distributed Data | | 0
Lower Bounds and Nearly Optimal Algorithms in Distributed Learning with Communication Compression | | 0
Page 40 of 54

No leaderboard results yet.