SOTAVerified

Distributed Optimization

The goal of Distributed Optimization is to optimize an objective defined over large amounts of data, often millions or billions of examples, that is distributed across many machines, by utilizing the combined computational power of those machines.

Source: Analysis of Distributed Stochastic Dual Coordinate Ascent
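As an illustration of the setting described above, the sketch below (an assumption for exposition, not taken from any listed paper) runs gradient descent on a least-squares objective whose data is partitioned across several simulated workers: each worker computes a gradient on its local shard, and a server averages the results to take a step.

```python
# Minimal sketch of distributed gradient descent with gradient averaging.
# The "workers" are simulated locally; in a real system each shard would
# live on a separate machine and gradients would be communicated.

def local_gradient(w, shard):
    # Gradient of 0.5 * (x*w - y)^2 summed over this worker's shard.
    g = 0.0
    for x, y in shard:
        g += (x * w - y) * x
    return g

def distributed_gd(shards, steps=200, lr=0.01):
    w = 0.0
    n = sum(len(s) for s in shards)
    for _ in range(steps):
        # Each machine computes a gradient on its local data;
        # the server sums them and normalizes by the total data size.
        total = sum(local_gradient(w, s) for s in shards)
        w -= lr * total / n
    return w

# Data generated from y = 2x, split round-robin across 3 workers.
data = [(x, 2.0 * x) for x in range(1, 10)]
shards = [data[0::3], data[1::3], data[2::3]]
w = distributed_gd(shards)
# w converges toward the true coefficient 2.0
```

Because the global gradient is a sum of per-shard gradients, this averaging scheme recovers exactly the centralized update; the communication-efficient methods listed below trade this exactness for fewer or compressed messages.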

Papers

Showing 76–100 of 536 papers

Title | Status | Hype
Accelerating Distributed Optimization: A Primal-Dual Perspective on Local Steps | - | 0
Graph Neural Networks Gone Hogwild | - | 0
Distributed Utility Optimization in Vehicular Communication Systems | - | 0
A KL-based Analysis Framework with Applications to Non-Descent Optimization Methods | - | 0
ACCO: Accumulate While You Communicate for Communication-Overlapped Sharded LLM Training | Code | 1
Log-Scale Quantization in Distributed First-Order Methods: Gradient-based Learning from Distributed Data | - | 0
Local Methods with Adaptivity via Scaling | - | 0
Differentially-Private Distributed Model Predictive Control of Linear Discrete-Time Systems with Global Constraints | - | 0
MicroAdam: Accurate Adaptive Optimization with Low Space Overhead and Provable Convergence | Code | 1
The Limits and Potentials of Local SGD for Distributed Heterogeneous Learning with Intermittent Communication | - | 0
Flattened one-bit stochastic gradient descent: compressed distributed optimization with controlled variance | - | 0
Structured Reinforcement Learning for Incentivized Stochastic Covert Optimization | - | 0
Distributed Traffic Signal Control via Coordinated Maximum Pressure-plus-Penalty | - | 0
Estimation Network Design framework for efficient distributed optimization | - | 0
Rate Analysis of Coupled Distributed Stochastic Approximation for Misspecified Optimization | - | 0
Distributed Fractional Bayesian Learning for Adaptive Optimization | - | 0
Federated Optimization with Doubly Regularized Drift Correction | - | 0
PIM-Opt: Demystifying Distributed Optimization Algorithms on a Real-World Processing-In-Memory System | Code | 0
Generalized Gradient Descent is a Hypergraph Functor | - | 0
Distributed Maximum Consensus over Noisy Links | - | 0
Network-Aware Value Stacking of Community Battery via Asynchronous Distributed Optimization | - | 0
Quantization Avoids Saddle Points in Distributed Optimization | - | 0
Streamlining in the Riemannian Realm: Efficient Riemannian Optimization with Loopless Variance Reduction | - | 0
LoCoDL: Communication-Efficient Distributed Learning with Local Training and Compression | - | 0
MUSIC: Accelerated Convergence for Distributed Optimization With Inexact and Exact Methods | - | 0
Page 4 of 22
