
Distributed Optimization

The goal of Distributed Optimization is to optimize an objective defined over millions or billions of data points that are distributed across many machines, by exploiting the combined computational power of those machines.

Source: Analysis of Distributed Stochastic Dual Coordinate Ascent
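
To make the setting concrete, here is a minimal illustrative sketch (not the method of the cited paper): distributed gradient descent on a least-squares objective whose data is partitioned across simulated workers. Each worker computes a gradient on its local shard, and the shard gradients are averaged, mimicking an all-reduce, before the shared parameter vector is updated. All names here (n_workers, local_gradient, etc.) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic least-squares problem: minimize (1/2n) * ||A x - b||^2.
n_samples, n_features, n_workers = 1_000, 20, 4
A = rng.normal(size=(n_samples, n_features))
x_true = rng.normal(size=n_features)
b = A @ x_true + 0.01 * rng.normal(size=n_samples)

# Partition the data row-wise across the workers (equal-sized shards).
shards = list(zip(np.array_split(A, n_workers), np.array_split(b, n_workers)))

def local_gradient(A_i, b_i, x):
    """Gradient of the local least-squares loss on one worker's shard."""
    return A_i.T @ (A_i @ x - b_i) / len(b_i)

x = np.zeros(n_features)
step = 0.1
for _ in range(200):
    # Each worker computes a gradient on its own data only; averaging the
    # local gradients recovers the full-data gradient when shards are equal.
    grads = [local_gradient(A_i, b_i, x) for A_i, b_i in shards]
    x -= step * np.mean(grads, axis=0)

print("distance to x_true:", np.linalg.norm(x - x_true))
```

In a real deployment the averaging step would be a communication round (e.g. an all-reduce or a parameter-server update) rather than an in-process mean, and much of the literature listed below concerns reducing the cost or fragility of exactly that step.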

Papers

Showing 101–125 of 536 papers

Title | Status | Hype
Communication- and Computation-Efficient Distributed Submodular Optimization in Robot Mesh Networks | Code | 0
Provable Privacy Advantages of Decentralized Federated Learning via Distributed Optimization | | 0
Fast Distributed Optimization over Directed Graphs under Malicious Attacks using Trust | | 0
Graphon Particle Systems, Part II: Dynamics of Distributed Stochastic Continuum Optimization | | 0
Accelerating Distributed Optimization: A Primal-Dual Perspective on Local Steps | | 0
Graph Neural Networks Gone Hogwild | | 0
Distributed Utility Optimization in Vehicular Communication Systems | | 0
A KL-based Analysis Framework with Applications to Non-Descent Optimization Methods | | 0
Log-Scale Quantization in Distributed First-Order Methods: Gradient-based Learning from Distributed Data | | 0
Local Methods with Adaptivity via Scaling | | 0
Differentially-Private Distributed Model Predictive Control of Linear Discrete-Time Systems with Global Constraints | | 0
The Limits and Potentials of Local SGD for Distributed Heterogeneous Learning with Intermittent Communication | | 0
Flattened one-bit stochastic gradient descent: compressed distributed optimization with controlled variance | | 0
Structured Reinforcement Learning for Incentivized Stochastic Covert Optimization | | 0
Distributed Traffic Signal Control via Coordinated Maximum Pressure-plus-Penalty | | 0
Estimation Network Design framework for efficient distributed optimization | | 0
Rate Analysis of Coupled Distributed Stochastic Approximation for Misspecified Optimization | | 0
Distributed Fractional Bayesian Learning for Adaptive Optimization | | 0
Federated Optimization with Doubly Regularized Drift Correction | | 0
PIM-Opt: Demystifying Distributed Optimization Algorithms on a Real-World Processing-In-Memory System | Code | 0
Generalized Gradient Descent is a Hypergraph Functor | | 0
Distributed Maximum Consensus over Noisy Links | | 0
Network-Aware Value Stacking of Community Battery via Asynchronous Distributed Optimization | | 0
Quantization Avoids Saddle Points in Distributed Optimization | | 0
Streamlining in the Riemannian Realm: Efficient Riemannian Optimization with Loopless Variance Reduction | | 0
Page 5 of 22

No leaderboard results yet.