
Distributed Optimization

The goal of distributed optimization is to optimize an objective defined over millions or billions of data points that are distributed across many machines, by utilizing the combined computational power of those machines.

Source: Analysis of Distributed Stochastic Dual Coordinate Ascent
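
For concreteness, this setup is commonly written as minimizing an average of per-machine objectives, min_w (1/K) Σ_k f_k(w), where f_k is the loss over the data shard held by machine k. Below is a minimal sketch (not taken from any paper listed on this page) of the simplest instance, synchronous distributed SGD with gradient averaging on a least-squares problem; the data, shard layout, and all identifiers are illustrative assumptions.

```python
# Minimal sketch: K workers each hold a data shard, compute a local
# gradient, and a simulated all-reduce (a plain mean) aggregates the
# gradients before a shared parameter update. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n_workers, n_per_worker, dim = 4, 250, 10

# Ground-truth model and per-worker shards: y = X @ w_true + noise.
w_true = rng.normal(size=dim)
shards = []
for _ in range(n_workers):
    X = rng.normal(size=(n_per_worker, dim))
    y = X @ w_true + 0.01 * rng.normal(size=n_per_worker)
    shards.append((X, y))

def local_gradient(w, X, y):
    """Gradient of the local least-squares loss (1/2n)||Xw - y||^2."""
    return X.T @ (X @ w - y) / len(y)

w = np.zeros(dim)
lr = 0.1
for step in range(200):
    # Each worker computes a gradient on its own shard ...
    grads = [local_gradient(w, X, y) for X, y in shards]
    # ... and the averaged gradient drives one shared update.
    w -= lr * np.mean(grads, axis=0)

print("distance to optimum:", np.linalg.norm(w - w_true))
```

Much of the literature in the list below varies exactly this aggregation step: compressing or quantizing the gradients, replacing the central average with gossip over a graph, or tolerating Byzantine workers.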

Papers

Showing 401–425 of 536 papers

Title | Status | Hype
SlowMo: Improving Communication-Efficient Distributed SGD with Slow Momentum | Code | 0
PopSGD: Decentralized Stochastic Gradient Descent in the Population Model | – | 0
Gradient-Consensus: Linearly Convergent Distributed Optimization Algorithm over Directed Graphs | – | 0
Communication-Efficient Distributed Optimization in Networks with Gradient Tracking and Variance Reduction | Code | 1
Convex Set Disjointness, Distributed Learning of Halfspaces, and LP Feasibility | – | 0
Proximal gradient flow and Douglas-Rachford splitting dynamics: global exponential stability via integral quadratic constraints | – | 0
Federated Learning: Challenges, Methods, and Future Directions | Code | 0
Gradient flows and proximal splitting methods: A unified view on accelerated and stochastic optimization | – | 0
Popt4jlib: A Parallel/Distributed Optimization Library for Java | – | 0
Centralised and Distributed Optimization for Aggregated Flexibility Services Provision | – | 0
Data Encoding for Byzantine-Resilient Distributed Optimization | – | 0
Trading Redundancy for Communication: Speeding up Distributed SGD for Non-convex Optimization | Code | 0
Asymptotic Network Independence in Distributed Stochastic Optimization for Machine Learning | – | 0
Distributed Optimization for Smart Cyber-Physical Networks | – | 0
Secure Architectures Implementing Trusted Coalitions for Blockchained Distributed Learning (TCLearn) | – | 0
Distributed Optimization for Over-Parameterized Learning | – | 0
The Communication Complexity of Optimization | – | 0
Communication-Efficient Accurate Statistical Estimation | – | 0
Qsparse-local-SGD: Distributed SGD with Quantization, Sparsification, and Local Computations | – | 0
Deep Learning for Distributed Optimization: Applications to Wireless Resource Management | – | 0
PowerSGD: Practical Low-Rank Gradient Compression for Distributed Optimization | Code | 0
Accelerated Sparsified SGD with Error Feedback | – | 0
Distributed estimation of the inverse Hessian by determinantal averaging | – | 0
Leader Stochastic Gradient Descent for Distributed Training of Deep Learning Models: Extension | – | 0
OverSketched Newton: Fast Convex Optimization for Serverless Systems | Code | 0
Page 17 of 22

No leaderboard results yet.