SOTAVerified

Distributed Optimization

The goal of Distributed Optimization is to optimize an objective defined over millions or billions of data points distributed across many machines, by exploiting the combined computational power of those machines.

Source: Analysis of Distributed Stochastic Dual Coordinate Ascent
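The setting described above can be illustrated with a minimal sketch of synchronous distributed gradient descent: each worker holds a shard of the data, computes a local gradient, and a server averages the local gradients and takes a step. This is an illustrative toy simulation, not the method of any particular paper listed below; all names (`local_gradient`, `distributed_gd`) are hypothetical.

```python
# Toy simulation of synchronous distributed gradient descent for
# 1-D least squares. Each "worker" holds a shard of (x, y) pairs;
# a central server averages the workers' local gradients each round.
# In a real deployment each gradient would be computed on a separate
# machine and combined via an all-reduce; here workers are simulated
# as a list of shards. All function names are illustrative.

def local_gradient(w, shard):
    """Average gradient of the shard's squared loss (w*x - y)^2 at w."""
    return sum(2 * x * (w * x - y) for x, y in shard) / len(shard)

def distributed_gd(shards, w0=0.0, lr=0.005, rounds=200):
    w = w0
    for _ in range(rounds):
        grads = [local_gradient(w, s) for s in shards]  # one per worker
        w -= lr * sum(grads) / len(grads)               # server averages, then steps
    return w

# Data generated from y = 3x, split across three simulated workers.
data = [(x, 3.0 * x) for x in range(1, 13)]
shards = [data[0:4], data[4:8], data[8:12]]
w_star = distributed_gd(shards)  # converges toward the true slope 3.0
```

The averaging step is the crux: no worker ever sees the full dataset, yet the averaged gradient equals a gradient over all shards, so the iterates converge as if the data were centralized.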

Papers

Showing 381–390 of 536 papers

Layer-wise Adaptive Gradient Sparsification for Distributed Deep Learning with Convergence Guarantees
Leader Stochastic Gradient Descent for Distributed Training of Deep Learning Models: Extension
Learning-Accelerated ADMM for Distributed Optimal Power Flow
Learning Autonomy in Management of Wireless Random Networks
Distributed Model Predictive Control Design for Multi-agent Systems via Bayesian Optimization
Learning (With) Distributed Optimization
Leveraging Function Space Aggregation for Federated Learning at Scale
Limited Communications Distributed Optimization via Deep Unfolded Distributed ADMM
Linear Convergence of Distributed Mirror Descent with Integral Feedback for Strongly Convex Problems
On Linear Convergence of PI Consensus Algorithm under the Restricted Secant Inequality
