
Distributed Optimization

The goal of Distributed Optimization is to optimize an objective defined over millions or billions of data points distributed across many machines, by exploiting the combined computational power of those machines.

Source: Analysis of Distributed Stochastic Dual Coordinate Ascent
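To make the setting concrete, here is a minimal sketch of synchronous distributed gradient descent on a least-squares objective, simulated on one machine: each worker computes a gradient on its local data shard, and a coordinator averages the gradients and updates the shared model. The function names, shard sizes, and hyperparameters are illustrative assumptions, not taken from any particular paper listed below.

```python
import numpy as np

def local_gradient(w, X, y):
    # Least-squares gradient on one worker's shard: grad of 0.5*||Xw - y||^2 / n.
    return X.T @ (X @ w - y) / len(y)

def distributed_gd(shards, dim, lr=0.1, steps=200):
    # Synchronous distributed gradient descent: in each round, every worker
    # computes a local gradient; the coordinator averages them and updates w.
    w = np.zeros(dim)
    for _ in range(steps):
        grads = [local_gradient(w, X, y) for X, y in shards]  # parallel in practice
        w -= lr * np.mean(grads, axis=0)
    return w

# Simulate 4 workers, each holding a 50-sample shard of a shared regression task.
rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0, 0.5])
shards = []
for _ in range(4):
    X = rng.normal(size=(50, 3))
    shards.append((X, X @ w_true))

w = distributed_gd(shards, dim=3)
```

Real systems replace the Python list comprehension with network communication (e.g. an all-reduce over the gradients), and much of the research listed below studies how to reduce that communication cost, for instance via compression or less frequent synchronization.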

Papers

Showing 211–220 of 536 papers

Title | Status | Hype
Distributed Learning of Neural Lyapunov Functions for Large-Scale Networked Dissipative Systems | — | 0
Variance Reduced ProxSkip: Algorithm, Theory and Application to Federated Learning | Code | 0
Can Competition Outperform Collaboration? The Role of Misbehaving Agents | — | 0
Simultaneous Contact-Rich Grasping and Locomotion via Distributed Optimization Enabling Free-Climbing for Multi-Limbed Robots | — | 0
On the Convergence of Distributed Stochastic Bilevel Optimization Algorithms over a Network | — | 0
Distributed Adversarial Training to Robustify Deep Neural Networks at Scale | Code | 0
Lower Bounds and Nearly Optimal Algorithms in Distributed Learning with Communication Compression | — | 0
Beyond spectral gap: The role of the topology in decentralized learning | Code | 1
A Computation and Communication Efficient Method for Distributed Nonconvex Problems in the Partial Participation Setting | — | 0
Optimal Gradient Sliding and its Application to Distributed Optimization Under Similarity | — | 0
Page 22 of 54

No leaderboard results yet.