
Distributed Optimization

The goal of Distributed Optimization is to optimize an objective defined over millions or billions of data points that are distributed across many machines, by pooling the computational power of those machines.

Source: Analysis of Distributed Stochastic Dual Coordinate Ascent
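The idea above can be sketched with the simplest scheme, synchronous data-parallel gradient descent: each machine holds a shard of the data and computes a local gradient, and averaging the local gradients recovers the full-data gradient for a sum-decomposable objective. This is a minimal illustrative sketch (function names and constants are assumptions, not from any listed paper), with the workers simulated sequentially rather than on real machines.

```python
import random

def local_gradient(w, shard):
    """Gradient of the mean squared error 0.5*(w*x - y)^2 over one worker's shard."""
    g = 0.0
    for x, y in shard:
        g += (w * x - y) * x
    return g / len(shard)

def distributed_sgd(shards, steps=200, lr=0.2):
    """Synchronous data-parallel gradient descent over a list of data shards."""
    w = 0.0
    for _ in range(steps):
        # Each worker computes its local gradient (simulated sequentially here;
        # in a real system this runs in parallel on separate machines).
        grads = [local_gradient(w, s) for s in shards]
        # Synchronous averaging step (an all-reduce in a real deployment).
        w -= lr * sum(grads) / len(grads)
    return w

if __name__ == "__main__":
    random.seed(0)
    true_w = 3.0
    data = [(x, true_w * x) for x in (random.uniform(-1, 1) for _ in range(1000))]
    # Partition the data across 4 simulated machines.
    shards = [data[i::4] for i in range(4)]
    print(distributed_sgd(shards))  # converges toward 3.0
```

Because the objective is a sum over data points, the average of the shard gradients equals the gradient on the full dataset, so the distributed run traces the same trajectory as a single-machine run; the communication cost of that averaging step is what most of the papers below try to reduce.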

Papers

Showing 21–30 of 536 papers

Title | Status | Hype
Moshpit SGD: Communication-Efficient Decentralized Training on Heterogeneous Unreliable Devices | Code | 1
Decentralized Riemannian Gradient Descent on the Stiefel Manifold | Code | 1
Distributed Resource Allocation with Multi-Agent Deep Reinforcement Learning for 5G-V2V Communication | Code | 1
Graph Neural Networks for Scalable Radio Resource Management: Architecture Design and Theoretical Analysis | Code | 1
Byzantine-Robust Learning on Heterogeneous Datasets via Bucketing | Code | 1
Federated Accelerated Stochastic Gradient Descent | Code | 1
MANGO: A Python Library for Parallel Hyperparameter Tuning | Code | 1
Privacy-Preserving Distributed Optimization via Subspace Perturbation: A General Framework | Code | 1
Training Large Neural Networks with Constant Memory using a New Execution Algorithm | Code | 1
FedDANE: A Federated Newton-Type Method | Code | 1
Page 3 of 54

No leaderboard results yet.