
Distributed Optimization

The goal of distributed optimization is to optimize an objective defined over millions or billions of data points that are distributed across many machines, by exploiting the combined computational power of those machines.

Source: Analysis of Distributed Stochastic Dual Coordinate Ascent
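
As a minimal sketch of the idea (not the method of any specific paper listed below), the Python snippet here simulates synchronous distributed gradient descent: each machine computes a gradient on its local data shard, and a coordinator averages the gradients to update the shared model. The least-squares objective, the worker count, and all function names are illustrative assumptions.

```python
# Minimal sketch of distributed optimization via synchronous gradient
# averaging. Workers are simulated in one process; in a real deployment
# each shard would live on a separate machine and the averaging step
# would be an all-reduce or a parameter-server round trip.
import numpy as np

def local_gradient(w, X, y):
    """Gradient of the local least-squares loss (1/2n) * ||Xw - y||^2."""
    n = X.shape[0]
    return X.T @ (X @ w - y) / n

def distributed_gd(shards, dim, lr=0.1, steps=200):
    """Each worker holds one equal-sized shard; the coordinator averages
    their gradients, which equals the gradient of the global loss."""
    w = np.zeros(dim)
    for _ in range(steps):
        grads = [local_gradient(w, X, y) for X, y in shards]  # worker step
        w -= lr * np.mean(grads, axis=0)                      # coordinator step
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w_true = rng.normal(size=5)
    # Partition a synthetic dataset across 4 simulated machines.
    shards = []
    for _ in range(4):
        X = rng.normal(size=(250, 5))
        shards.append((X, X @ w_true))
    w_hat = distributed_gd(shards, dim=5)
    print("max coordinate error:", np.max(np.abs(w_hat - w_true)))
```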

Papers

Showing 11–20 of 536 papers

Title | Status | Hype
Beyond spectral gap (extended): The role of the topology in decentralized learning | Code | 1
Beyond spectral gap: The role of the topology in decentralized learning | Code | 1
Acceleration of Federated Learning with Alleviated Forgetting in Local Training | Code | 1
Signal Decomposition Using Masked Proximal Operators | Code | 1
Recycling Model Updates in Federated Learning: Are Gradient Subspaces Low-Rank? | Code | 1
Unbiased Single-scale and Multi-scale Quantizers for Distributed Optimization | Code | 1
BAGUA: Scaling up Distributed Learning with System Relaxations | Code | 1
Secure Distributed Training at Scale | Code | 1
DeepLM: Large-Scale Nonlinear Least Squares on Deep Learning Frameworks Using Stochastic Domain Decomposition | Code | 1
An Efficient Learning Framework For Federated XGBoost Using Secret Sharing And Distributed Optimization | Code | 1
Page 2 of 54

No leaderboard results yet.