
Distributed Optimization

The goal of Distributed Optimization is to optimize an objective defined over millions or billions of data points that are distributed across many machines, by utilizing the combined computational power of those machines.

Source: Analysis of Distributed Stochastic Dual Coordinate Ascent
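The definition above can be illustrated with a toy simulation of data-parallel gradient descent: each "worker" holds a shard of the data, computes a local gradient of a shared least-squares objective, and a central step averages the local gradients. This is a minimal sketch only; the shard setup and all function names are invented for illustration and do not come from any listed paper.

```python
def local_gradient(w, shard):
    # Gradient of 0.5 * sum((w*x - y)^2) over this worker's shard,
    # for a scalar model w (kept one-dimensional for clarity).
    return sum((w * x - y) * x for x, y in shard)

def distributed_gd(shards, w=0.0, lr=0.01, steps=200):
    n = sum(len(s) for s in shards)
    for _ in range(steps):
        # In a real system each machine computes its gradient in
        # parallel; here the sum over shards simulates that, followed
        # by normalization over the total number of data points.
        grad = sum(local_gradient(w, s) for s in shards) / n
        w -= lr * grad
    return w

# Data generated from y = 2x, split across three simulated "machines".
data = [(x, 2.0 * x) for x in range(1, 10)]
shards = [data[0:3], data[3:6], data[6:9]]
w_star = distributed_gd(shards)
```

Because the averaged gradient equals the gradient of the full objective, the distributed run converges to the same solution a single machine would find (here, w close to 2).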

Papers

Showing 21–30 of 536 papers

| Title | Status | Hype |
| --- | --- | --- |
| Beyond spectral gap: The role of the topology in decentralized learning | Code | 1 |
| Byzantine-Robust Learning on Heterogeneous Datasets via Bucketing | Code | 1 |
| DeepLM: Large-Scale Nonlinear Least Squares on Deep Learning Frameworks Using Stochastic Domain Decomposition | Code | 1 |
| Just One Byte (per gradient): A Note on Low-Bandwidth Decentralized Language Model Finetuning Using Shared Randomness | Code | 1 |
| MicroAdam: Accurate Adaptive Optimization with Low Space Overhead and Provable Convergence | Code | 1 |
| Communication-Efficient Distributed Optimization in Networks with Gradient Tracking and Variance Reduction | Code | 1 |
| Graph Neural Networks for Scalable Radio Resource Management: Architecture Design and Theoretical Analysis | Code | 1 |
| Decentralized Riemannian Gradient Descent on the Stiefel Manifold | Code | 1 |
| ACCO: Accumulate While You Communicate for Communication-Overlapped Sharded LLM Training | Code | 1 |
| SCAFFOLD: Stochastic Controlled Averaging for Federated Learning | Code | 1 |
Page 3 of 54

No leaderboard results yet.