SOTAVerified

Distributed Optimization

The goal of Distributed Optimization is to optimize an objective defined over millions or billions of data points distributed across many machines, utilizing the combined computational power of those machines.

Source: Analysis of Distributed Stochastic Dual Coordinate Ascent
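The idea in the definition above can be made concrete with a minimal sketch of data-parallel gradient descent: each "machine" computes a gradient on its own data shard, and a coordinator averages those gradients before taking a step. This is an illustrative toy (the function names, learning rate, and synthetic data are assumptions for the example), not the method of any particular paper listed below; real systems use communication-efficient variants such as gradient tracking or local SGD.

```python
def local_gradient(w, shard):
    """Gradient of the mean squared loss (w*x - y)^2 / 2 on one worker's shard."""
    g = 0.0
    for x, y in shard:
        g += (w * x - y) * x
    return g / len(shard)

def distributed_gd(shards, w=0.0, lr=0.1, steps=100):
    """Each round, every worker computes a gradient on its local data;
    the coordinator averages the gradients (an all-reduce in practice)
    and applies one update to the shared parameter."""
    for _ in range(steps):
        grads = [local_gradient(w, s) for s in shards]  # parallel across machines
        w -= lr * sum(grads) / len(grads)               # averaged update
    return w

# Synthetic data y = 3x, split round-robin across 4 hypothetical workers.
data = [(x / 10, 3 * x / 10) for x in range(1, 41)]
shards = [data[i::4] for i in range(4)]
w_final = distributed_gd(shards)  # converges toward the true slope 3
```

In a real deployment the list comprehension would be replaced by per-machine computation and the averaging by a collective communication primitive; the papers below largely study how to reduce or overlap exactly that communication.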

Papers

Showing 1–10 of 536 papers

| Title | Status | Hype |
| --- | --- | --- |
| Power Bundle Adjustment for Large-Scale 3D Reconstruction | Code | 2 |
| Communication-Efficient Distributed Optimization in Networks with Gradient Tracking and Variance Reduction | Code | 1 |
| ACCO: Accumulate While You Communicate for Communication-Overlapped Sharded LLM Training | Code | 1 |
| BAGUA: Scaling up Distributed Learning with System Relaxations | Code | 1 |
| Beyond spectral gap: The role of the topology in decentralized learning | Code | 1 |
| Byzantine-Robust Learning on Heterogeneous Datasets via Bucketing | Code | 1 |
| An Efficient Learning Framework For Federated XGBoost Using Secret Sharing And Distributed Optimization | Code | 1 |
| Acceleration of Federated Learning with Alleviated Forgetting in Local Training | Code | 1 |
| Asynchronous Local-SGD Training for Language Modeling | Code | 1 |
| Beyond spectral gap (extended): The role of the topology in decentralized learning | Code | 1 |
Page 1 of 54

No leaderboard results yet.