
Distributed Optimization

The goal of Distributed Optimization is to optimize an objective defined over millions to billions of data points that are distributed across many machines, by exploiting the combined computational power of those machines.

Source: Analysis of Distributed Stochastic Dual Coordinate Ascent
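The setting above can be made concrete with a minimal sketch of synchronous, data-parallel gradient descent: each "machine" holds a shard of the data, computes a gradient locally, and only the gradients (not the data) are averaged, as in an all-reduce. This is an illustrative example with synthetic data and hypothetical parameter values, not the method of any specific paper listed below.

```python
import numpy as np

rng = np.random.default_rng(0)
n_machines, n_per_machine, dim = 4, 50, 3

# Synthetic least-squares problem; the true weight vector is hypothetical.
w_true = np.array([1.0, -2.0, 0.5])
shards = []
for _ in range(n_machines):
    X = rng.normal(size=(n_per_machine, dim))
    y = X @ w_true
    shards.append((X, y))  # each machine keeps its shard locally

def local_gradient(w, X, y):
    """Gradient of (1/2n) * ||Xw - y||^2 on one machine's shard."""
    return X.T @ (X @ w - y) / len(y)

w = np.zeros(dim)
lr = 0.1
for step in range(200):
    # Each machine computes its gradient in parallel; the coordinator
    # averages them (communicating only dim floats per machine per round).
    grads = [local_gradient(w, X, y) for X, y in shards]
    w -= lr * np.mean(grads, axis=0)

print(np.round(w, 3))  # converges toward w_true
```

The point of the sketch is the communication pattern: per round, each machine sends one gradient of size `dim` rather than its `n_per_machine × dim` data shard, which is what makes the approach scale to data too large for any single machine.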

Papers

Showing 11–20 of 536 papers

Title | Status | Hype
FedDANE: A Federated Newton-Type Method | Code | 1
Federated Accelerated Stochastic Gradient Descent | Code | 1
Acceleration of Federated Learning with Alleviated Forgetting in Local Training | Code | 1
ACCO: Accumulate While You Communicate for Communication-Overlapped Sharded LLM Training | Code | 1
Communication-Efficient Distributed Optimization in Networks with Gradient Tracking and Variance Reduction | Code | 1
Asynchronous Local-SGD Training for Language Modeling | Code | 1
Beyond spectral gap (extended): The role of the topology in decentralized learning | Code | 1
Beyond spectral gap: The role of the topology in decentralized learning | Code | 1
Decentralized Riemannian Gradient Descent on the Stiefel Manifold | Code | 1
FedCFA: Alleviating Simpson's Paradox in Model Aggregation with Counterfactual Federated Learning | Code | 1
Page 2 of 54

No leaderboard results yet.