SOTAVerified

Distributed Optimization

The goal of distributed optimization is to optimize an objective defined over millions or billions of data points that are distributed across many machines, by utilizing the computational power of all of those machines.

Source: Analysis of Distributed Stochastic Dual Coordinate Ascent
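As a minimal illustration of the setting, the sketch below simulates synchronous data-parallel gradient descent on a least-squares objective: the data is partitioned into shards (standing in for machines), each shard computes a local gradient, and the gradients are averaged into a single global update. The function and variable names are illustrative, not from any paper listed here.

```python
import numpy as np

def distributed_gradient_step(shards, w, lr=0.1):
    """One synchronous step of data-parallel gradient descent.

    Each "machine" holds a shard (X_k, y_k) of the least-squares problem
    f(w) = (1/n) * sum_i (x_i . w - y_i)^2. Every shard computes its local
    gradient contribution; the contributions are summed and normalized by
    the total sample count n, which equals the exact full-batch gradient.
    """
    n = sum(len(y) for _, y in shards)
    grad = np.zeros_like(w)
    for X, y in shards:
        # local gradient contribution on this machine's data
        grad += 2.0 * X.T @ (X @ w - y)
    grad /= n
    return w - lr * grad

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
w_true = rng.normal(size=5)
y = X @ w_true

# Partition the rows across 3 simulated machines.
shards = [(X[i::3], y[i::3]) for i in range(3)]

w = np.zeros(5)
for _ in range(200):
    w = distributed_gradient_step(shards, w)
```

In a real deployment the sum of local gradients would be computed with a collective such as all-reduce rather than a Python loop; communication cost of that step is what many of the papers below (gradient compression, local SGD, decentralized topologies) aim to reduce.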

Papers

Showing 1–25 of 536 papers

Title | Status | Hype
Power Bundle Adjustment for Large-Scale 3D Reconstruction | Code | 2
Moshpit SGD: Communication-Efficient Decentralized Training on Heterogeneous Unreliable Devices | Code | 1
An Efficient Learning Framework For Federated XGBoost Using Secret Sharing And Distributed Optimization | Code | 1
Federated Accelerated Stochastic Gradient Descent | Code | 1
MANGO: A Python Library for Parallel Hyperparameter Tuning | Code | 1
MicroAdam: Accurate Adaptive Optimization with Low Space Overhead and Provable Convergence | Code | 1
BAGUA: Scaling up Distributed Learning with System Relaxations | Code | 1
Byzantine-Robust Learning on Heterogeneous Datasets via Bucketing | Code | 1
Distributed Resource Allocation with Multi-Agent Deep Reinforcement Learning for 5G-V2V Communication | Code | 1
FedDANE: A Federated Newton-Type Method | Code | 1
Graph Neural Networks for Scalable Radio Resource Management: Architecture Design and Theoretical Analysis | Code | 1
Just One Byte (per gradient): A Note on Low-Bandwidth Decentralized Language Model Finetuning Using Shared Randomness | Code | 1
Acceleration of Federated Learning with Alleviated Forgetting in Local Training | Code | 1
ACCO: Accumulate While You Communicate for Communication-Overlapped Sharded LLM Training | Code | 1
Communication-Efficient Distributed Optimization in Networks with Gradient Tracking and Variance Reduction | Code | 1
Asynchronous Local-SGD Training for Language Modeling | Code | 1
Beyond spectral gap (extended): The role of the topology in decentralized learning | Code | 1
Beyond spectral gap: The role of the topology in decentralized learning | Code | 1
Decentralized Riemannian Gradient Descent on the Stiefel Manifold | Code | 1
DeepLM: Large-Scale Nonlinear Least Squares on Deep Learning Frameworks Using Stochastic Domain Decomposition | Code | 1
DPLib: A Standard Benchmark Library for Distributed Power System Analysis and Optimization | Code | 1
FedCFA: Alleviating Simpson's Paradox in Model Aggregation with Counterfactual Federated Learning | Code | 1
Federated Optimization in Heterogeneous Networks | Code | 1
GNN-Empowered Effective Partial Observation MARL Method for AoI Management in Multi-UAV Network | Code | 1
Federated Learning as Variational Inference: A Scalable Expectation Propagation Approach | Code | 1

No leaderboard results yet.