SOTAVerified

Distributed Optimization

The goal of Distributed Optimization is to optimize an objective defined over millions to billions of data points that are distributed across many machines, by utilizing the combined computational power of those machines.

Source: Analysis of Distributed Stochastic Dual Coordinate Ascent
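The definition above describes the standard finite-sum setting: minimize f(x) = (1/n) Σᵢ fᵢ(x) where the data, and hence the summands, are partitioned across workers. A minimal sketch of synchronous distributed gradient descent on a least-squares objective, with the "machines" simulated in-process (all names and parameters here are illustrative, not taken from any paper on this page):

```python
import numpy as np

# Simulated distributed gradient descent on least squares.
# Each worker holds a shard of the data, computes a local gradient,
# and a coordinator averages the gradients and takes the step.

rng = np.random.default_rng(0)
n, d, n_workers = 1200, 10, 4
A = rng.normal(size=(n, d))
x_true = rng.normal(size=d)
b = A @ x_true + 0.01 * rng.normal(size=n)

# Partition the rows of (A, b) across the workers.
shards = [(A[i::n_workers], b[i::n_workers]) for i in range(n_workers)]

def local_gradient(x, Ai, bi):
    # Gradient of (1/2m) * ||Ai x - bi||^2 over this worker's m samples.
    return Ai.T @ (Ai @ x - bi) / len(bi)

x = np.zeros(d)
lr = 0.1
for _ in range(500):
    grads = [local_gradient(x, Ai, bi) for Ai, bi in shards]
    x -= lr * np.mean(grads, axis=0)  # coordinator: average, then step

error = np.linalg.norm(x - x_true)  # small: x has converged near x_true
```

In a real deployment the gradient averaging would be a network collective (e.g. an all-reduce) rather than an in-process mean; much of the literature listed below studies how to make exactly that step cheaper (compression, local steps, decentralized topologies) or more robust.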

Papers

Showing 1–50 of 536 papers

Title | Status | Hype
Power Bundle Adjustment for Large-Scale 3D Reconstruction | Code | 2
Beyond spectral gap (extended): The role of the topology in decentralized learning | Code | 1
GNN-Empowered Effective Partial Observation MARL Method for AoI Management in Multi-UAV Network | Code | 1
DeepLM: Large-Scale Nonlinear Least Squares on Deep Learning Frameworks Using Stochastic Domain Decomposition | Code | 1
Communication-Efficient Distributed Optimization in Networks with Gradient Tracking and Variance Reduction | Code | 1
MANGO: A Python Library for Parallel Hyperparameter Tuning | Code | 1
Graph Neural Networks for Scalable Radio Resource Management: Architecture Design and Theoretical Analysis | Code | 1
MicroAdam: Accurate Adaptive Optimization with Low Space Overhead and Provable Convergence | Code | 1
Training Large Neural Networks with Constant Memory using a New Execution Algorithm | Code | 1
Distributed Resource Allocation with Multi-Agent Deep Reinforcement Learning for 5G-V2V Communication | Code | 1
An Efficient Learning Framework For Federated XGBoost Using Secret Sharing And Distributed Optimization | Code | 1
Moshpit SGD: Communication-Efficient Decentralized Training on Heterogeneous Unreliable Devices | Code | 1
Acceleration of Federated Learning with Alleviated Forgetting in Local Training | Code | 1
Federated Learning as Variational Inference: A Scalable Expectation Propagation Approach | Code | 1
Privacy-Preserving Distributed Optimization via Subspace Perturbation: A General Framework | Code | 1
Beyond spectral gap: The role of the topology in decentralized learning | Code | 1
Just One Byte (per gradient): A Note on Low-Bandwidth Decentralized Language Model Finetuning Using Shared Randomness | Code | 1
Optimization Algorithm Design via Electric Circuits | Code | 1
SCAFFOLD: Stochastic Controlled Averaging for Federated Learning | Code | 1
Signal Decomposition Using Masked Proximal Operators | Code | 1
Federated Optimization in Heterogeneous Networks | Code | 1
DPLib: A Standard Benchmark Library for Distributed Power System Analysis and Optimization | Code | 1
FedCFA: Alleviating Simpson's Paradox in Model Aggregation with Counterfactual Federated Learning | Code | 1
Secure Distributed Training at Scale | Code | 1
Recycling Model Updates in Federated Learning: Are Gradient Subspaces Low-Rank? | Code | 1
Unbiased Single-scale and Multi-scale Quantizers for Distributed Optimization | Code | 1
Decentralized Riemannian Gradient Descent on the Stiefel Manifold | Code | 1
FedDANE: A Federated Newton-Type Method | Code | 1
ACCO: Accumulate While You Communicate for Communication-Overlapped Sharded LLM Training | Code | 1
Federated Accelerated Stochastic Gradient Descent | Code | 1
Asynchronous Local-SGD Training for Language Modeling | Code | 1
Byzantine-Robust Learning on Heterogeneous Datasets via Bucketing | Code | 1
BAGUA: Scaling up Distributed Learning with System Relaxations | Code | 1
Accelerated Primal-Dual Algorithms for Distributed Smooth Convex Optimization over Networks | Code | 0
Federated Learning with Compression: Unified Analysis and Sharp Guarantees | Code | 0
Error Feedback Shines when Features are Rare | Code | 0
A Distributed Quasi-Newton Algorithm for Primal and Dual Regularized Empirical Risk Minimization | Code | 0
FairSync: Ensuring Amortized Group Exposure in Distributed Recommendation Retrieval | Code | 0
A Distributed Quasi-Newton Algorithm for Empirical Risk Minimization with Nonsmooth Regularization | Code | 0
Dynamic communication topologies for distributed heuristics in energy system optimization algorithms | Code | 0
Distributed Optimization with Arbitrary Local Solvers | Code | 0
EF-BV: A Unified Theory of Error Feedback and Variance Reduction Mechanisms for Biased and Unbiased Compression in Distributed Optimization | Code | 0
Efficient Randomized Subspace Embeddings for Distributed Optimization under a Communication Budget | Code | 0
Distributed Adversarial Training to Robustify Deep Neural Networks at Scale | Code | 0
Distributed Markov Chain Monte Carlo Sampling based on the Alternating Direction Method of Multipliers | Code | 0
Differentially Private Distributed Estimation and Learning | Code | 0
Distributed Optimization, Averaging via ADMM, and Network Topology | Code | 0
Adding vs. Averaging in Distributed Primal-Dual Optimization | Code | 0
Cooperative Tuning of Multi-Agent Optimal Control Systems | Code | 0
PIM-Opt: Demystifying Distributed Optimization Algorithms on a Real-World Processing-In-Memory System | Code | 0

No leaderboard results yet.