SOTAVerified

Distributed Optimization

The goal of distributed optimization is to optimize an objective defined over millions or billions of data points that are partitioned across many machines, by utilizing the combined computational power of those machines.
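A common instance of this setup is synchronous distributed gradient descent: each machine computes a gradient on its local data shard, and a server averages the gradients and updates the shared model. The sketch below is a toy single-process simulation of that pattern (function and variable names are illustrative, not from any specific paper or library):

```python
import numpy as np

def local_gradient(w, X, y):
    """Gradient of (1/2n)||Xw - y||^2 on one worker's local data shard."""
    return X.T @ (X @ w - y) / len(y)

def run_distributed_gd(shards, dim, lr=0.1, steps=200):
    """Each round: workers compute gradients on their shards (simulated
    sequentially here); the server averages them and updates w."""
    w = np.zeros(dim)
    for _ in range(steps):
        grads = [local_gradient(w, X, y) for X, y in shards]
        w -= lr * np.mean(grads, axis=0)  # server-side averaged update
    return w

rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0, 0.5])
shards = []
for _ in range(4):  # four simulated machines, each holding a local shard
    X = rng.normal(size=(50, 3))
    shards.append((X, X @ w_true))
w_hat = run_distributed_gd(shards, dim=3)
```

Many of the papers listed below study variations on this loop: compressing or quantizing the communicated gradients, taking several local steps between averaging rounds (local SGD), or removing the synchronization barrier entirely.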

Source: Analysis of Distributed Stochastic Dual Coordinate Ascent

Papers

Showing 376-400 of 536 papers

Title | Status | Hype
New Bounds For Distributed Mean Estimation and Variance Reduction | | 0
Adaptive Sampling Distributed Stochastic Variance Reduced Gradient for Heterogeneous Distributed Datasets | | 0
The Geometry of Sign Gradient Descent | | 0
Is Local SGD Better than Minibatch SGD? | | 0
Distributed Optimization over Block-Cyclic Data | | 0
Distributed Averaging Methods for Randomized Second Order Optimization | | 0
Training Large Neural Networks with Constant Memory using a New Execution Algorithm | Code | 1
Differentially Quantized Gradient Methods | | 0
FedDANE: A Federated Newton-Type Method | Code | 1
Estimating the Error of Randomized Newton Methods: A Bootstrap Approach | | 0
Acceleration for Compressed Gradient Descent in Distributed Optimization | | 0
Manifold Identification for Ultimately Communication-Efficient Distributed Optimization | Code | 0
Graph Learning Under Partial Observability | | 0
A Distributed Quasi-Newton Algorithm for Primal and Dual Regularized Empirical Risk Minimization | Code | 0
Optimization for Reinforcement Learning: From Single Agent to Cooperative Agents | | 0
Qsparse-local-SGD: Distributed SGD with Quantization, Sparsification and Local Computations | Code | 0
Layer-wise Adaptive Gradient Sparsification for Distributed Deep Learning with Convergence Guarantees | | 0
vqSGD: Vector Quantized Stochastic Gradient Descent | | 0
Learning-Accelerated ADMM for Distributed Optimal Power Flow | | 0
On the Convergence of Local Descent Methods in Federated Learning | | 0
Local SGD with Periodic Averaging: Tighter Analysis and Adaptive Synchronization | Code | 0
Asynchronous Decentralized SGD with Quantized and Local Updates | | 0
Accelerated Primal-Dual Algorithms for Distributed Smooth Convex Optimization over Networks | Code | 0
Sparsification as a Remedy for Staleness in Distributed Asynchronous SGD | | 0
SCAFFOLD: Stochastic Controlled Averaging for Federated Learning | Code | 1
Page 16 of 22

No leaderboard results yet.