SOTAVerified

Distributed Optimization

The goal of distributed optimization is to optimize an objective defined over millions or billions of data points that are distributed across many machines, by utilizing the combined computational power of those machines.
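A minimal data-parallel sketch of this setup (illustrative only, not taken from any paper listed here): each simulated "machine" holds a shard of the data and computes a gradient of a least-squares objective on its local shard; a central update averages the local gradients, which stands in for the communication step.

```python
import numpy as np

def local_gradient(w, X, y):
    # Gradient of (1/2n) * ||X w - y||^2 on this worker's shard.
    return X.T @ (X @ w - y) / len(y)

def distributed_gd(shards, dim, lr=0.1, steps=200):
    # Synchronous distributed gradient descent: each worker computes
    # a gradient on its shard; averaging emulates the aggregation step.
    w = np.zeros(dim)
    for _ in range(steps):
        grads = [local_gradient(w, X, y) for X, y in shards]
        w -= lr * np.mean(grads, axis=0)
    return w

# Synthetic problem split across 4 simulated machines (hypothetical data).
rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0, 0.5])
shards = []
for _ in range(4):
    X = rng.normal(size=(250, 3))
    shards.append((X, X @ w_true))

w_hat = distributed_gd(shards, dim=3)
```

Because the averaged shard gradients equal the full-batch gradient here, `w_hat` converges toward `w_true`; real distributed methods trade off exactly this communication cost against local computation.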

Source: Analysis of Distributed Stochastic Dual Coordinate Ascent

Papers

Showing 51-75 of 536 papers

Title (each entry below has an empty Status and a Hype score of 0):

Bias-Variance Reduced Local SGD for Less Heterogeneous Federated Learning
Acceleration in Distributed Optimization under Similarity
A Provably Communication-Efficient Asynchronous Distributed Inference Method for Convex and Nonconvex Problems
A Reinforcement Learning Approach to Parameter Selection for Distributed Optimal Power Flow
An Online Optimization Approach for Multi-Agent Tracking of Dynamic Parameters in the Presence of Adversarial Noise
A Sequential Approximation Framework for Coded Distributed Optimization
A Stochastic Large-scale Machine Learning Algorithm for Distributed Features and Observations
A Survey of Optimization Methods for Training DL Models: Theoretical Perspective on Convergence and Generalization
An Integrated Optimization + Learning Approach to Optimal Dynamic Pricing for the Retailer with Multi-type Customers in Smart Grids
A Survey on Distributed Evolutionary Computation
A survey on secure decentralized optimization and learning
Asymptotic Network Independence in Distributed Stochastic Optimization for Machine Learning
A Differential Private Method for Distributed Optimization in Directed Networks via State Decomposition
An Exact Quantized Decentralized Gradient Descent Algorithm
SHED: A Newton-type algorithm for federated learning based on incremental Hessian eigenvector sharing
Debiased distributed learning for sparse partial linear models in high dimensions
Accelerating variational quantum algorithms with multiple quantum processors
An Equivalent Circuit Approach to Distributed Optimization
Accelerated Distributed Optimization with Compression and Error Feedback
99% of Distributed Optimization is a Waste of Time: The Issue and How to Fix it
Adaptive Sampling Distributed Stochastic Variance Reduced Gradient for Heterogeneous Distributed Datasets
Information-Geometric Barycenters for Bayesian Federated Learning
BALPA: A Balanced Primal-Dual Algorithm for Nonsmooth Optimization with Application to Distributed Optimization
A Mirror Descent-Based Algorithm for Corruption-Tolerant Distributed Gradient Descent
Algorithm Unrolling-Based Distributed Optimization for RIS-Assisted Cell-Free Networks
Page 3 of 22

No leaderboard results yet.