
Distributed Optimization

The goal of Distributed Optimization is to optimize an objective defined over millions or billions of data points that are distributed across many machines, by utilizing the computational power of those machines.

Source: Analysis of Distributed Stochastic Dual Coordinate Ascent
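The setup described above (one shared objective, data sharded across machines, workers contributing local computation) can be sketched as synchronous distributed gradient descent. This is a minimal simulated sketch, not a method from any listed paper; the worker count, step size, and least-squares objective are illustrative assumptions:

```python
# Minimal sketch of synchronous distributed gradient descent, assuming a
# least-squares objective whose data is sharded evenly across workers.
# Worker count, step size, and the objective are illustrative choices.
import numpy as np

def local_gradient(X, y, w):
    # Gradient of (1/2n)||Xw - y||^2 on one worker's data shard.
    return X.T @ (X @ w - y) / len(y)

def distributed_gd(shards, w, lr=0.1, steps=200):
    # Each round: workers compute gradients on their shards in parallel
    # (simulated serially here), then a reduce step averages the gradients
    # before applying one shared update to the global model.
    for _ in range(steps):
        grads = [local_gradient(X, y, w) for X, y in shards]
        w = w - lr * np.mean(grads, axis=0)
    return w

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
X = rng.normal(size=(400, 2))
y = X @ w_true
# Split the dataset into 4 equal shards, one per simulated machine.
shards = [(X[i::4], y[i::4]) for i in range(4)]
w_est = distributed_gd(shards, np.zeros(2))
# w_est should be close to w_true after the iterations converge.
print(w_est)
```

Because the shards are equal-sized, averaging the local gradients reproduces the full-batch gradient exactly; methods in the list below (ADMM, dual coordinate ascent, compressed or asynchronous updates) vary precisely in how they relax this synchronous averaging step.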

Papers

Showing 261–270 of 536 papers

Title (Hype)
- Escaping Saddle Points with Bias-Variance Reduced Local Perturbed SGD for Communication Efficient Nonconvex Distributed Learning (0)
- SHED: A Newton-type algorithm for federated learning based on incremental Hessian eigenvector sharing (0)
- Distributed saddle point problems for strongly concave-convex functions (0)
- Spatial Reuse in Dense Wireless Areas: A Cross-layer Optimization Approach via ADMM (0)
- Communication Efficient Federated Learning via Ordered ADMM in a Fully Decentralized Setting (0)
- DASHA: Distributed Nonconvex Optimization with Communication Compression, Optimal Oracle Complexity, and No Client Synchronization (0)
- Federated Active Learning (F-AL): an Efficient Annotation Strategy for Federated Learning (0)
- End-to-End Quality-of-Service Assurance with Autonomous Systems: 5G/6G Case Study (0)
- Distributed gradient-based optimization in the presence of dependent aperiodic communication (0)
- Distributed Learning of Generalized Linear Causal Networks (0)
Page 27 of 54

No leaderboard results yet.