
Distributed Optimization

The goal of distributed optimization is to optimize an objective defined over millions or billions of data points distributed across many machines, by utilizing the combined computational power of those machines.

Source: Analysis of Distributed Stochastic Dual Coordinate Ascent
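The definition above can be illustrated with a minimal single-process simulation of synchronous distributed gradient descent. This is a hedged sketch, not the method of the cited paper: the least-squares objective, the shard layout, and all names (`distributed_least_squares`, `shards`) are illustrative assumptions. Each "machine" holds a shard of the data and computes a local gradient; a coordinator averages the local gradients to take a global step.

```python
import numpy as np

def distributed_least_squares(shards, dim, lr=0.1, steps=200):
    """Synchronous distributed gradient descent on a shared
    least-squares objective, simulated in one process.
    Each element of `shards` is the (X, y) data held by one machine."""
    w = np.zeros(dim)
    for _ in range(steps):
        grads = []
        for X, y in shards:
            # Local step: each machine computes the gradient of its
            # own shard's loss, (1/n_k) * X_k^T (X_k w - y_k).
            grads.append(X.T @ (X @ w - y) / len(y))
        # Coordinator step: average the local gradients and update.
        w -= lr * np.mean(grads, axis=0)
    return w

# Toy data: 4 machines, 50 noiseless points each, drawn from w_true.
rng = np.random.default_rng(0)
w_true = rng.normal(size=3)
shards = []
for _ in range(4):
    X = rng.normal(size=(50, 3))
    shards.append((X, X @ w_true))

w = distributed_least_squares(shards, dim=3)
print(np.allclose(w, w_true, atol=1e-3))
```

In a real deployment the inner loop runs in parallel on separate workers and the averaging is a communication round (e.g. an all-reduce); much of the literature listed below studies how to reduce the cost of exactly that communication step.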

Papers

Showing 241-250 of 536 papers

Title | Status | Hype
Signal Decomposition Using Masked Proximal Operators | Code | 1
Escaping Saddle Points with Bias-Variance Reduced Local Perturbed SGD for Communication Efficient Nonconvex Distributed Learning | | 0
Distributed saddle point problems for strongly concave-convex functions | | 0
Spatial Reuse in Dense Wireless Areas: A Cross-layer Optimization Approach via ADMM | | 0
SHED: A Newton-type algorithm for federated learning based on incremental Hessian eigenvector sharing | | 0
Communication Efficient Federated Learning via Ordered ADMM in a Fully Decentralized Setting | | 0
DASHA: Distributed Nonconvex Optimization with Communication Compression, Optimal Oracle Complexity, and No Client Synchronization | | 0
Federated Active Learning (F-AL): an Efficient Annotation Strategy for Federated Learning | | 0
Recycling Model Updates in Federated Learning: Are Gradient Subspaces Low-Rank? | Code | 1
End-to-End Quality-of-Service Assurance with Autonomous Systems: 5G/6G Case Study | | 0

No leaderboard results yet.