SOTAVerified

Distributed Optimization

The goal of distributed optimization is to optimize an objective defined over millions or billions of data points that are distributed across many machines, by utilizing the combined computational power of those machines.

Source: Analysis of Distributed Stochastic Dual Coordinate Ascent
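The setup above can be illustrated with a minimal sketch of synchronous data-parallel gradient descent: each worker computes a gradient on its own data shard, and a coordinator averages the gradients before updating the shared parameters. This is an illustrative simulation, not any specific paper's method; the least-squares objective and the `distributed_gradient_step` helper are assumptions made for the example.

```python
import numpy as np

def distributed_gradient_step(shards, w, lr=0.1):
    """One synchronous step of data-parallel gradient descent.

    Each "worker" holds one shard (X_i, y_i) of a least-squares problem
    f(w) = (1/n) * ||X w - y||^2, and computes its local gradient; the
    coordinator averages the local gradients and updates the parameters.
    """
    grads = []
    for X, y in shards:  # in a real system, each iteration runs on a different machine
        residual = X @ w - y
        grads.append(2 * X.T @ residual / len(y))  # local gradient on this shard
    avg_grad = np.mean(grads, axis=0)  # all-reduce / parameter-server averaging
    return w - lr * avg_grad

# Synthetic problem split across 4 simulated workers.
rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0])
X = rng.normal(size=(100, 2))
y = X @ w_true
shards = [(X[i::4], y[i::4]) for i in range(4)]

w = np.zeros(2)
for _ in range(200):
    w = distributed_gradient_step(shards, w)
```

Because the averaged gradient equals (up to shard-size weighting) the full-batch gradient, the iterates converge to `w_true` just as centralized gradient descent would; communication-efficient methods such as those listed below trade off the cost of exchanging these gradients against convergence speed.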

Papers

Showing 421–430 of 536 papers

Title | Status | Hype
PowerSGD: Practical Low-Rank Gradient Compression for Distributed Optimization | Code | 0
Accelerated Sparsified SGD with Error Feedback | | 0
Distributed estimation of the inverse Hessian by determinantal averaging | | 0
Leader Stochastic Gradient Descent for Distributed Training of Deep Learning Models: Extension | | 0
OverSketched Newton: Fast Convex Optimization for Serverless Systems | Code | 0
Byzantine Fault Tolerant Distributed Linear Regression | | 0
Differentially Private Consensus-Based Distributed Optimization | | 0
SLSGD: Secure and Efficient Distributed On-device Machine Learning | | 0
A Provably Communication-Efficient Asynchronous Distributed Inference Method for Convex and Nonconvex Problems | | 0
On Maintaining Linear Convergence of Distributed Learning and Optimization under Limited Communication | | 0
