
Distributed Optimization

The goal of Distributed Optimization is to optimize an objective defined over millions or billions of data points that are distributed across many machines, by exploiting the combined computational power of those machines.
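The idea in the definition above can be sketched with a minimal data-parallel gradient descent simulation: the data is partitioned into shards (one per "machine"), each shard computes a gradient of its local loss, and a central step averages them. This is an illustrative toy (least-squares objective, simulated machines, made-up names like `local_gradient`), not a method taken from any of the listed papers.

```python
import numpy as np

# Toy sketch of distributed optimization: minimize the least-squares loss
#   f(w) = (1/n) * sum_i (x_i . w - y_i)^2
# over data split across several simulated machines.

rng = np.random.default_rng(0)
n, d, num_machines = 1000, 5, 4
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true

# Partition the dataset into one shard per machine.
shards = list(zip(np.array_split(X, num_machines),
                  np.array_split(y, num_machines)))

def local_gradient(w, X_k, y_k):
    # Gradient of the local least-squares loss on one machine's shard.
    return 2.0 * X_k.T @ (X_k @ w - y_k) / len(y_k)

w = np.zeros(d)
lr = 0.1
for _ in range(200):
    # Each machine computes a gradient on its own shard;
    # a coordinator averages them and takes one descent step.
    grads = [local_gradient(w, X_k, y_k) for X_k, y_k in shards]
    w -= lr * np.mean(grads, axis=0)
```

After the loop, `w` is close to `w_true`: averaging the per-shard gradients recovers (up to shard-size weighting) the full-batch gradient, so the iteration behaves like centralized gradient descent while each machine only ever touches its own data.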

Source: Analysis of Distributed Stochastic Dual Coordinate Ascent

Papers

Showing 291–300 of 536 papers

Titles (each currently listed with a hype score of 0):

- Graph Learning Under Partial Observability
- Is Local SGD Better than Minibatch SGD?
- Iterative Pre-Conditioning to Expedite the Gradient-Descent Method
- KKT Conditions, First-Order and Second-Order Optimization, and Distributed Optimization: Tutorial and Survey
- Model Aggregation via Good-Enough Model Spaces
- LAGO: Few-shot Crosslingual Embedding Inversion Attacks via Language Similarity-Aware Graph Optimization
- LASER: Linear Compression in Wireless Distributed Optimization
- Layer-wise Adaptive Gradient Sparsification for Distributed Deep Learning with Convergence Guarantees
- Leader Stochastic Gradient Descent for Distributed Training of Deep Learning Models: Extension
- Learning-Accelerated ADMM for Distributed Optimal Power Flow
