SOTAVerified

Distributed Optimization

The goal of distributed optimization is to optimize an objective defined over millions or billions of data points that are distributed across many machines, by utilizing the combined computational power of those machines.

Source: Analysis of Distributed Stochastic Dual Coordinate Ascent
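The basic pattern behind many of the methods listed below can be illustrated with data-parallel gradient descent: each worker holds a shard of the data, computes a gradient on its shard, and the gradients are averaged (an all-reduce) before a shared model update. The sketch below simulates the workers serially on a least-squares problem; the function name and problem setup are illustrative, not taken from any particular paper.

```python
import numpy as np

def distributed_gradient_descent(X, y, num_workers=4, lr=0.1, steps=100):
    """Data-parallel gradient descent for least squares, workers simulated serially.

    Each "worker" holds one shard of (X, y). Per step, every worker computes the
    gradient of its local mean-squared-error loss; the gradients are averaged
    (standing in for an all-reduce) and applied as one shared update.
    """
    shards = list(zip(np.array_split(X, num_workers),
                      np.array_split(y, num_workers)))
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        # Local step: gradient of (1/2n_i) * ||X_i w - y_i||^2 on each shard.
        grads = [Xi.T @ (Xi @ w - yi) / len(yi) for Xi, yi in shards]
        # Communication step: average local gradients into one global direction.
        w -= lr * np.mean(grads, axis=0)
    return w

# Synthetic noiseless problem: with equal-size shards, the averaged gradient
# equals the full-batch gradient, so this matches centralized gradient descent.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true
w = distributed_gradient_descent(X, y)
print(np.round(w, 2))  # converges toward w_true
```

In practice the averaging step is where the research happens: the compression, quantization, sparsification, and local-update papers in the list below all reduce how much (or how often) this gradient exchange communicates.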

Papers

Showing 376–400 of 536 papers

Iterative Pre-Conditioning to Expedite the Gradient-Descent Method
KKT Conditions, First-Order and Second-Order Optimization, and Distributed Optimization: Tutorial and Survey
Model Aggregation via Good-Enough Model Spaces
LAGO: Few-shot Crosslingual Embedding Inversion Attacks via Language Similarity-Aware Graph Optimization
LASER: Linear Compression in Wireless Distributed Optimization
Layer-wise Adaptive Gradient Sparsification for Distributed Deep Learning with Convergence Guarantees
Leader Stochastic Gradient Descent for Distributed Training of Deep Learning Models: Extension
Learning-Accelerated ADMM for Distributed Optimal Power Flow
Learning Autonomy in Management of Wireless Random Networks
Distributed Model Predictive Control Design for Multi-agent Systems via Bayesian Optimization
Learning (With) Distributed Optimization
Leveraging Function Space Aggregation for Federated Learning at Scale
Limited Communications Distributed Optimization via Deep Unfolded Distributed ADMM
Linear Convergence of Distributed Mirror Descent with Integral Feedback for Strongly Convex Problems
On Linear Convergence of PI Consensus Algorithm under the Restricted Secant Inequality
Linear Convergent Decentralized Optimization with Compression
Linear Speedup of Incremental Aggregated Gradient Methods on Streaming Data
Local Methods with Adaptivity via Scaling
LocalNewton: Reducing Communication Bottleneck for Distributed Learning
Distributed Saddle-Point Problems: Lower Bounds, Near-Optimal and Robust Algorithms
Local SGD Optimizes Overparameterized Neural Networks in Polynomial Time
LoCoDL: Communication-Efficient Distributed Learning with Local Training and Compression
Logarithmically Quantized Distributed Optimization over Dynamic Multi-Agent Networks
Log-Scale Quantization in Distributed First-Order Methods: Gradient-based Learning from Distributed Data
Lower Bounds and Nearly Optimal Algorithms in Distributed Learning with Communication Compression
Page 16 of 22

No leaderboard results yet.