SOTAVerified

Distributed Optimization

The goal of Distributed Optimization is to optimize an objective defined over millions or billions of data points distributed across many machines, by exploiting the combined computational power of those machines.

Source: Analysis of Distributed Stochastic Dual Coordinate Ascent
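The definition above can be made concrete with a minimal sketch (not taken from any paper on this page) of synchronous distributed gradient descent: the data is partitioned into shards across simulated workers, each worker computes the gradient of a least-squares loss on its own shard, and a coordinator averages the shard gradients to update the shared parameters. The worker count, step size, and quadratic objective are illustrative assumptions.

```python
import numpy as np

def local_gradient(X, y, w):
    """Gradient of the mean-squared-error loss 0.5*||Xw - y||^2 / n on one shard."""
    return X.T @ (X @ w - y) / len(y)

def distributed_gd(shards, w0, lr=0.5, steps=200):
    """Each round: every worker computes its shard gradient in parallel
    (simulated here sequentially); the coordinator averages them and updates w."""
    w = w0.copy()
    for _ in range(steps):
        grads = [local_gradient(X, y, w) for X, y in shards]  # one per machine
        w -= lr * np.mean(grads, axis=0)                      # aggregate + step
    return w

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
X = rng.normal(size=(400, 2))
y = X @ w_true
# Partition the rows across 4 equal-size simulated workers.
shards = [(X[i::4], y[i::4]) for i in range(4)]
w_hat = distributed_gd(shards, np.zeros(2))
```

With equal-size shards, the average of the shard gradients equals the full-batch gradient, so this sketch recovers centralized gradient descent exactly; communication-efficient methods in the list below (gradient sparsification, quantization, local SGD) trade away this exactness to reduce the per-round communication.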

Papers

Showing 351–400 of 536 papers

Geometrically Convergent Distributed Optimization with Uncoordinated Step-Sizes
GIANT: Globally Improved Approximate Newton Method for Distributed Optimization
Goal-Oriented Wireless Communication Resource Allocation for Cyber-Physical Systems
GoSGD: Distributed Optimization for Deep Learning with Gossip Exchange
Gradient-Consensus: Linearly Convergent Distributed Optimization Algorithm over Directed Graphs
Gradient flows and proximal splitting methods: A unified view on accelerated and stochastic optimization
Gradient Primal-Dual Algorithm Converges to Second-Order Stationary Solution for Nonconvex Distributed Optimization Over Networks
Gradient Sparsification for Communication-Efficient Distributed Optimization
Gradient-Tracking over Directed Graphs for solving Leaderless Multi-Cluster Games
Graph Neural Network-Based Distributed Optimal Control for Linear Networked Systems: An Online Distributed Training Approach
Graph Neural Networks Gone Hogwild
Graphon Particle Systems, Part II: Dynamics of Distributed Stochastic Continuum Optimization
Hemingway: Modeling Distributed Optimization Algorithms
Hessian Riemannian Flow For Multi-Population Wardrop Equilibrium
Simple and Scalable Algorithms for Cluster-Aware Precision Medicine
High-performance Kernel Machines with Implicit Distributed Optimization and Randomization
High-Probability Convergence for Composite and Distributed Stochastic Minimization and Variational Inequalities with Heavy-Tailed Noise
Hybrid Decentralized Optimization: Leveraging Both First- and Zeroth-Order Optimizers for Faster Convergence
Hyperspectral Unmixing Based on Clustered Multitask Networks
Impact of Redundancy on Resilience in Distributed Optimization and Learning
Improving Rate of Convergence via Gain Adaptation in Multi-Agent Distributed ADMM Framework
Improving the Transient Times for Distributed Stochastic Gradient Methods
Improving the Worst-Case Bidirectional Communication Complexity for Nonconvex Distributed Optimization under Function Similarity
Innovation Compression for Communication-efficient Distributed Optimization with Linear Convergence
Graph Learning Under Partial Observability
Is Local SGD Better than Minibatch SGD?
Iterative Pre-Conditioning to Expedite the Gradient-Descent Method
KKT Conditions, First-Order and Second-Order Optimization, and Distributed Optimization: Tutorial and Survey
Model Aggregation via Good-Enough Model Spaces
LAGO: Few-shot Crosslingual Embedding Inversion Attacks via Language Similarity-Aware Graph Optimization
LASER: Linear Compression in Wireless Distributed Optimization
Layer-wise Adaptive Gradient Sparsification for Distributed Deep Learning with Convergence Guarantees
Leader Stochastic Gradient Descent for Distributed Training of Deep Learning Models: Extension
Learning-Accelerated ADMM for Distributed Optimal Power Flow
Learning Autonomy in Management of Wireless Random Networks
Distributed Model Predictive Control Design for Multi-agent Systems via Bayesian Optimization
Learning (With) Distributed Optimization
Leveraging Function Space Aggregation for Federated Learning at Scale
Limited Communications Distributed Optimization via Deep Unfolded Distributed ADMM
Linear Convergence of Distributed Mirror Descent with Integral Feedback for Strongly Convex Problems
On Linear Convergence of PI Consensus Algorithm under the Restricted Secant Inequality
Linear Convergent Decentralized Optimization with Compression
Linear Speedup of Incremental Aggregated Gradient Methods on Streaming Data
Local Methods with Adaptivity via Scaling
LocalNewton: Reducing Communication Bottleneck for Distributed Learning
Distributed Saddle-Point Problems: Lower Bounds, Near-Optimal and Robust Algorithms
Local SGD Optimizes Overparameterized Neural Networks in Polynomial Time
LoCoDL: Communication-Efficient Distributed Learning with Local Training and Compression
Logarithmically Quantized Distributed Optimization over Dynamic Multi-Agent Networks
Log-Scale Quantization in Distributed First-Order Methods: Gradient-based Learning from Distributed Data
Page 8 of 11
