
Distributed Optimization

The goal of distributed optimization is to optimize an objective defined over millions or billions of data points distributed across many machines, by leveraging the combined computational power of those machines.

Source: Analysis of Distributed Stochastic Dual Coordinate Ascent
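The definition above can be illustrated with a minimal sketch (assumptions: a synthetic least-squares objective, equal-sized data shards, and simulated workers in a single process; the function names are hypothetical, not from any paper listed below). Each "worker" holds a shard of the data and computes a local gradient; a server averages the gradients and takes a step, which matches full-batch gradient descent on the pooled data:

```python
import numpy as np

# Hypothetical simulation of distributed gradient descent: data is split
# across workers, each computes a gradient on its shard, and a server
# averages the gradients and updates the shared model.
# Objective: least-squares f(w) = (1/2n) * ||Xw - y||^2.

def local_gradient(w, X_shard, y_shard):
    """Gradient of the least-squares loss on one worker's data shard."""
    return X_shard.T @ (X_shard @ w - y_shard) / len(y_shard)

def distributed_gd(shards, dim, lr=0.1, steps=200):
    """Server loop: average per-worker gradients, take a gradient step."""
    w = np.zeros(dim)
    for _ in range(steps):
        grads = [local_gradient(w, X, y) for X, y in shards]
        w -= lr * np.mean(grads, axis=0)  # equal shards => full-data gradient
    return w

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
X = rng.normal(size=(400, 2))
y = X @ w_true
# Split the data evenly across 4 simulated workers.
shards = [(X[i::4], y[i::4]) for i in range(4)]
w_est = distributed_gd(shards, dim=2)
```

In a real deployment the gradient averaging step would be a network operation (e.g. an all-reduce), which is exactly where the communication-efficiency and asynchrony questions studied by the papers below arise.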

Papers

Showing 501–525 of 536 papers

Title | Status | Hype
A Survey of Optimization Methods for Training DL Models: Theoretical Perspective on Convergence and Generalization | | 0
A Survey of Resilient Coordination for Cyber-Physical Systems Against Malicious Attacks | | 0
A Survey on Distributed Evolutionary Computation | | 0
A survey on secure decentralized optimization and learning | | 0
Asymptotic Network Independence in Distributed Stochastic Optimization for Machine Learning | | 0
Asynchronous Adaptation and Learning over Networks - Part I: Modeling and Stability Analysis | | 0
Asynchronous Adaptation and Learning over Networks - Part II: Performance Analysis | | 0
Asynchronous Distributed ADMM for Large-Scale Optimization - Part I: Algorithm and Convergence Analysis | | 0
Asynchronous Distributed Optimization with Stochastic Delays | | 0
Asynchronous Distributed Optimization with Delay-free Parameters | | 0
Asynchronous Forward Bounding for Distributed COPs | | 0
Asynchronous Iterations in Optimization: New Sequence Results and Sharper Algorithmic Guarantees | | 0
Asynchronous Stochastic Optimization Robust to Arbitrary Delays | | 0
A Tight Convergence Analysis for Stochastic Gradient Descent with Delayed Updates | | 0
AttentionX: Exploiting Consensus Discrepancy In Attention from A Distributed Optimization Perspective | | 0
Auction-based and Distributed Optimization Approaches for Scheduling Observations in Satellite Constellations with Exclusive Orbit Portions | | 0
Information-Geometric Barycenters for Bayesian Federated Learning | | 0
BALPA: A Balanced Primal-Dual Algorithm for Nonsmooth Optimization with Application to Distributed Optimization | | 0
Basis Matters: Better Communication-Efficient Second Order Methods for Federated Learning | | 0
Beyond Self-Repellent Kernels: History-Driven Target Towards Efficient Nonlinear MCMC on General Graphs | | 0
Bias-Variance Reduced Local SGD for Less Heterogeneous Federated Learning | | 0
Byzantine Fault Tolerant Distributed Linear Regression | | 0
Byzantine Machine Learning Made Easy by Resilient Averaging of Momentums | | 0
Byzantine-Resilient Federated Learning via Distributed Optimization | | 0
Byzantine-Resilient Non-Convex Stochastic Gradient Descent | | 0
Page 21 of 22

No leaderboard results yet.