SOTAVerified

Distributed Optimization

The goal of distributed optimization is to optimize an objective defined over very large datasets, often millions to billions of data points, that are distributed across many machines, by exploiting the combined computational power of those machines.

Source: Analysis of Distributed Stochastic Dual Coordinate Ascent
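The setting above can be illustrated with a minimal sketch of synchronous distributed gradient descent: the data are partitioned into shards (one per machine), each machine computes the gradient of its local loss, and a coordinator averages the local gradients to update the shared model. The names (`shards`, `local_grad`) and the one-dimensional least-squares problem are illustrative assumptions, not taken from any particular paper on this page.

```python
import random

random.seed(0)
M = 4  # number of simulated machines

# 1-D least-squares problem: y = 3*x, with the data split across M machines.
data = [(x, 3.0 * x) for x in (random.uniform(-1, 1) for _ in range(400))]
shards = [data[i::M] for i in range(M)]  # equal-size shard per machine

def local_grad(w, shard):
    # Gradient of the local mean squared error (1/2)(w*x - y)^2 over one shard.
    return sum((w * x - y) * x for x, y in shard) / len(shard)

w = 0.0
lr = 0.5
for _ in range(100):
    grads = [local_grad(w, s) for s in shards]  # computed in parallel in practice
    w -= lr * sum(grads) / M                    # coordinator averages and updates

print(w)  # converges to the true coefficient 3.0
```

In a real deployment the shards live on separate machines and the averaging step is a communication round; much of the literature listed below studies how to reduce the cost of exactly that round (compression, sparsification, local steps, gossip).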

Papers

Showing 351–375 of 536 papers

GIANT: Globally Improved Approximate Newton Method for Distributed Optimization
Goal-Oriented Wireless Communication Resource Allocation for Cyber-Physical Systems
GoSGD: Distributed Optimization for Deep Learning with Gossip Exchange
Gradient-Consensus: Linearly Convergent Distributed Optimization Algorithm over Directed Graphs
Gradient flows and proximal splitting methods: A unified view on accelerated and stochastic optimization
Gradient Primal-Dual Algorithm Converges to Second-Order Stationary Solution for Nonconvex Distributed Optimization Over Networks
Gradient Sparsification for Communication-Efficient Distributed Optimization
Gradient-Tracking over Directed Graphs for solving Leaderless Multi-Cluster Games
Graph Neural Network-Based Distributed Optimal Control for Linear Networked Systems: An Online Distributed Training Approach
Graph Neural Networks Gone Hogwild
Graphon Particle Systems, Part II: Dynamics of Distributed Stochastic Continuum Optimization
Hemingway: Modeling Distributed Optimization Algorithms
Hessian Riemannian Flow For Multi-Population Wardrop Equilibrium
Simple and Scalable Algorithms for Cluster-Aware Precision Medicine
High-performance Kernel Machines with Implicit Distributed Optimization and Randomization
High-Probability Convergence for Composite and Distributed Stochastic Minimization and Variational Inequalities with Heavy-Tailed Noise
Hybrid Decentralized Optimization: Leveraging Both First- and Zeroth-Order Optimizers for Faster Convergence
Hyperspectral Unmixing Based on Clustered Multitask Networks
Impact of Redundancy on Resilience in Distributed Optimization and Learning
Improving Rate of Convergence via Gain Adaptation in Multi-Agent Distributed ADMM Framework
Improving the Transient Times for Distributed Stochastic Gradient Methods
Improving the Worst-Case Bidirectional Communication Complexity for Nonconvex Distributed Optimization under Function Similarity
Innovation Compression for Communication-efficient Distributed Optimization with Linear Convergence
Graph Learning Under Partial Observability
Is Local SGD Better than Minibatch SGD?
Page 15 of 22

No leaderboard results yet.