SOTAVerified

Distributed Optimization

The goal of Distributed Optimization is to optimize an objective defined over millions or billions of data points that are distributed across many machines, by exploiting the combined computational power of those machines.

Source: Analysis of Distributed Stochastic Dual Coordinate Ascent
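To make the setting concrete, here is a minimal, illustrative sketch (not taken from any paper listed below) of synchronous distributed gradient descent: the data is sharded across several "machines", each shard computes a local gradient, and a shared parameter is updated with the average of the local gradients. All names and values are invented for illustration.

```python
import random

random.seed(0)

# Synthetic 1-D regression data (y ≈ 3x plus noise), sharded across 4 "machines".
data = [(x, 3.0 * x + random.gauss(0, 0.1)) for x in (i / 50 for i in range(200))]
shards = [data[i::4] for i in range(4)]

def local_gradient(w, shard):
    """Gradient of the local least-squares loss (1/n) * sum((w*x - y)^2)."""
    n = len(shard)
    return sum(2.0 * (w * x - y) * x for x, y in shard) / n

w, lr = 0.0, 0.05
for _ in range(100):
    grads = [local_gradient(w, s) for s in shards]  # computed in parallel in a real system
    w -= lr * sum(grads) / len(grads)               # server averages the local gradients

print(w)  # converges near the true slope 3.0
```

Since the shards are equal-sized, averaging the local gradients recovers the full-batch gradient exactly; in practice the interesting questions (communication cost, stragglers, compression, local steps) arise from relaxing this synchronous averaging, which is what many of the papers below study.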

Papers

Showing 51–100 of 536 papers

Title | Status | Hype
--- | --- | ---
Single Point-Based Distributed Zeroth-Order Optimization with a Non-Convex Stochastic Objective Function | | 0
Aiding Global Convergence in Federated Learning via Local Perturbation and Mutual Similarity Information | | 0
A Federated Distributionally Robust Support Vector Machine with Mixture of Wasserstein Balls Ambiguity Set for Distributed Fault Diagnosis | | 0
Newton Meets Marchenko-Pastur: Massively Parallel Second-Order Optimization with Hessian Sketching and Debiasing | | 0
A Plug and Play Distributed Secondary Controller for Microgrids with Grid-Forming Inverters | | 0
Distributed Optimization via Energy Conservation Laws in Dilated Coordinates | | 0
Peer-to-Peer Learning Dynamics of Wide Neural Networks | | 0
Exploring Scaling Laws for Local SGD in Large Language Model Training | | 0
Distributed Optimization for Traffic Light Control and Connected Automated Vehicle Coordination in Mixed-Traffic Intersections | | 0
ADMM for Downlink Beamforming in Cell-Free Massive MIMO Systems | | 0
Distributed Optimization with Finite Bit Adaptive Quantization for Efficient Communication and Precision Enhancement | | 0
AttentionX: Exploiting Consensus Discrepancy In Attention from A Distributed Optimization Perspective | | 0
GNN-Empowered Effective Partial Observation MARL Method for AoI Management in Multi-UAV Network | Code | 1
Prescribed-time Convergent Distributed Multiobjective Optimization with Dynamic Event-triggered Communication | | 0
Seamless Integration: Sampling Strategies in Federated Learning Systems | | 0
A survey on secure decentralized optimization and learning | | 0
Distributed Optimization by Network Flows with Spatio-Temporal Compression | | 0
Spatio-Temporal Communication Compression in Distributed Prime-Dual Flows | | 0
Distributed Difference of Convex Optimization | | 0
A Mirror Descent-Based Algorithm for Corruption-Tolerant Distributed Gradient Descent | | 0
Cooperative Integrated Sensing and Communication Networks: Analysis and Distributed Design | | 0
Communication- and Computation-Efficient Distributed Submodular Optimization in Robot Mesh Networks | Code | 0
Provable Privacy Advantages of Decentralized Federated Learning via Distributed Optimization | | 0
Fast Distributed Optimization over Directed Graphs under Malicious Attacks using Trust | | 0
Graphon Particle Systems, Part II: Dynamics of Distributed Stochastic Continuum Optimization | | 0
Accelerating Distributed Optimization: A Primal-Dual Perspective on Local Steps | | 0
Graph Neural Networks Gone Hogwild | | 0
Distributed Utility Optimization in Vehicular Communication Systems | | 0
A KL-based Analysis Framework with Applications to Non-Descent Optimization Methods | | 0
ACCO: Accumulate While You Communicate for Communication-Overlapped Sharded LLM Training | Code | 1
Log-Scale Quantization in Distributed First-Order Methods: Gradient-based Learning from Distributed Data | | 0
Local Methods with Adaptivity via Scaling | | 0
Differentially-Private Distributed Model Predictive Control of Linear Discrete-Time Systems with Global Constraints | | 0
MicroAdam: Accurate Adaptive Optimization with Low Space Overhead and Provable Convergence | Code | 1
The Limits and Potentials of Local SGD for Distributed Heterogeneous Learning with Intermittent Communication | | 0
Flattened one-bit stochastic gradient descent: compressed distributed optimization with controlled variance | | 0
Structured Reinforcement Learning for Incentivized Stochastic Covert Optimization | | 0
Distributed Traffic Signal Control via Coordinated Maximum Pressure-plus-Penalty | | 0
Estimation Network Design framework for efficient distributed optimization | | 0
Rate Analysis of Coupled Distributed Stochastic Approximation for Misspecified Optimization | | 0
Distributed Fractional Bayesian Learning for Adaptive Optimization | | 0
Federated Optimization with Doubly Regularized Drift Correction | | 0
PIM-Opt: Demystifying Distributed Optimization Algorithms on a Real-World Processing-In-Memory System | Code | 0
Generalized Gradient Descent is a Hypergraph Functor | | 0
Distributed Maximum Consensus over Noisy Links | | 0
Network-Aware Value Stacking of Community Battery via Asynchronous Distributed Optimization | | 0
Quantization Avoids Saddle Points in Distributed Optimization | | 0
Streamlining in the Riemannian Realm: Efficient Riemannian Optimization with Loopless Variance Reduction | | 0
LoCoDL: Communication-Efficient Distributed Learning with Local Training and Compression | | 0
MUSIC: Accelerated Convergence for Distributed Optimization With Inexact and Exact Methods | | 0
Page 2 of 11

No leaderboard results yet.