SOTAVerified

Distributed Optimization

The goal of distributed optimization is to optimize an objective defined over millions or billions of data points that are spread across many machines, by exploiting the combined computational power of those machines.

Source: Analysis of Distributed Stochastic Dual Coordinate Ascent
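The setup described above can be illustrated with a minimal single-process sketch of synchronous distributed gradient descent: each simulated worker computes a gradient on its local data shard, and a coordinator averages the gradients and updates the shared model. This is an illustrative assumption about the generic data-parallel pattern, not the method of any particular paper listed below; all function names here are hypothetical.

```python
import numpy as np

def local_gradient(w, X, y):
    """Least-squares gradient on one worker's shard: grad of 0.5*||Xw - y||^2 / n."""
    return X.T @ (X @ w - y) / len(y)

def distributed_gd(shards, dim, lr=0.1, steps=200):
    """Synchronous distributed gradient descent.

    Each (X, y) shard plays the role of one machine's local data.
    In a real deployment the gradient computations run in parallel and
    the averaging is an all-reduce; here both are simulated in-process.
    """
    w = np.zeros(dim)
    for _ in range(steps):
        grads = [local_gradient(w, X, y) for X, y in shards]  # one gradient per worker
        w -= lr * np.mean(grads, axis=0)  # averaging step (all-reduce)
    return w

# Simulate a dataset split evenly across 4 machines.
rng = np.random.default_rng(0)
w_true = rng.normal(size=5)
shards = []
for _ in range(4):
    X = rng.normal(size=(100, 5))
    shards.append((X, X @ w_true))

w = distributed_gd(shards, dim=5)
print(np.allclose(w, w_true, atol=1e-3))
```

Because the shards are equally sized, averaging the local gradients reproduces the full-batch gradient exactly; with unequal shards one would use a weighted average instead.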

Papers

Showing 201–250 of 536 papers

Title | Status | Hype
Shuffle-QUDIO: accelerate distributed VQE with trainability enhancement and measurement reduction | Code | 0
Cooperative Tuning of Multi-Agent Optimal Control Systems | Code | 0
Distributed CPU Scheduling Subject to Nonlinear Constraints | | 0
Real-Time Distributed Model Predictive Control with Limited Communication Data Rates | | 0
Decentralized Optimization with Distributed Features and Non-Smooth Objective Functions | | 0
Consensus optimization approach for distributed Kalman filtering: performance recovery of centralized filtering with proofs | | 0
Multi-Agent Reinforcement Learning with Graph Convolutional Neural Networks for optimal Bidding Strategies of Generation Units in Electricity Markets | | 0
Coordinating Flexible Ramping Products with Dynamics of the Natural Gas Network | | 0
Convergence Theory of Generalized Distributed Subgradient Method with Random Quantization | | 0
Online Computation of Terminal Ingredients in Distributed Model Predictive Control for Reference Tracking | | 0
Distributed Learning of Neural Lyapunov Functions for Large-Scale Networked Dissipative Systems | | 0
Variance Reduced ProxSkip: Algorithm, Theory and Application to Federated Learning | Code | 0
Can Competition Outperform Collaboration? The Role of Misbehaving Agents | | 0
Simultaneous Contact-Rich Grasping and Locomotion via Distributed Optimization Enabling Free-Climbing for Multi-Limbed Robots | | 0
On the Convergence of Distributed Stochastic Bilevel Optimization Algorithms over a Network | | 0
Distributed Adversarial Training to Robustify Deep Neural Networks at Scale | Code | 0
Lower Bounds and Nearly Optimal Algorithms in Distributed Learning with Communication Compression | | 0
Beyond spectral gap: The role of the topology in decentralized learning | Code | 1
A Computation and Communication Efficient Method for Distributed Nonconvex Problems in the Partial Participation Setting | | 0
Optimal Gradient Sliding and its Application to Distributed Optimization Under Similarity | | 0
Byzantine Machine Learning Made Easy by Resilient Averaging of Momentums | | 0
Distributed Optimization in Distribution Systems with Grid-Forming and Grid-Supporting Inverters | | 0
On Distributed Adaptive Optimization with Gradient Compression | | 0
EF-BV: A Unified Theory of Error Feedback and Variance Reduction Mechanisms for Biased and Unbiased Compression in Distributed Optimization | Code | 0
Understanding A Class of Decentralized and Federated Optimization Algorithms: A Multi-Rate Feedback Control Perspective | | 0
Power Bundle Adjustment for Large-Scale 3D Reconstruction | Code | 2
Optimization-Based Ramping Reserve Allocation of BESS for AGC Enhancement | | 0
Distributed Dynamic Safe Screening Algorithms for Sparse Regularization | | 0
FedADMM: A Federated Primal-Dual Algorithm Allowing Partial Participation | | 0
Competition-Based Resilience in Distributed Quadratic Optimization | | 0
Multi-Edge Server-Assisted Dynamic Federated Learning with an Optimized Floating Aggregation Point | | 0
Distributed Sketching for Randomized Optimization: Exact Characterization, Concentration and Lower Bounds | | 0
Distributed Dual Quaternion Based Localization of Visual Sensor Networks | | 0
Optimal Methods for Convex Risk Averse Distributed Optimization | | 0
Federated Minimax Optimization: Improved Convergence Analyses and Algorithms | | 0
Correlated quantization for distributed mean estimation and optimization | | 0
Acceleration of Federated Learning with Alleviated Forgetting in Local Training | Code | 1
Distributed Methods with Absolute Compression and Error Compensation | | 0
Distributed-MPC with Data-Driven Estimation of Bus Admittance Matrix in Voltage Control | | 0
Multi-objective Distributed Optimization for Zonal Distribution System with Multi-Microgrids | | 0
Signal Decomposition Using Masked Proximal Operators | Code | 1
Escaping Saddle Points with Bias-Variance Reduced Local Perturbed SGD for Communication Efficient Nonconvex Distributed Learning | | 0
Distributed saddle point problems for strongly concave-convex functions | | 0
Spatial Reuse in Dense Wireless Areas: A Cross-layer Optimization Approach via ADMM | | 0
SHED: A Newton-type algorithm for federated learning based on incremental Hessian eigenvector sharing | | 0
Communication Efficient Federated Learning via Ordered ADMM in a Fully Decentralized Setting | | 0
DASHA: Distributed Nonconvex Optimization with Communication Compression, Optimal Oracle Complexity, and No Client Synchronization | Code | 0
Federated Active Learning (F-AL): an Efficient Annotation Strategy for Federated Learning | | 0
Recycling Model Updates in Federated Learning: Are Gradient Subspaces Low-Rank? | Code | 1
End-to-End Quality-of-Service Assurance with Autonomous Systems: 5G/6G Case Study | | 0
Page 5 of 11

No leaderboard results yet.