SOTAVerified

Distributed Optimization

The goal of Distributed Optimization is to minimize an objective defined over millions or billions of data points that are distributed across many machines, by utilizing the combined computational power of those machines.

Source: Analysis of Distributed Stochastic Dual Coordinate Ascent
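The setting above can be made concrete with a minimal sketch of synchronous data-parallel gradient descent: each machine holds a shard of the data and computes a local gradient, and a coordinator averages the gradients and takes a step. The function and parameter names below (`local_gradient`, `distributed_gd`, the simulated shards) are illustrative, not from any particular library.

```python
import random

def local_gradient(w, shard):
    """Gradient of the mean-squared-error loss on one worker's data shard."""
    g = 0.0
    for x, y in shard:
        g += 2.0 * (w * x - y) * x
    return g / len(shard)

def distributed_gd(shards, steps=500, lr=0.1):
    """Synchronous data-parallel gradient descent: each simulated machine
    computes a gradient on its own shard; a coordinator averages the
    gradients and updates the shared parameter."""
    w = 0.0
    for _ in range(steps):
        grads = [local_gradient(w, s) for s in shards]  # run in parallel in practice
        w -= lr * sum(grads) / len(grads)               # average, then step
    return w

# Toy problem: recover the slope of y = 3x from data split across 4 "machines".
random.seed(0)
data = [(x, 3.0 * x) for x in (random.uniform(-1, 1) for _ in range(400))]
shards = [data[i::4] for i in range(4)]
w = distributed_gd(shards)
```

In a real deployment the per-shard gradients would be computed on separate machines and combined with an all-reduce or a parameter server; the loop here only simulates that communication pattern on one process.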

Papers

Showing 351–375 of 536 papers

Title (each paper below currently has a Hype score of 0)

Optimal Methods for Convex Risk Averse Distributed Optimization
Optimization-Based Ramping Reserve Allocation of BESS for AGC Enhancement
Optimization for Reinforcement Learning: From Single Agent to Cooperative Agents
Optimization in Open Networks via Dual Averaging
Parallel Feedforward Compensation for Output Synchronization: Fully Distributed Control and Indefinite Laplacian
Partitioning Data on Features or Samples in Communication-Efficient Distributed Optimization?
Peer-to-Peer Learning Dynamics of Wide Neural Networks
Pixel super-resolved lensless on-chip sensor with scattering multiplexing
PopSGD: Decentralized Stochastic Gradient Descent in the Population Model
Asynchronous Decentralized SGD with Quantized and Local Updates
Popt4jlib: A Parallel/Distributed Optimization Library for Java
SLSGD: Secure and Efficient Distributed On-device Machine Learning
Predict Globally, Correct Locally: Parallel-in-Time Optimal Control of Neural Networks
Prescribed-time Convergent Distributed Multiobjective Optimization with Dynamic Event-triggered Communication
Coordinated Day-ahead Dispatch of Multiple Power Distribution Grids hosting Stochastic Resources: An ADMM-based Framework
Privacy-Preserving Distributed Market Mechanism for Active Distribution Networks
Privacy-Preserving Distributed Optimization and Learning
Privacy-Preserving Peer-to-Peer Energy Trading via Hybrid Secure Computations
Privacy-Preserving Push-Pull Method for Decentralized Optimization via State Decomposition
Private Learning on Networks
Private Learning on Networks: Part II
Problem-dependent convergence bounds for randomized linear gradient compression
Projected Push-Sum Gradient Descent-Ascent for Convex Optimization with Application to Economic Dispatch Problems
Provable Privacy Advantages of Decentralized Federated Learning via Distributed Optimization
Provably Doubly Accelerated Federated Learning: The First Theoretically Successful Combination of Local Training and Communication Compression
Page 15 of 22

No leaderboard results yet.