SOTAVerified

Distributed Optimization

The goal of distributed optimization is to optimize an objective defined over millions or billions of data points that are distributed across many machines, by exploiting the combined computational power of those machines.

Source: Analysis of Distributed Stochastic Dual Coordinate Ascent
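The setting above can be made concrete with a small simulation. The sketch below is a minimal illustration of one common distributed-optimization pattern (synchronous distributed gradient descent with gradient averaging on a least-squares objective); the data, number of workers, step size, and iteration count are all illustrative assumptions, not taken from any paper listed here.

```python
import numpy as np

# Illustrative setup: each "machine" holds a shard of the data,
# computes a local gradient, and a central server averages the
# gradients -- one communication round per optimization step.
rng = np.random.default_rng(0)
n, d, workers = 1200, 5, 4
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.01 * rng.normal(size=n)

# Partition the rows across the (simulated) machines.
shards = np.array_split(np.arange(n), workers)

def local_gradient(w, idx):
    """Gradient of (1/2m)||X_i w - y_i||^2 on one machine's shard of m rows."""
    Xi, yi = X[idx], y[idx]
    return Xi.T @ (Xi @ w - yi) / len(idx)

w = np.zeros(d)
lr = 0.1
for _ in range(300):
    # Local gradients would run in parallel; here they are simulated serially.
    grads = [local_gradient(w, idx) for idx in shards]
    w -= lr * np.mean(grads, axis=0)  # server averages and updates

print(np.linalg.norm(w - w_true))  # residual error; small after convergence
```

Because the shards are equally sized, averaging the local gradients reproduces the full-batch gradient exactly; the methods in the papers below mostly study what changes when that communication is compressed, quantized, delayed, or corrupted.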

Papers

Showing 126–150 of 536 papers

Title — Hype

- Convergence Theory of Generalized Distributed Subgradient Method with Random Quantization — 0
- A Plug and Play Distributed Secondary Controller for Microgrids with Grid-Forming Inverters — 0
- Client Selection in Federated Learning: Convergence Analysis and Power-of-Choice Selection Strategies — 0
- Anytime MiniBatch: Exploiting Stragglers in Online Distributed Optimization — 0
- Centralised and Distributed Optimization for Aggregated Flexibility Services Provision — 0
- Cell Zooming with Masked Data for Off-Grid Small Cell Networks: Distributed Optimization Approach — 0
- A Novel Decentralized Algorithm for Coordinating the Optimal Power and Traffic Flows with EVs based on Variable Inner Loop Selection — 0
- A Distributed ADMM-based Deep Learning Approach for Thermal Control in Multi-Zone Buildings under Demand Response Events — 0
- Acceleration for Compressed Gradient Descent in Distributed Optimization — 0
- Accelerated Methods with Compressed Communications for Distributed Optimization Problems under Data Similarity — 0
- Accelerated consensus via Min-Sum Splitting — 0
- CEDAS: A Compressed Decentralized Stochastic Gradient Method with Improved Convergence — 0
- CEC: Crowdsourcing-based Evolutionary Computation for Distributed Optimization — 0
- An Online Optimization Approach for Multi-Agent Tracking of Dynamic Parameters in the Presence of Adversarial Noise — 0
- CANITA: Faster Rates for Distributed Convex Optimization with Communication Compression — 0
- Can Competition Outperform Collaboration? The Role of Misbehaving Agents — 0
- An Integrated Optimization + Learning Approach to Optimal Dynamic Pricing for the Retailer with Multi-type Customers in Smart Grids — 0
- A Differential Private Method for Distributed Optimization in Directed Networks via State Decomposition — 0
- Byzantine-Robust Learning on Heterogeneous Datasets via Resampling — 0
- An Exact Quantized Decentralized Gradient Descent Algorithm — 0
- Byzantine-Resilient Output Optimization of Multiagent via Self-Triggered Hybrid Detection Approach — 0
- SHED: A Newton-type algorithm for federated learning based on incremental Hessian eigenvector sharing — 0
- Debiased distributed learning for sparse partial linear models in high dimensions — 0
- Accelerating variational quantum algorithms with multiple quantum processors — 0
- Byzantine-Resilient Non-Convex Stochastic Gradient Descent — 0
Page 6 of 22

No leaderboard results yet.