SOTAVerified

Distributed Optimization

The goal of distributed optimization is to optimize an objective defined over millions to billions of data points distributed across many machines, by exploiting the combined computational power of those machines.

Source: Analysis of Distributed Stochastic Dual Coordinate Ascent
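The setting described above can be sketched with the simplest scheme of this kind: data-parallel gradient descent, where each "machine" holds a shard of the data, computes a local gradient, and a coordinator averages them. This is a minimal illustrative sketch (the quadratic least-squares objective, the shard layout, and all function names are assumptions for illustration, not taken from any paper listed on this page):

```python
# Minimal sketch of data-parallel distributed gradient descent.
# Machines are simulated as data shards; the objective is a toy
# one-dimensional least-squares fit (an assumption for illustration).

def local_gradient(w, shard):
    """Gradient of the loss 0.5*(w*x - y)^2 averaged over one machine's shard."""
    g = 0.0
    for x, y in shard:
        g += (w * x - y) * x
    return g / len(shard)

def distributed_gd(shards, steps=200, lr=0.02):
    """Coordinator averages the machines' local gradients each round."""
    w = 0.0
    for _ in range(steps):
        grads = [local_gradient(w, s) for s in shards]  # one communication round
        w -= lr * sum(grads) / len(grads)               # average, then take a step
    return w

# Data generated from y = 2*x, split round-robin across three "machines".
data = [(float(x), 2.0 * x) for x in range(1, 13)]
shards = [data[0::3], data[1::3], data[2::3]]
print(round(distributed_gd(shards), 3))  # → 2.0
```

Each round costs one gradient message per machine, which is why much of the literature below focuses on reducing communication (compression, sketching, local steps) rather than computation.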

Papers

Showing 101–150 of 536 papers

ADMM for Downlink Beamforming in Cell-Free Massive MIMO Systems
A Reinforcement Learning Approach to Parameter Selection for Distributed Optimal Power Flow
Communication-Efficient Distributed Optimization of Self-Concordant Empirical Loss
A Provably Communication-Efficient Asynchronous Distributed Inference Method for Convex and Nonconvex Problems
A Distributed Second-Order Algorithm You Can Trust
Communication Efficient, Differentially Private Distributed Optimization using Correlation-Aware Sketching
Communication-Efficient Accurate Statistical Estimation
Communication/Computation Tradeoffs in Consensus-Based Distributed Optimization
A primal-dual method for conic constrained distributed optimization problems
Acceleration in Distributed Optimization under Similarity
Combining Graph Attention Networks and Distributed Optimization for Multi-Robot Mixed-Integer Convex Programming
Communication-Efficient Distributed Kalman Filtering using ADMM
Approximate Gradient Coding with Optimal Decoding
Communication-Efficient Distributed SGD with Compressed Sensing
Collaborative Satisfaction of Long-Term Spatial Constraints in Multi-Agent Systems: A Distributed Optimization Approach (extended version)
Communication Efficient Federated Learning with Linear Convergence on Heterogeneous Data
Communication-Efficient Projection-Free Algorithm for Distributed Optimization
Communication-efficient Variance-reduced Stochastic Gradient Descent
Collaborative Learning over Wireless Networks: An Introductory Overview
Concentration of Non-Isotropic Random Tensors with Applications to Learning and Empirical Risk Minimization
Consensus optimization approach for distributed Kalman filtering: performance recovery of centralized filtering with proofs
Continual Learning with Distributed Optimization: Does CoCoA Forget?
Convergence rate of sign stochastic gradient descent for non-convex functions
Convergence Rates of Two-Time-Scale Gradient Descent-Ascent Dynamics for Solving Nonconvex Min-Max Problems
Convergence Theory of Flexible ALADIN for Distributed Optimization
Convergence Theory of Generalized Distributed Subgradient Method with Random Quantization
A Plug and Play Distributed Secondary Controller for Microgrids with Grid-Forming Inverters
Client Selection in Federated Learning: Convergence Analysis and Power-of-Choice Selection Strategies
Anytime MiniBatch: Exploiting Stragglers in Online Distributed Optimization
Centralised and Distributed Optimization for Aggregated Flexibility Services Provision
Cell Zooming with Masked Data for Off-Grid Small Cell Networks: Distributed Optimization Approach
A Novel Decentralized Algorithm for Coordinating the Optimal Power and Traffic Flows with EVs based on Variable Inner Loop Selection
A Distributed ADMM-based Deep Learning Approach for Thermal Control in Multi-Zone Buildings under Demand Response Events
Acceleration for Compressed Gradient Descent in Distributed Optimization
Accelerated Methods with Compressed Communications for Distributed Optimization Problems under Data Similarity
Accelerated consensus via Min-Sum Splitting
CEDAS: A Compressed Decentralized Stochastic Gradient Method with Improved Convergence
CEC: Crowdsourcing-based Evolutionary Computation for Distributed Optimization
An Online Optimization Approach for Multi-Agent Tracking of Dynamic Parameters in the Presence of Adversarial Noise
CANITA: Faster Rates for Distributed Convex Optimization with Communication Compression
Can Competition Outperform Collaboration? The Role of Misbehaving Agents
An Integrated Optimization + Learning Approach to Optimal Dynamic Pricing for the Retailer with Multi-type Customers in Smart Grids
A Differential Private Method for Distributed Optimization in Directed Networks via State Decomposition
Byzantine-Robust Learning on Heterogeneous Datasets via Resampling
An Exact Quantized Decentralized Gradient Descent Algorithm
Byzantine-Resilient Output Optimization of Multiagent via Self-Triggered Hybrid Detection Approach
SHED: A Newton-type algorithm for federated learning based on incremental Hessian eigenvector sharing
Debiased distributed learning for sparse partial linear models in high dimensions
Accelerating variational quantum algorithms with multiple quantum processors
Byzantine-Resilient Non-Convex Stochastic Gradient Descent
Page 3 of 11

No leaderboard results yet.