
Distributed Optimization

The goal of Distributed Optimization is to optimize an objective defined over millions or billions of data points distributed across many machines, by utilizing the combined computational power of those machines.

Source: Analysis of Distributed Stochastic Dual Coordinate Ascent
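The setting described above can be illustrated with a minimal simulation: data for a least-squares objective is partitioned across several machines, each computes a gradient on its local shard, and a coordinator averages the gradients before every machine applies the same update. This is a generic sketch of synchronous distributed gradient descent, not the method of any particular paper listed below; all names and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic least-squares problem: minimize (1/n) * sum_i (a_i^T x - b_i)^2,
# with the n data points partitioned across several simulated machines.
n, d, n_machines = 600, 5, 3
A = rng.normal(size=(n, d))
x_true = rng.normal(size=d)
b = A @ x_true
shards = [(A[i::n_machines], b[i::n_machines]) for i in range(n_machines)]


def local_gradient(A_k, b_k, x):
    """Gradient of the local least-squares loss on one machine's shard."""
    return 2.0 * A_k.T @ (A_k @ x - b_k) / len(b_k)


def distributed_gradient_descent(shards, d, lr=0.05, steps=500):
    """Each round: machines compute local gradients (in parallel, in a real
    deployment), a coordinator averages them, and the shared iterate is
    updated with the averaged gradient."""
    x = np.zeros(d)
    for _ in range(steps):
        grads = [local_gradient(A_k, b_k, x) for A_k, b_k in shards]
        x -= lr * np.mean(grads, axis=0)
    return x


x_hat = distributed_gradient_descent(shards, d)
print(np.max(np.abs(x_hat - x_true)))  # small: the iterate recovers x_true
```

Averaging local gradients makes each round equivalent to a full-batch gradient step on the pooled data, which is why this simple scheme converges; much of the literature below studies how to retain that behavior under compression, quantization, decentralized (peer-to-peer) communication, or privacy constraints.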

Papers

Showing 61-70 of 536 papers

Title | Status | Hype
A survey on secure decentralized optimization and learning | | 0
Asymptotic Network Independence in Distributed Stochastic Optimization for Machine Learning | | 0
An Integrated Optimization + Learning Approach to Optimal Dynamic Pricing for the Retailer with Multi-type Customers in Smart Grids | | 0
A Differential Private Method for Distributed Optimization in Directed Networks via State Decomposition | | 0
An Exact Quantized Decentralized Gradient Descent Algorithm | | 0
SHED: A Newton-type algorithm for federated learning based on incremental Hessian eigenvector sharing | | 0
Debiased distributed learning for sparse partial linear models in high dimensions | | 0
Accelerating variational quantum algorithms with multiple quantum processors | | 0
An Equivalent Circuit Approach to Distributed Optimization | | 0
Accelerated Distributed Optimization with Compression and Error Feedback | | 0
Page 7 of 54

No leaderboard results yet.