
Distributed Optimization

The goal of distributed optimization is to optimize an objective defined over millions or billions of data points that are distributed across many machines, by utilizing the combined computational power of those machines.

Source: Analysis of Distributed Stochastic Dual Coordinate Ascent
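
To make the definition concrete, the standard finite-sum formulation (a common convention, not quoted from the cited paper) places a local data shard D_m on each of M machines, and all machines jointly minimize the average of their local objectives:

$$
\min_{x \in \mathbb{R}^d} \; f(x) = \frac{1}{M} \sum_{m=1}^{M} f_m(x),
\qquad
f_m(x) = \frac{1}{|\mathcal{D}_m|} \sum_{i \in \mathcal{D}_m} \ell(x; z_i).
$$

A minimal sketch of the simplest algorithm for this problem, synchronous distributed gradient descent, simulated in a single process on a least-squares objective; all names, shard sizes, and constants here are illustrative assumptions, not taken from the source:

```python
# Minimal sketch: synchronous distributed gradient descent, with M workers
# simulated in one process on a shared least-squares objective.
import numpy as np

def local_grad(x, A, b):
    # Gradient of the local objective (1/2n_m) * ||A x - b||^2 on one shard.
    return A.T @ (A @ x - b) / len(b)

rng = np.random.default_rng(0)
M, d = 4, 5  # number of workers, parameter dimension (illustrative)
# Each worker m holds its own data shard (A_m, b_m).
shards = [(rng.normal(size=(100, d)), rng.normal(size=100)) for _ in range(M)]

x = np.zeros(d)  # shared model, kept by the server
lr = 0.1
for step in range(200):
    # Each worker computes a gradient on its shard; the server averages
    # them and takes a step (in a real system this is one communication round).
    g = np.mean([local_grad(x, A, b) for A, b in shards], axis=0)
    x -= lr * g
```

Because the gradient of f is the average of the local gradients, this loop is mathematically identical to centralized gradient descent on the pooled data. The papers listed below study how to improve on this baseline: reducing communication rounds (e.g. dual methods such as SDCA, Newton-type updates), and tolerating faulty or Byzantine workers via robust aggregation.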

Papers

Showing 141–150 of 536 papers

Title | Status | Hype
Byzantine-Robust Learning on Heterogeneous Datasets via Resampling | | 0
An Exact Quantized Decentralized Gradient Descent Algorithm | | 0
Byzantine-Resilient Output Optimization of Multiagent via Self-Triggered Hybrid Detection Approach | | 0
SHED: A Newton-type algorithm for federated learning based on incremental Hessian eigenvector sharing | | 0
Debiased distributed learning for sparse partial linear models in high dimensions | | 0
Accelerating variational quantum algorithms with multiple quantum processors | | 0
Byzantine-Resilient Non-Convex Stochastic Gradient Descent | | 0
Byzantine-Resilient Federated Learning via Distributed Optimization | | 0
Byzantine Machine Learning Made Easy by Resilient Averaging of Momentums | | 0
Byzantine Fault Tolerant Distributed Linear Regression | | 0
Page 15 of 54

No leaderboard results yet.