
Distributed Optimization

The goal of Distributed Optimization is to minimize an objective defined over millions or billions of data points distributed across many machines, by exploiting the combined computational power of those machines.

Source: Analysis of Distributed Stochastic Dual Coordinate Ascent
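
The standard formulation is consensus minimization of f(x) = (1/m) * sum_{i=1}^m f_i(x), where f_i is the loss on the data shard held by machine i. The following Python sketch illustrates this with distributed gradient descent via gradient averaging on a synthetic least-squares problem; the worker count, the objective, and all variable names are illustrative assumptions, not taken from any paper listed here.

```python
# A minimal sketch of distributed gradient descent, assuming the standard
# setup where f(x) = (1/m) * sum_i f_i(x) is split across m workers, each
# holding one shard of the data. The least-squares objective and worker
# count are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic least-squares problem: minimize ||A x - b||^2 over all data.
n_samples, n_features, n_workers = 1_000, 10, 4
A = rng.normal(size=(n_samples, n_features))
x_true = rng.normal(size=n_features)
b = A @ x_true + 0.01 * rng.normal(size=n_samples)

# Shard the data row-wise across workers (machine i holds (A_i, b_i)).
shards = list(zip(np.array_split(A, n_workers), np.array_split(b, n_workers)))

def local_gradient(x, A_i, b_i):
    """Gradient of the local objective f_i(x) = (1/n_i) * ||A_i x - b_i||^2."""
    return 2.0 * A_i.T @ (A_i @ x - b_i) / len(b_i)

x = np.zeros(n_features)
lr = 0.1
for step in range(200):
    # Each worker computes a gradient on its own shard; a coordinator
    # averages them (the communication step) and applies one update.
    grads = [local_gradient(x, A_i, b_i) for A_i, b_i in shards]
    x -= lr * np.mean(grads, axis=0)

print("distance to x_true:", np.linalg.norm(x - x_true))
```

Because the shards are equal-sized, the average of the local gradients equals the gradient of the global objective, so this matches centralized gradient descent while each machine only ever touches its own data.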

Papers

Showing 81-90 of 536 papers (page 9 of 54)

Title | Status | Hype
A Mirror Descent-Based Algorithm for Corruption-Tolerant Distributed Gradient Descent | - | 0
Bias-Variance Reduced Local SGD for Less Heterogeneous Federated Learning | - | 0
Information-Geometric Barycenters for Bayesian Federated Learning | - | 0
Byzantine Machine Learning Made Easy by Resilient Averaging of Momentums | - | 0
Byzantine-Resilient Federated Learning via Distributed Optimization | - | 0
Byzantine-Resilient Non-Convex Stochastic Gradient Descent | - | 0
Byzantine-Resilient Output Optimization of Multiagent via Self-Triggered Hybrid Detection Approach | - | 0
Algorithm Unrolling-Based Distributed Optimization for RIS-Assisted Cell-Free Networks | - | 0
Byzantine-Robust Learning on Heterogeneous Datasets via Resampling | - | 0
Adaptive Consensus ADMM for Distributed Optimization | - | 0

No leaderboard results yet.