SOTAVerified

Distributed Optimization

The goal of Distributed Optimization is to optimize an objective defined over millions or billions of data points that are distributed across many machines, by exploiting the combined computational power of those machines.

Source: Analysis of Distributed Stochastic Dual Coordinate Ascent
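As a minimal sketch of the idea (not code from any listed paper), the objective can be split into shards held by hypothetical "workers"; each worker computes a gradient over only its local data, and a central server averages those gradients before each update:

```python
# Data-parallel gradient descent on f(w) = (1/N) * sum_i (w*x_i - y_i)^2,
# with the data points partitioned across simulated machines ("shards").
# All names here (local_gradient, distributed_gd) are illustrative.

def local_gradient(w, shard):
    # Gradient of this shard's portion of the squared-error objective.
    return sum(2 * (w * x - y) * x for x, y in shard)

def distributed_gd(shards, steps=200, lr=0.01):
    n_total = sum(len(s) for s in shards)
    w = 0.0
    for _ in range(steps):
        # Each machine computes a local gradient; the server averages them.
        grad = sum(local_gradient(w, s) for s in shards) / n_total
        w -= lr * grad
    return w

# Toy data with true slope 3, partitioned across two machines.
shards = [[(1, 3), (2, 6)], [(3, 9), (4, 12)]]
w = distributed_gd(shards)
```

In a real deployment the inner `sum` over shards would be a network communication step (e.g. an all-reduce), which is why much of the literature below focuses on reducing communication cost and tolerating faulty workers.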

Papers

Showing 521–530 of 536 papers

Title | Hype
Bias-Variance Reduced Local SGD for Less Heterogeneous Federated Learning | 0
Byzantine Fault Tolerant Distributed Linear Regression | 0
Byzantine Machine Learning Made Easy by Resilient Averaging of Momentums | 0
Byzantine-Resilient Federated Learning via Distributed Optimization | 0
Byzantine-Resilient Non-Convex Stochastic Gradient Descent | 0
Byzantine-Resilient Output Optimization of Multiagent via Self-Triggered Hybrid Detection Approach | 0
Byzantine-Robust Learning on Heterogeneous Datasets via Resampling | 0
Can Competition Outperform Collaboration? The Role of Misbehaving Agents | 0
CANITA: Faster Rates for Distributed Convex Optimization with Communication Compression | 0
CEC: Crowdsourcing-based Evolutionary Computation for Distributed Optimization | 0
