SOTAVerified

Distributed Optimization

The goal of distributed optimization is to optimize an objective defined over millions or billions of data points distributed across many machines, by utilizing the combined computational power of those machines.

Source: Analysis of Distributed Stochastic Dual Coordinate Ascent
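The idea above can be illustrated with a minimal sketch of data-parallel gradient descent: each machine holds a shard of the data and computes a local gradient, and a coordinator averages the gradients and applies one update. This is a generic illustration of the setting, not the method of any particular paper listed below; the function names and the 1-D least-squares objective are assumptions chosen for brevity.

```python
# Illustrative sketch (assumed example, not a specific paper's algorithm):
# data-parallel gradient descent on a 1-D least-squares objective
#   f(w) = mean over machines of mean_i (w*x_i - y_i)^2,
# where each "machine" is simulated as a list of (x, y) pairs.

def local_gradient(w, shard):
    # Gradient of the mean squared error on this machine's shard.
    n = len(shard)
    return sum(2 * (w * x - y) * x for x, y in shard) / n

def distributed_gd(shards, w=0.0, lr=0.1, steps=200):
    for _ in range(steps):
        # Each machine computes its gradient (in parallel in a real system;
        # simulated sequentially here); the coordinator averages them.
        grads = [local_gradient(w, s) for s in shards]
        w -= lr * sum(grads) / len(grads)
    return w

# Data generated from y = 3*x, split across two machines.
shards = [[(1.0, 3.0), (2.0, 6.0)], [(3.0, 9.0), (4.0, 12.0)]]
w_star = distributed_gd(shards)  # converges to w = 3.0
```

In a real deployment the averaging step is a communication round (e.g. all-reduce or a parameter server), and much of the literature below is about reducing the cost of exactly that round via compression, local steps, or better topologies.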

Papers

Showing 51–75 of 536 papers

Title | Status | Hype
PIM-Opt: Demystifying Distributed Optimization Algorithms on a Real-World Processing-In-Memory System | Code | 0
A primal-dual perspective for distributed TD-learning | Code | 0
An Accelerated Communication-Efficient Primal-Dual Optimization Framework for Structured Machine Learning | Code | 0
Accelerating Exact and Approximate Inference for (Distributed) Discrete Optimization with GPUs | Code | 0
Dynamic communication topologies for distributed heuristics in energy system optimization algorithms | Code | 0
Federated Learning with Compression: Unified Analysis and Sharp Guarantees | Code | 0
Accelerated Primal-Dual Algorithms for Distributed Smooth Convex Optimization over Networks | Code | 0
Optimization for Large-Scale Machine Learning with Distributed Features and Observations | Code | 0
Distributed optimization for nonrigid nano-tomography | Code | 0
Distributed Optimization, Averaging via ADMM, and Network Topology | Code | 0
Distributed Optimization using Heterogeneous Compute Systems | Code | 0
Qsparse-local-SGD: Distributed SGD with Quantization, Sparsification and Local Computations | Code | 0
Efficient Randomized Subspace Embeddings for Distributed Optimization under a Communication Budget | Code | 0
DASHA: Distributed Nonconvex Optimization with Communication Compression, Optimal Oracle Complexity, and No Client Synchronization | Code | 0
Distributed Markov Chain Monte Carlo Sampling based on the Alternating Direction Method of Multipliers | Code | 0
Distributed Optimization with Arbitrary Local Solvers | Code | 0
Communication-Efficient Distributed Stochastic AUC Maximization with Deep Neural Networks | Code | 0
Communication Compression for Byzantine Robust Learning: New Efficient Algorithms and Improved Rates | Code | 0
Communication- and Computation-Efficient Distributed Submodular Optimization in Robot Mesh Networks | Code | 0
Communication Efficient Distributed Optimization using an Approximate Newton-type Method | Code | 0
Communication-Efficient Federated Linear and Deep Generalized Canonical Correlation Analysis | Code | 0
Byzantine-Robust Loopless Stochastic Variance-Reduced Gradient | Code | 0
Differentially Private Distributed Estimation and Learning | Code | 0
Distributed Adversarial Training to Robustify Deep Neural Networks at Scale | Code | 0
Federated Learning: Challenges, Methods, and Future Directions | Code | 0
Page 3 of 22
