
Distributed Optimization

The goal of Distributed Optimization is to optimize an objective defined over millions or billions of data points spread across many machines, exploiting the combined computational power of those machines.

Source: Analysis of Distributed Stochastic Dual Coordinate Ascent
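The basic pattern behind many of the methods listed below can be sketched in a few lines: each machine computes a gradient on its local shard of the data, and a coordinator averages the results before taking a step. The following is a minimal simulation in plain Python; the least-squares objective, the shard layout, and the function names are illustrative assumptions, not from the source.

```python
# Sketch of data-parallel distributed gradient descent (assumed setup).
# Objective: least-squares fit of y = w * x over data sharded across machines.

def local_gradient(w, shard):
    # Each "machine" computes the gradient of its local loss
    # sum_i (w*x_i - y_i)^2 over its own shard only.
    return sum(2 * (w * x - y) * x for x, y in shard)

def distributed_gd(shards, steps=200, lr=0.01):
    n = sum(len(s) for s in shards)
    w = 0.0
    for _ in range(steps):
        # In a real system the workers run in parallel and a
        # coordinator aggregates; here we just sum sequentially.
        g = sum(local_gradient(w, s) for s in shards) / n
        w -= lr * g
    return w

# Data y = 3x, split across three simulated machines.
data = [(x, 3.0 * x) for x in range(1, 10)]
shards = [data[0:3], data[3:6], data[6:9]]
w = distributed_gd(shards)  # converges near 3.0
```

Most papers in this list refine exactly this loop: compressing the communicated gradients, tolerating stragglers or Byzantine workers, or replacing the central coordinator with decentralized gossip.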

Papers

Showing 301–325 of 536 papers

Title | Status | Hype
LocalNewton: Reducing Communication Bottleneck for Distributed Learning | - | 0
Innovation Compression for Communication-efficient Distributed Optimization with Linear Convergence | - | 0
An Efficient Learning Framework For Federated XGBoost Using Secret Sharing And Distributed Optimization | Code | 1
Improving the Transient Times for Distributed Stochastic Gradient Methods | - | 0
Distributed Energy Trading Management for Renewable Prosumers with HVAC and Energy Storage | - | 0
Mean Field MARL Based Bandwidth Negotiation Method for Massive Devices Spectrum Sharing | - | 0
Distributed Experiment Design and Control for Multi-agent Systems with Gaussian Processes | - | 0
Distributed Newton-like Algorithms and Learning for Optimized Power Dispatch | - | 0
Efficient Randomized Subspace Embeddings for Distributed Optimization under a Communication Budget | Code | 0
Moshpit SGD: Communication-Efficient Decentralized Training on Heterogeneous Unreliable Devices | Code | 1
Gradient-Tracking over Directed Graphs for solving Leaderless Multi-Cluster Games | - | 0
Decentralized Riemannian Gradient Descent on the Stiefel Manifold | Code | 1
Distributed Second Order Methods with Fast Rates and Compressed Communication | - | 0
Smoothness Matrices Beat Smoothness Constants: Better Communication Compression Techniques for Distributed Optimization | - | 0
Straggler-Resilient Distributed Machine Learning with Dynamic Backup Workers | - | 0
Bias-Variance Reduced Local SGD for Less Heterogeneous Federated Learning | - | 0
Concentration of Non-Isotropic Random Tensors with Applications to Learning and Empirical Risk Minimization | - | 0
Delayed Projection Techniques for Linearly Constrained Problems: Convergence Rates, Acceleration, and Applications | - | 0
Design of heterogeneous multi-agent system for distributed computation | - | 0
Convergent Adaptive Gradient Methods in Decentralized Optimization | - | 0
Cost-efficient SVRG with Arbitrary Sampling | - | 0
Fairness-Oriented User Scheduling for Bursty Downlink Transmission Using Multi-Agent Reinforcement Learning | - | 0
Byzantine-Resilient Non-Convex Stochastic Gradient Descent | - | 0
Linear Convergence of Distributed Mirror Descent with Integral Feedback for Strongly Convex Problems | - | 0
Wyner-Ziv Estimators for Distributed Mean Estimation with Side Information and Optimization | Code | 0
Page 13 of 22

No leaderboard results yet.