SOTAVerified

Distributed Optimization

The goal of Distributed Optimization is to optimize an objective defined over millions or billions of data points that are distributed across many machines, by leveraging the combined computational power of those machines.

Source: Analysis of Distributed Stochastic Dual Coordinate Ascent
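The definition above can be made concrete with the simplest distributed scheme: each machine computes a gradient on its own shard of the data, and a server averages those gradients to update a shared model. The sketch below simulates this on a least-squares problem; all names, the objective, and the synchronous-averaging setup are illustrative assumptions, not taken from any paper on this page.

```python
import numpy as np

# Illustrative sketch: synchronous distributed gradient descent on
# (1/2n) * ||A x - b||^2, with rows of (A, b) split across simulated workers.
rng = np.random.default_rng(0)

n, d, num_workers = 1000, 5, 4
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true

# Partition the data: worker i holds every num_workers-th row.
shards = [(A[i::num_workers], b[i::num_workers]) for i in range(num_workers)]

def local_gradient(x, A_i, b_i):
    """Gradient of the local least-squares term on one machine's shard."""
    return A_i.T @ (A_i @ x - b_i) / len(b_i)

x = np.zeros(d)
lr = 0.1
for _ in range(200):
    # Each worker computes a gradient on its own data only;
    # the server averages them into one global step.
    grads = [local_gradient(x, A_i, b_i) for A_i, b_i in shards]
    x -= lr * np.mean(grads, axis=0)

print(np.linalg.norm(x - x_true))  # shrinks toward 0 as iterations grow
```

Much of the literature listed below studies what happens when this idealized loop meets reality: communication cost (compression, quantization, sign-based methods), asynchrony and stragglers, privacy, and decentralized topologies without a central server.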

Papers

Showing 401–450 of 536 papers

Title (every paper on this page currently shows a Hype score of 0)

Theoretically Better and Numerically Faster Distributed Optimization with Smoothness-Aware Quantization Techniques
Smoothness Matrices Beat Smoothness Constants: Better Communication Compression Techniques for Distributed Optimization
Solving Non-smooth Constrained Programs with Lower Complexity than O(1/ε): A Primal-Dual Homotopy Smoothing Approach
Sparse-SignSGD with Majority Vote for Communication-Efficient Distributed Learning
Sparse sketches with small inversion bias
Sparsification as a Remedy for Staleness in Distributed Asynchronous SGD
Sparsity Constrained Distributed Unmixing of Hyperspectral Data
Spatial Reuse in Dense Wireless Areas: A Cross-layer Optimization Approach via ADMM
Distributed Optimization by Network Flows with Spatio-Temporal Compression
Spatio-Temporal Communication Compression in Distributed Primal-Dual Flows
StochaLM: a Stochastic alternate Linearization Method for distributed optimization
On the Convergence of Distributed Stochastic Bilevel Optimization Algorithms over a Network
Stochastic, Distributed and Federated Optimization for Machine Learning
Stochastic Distributed Optimization for Machine Learning from Decentralized Features
Stochastic Distributed Optimization under Average Second-order Similarity: Algorithms and Analysis
Straggler Mitigation in Distributed Optimization Through Data Encoding
Straggler-Resilient Distributed Machine Learning with Dynamic Backup Workers
Streamlining in the Riemannian Realm: Efficient Riemannian Optimization with Loopless Variance Reduction
Structured Reinforcement Learning for Incentivized Stochastic Covert Optimization
SUCAG: Stochastic Unbiased Curvature-aided Gradient Method for Distributed Optimization
Supervised MPC control of large-scale electricity networks via clustering methods
Survey of Distributed Algorithms for Resource Allocation over Multi-Agent Systems
TernaryVote: Differentially Private, Communication Efficient, and Byzantine Resilient Distributed Optimization on Heterogeneous Data
The Communication Complexity of Optimization
The Geometry of Sign Gradient Descent
The Limits and Potentials of Local SGD for Distributed Heterogeneous Learning with Intermittent Communication
The Minimax Complexity of Distributed Optimization
Tie-Line Characteristics based Partitioning for Distributed Optimization of Power Systems
Tighter Performance Theory of FedExProx
Toward Communication Efficient Adaptive Gradient Method
Towards privacy-preserving cooperative control via encrypted distributed optimization
Towards Scalable Multi-View Reconstruction of Geometry and Materials
Fairness-Oriented User Scheduling for Bursty Downlink Transmission Using Multi-Agent Reinforcement Learning
Trading Computation for Communication: Distributed Stochastic Dual Coordinate Ascent
Training Deep Neural Networks via Optimization Over Graphs
Trajectory Normalized Gradients for Distributed Optimization
Unbiased and Sign Compression in Distributed Learning: Comparing Noise Resilience via SDEs
Unbiased Compression Saves Communication in Distributed Optimization: When and How Much?
Uncertain Multi-Agent Systems with Distributed Constrained Optimization Missions and Event-Triggered Communications: Application to Resource Allocation
Understanding A Class of Decentralized and Federated Optimization Algorithms: A Multi-Rate Feedback Control Perspective
Utilizing Redundancy in Cost Functions for Resilience in Distributed Optimization and Learning
Variance Reduction in Deep Learning: More Momentum is All You Need
vqSGD: Vector Quantized Stochastic Gradient Descent
When Evolutionary Computation Meets Privacy
Widely-distributed Radar Imaging Based on Consensus ADMM
Without-Replacement Sampling for Stochastic Gradient Methods: Convergence Results and Application to Distributed Optimization
Without-Replacement Sampling for Stochastic Gradient Methods
Asynchronous Message-Passing and Zeroth-Order Optimization Based Distributed Learning with a Use-Case in Resource Allocation in Communication Networks
Zeroth-Order Feedback-Based Optimization for Distributed Demand Response
Zeroth Order Nonconvex Multi-Agent Optimization over Networks
Page 9 of 11

No leaderboard results yet.