SOTAVerified

Distributed Optimization

The goal of Distributed Optimization is to minimize an objective defined over millions or billions of data points that are distributed across many machines, by exploiting the combined computational power of those machines.

Source: Analysis of Distributed Stochastic Dual Coordinate Ascent
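To make the setting concrete, here is a minimal sketch (not taken from any listed paper) of the basic pattern most of the methods below refine: the data are partitioned across workers, each worker computes a gradient on its local shard, and a coordinator averages the per-worker gradients and takes one global step. The 1-D least-squares problem, worker count, and step size are illustrative choices, not from the source.

```python
import random

random.seed(0)

# Synthetic 1-D data: y = 3*x exactly, split evenly across 4 "machines".
data = [(x, 3.0 * x) for x in (random.uniform(-1, 1) for _ in range(400))]
n_workers = 4
shards = [data[i::n_workers] for i in range(n_workers)]

def local_gradient(shard, w):
    """Gradient of (1/2m) * sum((w*x - y)^2) over one worker's local shard."""
    return sum((w * x - y) * x for x, y in shard) / len(shard)

w = 0.0
lr = 0.5
for _ in range(200):
    grads = [local_gradient(s, w) for s in shards]  # computed in parallel in practice
    w -= lr * sum(grads) / n_workers                # coordinator averages, then steps

print(round(w, 3))  # recovers the true slope, 3.0
```

Because the shards are equal-sized, the averaged gradient equals the full-data gradient, so this converges exactly as centralized gradient descent would; the papers listed below study what changes when communication is compressed, workers take multiple local steps, there is no central coordinator, or some gradients are corrupted.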

Papers

Showing 451–500 of 536 papers

Title | Status | Hype
Privacy-Preserving Push-Pull Method for Decentralized Optimization via State Decomposition | - | 0
Private Learning on Networks | - | 0
Private Learning on Networks: Part II | - | 0
Problem-dependent convergence bounds for randomized linear gradient compression | - | 0
Projected Push-Sum Gradient Descent-Ascent for Convex Optimization with Application to Economic Dispatch Problems | - | 0
Provable Privacy Advantages of Decentralized Federated Learning via Distributed Optimization | - | 0
Provably Doubly Accelerated Federated Learning: The First Theoretically Successful Combination of Local Training and Communication Compression | - | 0
Proximal gradient flow and Douglas-Rachford splitting dynamics: global exponential stability via integral quadratic constraints | - | 0
Q-SHED: Distributed Optimization at the Edge via Hessian Eigenvectors Quantization | - | 0
Qsparse-local-SGD: Distributed SGD with Quantization, Sparsification, and Local Computations | - | 0
Quantization Avoids Saddle Points in Distributed Optimization | - | 0
Quantized Frank-Wolfe: Faster Optimization, Lower Communication, and Projection Free | - | 0
Achieving Linear Speedup with ProxSkip in Distributed Stochastic Optimization | - | 0
Rate Analysis of Coupled Distributed Stochastic Approximation for Misspecified Optimization | - | 0
Real-Time Distributed Model Predictive Control with Limited Communication Data Rates | - | 0
Recurrent Averaging Inequalities in Multi-Agent Control and Social Dynamics Modeling | - | 0
Reducing the Communication of Distributed Model Predictive Control: Autoencoders and Formation Control | - | 0
Redundancy Techniques for Straggler Mitigation in Distributed Optimization and Learning | - | 0
Graph neural networks-based Scheduler for Production planning problems using Reinforcement Learning | - | 0
Residual-Evasive Attacks on ADMM in Distributed Optimization | - | 0
Review of Mathematical Optimization in Federated Learning | - | 0
Revisiting EXTRA for Smooth Distributed Optimization | - | 0
Robust Distributed Optimization With Randomly Corrupted Gradients | - | 0
Robust Optimization, Structure/Control co-design, Distributed Optimization, Monolithic Optimization, Robust Control, Parametric Uncertainty | - | 0
ROML: A Robust Feature Correspondence Approach for Matching Objects in A Set of Images | - | 0
Scalable Centralized Deep Multi-Agent Reinforcement Learning via Policy Gradients | - | 0
Seamless Integration: Sampling Strategies in Federated Learning Systems | - | 0
Secure Architectures Implementing Trusted Coalitions for Blockchained Distributed Learning (TCLearn) | - | 0
Semantics, Representations and Grammars for Deep Learning | - | 0
Short vs. Long-term Coordination of Drones: When Distributed Optimization Meets Deep Reinforcement Learning | - | 0
Sign Operator for Coping with Heavy-Tailed Noise in Non-Convex Optimization: High Probability Bounds Under (L_0, L_1)-Smoothness | - | 0
Simulation-Integrated Distributed Optimal Power Flow for Unbalanced Power Distribution Systems | - | 0
Simultaneous Contact-Rich Grasping and Locomotion via Distributed Optimization Enabling Free-Climbing for Multi-Limbed Robots | - | 0
Single Point-Based Distributed Zeroth-Order Optimization with a Non-Convex Stochastic Objective Function | - | 0
Smoothed Normalization for Efficient Distributed Private Optimization | - | 0
Theoretically Better and Numerically Faster Distributed Optimization with Smoothness-Aware Quantization Techniques | - | 0
Distributed Optimization using Heterogeneous Compute Systems | Code | 0
Private Multi-Task Learning: Formulation and Applications to Federated Learning | Code | 0
GradSkip: Communication-Accelerated Local Gradient Methods with Better Computational Complexity | Code | 0
Sparsified SGD with Memory | Code | 0
Distributed Optimization with Arbitrary Local Solvers | Code | 0
Distributed Markov Chain Monte Carlo Sampling based on the Alternating Direction Method of Multipliers | Code | 0
Efficient Randomized Subspace Embeddings for Distributed Optimization under a Communication Budget | Code | 0
Communication-Efficient Distributed Stochastic AUC Maximization with Deep Neural Networks | Code | 0
Distributed Adversarial Training to Robustify Deep Neural Networks at Scale | Code | 0
Adding vs. Averaging in Distributed Primal-Dual Optimization | Code | 0
ZOOpt: Toolbox for Derivative-Free Optimization | Code | 0
Accelerated Primal-Dual Algorithms for Distributed Smooth Convex Optimization over Networks | Code | 0
Shuffle-QUDIO: accelerate distributed VQE with trainability enhancement and measurement reduction | Code | 0
Differentially Private Distributed Estimation and Learning | Code | 0
Page 10 of 11

No leaderboard results yet.