SOTAVerified

Distributed Optimization

The goal of distributed optimization is to optimize an objective defined over millions or billions of data points that are distributed across many machines, by exploiting the combined computational power of those machines.

Source: Analysis of Distributed Stochastic Dual Coordinate Ascent
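The setup described above can be sketched with a toy synchronous scheme: each "machine" holds a shard of the data and computes a local gradient, and a coordinator averages the gradients before taking a step. This is a minimal illustration of the general idea, not any specific algorithm from the papers listed below; the shard layout and least-squares objective are assumptions chosen for brevity.

```python
import numpy as np

def distributed_gradient_step(shards, w, lr=0.1):
    """One synchronous round: each shard (one 'machine') computes the
    gradient of its local least-squares loss; the coordinator averages
    the local gradients and applies a single update to w."""
    grads = []
    for X, y in shards:
        grads.append(X.T @ (X @ w - y) / len(y))  # local gradient
    g = np.mean(grads, axis=0)  # communication step: average gradients
    return w - lr * g

# Toy problem: data for a known linear model, split over 4 "machines".
rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0])
X = rng.normal(size=(200, 2))
y = X @ w_true
shards = [(X[i::4], y[i::4]) for i in range(4)]

w = np.zeros(2)
for _ in range(200):
    w = distributed_gradient_step(shards, w)
```

After enough rounds `w` approaches `w_true`; the communication cost per round (one gradient vector per machine) is exactly what much of the literature below tries to reduce via quantization, sparsification, or local steps.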

Papers

Showing 476–500 of 536 papers

Title | Status | Hype
Scalable Centralized Deep Multi-Agent Reinforcement Learning via Policy Gradients | — | 0
Seamless Integration: Sampling Strategies in Federated Learning Systems | — | 0
Secure Architectures Implementing Trusted Coalitions for Blockchained Distributed Learning (TCLearn) | — | 0
Semantics, Representations and Grammars for Deep Learning | — | 0
Short vs. Long-term Coordination of Drones: When Distributed Optimization Meets Deep Reinforcement Learning | — | 0
Sign Operator for Coping with Heavy-Tailed Noise in Non-Convex Optimization: High Probability Bounds Under (L_0, L_1)-Smoothness | — | 0
Simulation-Integrated Distributed Optimal Power Flow for Unbalanced Power Distribution Systems | — | 0
Simultaneous Contact-Rich Grasping and Locomotion via Distributed Optimization Enabling Free-Climbing for Multi-Limbed Robots | — | 0
Single Point-Based Distributed Zeroth-Order Optimization with a Non-Convex Stochastic Objective Function | — | 0
Smoothed Normalization for Efficient Distributed Private Optimization | — | 0
Theoretically Better and Numerically Faster Distributed Optimization with Smoothness-Aware Quantization Techniques | — | 0
Distributed Optimization using Heterogeneous Compute Systems | Code | 0
Private Multi-Task Learning: Formulation and Applications to Federated Learning | Code | 0
GradSkip: Communication-Accelerated Local Gradient Methods with Better Computational Complexity | Code | 0
Sparsified SGD with Memory | Code | 0
Distributed Optimization with Arbitrary Local Solvers | Code | 0
Distributed Markov Chain Monte Carlo Sampling based on the Alternating Direction Method of Multipliers | Code | 0
Efficient Randomized Subspace Embeddings for Distributed Optimization under a Communication Budget | Code | 0
Communication-Efficient Distributed Stochastic AUC Maximization with Deep Neural Networks | Code | 0
Distributed Adversarial Training to Robustify Deep Neural Networks at Scale | Code | 0
Adding vs. Averaging in Distributed Primal-Dual Optimization | Code | 0
ZOOpt: Toolbox for Derivative-Free Optimization | Code | 0
Accelerated Primal-Dual Algorithms for Distributed Smooth Convex Optimization over Networks | Code | 0
Shuffle-QUDIO: accelerate distributed VQE with trainability enhancement and measurement reduction | Code | 0
Differentially Private Distributed Estimation and Learning | Code | 0
Page 20 of 22

No leaderboard results yet.