SOTAVerified

Distributed Optimization

The goal of Distributed Optimization is to optimize an objective defined over millions or billions of data points that are distributed across many machines, by utilizing the combined computational power of those machines.

Source: Analysis of Distributed Stochastic Dual Coordinate Ascent
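In its simplest synchronous form, each machine holds one shard of the data and computes the gradient of the objective on that shard; a server averages the local gradients to take a global step. A minimal sketch in Python, assuming a synthetic least-squares objective and equal-sized shards (all names and parameters here are illustrative, not taken from any listed paper):

```python
import numpy as np

# Sketch of synchronous distributed gradient descent: the data matrix is
# partitioned row-wise across `num_workers` machines and a central server
# averages the workers' local gradients each round. Illustrative only.

rng = np.random.default_rng(0)
n, d, num_workers = 1200, 5, 4

# Synthetic least-squares problem: minimize (1/n) * ||X w - y||^2.
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.01 * rng.normal(size=n)

# Split the rows evenly across workers (each shard stays "on its machine").
shards = list(zip(np.array_split(X, num_workers),
                  np.array_split(y, num_workers)))

def local_gradient(Xi, yi, w):
    """Gradient of the local least-squares loss on one worker's shard."""
    return 2.0 * Xi.T @ (Xi @ w - yi) / len(yi)

w = np.zeros(d)
lr = 0.1
for _ in range(200):
    # Each worker computes a gradient on its own data; the server averages
    # them, which here equals the full-batch gradient because the shards
    # are equal-sized.
    grads = [local_gradient(Xi, yi, w) for Xi, yi in shards]
    w -= lr * np.mean(grads, axis=0)

print(np.linalg.norm(w - w_true))  # should be small after 200 rounds
```

Many of the papers below refine this basic pattern: compressing or sparsifying the communicated gradients, replacing the central server with gossip over a graph, or running several local steps between rounds of averaging.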

Papers

Showing 251–300 of 536 papers

Title | Status | Hype
Federated Optimization: Distributed Machine Learning for On-Device Intelligence | | 0
Federated Optimization with Doubly Regularized Drift Correction | | 0
Federated TD Learning over Finite-Rate Erasure Channels: Linear Speedup under Markovian Sampling | | 0
FedSplit: An algorithmic framework for fast federated optimization | | 0
Finite-Time Consensus Learning for Decentralized Optimization with Nonlinear Gossiping | | 0
Distributed Optimization with Quantized Gradient Descent | | 0
Flattened one-bit stochastic gradient descent: compressed distributed optimization with controlled variance | | 0
FLIX: A Simple and Communication-Efficient Alternative to Local Methods in Federated Learning | | 0
FL-MISR: Fast Large-Scale Multi-Image Super-Resolution for Computed Tomography Based on Multi-GPU Acceleration | | 0
Fractional Order Distributed Optimization | | 0
From Centralized to Decentralized Federated Learning: Theoretical Insights, Privacy Preservation, and Robustness Challenges | | 0
Fundamental Bias in Inverting Random Sampling Matrices with Application to Sub-sampled Newton | | 0
Fundamental Resource Trade-offs for Encoded Distributed Optimization | | 0
Generalized Gradient Descent is a Hypergraph Functor | | 0
Geometrically Convergent Distributed Optimization with Uncoordinated Step-Sizes | | 0
GIANT: Globally Improved Approximate Newton Method for Distributed Optimization | | 0
Goal-Oriented Wireless Communication Resource Allocation for Cyber-Physical Systems | | 0
GoSGD: Distributed Optimization for Deep Learning with Gossip Exchange | | 0
Gradient-Consensus: Linearly Convergent Distributed Optimization Algorithm over Directed Graphs | | 0
Gradient flows and proximal splitting methods: A unified view on accelerated and stochastic optimization | | 0
Gradient Primal-Dual Algorithm Converges to Second-Order Stationary Solution for Nonconvex Distributed Optimization Over Networks | | 0
Gradient Sparsification for Communication-Efficient Distributed Optimization | | 0
Gradient-Tracking over Directed Graphs for solving Leaderless Multi-Cluster Games | | 0
Graph Neural Network-Based Distributed Optimal Control for Linear Networked Systems: An Online Distributed Training Approach | | 0
Graph Neural Networks Gone Hogwild | | 0
Graphon Particle Systems, Part II: Dynamics of Distributed Stochastic Continuum Optimization | | 0
Hemingway: Modeling Distributed Optimization Algorithms | | 0
Hessian Riemannian Flow For Multi-Population Wardrop Equilibrium | | 0
Simple and Scalable Algorithms for Cluster-Aware Precision Medicine | | 0
High-performance Kernel Machines with Implicit Distributed Optimization and Randomization | | 0
High-Probability Convergence for Composite and Distributed Stochastic Minimization and Variational Inequalities with Heavy-Tailed Noise | | 0
Hybrid Decentralized Optimization: Leveraging Both First- and Zeroth-Order Optimizers for Faster Convergence | | 0
Hyperspectral Unmixing Based on Clustered Multitask Networks | | 0
Impact of Redundancy on Resilience in Distributed Optimization and Learning | | 0
Improving Rate of Convergence via Gain Adaptation in Multi-Agent Distributed ADMM Framework | | 0
Improving the Transient Times for Distributed Stochastic Gradient Methods | | 0
Improving the Worst-Case Bidirectional Communication Complexity for Nonconvex Distributed Optimization under Function Similarity | | 0
Innovation Compression for Communication-efficient Distributed Optimization with Linear Convergence | | 0
Graph Learning Under Partial Observability | | 0
Is Local SGD Better than Minibatch SGD? | | 0
Iterative Pre-Conditioning to Expedite the Gradient-Descent Method | | 0
KKT Conditions, First-Order and Second-Order Optimization, and Distributed Optimization: Tutorial and Survey | | 0
Model Aggregation via Good-Enough Model Spaces | | 0
LAGO: Few-shot Crosslingual Embedding Inversion Attacks via Language Similarity-Aware Graph Optimization | | 0
LASER: Linear Compression in Wireless Distributed Optimization | | 0
Layer-wise Adaptive Gradient Sparsification for Distributed Deep Learning with Convergence Guarantees | | 0
Leader Stochastic Gradient Descent for Distributed Training of Deep Learning Models: Extension | | 0
Learning-Accelerated ADMM for Distributed Optimal Power Flow | | 0
Learning Autonomy in Management of Wireless Random Networks | | 0
Distributed Model Predictive Control Design for Multi-agent Systems via Bayesian Optimization | | 0
Page 6 of 11

No leaderboard results yet.