Distributed Optimization

The goal of distributed optimization is to optimize an objective defined over millions or billions of data points distributed across many machines, by leveraging the computational power of those machines.

Source: Analysis of Distributed Stochastic Dual Coordinate Ascent
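The setting above can be illustrated with a minimal data-parallel sketch (a hypothetical example, not taken from the source paper): each simulated "machine" holds a shard of the data and computes a partial gradient for a simple least-squares objective, and a coordinator averages the partial gradients before taking a descent step.

```python
# Minimal sketch of data-parallel distributed gradient descent.
# Hypothetical setup: objective f(w) = (1/2N) * sum_i (w * x_i - y_i)^2,
# with the N examples split into shards, one per machine.

def local_gradient(w, shard):
    # Sum of per-example gradients computed on one machine's shard.
    return sum((w * x - y) * x for x, y in shard)

def distributed_gd(shards, steps=200, lr=0.1):
    n_total = sum(len(s) for s in shards)
    w = 0.0
    for _ in range(steps):
        # Each machine computes its partial gradient (parallelism simulated
        # by a list comprehension); the coordinator aggregates them.
        partials = [local_gradient(w, s) for s in shards]
        grad = sum(partials) / n_total  # equals the full-batch gradient
        w -= lr * grad
    return w

# Data y = 3x, split across two machines.
shards = [[(1.0, 3.0), (2.0, 6.0)], [(3.0, 9.0), (4.0, 12.0)]]
print(round(distributed_gd(shards), 3))  # converges near 3.0
```

Because the averaged partial gradients equal the full-batch gradient, this recovers exactly what a single machine holding all the data would compute; the communication-efficient methods listed below (federated averaging, quantized gradients, local updates) trade off this exactness to reduce the cost of the aggregation step.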

Papers

Showing 326–350 of 536 papers

Federated Active Learning (F-AL): an Efficient Annotation Strategy for Federated Learning
Federated Conditional Stochastic Optimization
Federated K-Means Clustering via Dual Decomposition-based Distributed Optimization
Federated Learning Assisted Distributed Energy Optimization
Federated Learning: From Theory to Practice
A Unified Linear Speedup Analysis of Federated Averaging and Nesterov FedAvg
Federated Learning with Compression: Unified Analysis and Sharp Guarantees
Federated Minimax Optimization: Improved Convergence Analyses and Algorithms
Federated Multi-Level Optimization over Decentralized Networks
Federated Optimization: Distributed Optimization Beyond the Datacenter
Federated Optimization: Distributed Machine Learning for On-Device Intelligence
Federated Optimization with Doubly Regularized Drift Correction
Federated TD Learning over Finite-Rate Erasure Channels: Linear Speedup under Markovian Sampling
FedSplit: An algorithmic framework for fast federated optimization
Finite-Time Consensus Learning for Decentralized Optimization with Nonlinear Gossiping
Distributed Optimization with Quantized Gradient Descent
Flattened one-bit stochastic gradient descent: compressed distributed optimization with controlled variance
FLIX: A Simple and Communication-Efficient Alternative to Local Methods in Federated Learning
FL-MISR: Fast Large-Scale Multi-Image Super-Resolution for Computed Tomography Based on Multi-GPU Acceleration
Fractional Order Distributed Optimization
From Centralized to Decentralized Federated Learning: Theoretical Insights, Privacy Preservation, and Robustness Challenges
Fundamental Bias in Inverting Random Sampling Matrices with Application to Sub-sampled Newton
Fundamental Resource Trade-offs for Encoded Distributed Optimization
Generalized Gradient Descent is a Hypergraph Functor
Geometrically Convergent Distributed Optimization with Uncoordinated Step-Sizes
