
Distributed Optimization

The goal of Distributed Optimization is to optimize an objective defined over millions to billions of data points that are distributed across many machines, by exploiting the combined computational power of those machines.

Source: Analysis of Distributed Stochastic Dual Coordinate Ascent
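The description above corresponds to the basic synchronous pattern: each machine computes an update on its own shard of the data, and a coordinator combines the updates into one global step. Below is a minimal sketch of that pattern for a least-squares objective, with the workers simulated in a single process on synthetic data; the names (num_workers, shards, local_gradient) and all parameter values are illustrative assumptions, not taken from any of the listed papers.

```python
# Minimal sketch (illustrative, not any specific paper's method):
# synchronous distributed gradient descent for least squares, with the
# workers simulated in one process. Each "worker" holds a data shard,
# computes a local gradient, and a coordinator averages the gradients
# before taking a global step.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic problem: y = X @ w_true + noise, split across num_workers shards.
num_workers, n_per_worker, dim = 4, 250, 10
w_true = rng.normal(size=dim)
shards = []
for _ in range(num_workers):
    X = rng.normal(size=(n_per_worker, dim))
    y = X @ w_true + 0.01 * rng.normal(size=n_per_worker)
    shards.append((X, y))

def local_gradient(w, X, y):
    """Gradient of the local least-squares loss (1/2n) * ||Xw - y||^2."""
    return X.T @ (X @ w - y) / len(y)

w = np.zeros(dim)
step_size = 0.1
for _ in range(200):
    # Each worker computes a gradient on its own shard (on a real cluster
    # this runs on separate machines and the results are communicated).
    grads = [local_gradient(w, X, y) for X, y in shards]
    # The coordinator averages the local gradients and updates the model.
    w -= step_size * np.mean(grads, axis=0)

print("distance to w_true:", np.linalg.norm(w - w_true))
```

Many of the papers listed below vary this template along a few axes, such as compressing or quantizing the communicated gradients, removing the central coordinator (decentralized/federated settings), or replacing the gradient step with dual coordinate ascent or Newton-type updates.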

Papers

Showing 341–350 of 536 papers

Title | Status | Hype
Distributed Optimization with Quantized Gradient Descent | | 0
Flattened one-bit stochastic gradient descent: compressed distributed optimization with controlled variance | | 0
FLIX: A Simple and Communication-Efficient Alternative to Local Methods in Federated Learning | | 0
FL-MISR: Fast Large-Scale Multi-Image Super-Resolution for Computed Tomography Based on Multi-GPU Acceleration | | 0
Fractional Order Distributed Optimization | | 0
From Centralized to Decentralized Federated Learning: Theoretical Insights, Privacy Preservation, and Robustness Challenges | | 0
Fundamental Bias in Inverting Random Sampling Matrices with Application to Sub-sampled Newton | | 0
Fundamental Resource Trade-offs for Encoded Distributed Optimization | | 0
Generalized Gradient Descent is a Hypergraph Functor | | 0
Geometrically Convergent Distributed Optimization with Uncoordinated Step-Sizes | | 0
Page 35 of 54

No leaderboard results yet.