SOTAVerified

Distributed Optimization

The goal of distributed optimization is to optimize an objective defined over millions to billions of data points that are distributed across many machines, by exploiting the combined computational power of those machines.

Source: Analysis of Distributed Stochastic Dual Coordinate Ascent
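The idea above can be sketched with a minimal gradient-averaging (parameter-server style) loop. This is an illustrative toy, not the method of any specific paper listed below: each simulated "worker" holds a shard of the data, computes a local gradient, and a central step averages the gradients before updating the shared parameter.

```python
# Toy sketch of distributed gradient descent via gradient averaging.
# All names (local_gradient, distributed_gd) are illustrative assumptions.

def local_gradient(theta, shard):
    # Gradient of the local objective f_i(theta) = mean((theta - y)^2 / 2)
    # over this worker's data shard.
    return sum(theta - y for y in shard) / len(shard)

def distributed_gd(shards, theta=0.0, lr=0.5, steps=200):
    for _ in range(steps):
        # In a real system each machine computes its gradient in parallel;
        # here we loop sequentially to simulate that.
        grads = [local_gradient(theta, shard) for shard in shards]
        # Aggregate: average the local gradients (equal-sized shards assumed).
        theta -= lr * sum(grads) / len(grads)
    return theta

shards = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]  # data split across 3 "machines"
theta = distributed_gd(shards)
# With equal shard sizes, the global minimizer is the mean of all data (3.5),
# which the averaged-gradient updates converge to.
```

Communication-efficient variants on this page (compression, local steps, ADMM, federated averaging) mainly differ in how and how often the aggregation step exchanges information.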

Papers

Showing 226–250 of 536 papers

Title | Status | Hype
Power Bundle Adjustment for Large-Scale 3D Reconstruction | Code | 2
Optimization-Based Ramping Reserve Allocation of BESS for AGC Enhancement | | 0
Distributed Dynamic Safe Screening Algorithms for Sparse Regularization | | 0
FedADMM: A Federated Primal-Dual Algorithm Allowing Partial Participation | | 0
Competition-Based Resilience in Distributed Quadratic Optimization | | 0
Multi-Edge Server-Assisted Dynamic Federated Learning with an Optimized Floating Aggregation Point | | 0
Distributed Sketching for Randomized Optimization: Exact Characterization, Concentration and Lower Bounds | | 0
Distributed Dual Quaternion Based Localization of Visual Sensor Networks | | 0
Optimal Methods for Convex Risk Averse Distributed Optimization | | 0
Federated Minimax Optimization: Improved Convergence Analyses and Algorithms | | 0
Correlated quantization for distributed mean estimation and optimization | | 0
Acceleration of Federated Learning with Alleviated Forgetting in Local Training | Code | 1
Distributed Methods with Absolute Compression and Error Compensation | | 0
Distributed-MPC with Data-Driven Estimation of Bus Admittance Matrix in Voltage Control | | 0
Multi-objective Distributed Optimization for Zonal Distribution System with Multi-Microgrids | | 0
Signal Decomposition Using Masked Proximal Operators | Code | 1
Escaping Saddle Points with Bias-Variance Reduced Local Perturbed SGD for Communication Efficient Nonconvex Distributed Learning | | 0
Distributed saddle point problems for strongly concave-convex functions | | 0
Spatial Reuse in Dense Wireless Areas: A Cross-layer Optimization Approach via ADMM | | 0
SHED: A Newton-type algorithm for federated learning based on incremental Hessian eigenvector sharing | | 0
Communication Efficient Federated Learning via Ordered ADMM in a Fully Decentralized Setting | | 0
DASHA: Distributed Nonconvex Optimization with Communication Compression, Optimal Oracle Complexity, and No Client Synchronization | | 0
Federated Active Learning (F-AL): an Efficient Annotation Strategy for Federated Learning | | 0
Recycling Model Updates in Federated Learning: Are Gradient Subspaces Low-Rank? | Code | 1
End-to-End Quality-of-Service Assurance with Autonomous Systems: 5G/6G Case Study | | 0
Page 10 of 22

No leaderboard results yet.