SOTAVerified

Second-order methods

Methods that use second-order information, such as Hessian or curvature statistics, to process data and accelerate optimization.
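As a concrete illustration of the Newton-type updates that the papers below build on, here is a minimal 1-D Newton's method sketch. The function `newton_minimize` and the convex test objective f(x) = (x - 2)^2 + e^x are illustrative choices, not taken from any listed paper: each step divides the gradient by the second derivative (the scalar Hessian) to rescale the update by local curvature.

```python
import math

def newton_minimize(grad, hess, x0, tol=1e-10, max_iter=50):
    """Plain 1-D Newton's method: x <- x - f'(x) / f''(x)."""
    x = x0
    for _ in range(max_iter):
        g = grad(x)
        if abs(g) < tol:  # stationary point reached
            break
        x -= g / hess(x)  # curvature-scaled step
    return x

# Example: minimize f(x) = (x - 2)^2 + e^x, which is strictly convex,
# so the unique stationary point is the global minimizer.
grad = lambda x: 2 * (x - 2) + math.exp(x)
hess = lambda x: 2 + math.exp(x)
x_star = newton_minimize(grad, hess, x0=0.0)
```

In higher dimensions the division becomes a linear solve against the Hessian matrix; the quasi-Newton, Hessian-free, sketching, and Kronecker-factored papers in this list are different ways of approximating that solve cheaply.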

Papers

Showing 101–125 of 181 papers

Title | Status | Hype
FedNL: Making Newton-Type Methods Applicable to Federated Learning | - | 0
Tensor Normal Training for Deep Learning Models | Code | 0
Exact Stochastic Second Order Deep Learning | - | 0
Quasi-Newton Quasi-Monte Carlo for variational Bayes | - | 0
Research of Damped Newton Stochastic Gradient Descent Method for Neural Network Training | - | 0
A Distributed Optimisation Framework Combining Natural Gradient with Hessian-Free for Discriminative Sequence Training | - | 0
Learning-Augmented Sketches for Hessians | - | 0
Distributed Second Order Methods with Fast Rates and Compressed Communication | - | 0
Kronecker-factored Quasi-Newton Methods for Deep Learning | - | 0
A Chaos Theory Approach to Understand Neural Network Optimization | - | 0
Adaptive Single-Pass Stochastic Gradient Descent in Input Sparsity Time | - | 0
Noise and Fluctuation of Finite Learning Rate Stochastic Gradient Descent | - | 0
Utility Maximization for Large-Scale Cell-Free Massive MIMO Downlink | - | 0
Second-order Neural Network Training Using Complex-step Directional Derivative | - | 0
Debiasing Distributed Second Order Optimization with Surrogate Sketching and Scaled Regularization | - | 0
Second-Order Information in Non-Convex Stochastic Optimization: Power and Limitations | - | 0
When Does Preconditioning Help or Hurt Generalization? | - | 0
A block coordinate descent optimizer for classification problems exploiting convexity | - | 0
Enhance Curvature Information by Structured Stochastic Quasi-Newton Methods | - | 0
Learning Rates as a Function of Batch Size: A Random Matrix Theory Approach to Neural Network Training | - | 0
SONIA: A Symmetric Blockwise Truncated Optimization Algorithm | - | 0
Asymptotic Analysis of Conditioned Stochastic Gradient Descent | Code | 0
On the Promise of the Stochastic Generalized Gauss-Newton Method for Training DNNs | Code | 0
Critical Point-Finding Methods Reveal Gradient-Flat Regions of Deep Network Losses | - | 0
Stochastic Subspace Cubic Newton Method | - | 0
Page 5 of 8

No leaderboard results yet.