SOTAVerified

Stochastic Optimization

Stochastic optimization is the task of optimizing an objective function by generating and using random variables. It is typically an iterative process: random samples are drawn at each step, and the iterates progressively approach a minimum or maximum of the objective. Stochastic optimization is most often applied to non-convex problems, where deterministic methods such as linear or quadratic programming (or their variants) cannot be used.

Source: ASOC: An Adaptive Parameter-free Stochastic Optimization Technique for Continuous Variables
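As a minimal illustration of the iterative process described above, the sketch below minimizes a non-convex one-dimensional function by pure random search. The objective, function names, and parameters are illustrative assumptions, not taken from the source paper:

```python
import math
import random

def random_search(f, bounds, n_samples=20000, seed=0):
    """Minimize f over [lo, hi] by drawing candidate points uniformly
    at random and keeping the best one seen so far."""
    rng = random.Random(seed)
    lo, hi = bounds
    best_x = rng.uniform(lo, hi)
    best_f = f(best_x)
    for _ in range(n_samples - 1):
        x = rng.uniform(lo, hi)        # generate a random candidate
        fx = f(x)
        if fx < best_f:                # keep it if it improves the objective
            best_x, best_f = x, fx
    return best_x, best_f

# A non-convex objective with many local minima; a deterministic local
# method started in the wrong basin would get stuck.
def objective(x):
    return x * x + 10 * math.sin(3 * x)

x_star, f_star = random_search(objective, bounds=(-10.0, 10.0))
```

With enough samples the best iterate typically lands in the global basin near x ≈ -0.5 (objective value ≈ -9.7). Practical stochastic optimizers, such as simulated annealing or stochastic gradient descent, replace these blind uniform proposals with smarter randomized updates.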

Papers

Showing 1201–1250 of 1387 papers

Title | Status | Hype
Biased Importance Sampling for Deep Neural Network Training | Code | 0
Limited-Memory Matrix Adaptation for Large Scale Black-box Optimization | Code | 0
Comments on the proof of adaptive submodular function minimization | – | 0
Accelerating Stochastic Gradient Descent For Least Squares Regression | – | 0
Stochastic Optimization from Distributed, Streaming Data in Rate-limited Networks | – | 0
Bandit Structured Prediction for Neural Sequence-to-Sequence Learning | Code | 0
Performance Limits of Stochastic Sub-Gradient Learning, Part II: Multi-Agent Case | – | 0
Importance Sampled Stochastic Optimization for Variational Inference | – | 0
Larger is Better: The Effect of Learning Rates Enjoyed by Stochastic Optimization with Progressive Variance Reduction | – | 0
Probabilistic Line Searches for Stochastic Optimization | Code | 0
Stochastic Primal Dual Coordinate Method with Non-Uniform Sampling Based on Optimality Violations | – | 0
Guaranteed Sufficient Decrease for Variance Reduced Stochastic Gradient Descent | – | 0
Incorporating statistical model error into the calculation of acceptability prices of contingent claims | – | 0
Langevin Dynamics with Continuous Tempering for Training Deep Neural Networks | – | 0
Task-based End-to-end Model Learning in Stochastic Optimization | Code | 0
Learning to Optimize Neural Nets | – | 0
Dual Iterative Hard Thresholding: From Non-convex Sparse Minimization to Non-smooth Concave Maximization | – | 0
Fast Rates for Bandit Optimization with Upper-Confidence Frank-Wolfe | – | 0
Memory and Communication Efficient Distributed Stochastic Optimization with Minibatch-Prox | – | 0
Stochastic Canonical Correlation Analysis | – | 0
Revisiting Distributed Synchronous SGD | – | 0
Natasha: Faster Non-Convex Stochastic Optimization Via Strongly Non-Convex Parameter | – | 0
Communication-Efficient Algorithms for Decentralized and Stochastic Optimization | – | 0
Maximum Entropy Flow Networks | – | 0
Stochastic Multidimensional Scaling | – | 0
Asymptotic Optimality in Stochastic Optimization | – | 0
Coupling Adaptive Batch Sizes with Learning Rates | Code | 0
An empirical analysis of the optimization of deep network loss surfaces | – | 0
Without-Replacement Sampling for Stochastic Gradient Methods | – | 0
One-vs-Each Approximation to Softmax for Scalable Estimation of Probabilities | – | 0
Scalable Bayesian Learning of Recurrent Neural Networks for Language Modeling | – | 0
Optimal Learning for Stochastic Optimization with Nonlinear Parametric Belief Models | – | 0
Optimizing connection weights in neural networks using the whale optimization algorithm | – | 0
Scalable Adaptive Stochastic Optimization Using Random Projections | – | 0
Scalable Approximations for Generalized Linear Problems | – | 0
Faster variational inducing input Gaussian process classification | – | 0
A Generalized Stochastic Variational Bayesian Hyperparameter Learning Framework for Sparse Spectrum Gaussian Process Regression | – | 0
The Asset Liability Management problem of a nuclear operator : a numerical stochastic optimization approach | – | 0
Greedy Step Averaging: A parameter-free stochastic optimization method | Code | 0
Learning from Untrusted Data | – | 0
Eve: A Gradient Based Optimization Method with Locally and Globally Adaptive Learning Rates | Code | 0
Learning Deep Embeddings with Histogram Loss | Code | 0
Asynchronous Stochastic Block Coordinate Descent with Variance Reduction | – | 0
Reparameterization Gradients through Acceptance-Rejection Sampling Algorithms | Code | 0
Statistics of Robust Optimization: A Generalized Empirical Likelihood Approach | – | 0
Variance-based regularization with convex objectives | Code | 0
Boost K-Means | – | 0
Stochastic Optimization with Variance Reduction for Infinite Datasets with Finite-Sum Structure | Code | 0
Exact and Inexact Subsampled Newton Methods for Optimization | – | 0
Sooner than Expected: Hitting the Wall of Complexity in Evolution | – | 0
Page 25 of 28

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | AvaGrad | Accuracy | 81.24 | – | Unverified
2 | AdaShift | Accuracy | 81.12 | – | Unverified
3 | Adam (eps-adjusted) | Accuracy | 81.04 | – | Unverified
4 | SGD | Accuracy | 80.95 | – | Unverified
5 | AdamW | Accuracy | 79.87 | – | Unverified
6 | AdaBound | Accuracy | 77.24 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Adam (eps-adjusted) | Accuracy | 96.36 | – | Unverified
2 | AvaGrad | Accuracy | 96.2 | – | Unverified
3 | SGD | Accuracy | 96.14 | – | Unverified
4 | AdaShift | Accuracy | 95.92 | – | Unverified
5 | AdamW | Accuracy | 95.89 | – | Unverified
6 | AdaBound | Accuracy | 94.6 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SGD - cosine LR schedule | Accuracy | 95.55 | – | Unverified
2 | Lookahead | Accuracy | 95.27 | – | Unverified
3 | SGD | Accuracy | 95.23 | – | Unverified
4 | ADAM | Accuracy | 94.84 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | AvaGrad | Top 1 Accuracy | 76.51 | – | Unverified
2 | SGD | Top 1 Accuracy | 75.99 | – | Unverified
3 | AdamW | Top 1 Accuracy | 72.9 | – | Unverified
4 | AdaBound | Top 1 Accuracy | 72.01 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | AdaBound | Bit per Character (BPC) | 2.86 | – | Unverified
2 | AdaShift | Bit per Character (BPC) | 1.27 | – | Unverified
3 | AdamW | Bit per Character (BPC) | 1.23 | – | Unverified
4 | AvaGrad | Bit per Character (BPC) | 1.18 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Resnet18 | Accuracy (max) | 86.85 | – | Unverified
2 | Resnet34 | Accuracy (max) | 86.14 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Resnet18 | Accuracy (max) | 58.48 | – | Unverified
2 | Resnet34 | Accuracy (max) | 54.5 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SGD | Top 5 Accuracy | 92.15 | – | Unverified
2 | Lookahead | Top 1 Accuracy | 75.13 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Lookahead | Top 1 Accuracy | 75.49 | – | Unverified
2 | SGD | Top 1 Accuracy | 75.15 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Bert | Accuracy (max) | 93.99 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Bert | Accuracy (max) | 86.34 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | MLP | NLL | 0.05 | – | Unverified