L2 Regularization

See Weight Decay.

$L_{2}$ Regularization, or Weight Decay, is a regularization technique applied to the weights of a neural network. We minimize a loss function comprising both the primary loss function and a penalty on the $L_{2}$ Norm of the weights:

$$L_{new}\left(w\right) = L_{original}\left(w\right) + \lambda{w^{T}w}$$

where $\lambda$ is a hyperparameter determining the strength of the penalty (larger values encourage smaller weights).
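
As a concrete illustration of the objective-function form above, here is a minimal NumPy sketch for a linear model with squared-error loss. The function name and the variable `lam` (standing in for $\lambda$) are illustrative assumptions, not from any particular library.

```python
import numpy as np

def l2_regularized_loss(w, X, y, lam):
    """Squared-error loss plus the L2 penalty lam * w^T w (hypothetical helper)."""
    residual = X @ w - y
    original = 0.5 * np.mean(residual ** 2)  # L_original(w)
    penalty = lam * (w @ w)                  # lambda * w^T w
    return original + penalty

# Illustrative usage: gradient descent on the penalized objective L_new.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X @ rng.normal(size=5)
w = np.zeros(5)
lam, lr = 1e-2, 0.1
for _ in range(200):
    grad = X.T @ (X @ w - y) / len(y) + 2 * lam * w  # gradient of L_new
    w -= lr * grad
```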

Weight decay can also be incorporated directly into the weight update rule, rather than implicitly through the objective function. The term weight decay often refers to this implementation, where the decay is specified directly in the update rule, whereas L2 regularization usually refers to the implementation where the penalty is specified in the objective function.
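
A sketch of the two formulations side by side, assuming plain SGD (the helper names are hypothetical):

```python
import numpy as np

def sgd_step_weight_decay(w, grad, lr, decay):
    """Decay specified directly in the update rule (hypothetical helper).

    grad is the gradient of the original loss only; the extra
    lr * decay * w term shrinks the weights toward zero each step.
    """
    return w - lr * grad - lr * decay * w

def sgd_step_l2(w, grad, lr, lam):
    """Penalty specified in the objective: grad(L_new) = grad + 2 * lam * w."""
    return w - lr * (grad + 2.0 * lam * w)
```

For plain SGD the two steps coincide when decay $= 2\lambda$, but with adaptive optimizers such as Adam they differ, since a penalty folded into the gradient is rescaled by the adaptive step sizes while a direct decay term is not; this difference is what the naming convention above tracks.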

Papers

Showing 91–100 of 128 papers

| Title | Status | Hype |
| --- | --- | --- |
| Multi-branch fusion network for hyperspectral image classification | | 0 |
| Construction of Differentially Private Empirical Distributions from a low-order Marginals Set through Solving Linear Equations with l2 Regularization | | 0 |
| What is the Effect of Importance Weighting in Deep Learning? | Code | 0 |
| Quantifying Generalization in Reinforcement Learning | Code | 1 |
| On Implicit Filter Level Sparsity in Convolutional Neural Networks | | 0 |
| Dynamic Long Short-Term Memory Neural-Network-Based Indirect Remaining-Useful-Life Prognosis for Satellite Lithium-Ion Battery | | 0 |
| Edge-adaptive l2 regularization image reconstruction from non-uniform Fourier data | | 0 |
| Gradient-Coherent Strong Regularization for Deep Neural Networks | | 0 |
| Re-evaluating Continual Learning Scenarios: A Categorization and Case for Strong Baselines | Code | 1 |
| Learning Sparse Low-Precision Neural Networks With Learnable Regularization | | 0 |
