
L2 Regularization

See Weight Decay.

$L_{2}$ Regularization, or Weight Decay, is a regularization technique applied to the weights of a neural network. We minimize a loss function comprising both the primary loss and a penalty on the $L_{2}$ Norm of the weights:

$$L_{new}\left(w\right) = L_{original}\left(w\right) + \lambda w^{T} w$$

where $\lambda$ is a hyperparameter determining the strength of the penalty; larger values encourage smaller weights.
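As a concrete illustration, here is a minimal NumPy sketch of the penalized objective. The mean-squared-error primary loss, the function name `l2_regularized_loss`, and the toy data are all assumptions made for this sketch, not taken from any particular library:

```python
import numpy as np

def l2_regularized_loss(w, X, y, lam):
    """L_new(w) = L_original(w) + lam * w^T w, using a
    mean-squared-error primary loss (an assumption for this sketch)."""
    residual = X @ w - y
    original_loss = np.mean(residual ** 2)  # L_original(w)
    penalty = lam * np.dot(w, w)            # lam * w^T w
    return original_loss + penalty

# Toy usage on random data: a larger lam adds a larger penalty for the same weights.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(100, 5)), rng.normal(size=100)
w = rng.normal(size=5)
print(l2_regularized_loss(w, X, y, lam=0.0))   # primary loss only
print(l2_regularized_loss(w, X, y, lam=0.01))  # slightly larger, due to the penalty
```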

Weight decay can also be incorporated directly into the weight update rule, rather than implicitly through the objective function. In common usage, "weight decay" refers to the implementation that modifies the update rule directly, whereas "L2 regularization" refers to the implementation specified in the objective function, as the sketch below illustrates.
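The distinction is easiest to see as update rules. The gradient of the penalty $\lambda w^{T} w$ is $2\lambda w$, so for vanilla SGD the two implementations coincide when the decay rate equals $2\lambda$. The following is a minimal sketch under that plain-SGD assumption, with illustrative function names:

```python
import numpy as np

def sgd_step_l2(w, grad_original, lr, lam):
    """L2 regularization: the penalty's gradient, 2 * lam * w,
    is added to the objective's gradient before the update."""
    return w - lr * (grad_original + 2 * lam * w)

def sgd_step_weight_decay(w, grad_original, lr, decay):
    """Weight decay: the weights are shrunk directly in the update rule,
    independently of the loss gradient."""
    return (1 - lr * decay) * w - lr * grad_original

# For plain SGD the two coincide when decay = 2 * lam.
w = np.array([1.0, -2.0, 0.5])
g = np.array([0.1, 0.3, -0.2])
print(np.allclose(sgd_step_l2(w, g, lr=0.1, lam=0.01),
                  sgd_step_weight_decay(w, g, lr=0.1, decay=0.02)))  # True
```

For adaptive optimizers such as Adam the two are no longer equivalent: a penalty folded into the gradient is rescaled by the optimizer's moment estimates, while a direct decay term is not, which is the motivation for decoupled weight decay (AdamW).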

Papers

Showing 101–128 of 128 papers

Title | Status | Hype
Overcoming catastrophic forgetting in neural networks |  | 0
Parkinson's Disease Diagnosis Through Deep Learning: A Novel LSTM-Based Approach for Freezing of Gait Detection |  | 0
Perturbation of Deep Autoencoder Weights for Model Compression and Classification of Tabular Data |  | 0
Pricing Football Players using Neural Networks |  | 0
Probabilistic fine-tuning of pruning masks and PAC-Bayes self-bounded learning |  | 0
Regularisation Can Mitigate Poisoning Attacks: A Novel Analysis Based on Multiobjective Bilevel Optimisation |  | 0
Learning with Hyperspherical Uniformity | Code | 0
Less is More -- Towards parsimonious multi-task models using structured sparsity | Code | 0
What is the Effect of Importance Weighting in Deep Learning? | Code | 0
Learning a smooth kernel regularizer for convolutional neural networks | Code | 0
Understanding and Stabilizing GANs' Training Dynamics with Control Theory | Code | 0
From large-eddy simulations to deep learning: A U-net model for fast urban canopy flow predictions | Code | 0
Monkeypox disease recognition model based on improved SE-InceptionV3 | Code | 0
How Infinitely Wide Neural Networks Can Benefit from Multi-task Learning -- an Exact Macroscopic Characterization | Code | 0
DACN: Dual-Attention Convolutional Network for Hyperspectral Image Super-Resolution | Code | 0
Disturbing Target Values for Neural Network Regularization | Code | 0
Gradient-based bilevel optimization for multi-penalty Ridge regression through matrix differential calculus | Code | 0
Neurogenesis-Inspired Dictionary Learning: Online Model Adaption in a Changing World | Code | 0
The Limitations of Large Width in Neural Networks: A Deep Gaussian Process Perspective | Code | 0
On Regularization Parameter Estimation under Covariate Shift | Code | 0
Data and Model Dependencies of Membership Inference Attack | Code | 0
Convergence of a L2 regularized Policy Gradient Algorithm for the Multi Armed Bandit | Code | 0
WALINET: A water and lipid identification convolutional Neural Network for nuisance signal removal in 1H MR Spectroscopic Imaging | Code | 0
Convolutional Neural Networks for Facial Expression Recognition | Code | 0
Sequence Length is a Domain: Length-based Overfitting in Transformer Models | Code | 0
Collaboratively Weighting Deep and Classic Representation via L2 Regularization for Image Classification | Code | 0
Planting and Mitigating Memorized Content in Predictive-Text Language Models | Code | 0
Prevalidated ridge regression is a highly-efficient drop-in replacement for logistic regression for high-dimensional data | Code | 0
