
L2 Regularization

See Weight Decay.

$L_{2}$ Regularization, or Weight Decay, is a regularization technique applied to the weights of a neural network. We minimize a loss function comprising both the primary loss and a penalty on the $L_{2}$ norm of the weights:

$$L_{new}\left(w\right) = L_{original}\left(w\right) + \lambda w^{T} w$$

where $\lambda$ is a hyperparameter controlling the strength of the penalty; larger values of $\lambda$ encourage smaller weights.
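As a concrete illustration, the sketch below implements this penalized objective in NumPy, assuming a mean-squared-error primary loss; the name `l2_regularized_loss` and the parameter `lam` are illustrative rather than taken from any library.

```python
import numpy as np

def l2_regularized_loss(w, X, y, lam):
    """L_new(w) = L_original(w) + lam * w^T w, with MSE as the primary loss."""
    residuals = X @ w - y
    primary_loss = np.mean(residuals ** 2)  # L_original(w)
    penalty = lam * (w @ w)                 # lam * w^T w
    return primary_loss + penalty
```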

Weight decay can also be incorporated directly into the weight update rule, rather than only implicitly through the objective function. In common usage, weight decay refers to the implementation specified directly in the update rule, whereas L2 regularization usually refers to the implementation specified in the objective function.
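To make the distinction concrete: for gradient descent with learning rate $\eta$, differentiating the penalized objective above gives the update

$$w \leftarrow w - \eta\left(\nabla L_{original}\left(w\right) + 2\lambda w\right)$$

whereas the decoupled weight-decay form shrinks the weights directly in the update rule:

$$w \leftarrow \left(1 - \eta\lambda\right)w - \eta\nabla L_{original}\left(w\right)$$

For plain SGD the two coincide up to a rescaling of $\lambda$, but they behave differently under adaptive optimizers such as Adam, which is the distinction AdamW was introduced to address. Below is a minimal sketch of both update rules for plain SGD; the function names are illustrative.

```python
import numpy as np

def sgd_step_l2(w, grad, lr, lam):
    # L2 regularization: the penalty's gradient, 2 * lam * w,
    # is added to the loss gradient before the update.
    return w - lr * (grad + 2.0 * lam * w)

def sgd_step_weight_decay(w, grad, lr, lam):
    # Decoupled weight decay: the weights are shrunk directly
    # in the update rule, independently of the loss gradient.
    return (1.0 - lr * lam) * w - lr * grad
```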

Papers

Showing 1–25 of 128 papers

| Title | Status | Hype |
| --- | --- | --- |
| Maintaining Plasticity in Deep Continual Learning | Code | 2 |
| It's Enough: Relaxing Diagonal Constraints in Linear Autoencoders for Recommendation | Code | 1 |
| Label-Only Membership Inference Attacks | Code | 1 |
| Rotational Equilibrium: How Weight Decay Balances Learning Across Neural Networks | Code | 1 |
| Motion Correction and Volumetric Reconstruction for Fetal Functional Magnetic Resonance Imaging Data | Code | 1 |
| Distributionally Robust Neural Networks | Code | 1 |
| Quantifying Generalization in Reinforcement Learning | Code | 1 |
| Neural Pruning via Growing Regularization | Code | 1 |
| The Transient Nature of Emergent In-Context Learning in Transformers | Code | 1 |
| Towards Unsupervised Deep Image Enhancement with Generative Adversarial Network | Code | 1 |
| Re-evaluating Continual Learning Scenarios: A Categorization and Case for Strong Baselines | Code | 1 |
| An Experiment on Feature Selection using Logistic Regression | | 0 |
| Action Classification with Locality-constrained Linear Coding | | 0 |
| An FPGA-Based On-Device Reinforcement Learning Approach using Online Sequential Learning | | 0 |
| A New Angle on L2 Regularization | | 0 |
| An efficient distributed learning algorithm based on effective local functional approximations | | 0 |
| A Comparative Study of Neural Network Compression | | 0 |
| A Bayesian traction force microscopy method with automated denoising in a user-friendly software package | | 0 |
| Construction of Differentially Private Empirical Distributions from a low-order Marginals Set through Solving Linear Equations with l2 Regularization | | 0 |
| A Closer Look at Rehearsal-Free Continual Learning | | 0 |
| Carbon price fluctuation prediction using blockchain information: A new hybrid machine learning approach | | 0 |
| Analysis of overfitting in the regularized Cox model | | 0 |
| Comparative Study of Bitcoin Price Prediction | | 0 |
| Compressing Low Precision Deep Neural Networks Using Sparsity-Induced Regularization in Ternary Networks | | 0 |
| Automatic Parameter Tying in Neural Networks | | 0 |
