
L2 Regularization

See Weight Decay.

$L_{2}$ Regularization, or Weight Decay, is a regularization technique applied to the weights of a neural network. We minimize a loss function comprising both the primary loss and a penalty on the $L_{2}$ norm of the weights:

$$L_{new}\left(w\right) = L_{original}\left(w\right) + \lambda{w^{T}w}$$

where $\lambda$ is a hyperparameter controlling the strength of the penalty; larger values encourage smaller weights.
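
To make the penalty concrete, here is a minimal NumPy sketch of the penalized objective and its gradient. The mean-squared-error primary loss, the function names, and the data shapes are illustrative assumptions rather than anything prescribed above; any differentiable primary loss works the same way.

```python
import numpy as np

def l2_penalized_loss(w, X, y, lam):
    """L_new(w) = L_original(w) + lam * w^T w, with MSE as the
    (assumed, illustrative) primary loss L_original."""
    residual = X @ w - y
    primary = np.mean(residual ** 2)   # L_original(w)
    penalty = lam * (w @ w)            # lam * w^T w
    return primary + penalty

def l2_penalized_grad(w, X, y, lam):
    """Gradient of the penalized loss: the penalty contributes an
    extra 2 * lam * w term on top of the primary-loss gradient."""
    grad_primary = 2.0 * X.T @ (X @ w - y) / len(y)
    return grad_primary + 2.0 * lam * w
```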

Weight decay can also be incorporated directly into the weight update rule, rather than implicitly through the objective function. By convention, "weight decay" often refers to the implementation specified directly in the update rule, whereas "$L_{2}$ regularization" usually refers to the implementation specified in the objective function.
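
As a rough illustration of that distinction (plain SGD; the function names and signatures are assumptions for this sketch, not taken from the text above):

```python
def sgd_step_l2(w, grad_original, lr, lam):
    """L2 regularization: the penalty enters through the objective,
    so its gradient (2 * lam * w) is added to the primary gradient
    before the update."""
    return w - lr * (grad_original + 2.0 * lam * w)

def sgd_step_weight_decay(w, grad_original, lr, lam):
    """Weight decay: the weights are shrunk directly in the update
    rule, independently of the objective's gradient."""
    return w - lr * grad_original - lr * lam * w
```

Under vanilla SGD the two steps coincide up to a rescaling of $\lambda$, but they diverge for adaptive optimizers such as Adam, where the $L_{2}$ gradient term gets rescaled by the adaptive statistics while decoupled weight decay does not; this is the distinction behind AdamW.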

Papers

Showing 111–120 of 128 papers

- An FPGA-Based On-Device Reinforcement Learning Approach using Online Sequential Learning
- A Note on the Regularity of Images Generated by Convolutional Neural Networks
- Attention-Based End-to-End Speech Recognition on Voice Search
- Attentive Recurrent Tensor Model for Community Question Answering
- Automatic Discovery and Optimization of Parts for Image Classification
- Automatic Parameter Tying in Neural Networks
- Carbon price fluctuation prediction using blockchain information: A new hybrid machine learning approach
- Construction of Differentially Private Empirical Distributions from a low-order Marginals Set through Solving Linear Equations with l2 Regularization
- Comparative Study of Bitcoin Price Prediction
- Compressing Low Precision Deep Neural Networks Using Sparsity-Induced Regularization in Ternary Networks
