Types of regularization in ML.
What is regularization?
Regularization is used to reduce the complexity of a model and prevent overfitting. There are three types of regularization commonly used in deep neural networks.
L2 Regularization: We add a penalty proportional to the sum of the squared weights, so the total loss becomes
$L(data, model) = loss(data, model) + \lambda (w_0^2 + ... + w_n^2)$
where $\lambda$ is the regularization rate, and we minimize this combined loss.
The derivative of each penalty term is $2\lambda w_i$, so L2 shrinks every weight in proportion to its size: large weights are penalized heavily, but weights rarely reach exactly zero.
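A minimal sketch of the L2 penalty and its gradient, using hypothetical helper names (`l2_penalty`, `l2_gradient`) and an assumed regularization rate `lam`:

```python
def l2_penalty(weights, lam=0.1):
    """lam * sum of squared weights."""
    return lam * sum(w * w for w in weights)

def l2_gradient(weights, lam=0.1):
    """Derivative of the penalty w.r.t. each weight: 2 * lam * w."""
    return [2 * lam * w for w in weights]

weights = [3.0, -2.0, 0.5]
print(l2_penalty(weights))   # lam * (9 + 4 + 0.25)
print(l2_gradient(weights))  # each entry proportional to the weight
```

Note how the gradient entries scale with the weights themselves, which is why L2 shrinks weights toward zero without usually making them exactly zero.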
L1 Regularization: This is similar to L2 Regularization, but it penalizes the sum of the absolute values of the weights: $L(data, model) = loss(data, model) + \lambda (|w_0| + ... + |w_n|)$.
The derivative of each $|w_i|$ term is a constant ($\pm\lambda$, depending on the sign of $w_i$), so L1 applies the same pull toward zero regardless of a weight's size and can drive weights exactly to zero, producing sparse models.
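A minimal sketch contrasting the L1 and L2 gradients (hypothetical helper names; `lam` is an assumed regularization rate):

```python
def l1_gradient(weights, lam=0.1):
    # d/dw of lam * |w| is lam * sign(w); we take 0 at w == 0.
    return [lam * ((w > 0) - (w < 0)) for w in weights]

def l2_gradient(weights, lam=0.1):
    # d/dw of lam * w^2 is 2 * lam * w.
    return [2 * lam * w for w in weights]

weights = [4.0, -0.05]
print(l1_gradient(weights))  # same magnitude for big and tiny weights
print(l2_gradient(weights))  # proportional to the weight itself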
Dropout: Unlike the other two, this is a layer in the neural network rather than a term in the loss function.
A dropout layer randomly sets a fraction of the previous layer's activations to 0 during training. With a dropout rate of 0.3, each activation is zeroed with probability 30%.
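A minimal sketch of dropout on a plain list of activations, using the common "inverted dropout" scaling (the function name and list representation are assumptions for illustration):

```python
import random

def dropout(activations, rate=0.3):
    keep = 1.0 - rate
    # Zero each activation with probability `rate`; scale the survivors
    # by 1/keep so the expected sum of the layer's output is unchanged.
    return [a / keep if random.random() < keep else 0.0 for a in activations]

random.seed(0)
out = dropout([1.0] * 1000, rate=0.3)
print(sum(1 for a in out if a == 0.0))  # roughly 300 units dropped
```

Dropout is only applied during training; at inference time the layer passes activations through unchanged, which the inverted scaling above makes possible without any extra correction.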