
What is regularization?

Regularization is used to reduce the complexity of a model. There are three common types of regularization we can use in deep neural networks.

L2 Regularization: We define the complexity of a model as $W = w_0^2 + w_1^2 + \dots + w_n^2$ and add this term to the loss function to get

$L(\text{data}, \text{model}) = \text{loss}(\text{data}, \text{model}) + (w_0^2 + w_1^2 + \dots + w_n^2)$

and try to minimize this combined loss.

Since the derivative of each $w_i^2$ is $2w_i$, backpropagation penalizes larger weights more strongly and gradually shrinks them toward zero.
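As a quick illustration, here is a minimal PyTorch sketch of adding an L2 penalty to a loss by hand. The model, data, and the regularization rate `lam` are hypothetical; in practice the penalty is almost always scaled by such a rate (often called weight decay).

```python
import torch
import torch.nn as nn

# Hypothetical toy model and data, just to illustrate the idea.
model = nn.Linear(10, 1)
criterion = nn.MSELoss()
lam = 0.01  # regularization rate (an assumed value)

x = torch.randn(32, 10)
y = torch.randn(32, 1)

data_loss = criterion(model(x), y)

# L2 penalty: sum of squared weights, W = w_0^2 + w_1^2 + ... + w_n^2
l2_penalty = sum(p.pow(2).sum() for p in model.parameters())

# Combined loss: L = loss(data, model) + lam * W
loss = data_loss + lam * l2_penalty
loss.backward()  # gradient of lam * w_i^2 is 2 * lam * w_i
```

Note that PyTorch optimizers can apply the same penalty for you via the `weight_decay` argument; the manual version above just makes the math visible.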

L1 Regularization: This is similar to L2 Regularization, but $W$ is defined as:

$|w_0| + |w_1| + \dots + |w_n|$

This time the derivative of $W$ with respect to each weight is a constant ($\pm 1$), so the penalty pushes every weight by a fixed step regardless of its magnitude, which can drive weights exactly to zero, unlike L2.
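The only change from the L2 sketch above is the penalty term; again a minimal sketch with a hypothetical model and an assumed regularization rate:

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 1)
criterion = nn.MSELoss()
lam = 0.01  # regularization rate (an assumed value)

x = torch.randn(32, 10)
y = torch.randn(32, 1)

data_loss = criterion(model(x), y)

# L1 penalty: sum of absolute weights, W = |w_0| + |w_1| + ... + |w_n|
l1_penalty = sum(p.abs().sum() for p in model.parameters())

loss = data_loss + lam * l1_penalty
loss.backward()  # gradient of lam * |w_i| is a constant +/- lam
```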

Dropout: Unlike the other two, this is a layer in the neural network rather than a term added to the loss function.

A dropout layer randomly sets a fraction of the activations (the outputs of neurons) to 0 during training. With a dropout rate of 0.3, each activation is zeroed with probability 30% on every forward pass.
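In code, dropout is just another layer in the stack. A minimal PyTorch sketch with an assumed architecture:

```python
import torch
import torch.nn as nn

# Hypothetical architecture; the layer sizes are arbitrary.
model = nn.Sequential(
    nn.Linear(10, 64),
    nn.ReLU(),
    nn.Dropout(p=0.3),  # zeroes 30% of the activations during training
    nn.Linear(64, 1),
)

model.train()  # dropout is active in training mode
out = model(torch.randn(32, 10))

model.eval()   # dropout is disabled at inference time
out = model(torch.randn(32, 10))
```

PyTorch also rescales the surviving activations by $1 / (1 - p)$ during training (inverted dropout), so no extra scaling is needed at inference time.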