[Google_Bootcamp_Day18]
Convolutional Neural Networks 2
Why convolutions?
- parameter sharing: a feature detector that is useful in one part of the image is probably useful in another part of the image
- sparsity of connections: in each layer, each output value depends only on a small number of inputs (see the parameter-count sketch below)
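To make the savings from parameter sharing concrete, here is a minimal sketch in plain Python; the layer sizes (32x32x3 input, six 5x5 filters, as in LeNet-5's first layer) are chosen purely for illustration:

```python
# Parameter count: fully connected layer vs. convolutional layer
# mapping the same 32x32x3 input volume to a 28x28x6 output volume.

n_in = 32 * 32 * 3                    # 3,072 input values
n_out = 28 * 28 * 6                   # 4,704 output values

fc_params = n_in * n_out + n_out      # every input connects to every output
conv_params = (5 * 5 * 3 + 1) * 6     # six shared 5x5x3 filters + biases

print(f"fully connected: {fc_params:,} parameters")  # 14,455,392
print(f"convolutional:   {conv_params} parameters")  # 456
```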
Training process (same as a standard NN: forward propagation, compute the cost, backpropagation, gradient descent update; sketched below)
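A minimal PyTorch-style sketch of that loop; the tiny model and the random mini-batch are placeholders for illustration:

```python
import torch
import torch.nn as nn

# Placeholder model: one conv layer + classifier, just to have parameters.
model = nn.Sequential(nn.Conv2d(3, 6, kernel_size=5), nn.ReLU(),
                      nn.Flatten(), nn.Linear(6 * 28 * 28, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(8, 3, 32, 32)    # dummy mini-batch of images
y = torch.randint(0, 10, (8,))   # dummy class labels

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)  # forward propagation + cost
    loss.backward()              # backpropagation, exactly as in a plain NN
    optimizer.step()             # gradient descent update
```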
LeNet-5
AlexNet
VGG-16
ResNet
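All four follow the same conv/pool/fully-connected recipe at increasing depth. As one concrete pattern, a sketch of the first two of VGG-16's five convolutional blocks (PyTorch, assuming the standard 224x224x3 ImageNet input):

```python
import torch.nn as nn

# VGG-16 pattern: stacks of 3x3 "same" convolutions (channel count doubles
# from block to block), each stack followed by a 2x2 max-pool that halves
# the height and width. Only the first two of five blocks shown.
vgg16_start = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(kernel_size=2, stride=2),   # 224x224 -> 112x112
    nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(128, 128, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(kernel_size=2, stride=2),   # 112x112 -> 56x56
)
```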
Problem of deep neural networks: the vanishing gradient problem
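A one-line illustration of why the gradient vanishes: backpropagation multiplies one factor per layer, so if each factor is slightly below 1 the gradient decays exponentially with depth (the 0.9 below is an arbitrary illustrative value):

```python
# Gradient magnitude after backpropagating through 50 layers when each
# layer scales the gradient by ~0.9 (arbitrary illustrative factor).
print(0.9 ** 50)  # ~0.005: the early layers barely receive any signal
```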
Residual Block: a[l+2] = g(z[l+2] + a[l]) (a shortcut/skip connection carries a[l] past two layers and adds it in before the second activation g)
Residual Network: a stack of residual blocks, which lets the training error keep decreasing even for very deep networks
Why does ResNet work well?
- z[l+2] and a[l] can have the same dimension by using "same" convolutions
- it is easy for the block to learn the identity function and just copy a[l] to a[l+2], so adding these two layers does not hurt performance (see the sketch below)
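A minimal sketch of a residual block under these two points (PyTorch; the channel count and input size are made-up):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    """Sketch of a residual block: a[l+2] = g(z[l+2] + a[l])."""

    def __init__(self, channels):
        super().__init__()
        # 3x3 "same" convolutions keep height/width, so the shortcut
        # a[l] matches z[l+2] in dimension and can simply be added.
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, a_l):
        a1 = F.relu(self.conv1(a_l))   # a[l+1]
        z2 = self.conv2(a1)            # z[l+2]
        return F.relu(z2 + a_l)        # a[l+2] = g(z[l+2] + a[l])

# If conv1/conv2 learn weights near zero, z[l+2] ~ 0 and the block outputs
# relu(a_l) = a_l (for non-negative a_l): the identity is easy to learn.
block = ResidualBlock(16)
x = torch.randn(1, 16, 28, 28)
assert block(x).shape == x.shape
```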
[source] https://www.coursera.org/learn/convolutional-neural-networks