[Google_Bootcamp_Day18]

Convolutional Neural Networks 2

Why convolutions?

  • parameter sharing:
    A feature detector that is useful in one part of the image is probably useful in another part of the image, so the same filter weights are reused at every position.
  • sparsity of connections:
    In each layer, each output value depends only on a small number of inputs (the filter's receptive field), not on every pixel. The comparison sketched below shows how many parameters this saves.
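
Both properties show up directly in the parameter count. A quick back-of-the-envelope sketch in Python (the 32×32×3 input and six 5×5 filters are illustrative numbers, not taken from the figures in this post):

```python
# A 5x5 conv layer with 6 filters on a 32x32x3 input, vs. a fully connected
# layer producing the same 28x28x6 output volume.
conv_params = (5 * 5 * 3 + 1) * 6            # (weights + bias) per filter, x 6 filters
print(conv_params)                           # 456

fc_params = (32 * 32 * 3) * (28 * 28 * 6)    # every input connected to every output
print(fc_params)                             # 14,450,688 -- about 14.5 million
```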

Training process (same as for a standard neural network)

(figure: CNN training process)
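
A minimal sketch of that loop in PyTorch (the model and data here are dummies; the point is that the cycle — forward pass, cost, backpropagation, gradient-descent update — is identical to training a fully connected network):

```python
import torch
import torch.nn as nn

# Hypothetical tiny model and a random batch, just to run the loop.
model = nn.Sequential(
    nn.Conv2d(3, 6, kernel_size=5), nn.ReLU(),   # 32x32x3 -> 28x28x6
    nn.Flatten(),
    nn.Linear(6 * 28 * 28, 10),
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()

x = torch.randn(8, 3, 32, 32)           # dummy batch of images
y = torch.randint(0, 10, (8,))          # dummy labels

for step in range(100):
    optimizer.zero_grad()
    loss = criterion(model(x), y)       # forward pass + cost
    loss.backward()                     # backpropagation
    optimizer.step()                    # gradient descent update
```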

LeNet-5

(figure: LeNet-5 architecture)
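
As a sketch, the LeNet-5 layer sequence in PyTorch (conv → pool → conv → pool → FC → FC → output, roughly 60k parameters); the original used sigmoid/tanh non-linearities, for which ReLU is substituted here:

```python
import torch.nn as nn

# LeNet-5 shapes: 32x32x1 -> 28x28x6 -> 14x14x6 -> 10x10x16 -> 5x5x16 -> 120 -> 84 -> 10
lenet5 = nn.Sequential(
    nn.Conv2d(1, 6, kernel_size=5), nn.ReLU(),
    nn.AvgPool2d(2),
    nn.Conv2d(6, 16, kernel_size=5), nn.ReLU(),
    nn.AvgPool2d(2),
    nn.Flatten(),                      # 16 * 5 * 5 = 400 features
    nn.Linear(400, 120), nn.ReLU(),
    nn.Linear(120, 84), nn.ReLU(),
    nn.Linear(84, 10),
)
```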

AlexNet

(figure: AlexNet architecture)
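
A corresponding sketch of AlexNet's layer shapes (227×227×3 input, roughly 60M parameters), with the 1000-way classifier written as a plain linear layer whose softmax lives in the loss:

```python
import torch.nn as nn

alexnet = nn.Sequential(
    nn.Conv2d(3, 96, kernel_size=11, stride=4), nn.ReLU(),    # -> 55x55x96
    nn.MaxPool2d(3, stride=2),                                # -> 27x27x96
    nn.Conv2d(96, 256, kernel_size=5, padding=2), nn.ReLU(),  # "same" -> 27x27x256
    nn.MaxPool2d(3, stride=2),                                # -> 13x13x256
    nn.Conv2d(256, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(3, stride=2),                                # -> 6x6x256
    nn.Flatten(),
    nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),
    nn.Linear(4096, 4096), nn.ReLU(),
    nn.Linear(4096, 1000),                                    # 1000-way output
)
```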

VGG-16

(figure: VGG-16 architecture)
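
VGG-16's appeal is its uniformity: only 3×3 "same" convolutions and 2×2 max pools, with channels doubling 64 → 128 → 256 → 512 (about 138M parameters). A sketch that builds it from that configuration list:

```python
import torch.nn as nn

# "M" marks a 2x2 max pool; numbers are output channels of a 3x3 "same" conv.
cfg = [64, 64, "M", 128, 128, "M", 256, 256, 256, "M",
       512, 512, 512, "M", 512, 512, 512, "M"]

layers, in_ch = [], 3
for v in cfg:
    if v == "M":
        layers.append(nn.MaxPool2d(2, stride=2))
    else:
        layers += [nn.Conv2d(in_ch, v, kernel_size=3, padding=1), nn.ReLU()]
        in_ch = v

vgg16 = nn.Sequential(
    *layers,
    nn.Flatten(),                       # 224x224 input -> 7x7x512 = 25088 features
    nn.Linear(512 * 7 * 7, 4096), nn.ReLU(),
    nn.Linear(4096, 4096), nn.ReLU(),
    nn.Linear(4096, 1000),
)
```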

ResNet

Problem of very deep neural networks: the vanishing (and exploding) gradient problem, which makes training error hard to reduce as depth grows.

Residual Block

(figure: residual block with skip connection)
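
A minimal residual block sketch in PyTorch, mirroring the two weight layers on the main path plus the shortcut that adds a[l] back in before the final activation (batch normalization, used in the real ResNet, is omitted for brevity):

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    """a[l+2] = ReLU(z[l+2] + a[l]): the shortcut adds the block input back in
    before the final activation; "same" convolutions keep the dimensions equal."""

    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.relu = nn.ReLU()

    def forward(self, a):
        z = self.conv2(self.relu(self.conv1(a)))  # two weight layers: z[l+2]
        return self.relu(z + a)                   # skip connection: add a[l]
```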

Residual Network

(figure: residual network built from stacked residual blocks)

Why does ResNet work well?

(figure: why ResNet works)

  • z[l+2] and a[l] can have the same dimension by using "same" convolutions, so they can be added element-wise.
  • It is easy for the block to learn the identity function and just copy a[l] to a[l+2], despite the addition of these two layers: if W[l+2] = 0 and b[l+2] = 0, then a[l+2] = g(a[l]) = a[l] (for ReLU, since a[l] >= 0), so the extra layers rarely hurt performance and can only help (see the sketch below).
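
A small sketch verifying that reasoning: if the second layer's weights and bias are zero, the block's output equals its input (the tensor shapes here are arbitrary):

```python
import torch
import torch.nn as nn

conv1 = nn.Conv2d(16, 16, kernel_size=3, padding=1)   # first weight layer
conv2 = nn.Conv2d(16, 16, kernel_size=3, padding=1)   # second weight layer
nn.init.zeros_(conv2.weight)   # W[l+2] -> 0
nn.init.zeros_(conv2.bias)     # b[l+2] -> 0
relu = nn.ReLU()

a = torch.rand(1, 16, 8, 8)                   # a[l] >= 0, like a ReLU output
z = conv2(relu(conv1(a)))                     # z[l+2] is all zeros
a_out = relu(z + a)                           # a[l+2] = g(z[l+2] + a[l])
print(torch.allclose(a_out, a))               # True: the block is the identity
```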

[source] https://www.coursera.org/learn/convolutional-neural-networks
