Unit-4
Perceptrons are the building blocks of neural networks. A perceptron is a
supervised learning algorithm for binary classification.
The perceptron consists of 4 parts.
1. Input values or One input layer
2. Weights and Bias
3. Net sum
4. Activation Function
a. All the inputs x are multiplied with their weights w.
b. Add all the multiplied values; this is called the Weighted Sum.
c. Apply the weighted sum to the chosen Activation Function.
Weights show the strength of the particular input node.
A bias value allows you to shift the activation function curve up or down.
In short, the activation functions are used to map the input between the
required values like (0, 1) or (-1, 1).
A perceptron is usually used to classify data into two parts. Therefore, it is also
known as a Linear Binary Classifier.
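As a concrete illustration of the four parts above, here is a minimal sketch of a perceptron trained on the logical AND function; the dataset, learning rate and step activation are illustrative assumptions:

```python
import numpy as np

def step(z):
    # Step activation: maps the weighted sum to 0 or 1
    return 1 if z >= 0 else 0

def perceptron_train(X, y, lr=0.1, epochs=10):
    w = np.zeros(X.shape[1])  # one weight per input value
    b = 0.0                   # bias shifts the activation threshold
    for _ in range(epochs):
        for xi, target in zip(X, y):
            # Net sum = weighted sum of inputs plus bias, then activation
            pred = step(np.dot(w, xi) + b)
            # Update weights and bias only when the prediction is wrong
            w += lr * (target - pred) * xi
            b += lr * (target - pred)
    return w, b

# Example: learn the logical AND function
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = perceptron_train(X, y)
print(w, b)
```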
2021-22 2M
Gradient descent is an optimization algorithm commonly used to train
machine learning models and neural networks by finding a local
minimum of a given function.
In particular, it is used in machine learning (ML) and deep learning (DL)
to minimize a cost/loss function.
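A minimal sketch of gradient descent on a one-variable function; the function f(x) = (x − 3)², starting point and learning rate are illustrative assumptions:

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    # Repeatedly move against the gradient to approach a local minimum
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is f'(x) = 2(x - 3)
minimum = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
print(minimum)  # approaches 3.0, the minimum of f
```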
2021-22 2M.
In machine learning, the delta rule is a gradient descent learning rule for updating
the weights of the inputs to artificial neurons in a single-layer neural network. It is
a special case of the more general backpropagation algorithm.
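For a single neuron with target output t, actual output y, learning rate α and input x_i, the delta rule weight update is commonly written as:

Δw_i = α (t − y) x_i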
2020-21 10M
Back-propagation is used for the training of neural networks.
The Backpropagation algorithm looks for the minimum value of the error function
in weight space using a technique called the delta rule or gradient descent.
In an artificial neural network, the values of weights and biases are randomly
initialized. Due to random initialization, the neural network probably has errors in
giving the correct output.
We need to reduce error values as much as possible. So, for reducing these error
values, we need a mechanism that can compare the desired output of the neural
network with the network's output that consists of errors and adjusts its weights
and biases such that it gets closer to the desired output after each iteration.
For this, we train the network such that it back propagates and updates the weights
and biases. This is the concept of the back propagation algorithm.
Backpropagation is a short form for "backward propagation of errors." It is a
standard method of training artificial neural networks.
Backpropagation Algorithm:
Step 1: Inputs X arrive through the preconnected path.
Step 2: The input is modeled using true weights W. Weights are usually chosen
randomly.
Step 3: Calculate the output of each neuron from the input layer to the hidden
layer to the output layer.
Step 4: Calculate the error in the outputs:
Backpropagation Error = Actual Output − Desired Output
Step 5: From the output layer, go back to the hidden layer to adjust the weights
to reduce the error.
Step 6: Repeat the process until the desired output is achieved.
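A minimal sketch of the six steps above for a one-hidden-layer network with sigmoid activations; the XOR data, layer sizes and learning rate are illustrative assumptions, and biases are omitted for brevity:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Illustrative data: XOR inputs and targets
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([[0], [1], [1], [0]])

rng = np.random.default_rng(0)
# Step 2: weights chosen randomly (2 inputs -> 4 hidden -> 1 output)
W1, W2 = rng.normal(size=(2, 4)), rng.normal(size=(4, 1))

for _ in range(5000):
    # Steps 1-3: forward pass from input layer to hidden layer to output layer
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    # Step 4: error in the outputs
    err = y - out
    # Steps 5-6: propagate the error backwards and adjust the weights
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 += 0.5 * h.T @ d_out
    W1 += 0.5 * X.T @ d_h

print(out.round(3))  # approaches the XOR targets
```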
Why We Need Backpropagation?
Most prominent advantages of Backpropagation are:
* Backpropagation is fast, simple and easy to program
* It is a flexible method as it does not require prior knowledge about the network
* It is a standard method that generally works well
* It does not need any special mention of the features of the function to be learned.
Types of Backpropagation Networks
Two types of Backpropagation networks are:
* Static Back-propagation
* Recurrent Backpropagation
The output neurons of a neural network compete among themselves to become
active. Several output neurons may be active, but in competitive learning only a
single output neuron is active at one time.
2020-21 10M
Self Organizing Map (or Kohonen Map or SOM)
It follows an unsupervised learning approach and trains its network
through a competitive learning algorithm.
SOM is used as a clustering and mapping (or dimensionality reduction)
technique to map multidimensional data onto a lower-dimensional space,
which allows people to reduce complex problems for easy interpretation.
SOM has two layers, one is the Input layer and the other one is the Output
layer.
The architecture of the Self Organizing Map with two clusters and n input
features of any sample is given below:
[Figure: SOM architecture with an n-feature input layer connected to two output cluster units]
Step 1: Initializing the Weights
We randomly initialize the values of the weights.
Step 2: Take a sample training input vector from the input layer.
Step 3: Calculating the Best Matching Unit/ winning neuron.
To determine the best matching unit, one method is to iterate through all the nodes
and calculate the Euclidean distance between each node’s weight vector and the
current input vector. The node with a weight vector closest to the input vector is
tagged as the winning neuron.
Distance = √( Σᵢ (Xᵢ − Wᵢ)² ), where the sum runs over all n input features.
Step 4: Find the new weights between the input vector sample and the winning
output neuron:
New Weights = Old Weights + α · (Input Vector − Old Weights)
Step 5: Repeat steps 2 to 4 until the weight updates are negligible, that is, the new
weights are similar to the old weights, or the feature map stops changing.
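A minimal sketch of steps 1 to 5 above for a SOM with two output clusters; the random data, the learning-rate schedule and the omission of neighborhood updates are simplifying assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((100, 3))     # illustrative samples with n = 3 input features
W = rng.random((2, 3))       # Step 1: random weights, one row per output neuron
alpha = 0.5                  # learning rate

for epoch in range(20):
    for x in X:              # Step 2: take a sample input vector
        # Step 3: winning neuron = smallest Euclidean distance to x
        winner = np.argmin(np.linalg.norm(W - x, axis=1))
        # Step 4: move the winner's weights towards the input vector
        W[winner] += alpha * (x - W[winner])
    alpha *= 0.9             # decay so updates become negligible (Step 5)

print(W)                     # each row approaches the center of one cluster
```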
Convolutional Neural Networks (CNNs) are specially designed to work with
images. An image consists of pixels. In deep learning, images are represented
as arrays of pixel values.
There are three main types of layers in a CNN:
* Convolutional layers
* Pooling layers
* Fully connected (dense) layers.
There are four main types of operations in a CNN: Convolution
operation, Pooling operation, Flatten
operation and Classification (or other relevant) operation.
Convolutional layers and convolution operation: The first
layer in a CNN is a convolutional layer. It takes the image as the input
and begins to process it.
There are three elements in the convolutional layer: Input
image, Filters and Feature map.
[Figure: convolution of a binary input image with a filter to produce a feature map]
Filter: This is also called a Kernel or Feature Detector.
Image section: The size of the image section should be equal to the
size of the filter(s) we choose. The number of image sections depends
on the Stride.
Feature map: The feature map stores the outputs of the different
convolution operations between the different image sections and the
filter(s).
Stride: The number of steps (pixels) that we shift the filter over the input
image is called the Stride.
Padding adds additional pixels with zero
values to each side of the image. That helps to
get a feature map of the same size as the input.
[Figure: a binary input matrix surrounded by a border of zeros, Padding = 1]
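A minimal sketch of the convolution operation with stride and zero padding, using the output-size relation O = (I − F + 2P)/S + 1; the square input image and the filter values are illustrative assumptions:

```python
import numpy as np

def conv2d(image, kernel, stride=1, padding=0):
    # Zero-pad the image on each side
    img = np.pad(image, padding)
    k = kernel.shape[0]
    # Output size: O = (I - F + 2P) / S + 1
    out = (img.shape[0] - k) // stride + 1
    fmap = np.zeros((out, out))
    for i in range(out):
        for j in range(out):
            # Multiply each image section element-wise with the filter and sum
            section = img[i*stride:i*stride+k, j*stride:j*stride+k]
            fmap[i, j] = np.sum(section * kernel)
    return fmap

image = np.random.randint(0, 2, (5, 5))  # illustrative 5x5 binary image
kernel = np.array([[1, 0], [0, 1]])      # illustrative 2x2 filter
print(conv2d(image, kernel, stride=1, padding=1).shape)  # (6, 6)
```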
Pooling layers and pooling operation
Pooling layers are another type of layer in a CNN. Each convolutional layer is
followed by a pooling layer, so convolution and pooling layers are used
together as pairs.
A pooling layer reduces the dimensionality (number of pixels) of the output
returned from previous convolutional layers.
There are three elements in the pooling layer: the feature map, the pooling
filter and the pooled feature map.
* Max pooling: Get the maximum value in the area where the
filter is applied.
* Average pooling: Get the average of the values in the area where the
filter is applied.
Pooling is applied to each of the channels separately.
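A minimal sketch of max and average pooling over a feature map; the 4x4 feature map values, 2x2 window and stride 2 are illustrative assumptions:

```python
import numpy as np

def pool2d(fmap, size=2, stride=2, mode="max"):
    out = (fmap.shape[0] - size) // stride + 1
    pooled = np.zeros((out, out))
    for i in range(out):
        for j in range(out):
            window = fmap[i*stride:i*stride+size, j*stride:j*stride+size]
            # Max pooling keeps the largest value; average pooling keeps the mean
            pooled[i, j] = window.max() if mode == "max" else window.mean()
    return pooled

fmap = np.array([[1, 3, 2, 4],
                 [5, 6, 1, 2],
                 [7, 2, 9, 1],
                 [3, 4, 2, 8]])
print(pool2d(fmap, mode="max"))  # [[6, 4], [7, 9]]
print(pool2d(fmap, mode="avg"))  # [[3.75, 2.25], [4.0, 5.0]]
```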
Fully connected (dense) layers
These are the final layers in a CNN. The input is the previous flattened
layer. There can be multiple fully connected layers. The final layer does
the classification (or other relevant) task. An activation function is used
in each fully connected layer.
It classifies the detected features in the image into a class label.
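A minimal sketch of a fully connected layer classifying a flattened feature map with a softmax activation; the layer sizes and random values are illustrative assumptions:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())   # subtract the max for numerical stability
    return e / e.sum()

rng = np.random.default_rng(0)
flattened = rng.random(16)    # e.g. a flattened 4x4 pooled feature map
W = rng.normal(size=(16, 3))  # dense weights: 16 inputs -> 3 class scores
b = np.zeros(3)

scores = flattened @ W + b    # fully connected layer
probs = softmax(scores)       # activation maps scores to class probabilities
print(probs, probs.argmax())  # predicted class label
```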
2020-21 10M
Step 1: Construct the convolution output matrix.
Given the input matrix and the filter matrix, first calculate the size of the output
matrix:
O = (I − F + 2P) / S + 1
where I is the input size, F is the filter size, P is the padding and S is the stride.
[Worked example: the input matrix, the filter and the intermediate convolution
results in the original are illegible.]
Step 2: Apply pooling to the convolved output. As the stride is not mentioned,
assume a 2 x 2 filter with stride 2, and compute both the max pooling and the
average pooling results.
[Worked example: the pooled matrices in the original are illegible.]
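Since the numbers in the worked example above are illegible, a small sketch with hypothetical dimensions shows how the output-size formula would be applied:

```python
# Hypothetical dimensions: a 7x7 input, 3x3 filter, no padding (P=0), stride 1
I, F, P, S = 7, 3, 0, 1
O = (I - F + 2 * P) // S + 1
print(O)  # 5 -> the convolved output matrix would be 5x5
```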