A Convolutional Neural Network (CNN) is a type of Deep Learning neural network architecture commonly used in Computer Vision. Computer vision is a field of Artificial Intelligence that enables a computer to understand and interpret images and other visual data. When it comes to Machine Learning, Artificial Neural Networks perform really well. Neural Networks are used on various kinds of data, such as images, audio, and text. Different types of Neural Networks are used for different purposes: for example, for predicting a sequence of words we use Recurrent Neural Networks, more precisely an LSTM; similarly, for image classification we use Convolutional Neural Networks. In this blog, we are going to build a basic building block for CNN. A Convolutional Neural Network is an extended version of the artificial neural network (ANN), used predominantly to extract features from grid-like matrix datasets, for example visual datasets such as images or videos, where spatial patterns in the data play an extensive role.
A Convolutional Neural Network consists of multiple layers: the input layer, the Convolutional layer, the Pooling layer, and fully connected layers. The Convolutional layer applies filters to the input image to extract features, the Pooling layer downsamples the image to reduce computation, and the fully connected layer makes the final prediction. The network learns the optimal filters through backpropagation and gradient descent.
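To make that layer pipeline concrete, here is a minimal NumPy sketch of a single forward pass through a convolution, a ReLU, a max-pooling step, and a fully connected layer. The helper names, the 8x8 input, the single random 3x3 filter, and the sigmoid output are illustrative assumptions rather than anything prescribed above.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation, as in most CNN libraries)."""
    kh, kw = kernel.shape
    out = np.zeros((image.shape[0] - kh + 1, image.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(fmap, size=2):
    """Downsample by the maximum over non-overlapping size x size windows."""
    h, w = fmap.shape[0] - fmap.shape[0] % size, fmap.shape[1] - fmap.shape[1] % size
    return fmap[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

rng = np.random.default_rng(0)
image = rng.random((8, 8))             # toy 8x8 "image" (assumed size)
kernel = rng.standard_normal((3, 3))   # one filter; in a real CNN this is learned

features = np.maximum(conv2d(image, kernel), 0)   # convolutional layer + ReLU
pooled = max_pool(features)                       # pooling layer: 6x6 -> 3x3
flat = pooled.flatten()                           # feed the fully connected layer
w_fc = rng.standard_normal(flat.size)             # dense weights (also learned)
print(1 / (1 + np.exp(-(w_fc @ flat))))           # sigmoid "prediction"
```

In a trained CNN the filter and dense weights would come from backpropagation; here they are random, so only the data flow through the layers is being demonstrated.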
A Recurrent Neural Network (RNN) is a type of Neural Network where the output from the previous step is fed as input to the current step. In traditional neural networks, all the inputs and outputs are independent of each other. Still, in cases where it is required to predict the next word of a sentence, the previous words are needed, and hence there is a need to remember them. Thus the RNN came into existence, solving this issue with the help of a hidden layer. The main and most important feature of an RNN is its hidden state, which remembers some information about a sequence. This state is also referred to as the memory state, since it remembers the previous inputs to the network. An RNN uses the same parameters for each input, as it performs the same task on all the inputs and hidden states to produce the output; this weight sharing reduces the number of parameters, unlike in other neural networks. The fundamental processing unit in an RNN is the recurrent unit, sometimes informally called a "recurrent neuron." This unit has the unique ability to maintain a hidden state, allowing the network to capture sequential dependencies by remembering previous inputs while processing. The Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) variants improve the RNN's ability to handle long-term dependencies.
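The hidden-state mechanism described above boils down to one update applied at every time step, h_t = tanh(W_xh x_t + W_hh h_(t-1) + b), with the same weights reused throughout. The sketch below rolls a vanilla recurrent unit over a toy sequence; the sizes and random weights are assumptions made purely for illustration.

```python
import numpy as np

def rnn_forward(inputs, h0, W_xh, W_hh, b_h):
    """Run a vanilla RNN over a sequence; the hidden state carries the memory."""
    h, states = h0, []
    for x in inputs:                            # same parameters at every step
        h = np.tanh(W_xh @ x + W_hh @ h + b_h)  # new state mixes input + memory
        states.append(h)
    return states

rng = np.random.default_rng(0)
input_size, hidden_size, seq_len = 4, 3, 5      # toy sizes (assumed)
W_xh = rng.standard_normal((hidden_size, input_size)) * 0.1
W_hh = rng.standard_normal((hidden_size, hidden_size)) * 0.1
b_h = np.zeros(hidden_size)

sequence = [rng.random(input_size) for _ in range(seq_len)]
states = rnn_forward(sequence, np.zeros(hidden_size), W_xh, W_hh, b_h)
print(states[-1])   # final hidden (memory) state summarizing the whole sequence
```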
In fixed-weight competitive networks, the weights remain fixed even during the training process. The idea of competition is used among neurons to enhance the contrast in their activation functions. Two such networks are considered here: the Maxnet and the Hamming network.
The Maxnet was developed by Lippmann in 1987. It serves as a subnet for picking the node whose input is the largest. All the nodes in this subnet are fully interconnected, with symmetrical weights on all the weighted interconnections. In the Maxnet architecture, these fixed symmetrical weights between the neurons are inhibitory. With this structure, the Maxnet can be used as a subnet to select the particular node whose net input is the largest.
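A minimal sketch of the Maxnet competition, assuming the standard update a_j <- max(0, a_j - eps * sum_(k != j) a_k) with fixed inhibitory weight -eps; the value eps = 0.15 is an illustrative choice satisfying 0 < eps < 1/m for the m = 4 nodes used here.

```python
import numpy as np

def maxnet(activations, epsilon=0.15, max_iter=100):
    """Iterate the Maxnet until only the largest-input node stays active.

    Each node inhibits every other node with fixed weight -epsilon; the
    weights never change during the competition.
    """
    a = np.array(activations, dtype=float)
    for _ in range(max_iter):
        # each node keeps its own activation, minus inhibition from the rest
        a = np.maximum(a - epsilon * (a.sum() - a), 0.0)
        if np.count_nonzero(a) <= 1:        # competition has converged
            break
    return a

print(maxnet([0.2, 0.4, 0.6, 0.8]))  # only the initially largest node survives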
The Hamming network is a two-layer feedforward neural network for classification of binary bipolar n-tuple input vectors using the minimum Hamming distance, denoted D_H (Lippmann, 1987). The first layer is the input layer for the n-tuple input vectors. The second layer (also called the memory layer) stores p memory patterns; a p-class Hamming network has p output neurons in this layer. The strongest response of a neuron indicates the minimum Hamming distance between the stored pattern and the input vector.
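For bipolar vectors the dot product encodes the match count directly, x . p_i = n - 2 D_H, so the memory layer can score every stored pattern as (x . p_i + n) / 2 and the strongest response marks the minimum Hamming distance. A small sketch under that assumption (the helper name and patterns are mine; in a full Hamming network a Maxnet subnet would then pick out the winner):

```python
import numpy as np

def hamming_scores(x, prototypes):
    """First (memory) layer of a Hamming network: score each stored pattern.

    For bipolar vectors, (x . p_i + n) / 2 equals the number of matching
    bits, so the largest score has the minimum Hamming distance D_H.
    """
    prototypes = np.asarray(prototypes)
    n = prototypes.shape[1]
    return (prototypes @ x + n) / 2

stored = [[1, -1, 1, -1],      # p = 2 memory patterns (illustrative)
          [1, 1, 1, 1]]
x = np.array([1, -1, -1, -1])  # bipolar n-tuple input vector
scores = hamming_scores(x, stored)
print(scores, "winner:", int(np.argmax(scores)))  # argmax stands in for the
                                                  # Maxnet picking the winner
```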
A Self-Organizing Map (also called a Kohonen Map or SOM) is a type of Artificial Neural Network that is likewise inspired by biological models of neural systems from the 1970s. It follows an unsupervised learning approach and trains its network through a competitive learning algorithm. SOM is used as a clustering and mapping (or dimensionality reduction) technique to project multidimensional data onto a lower-dimensional space, which reduces complex problems to a form that is easier to interpret. A SOM has two layers: the input layer and the output layer.
A Kohonen Self-Organizing Map consists of a single-layer linear 2D grid of neurons; the nodes do not know the values of their neighbors. The architecture of the Kohonen Self-Organizing Map (KSOM) is a grid of neurons arranged in a two-dimensional lattice. Each neuron in the grid is connected to the input layer and receives input signals from the input data; all the nodes on the grid are directly linked to the input vectors. The neurons are arranged in a way that preserves the topology of the input space, which means that neighboring neurons in the grid are more likely to respond to similar input data. The weights of the links are updated as a function of the given inputs.
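A compact sketch of the competitive training loop this describes: each input is matched to its best-matching unit (BMU) on the grid, and that unit and its lattice neighbours are pulled toward the input, which is what produces the topology-preserving arrangement. The grid size, decay schedules, and random colour data are illustrative assumptions.

```python
import numpy as np

def train_som(data, grid_h=5, grid_w=5, epochs=20, lr0=0.5, sigma0=2.0, seed=0):
    """Train a small Kohonen SOM with a shrinking Gaussian neighbourhood."""
    rng = np.random.default_rng(seed)
    weights = rng.random((grid_h, grid_w, data.shape[1]))
    # grid coordinates of every neuron, used by the neighbourhood function
    coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w),
                                  indexing="ij"), axis=-1)
    for t in range(epochs):
        lr = lr0 * np.exp(-t / epochs)          # decaying learning rate
        sigma = sigma0 * np.exp(-t / epochs)    # decaying neighbourhood radius
        for x in data:
            dists = np.linalg.norm(weights - x, axis=2)
            bmu = np.unravel_index(np.argmin(dists), dists.shape)  # winner
            grid_d2 = np.sum((coords - np.array(bmu)) ** 2, axis=2)
            h = np.exp(-grid_d2 / (2 * sigma ** 2))  # neighbourhood kernel
            weights += lr * h[..., None] * (x - weights)
    return weights

data = np.random.default_rng(1).random((100, 3))   # e.g. random RGB colours
som = train_som(data)
print(som.shape)             # (5, 5, 3): an organised 2-D map of the colours
```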
A Counterpropagation Network (CPN) is a type of neural network that combines the features of both supervised and unsupervised learning. It was proposed by Hecht-Nielsen in 1987. CPNs are multilayer networks built from a combination of input, clustering, and output layers, and their applications include data compression, function approximation, and pattern association. The counterpropagation network is basically constructed from an instar-outstar model: a three-layer neural network that performs input-output data mapping, producing an output vector y in response to an input vector x on the basis of competitive learning. The three layers are the input layer, the hidden (competitive) Kohonen layer, and the Grossberg (or output) layer.
Working of a Counterpropagation Network
Normal Mode:
Input Layer: The input vector is presented to the input layer.
Kohonen Layer: The input vector is then passed to the Kohonen layer, where competitive learning occurs. Each neuron in this layer competes to respond to the input vector, and only the winning neuron (the one with the highest activation) is activated.
Grossberg Layer: The activated neuron in the Kohonen layer then activates the corresponding neuron in the Grossberg layer, producing the output vector.
Training Mode:
There are two stages involved in training a counterpropagation net: the input vectors are clustered in the first stage, and in the second stage the weights from the cluster-layer units to the output units are tuned to obtain the desired response. These stages are called the Kohonen phase and the Grossberg phase.
Kohonen Phase: The input vector is presented to the input layer. The Kohonen layer neurons compete, and the winning neuron is identified. The weights of the winning neuron are adjusted to move closer to the input vector.
Grossberg Phase: The desired output vector is presented to the Grossberg layer. The weights between the winning Kohonen neuron and the Grossberg layer are adjusted to minimize the error between the actual and desired output vectors.
There are two types of counterpropagation net:
1. Full counterpropagation network
2. Forward-only counterpropagation network
The full CPN efficiently represents a large number of vector pairs x:y by adaptively constructing a look-up table; it works best if the inverse function exists and is continuous. The vectors x and y propagate through the network in a counterflow manner to yield the output vectors x* and y*. A simplified version of the full CPN is the forward-only CPN, which uses only the x vector to form the clusters on the Kohonen units during phase I training; in the forward-only CPN, the input vectors are first presented to the input units.
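The following sketch puts both modes together for the forward-only variant: winner-take-all prediction (normal mode) and the two weight updates (Kohonen phase, then Grossberg phase). For brevity the two phases are interleaved over the same pass here, whereas classically the clustering stage finishes before the output weights are tuned; the cluster count, learning rates, and toy mapping are illustrative assumptions.

```python
import numpy as np

def train_cpn(X, Y, n_clusters=4, epochs=50, alpha=0.3, beta=0.1, seed=0):
    """Forward-only CPN sketch: cluster x on the Kohonen layer, then learn
    the Grossberg output weights of whichever cluster unit wins."""
    rng = np.random.default_rng(seed)
    W = rng.random((n_clusters, X.shape[1]))   # input -> Kohonen weights
    V = rng.random((n_clusters, Y.shape[1]))   # Kohonen -> Grossberg weights
    for _ in range(epochs):
        for x, y in zip(X, Y):
            j = np.argmin(np.linalg.norm(W - x, axis=1))  # winning unit
            W[j] += alpha * (x - W[j])   # Kohonen phase: winner moves to x
            V[j] += beta * (y - V[j])    # Grossberg phase: output moves to y
    return W, V

def cpn_predict(x, W, V):
    """Normal mode: winner-take-all in the Kohonen layer, then read out V."""
    j = np.argmin(np.linalg.norm(W - x, axis=1))
    return V[j]

# toy mapping: approximate y = (x0 + x1, x0 * x1) as a learned look-up table
rng = np.random.default_rng(1)
X = rng.random((200, 2))
Y = np.column_stack([X.sum(axis=1), X.prod(axis=1)])
W, V = train_cpn(X, Y)
print(cpn_predict(np.array([0.5, 0.5]), W, V))  # coarse estimate of [1.0, 0.25]
```

With only four cluster units the network answers with the stored output of the nearest cluster, which is exactly the adaptively built look-up-table behaviour described above; more clusters give a finer approximation.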