NEURAL NETWORKS AND DEEP LEARNING

LIST OF IMPORTANT QUESTIONS
UNIT - I
Q1. Give the History of ANN. (Unit-I, Q.No.1)
Q2. Compare Biological and Artificial neurons. (Unit-I, Q.No.3)
Q3. Explain the difference between the human brain and computers in terms of how information is processed. (Unit-I, Q.No.5)
Q4. Explain in detail the basic models of ANN. (Unit-I, Q.No.6)
Q5. Discuss in brief on Adaptive Linear Neuron. (Unit-I, Q.No.9)
Q6. Discuss in brief on Multiple Adaptive Linear Neuron.
Q7. Evaluate generalized Delta Learning Rule. (Unit-I, Q.No.12)
Q8. Explain in detail Reinforcement Learning. (Unit-I, Q.No.13)
Q9. Explain in detail on Associative memory models. (Unit-I, Q.No.15)
Q10. Explain in detail on Hopfield network as a Dynamical system. (Unit-I, Q.No.17)
Q11. Discuss in brief on BAM. (Unit-I, Q.No.18)
UNIT - II
Q1. Give a brief introduction on Unsupervised learning networks. (Unit-II, Q.No.1)
Q2. Give a brief note on fixed weight competitive nets. (Unit-II, Q.No.2)
Q3. Explain in brief on Maxnet. (Unit-II, Q.No.3)
Q4. Explain in detail on Kohonen self-organizing feature map. (Unit-II, Q.No.5)
Q5. Explain the unsupervised learning technique Restricted Boltzmann machine. (Unit-II, Q.No.7)
Q6. Explain in detail on Brain-State-in-a-Box network. (Unit-II, Q.No.8)
Q7. Explain in detail on LVQ architecture. (Unit-II, Q.No.10)
Q8. Explain in detail on Full counterpropagation network. (Unit-II, Q.No.12)
Q9. Explain briefly Forward-only counterpropagation network. (Unit-II, Q.No.13)
Q10. Explain the applications of ART. (Unit-II, Q.No.16)
Q11. Explain the types of networks. (Unit-II, Q.No.17)
UNIT - III
Q1. What are the types of deep learning networks? Explain.
Q2. Explain in brief on Deep learning.
Q3. Explain in detail on Feed Forward Process in Deep Neural Network.
Q4. Explain the intuition of gradient-based optimization.
Q5. Write a short note on Batch gradient descent and Stochastic gradient descent.
Q6. Discuss in brief on Gradient-based learning.
Q7. Explain the difference between Gradient descent and the normal equation.
Q8. Explain gradient descent in linear regression.
Q9. Explain the Architecture and Learning process in a neural network.
Q10. Discuss in detail on Back propagation through time (RNN).
UNIT - IV
Q1. What is the difference between L2 and L1 parameter norm penalties?
Q2. Explain in detail on Penalties as constrained optimization. (Unit-IV, Q.No.2)
Q3. Discuss in brief on Dataset augmentation. (Unit-IV, Q.No.4)
Q4. Explain in brief on Noise robustness.
Q5. Give a brief introduction on Semi-supervised learning.
Q6. Describe in brief on Multitask learning. (Unit-IV, Q.No.10)
Q7. What is meant by exploiting the validation data?
Q8. Explain sparse representations in detail. (Unit-IV, Q.No.15)
Q9. Explain Bagging in detail. (Unit-IV, Q.No.18)
Q10. Explain Adversarial Training in detail. (Unit-IV, Q.No.20)
Q11. Explain the challenges and drawbacks of adversarial training. (Unit-IV, Q.No.21)
Q12. Explain in brief on Tangent Distance. (Unit-IV, Q.No.22)
UNIT - V
Q1. Explain the gradient descent algorithm briefly. (Unit-V, Q.No.2)
Q2. Explain the types of gradient descent and the challenges faced by it. (Unit-V, Q.No.3)
Q3. Explain Mini Batch Stochastic Gradient Descent (MB-SGD). (Unit-V, Q.No.5)
Q4. Discuss in brief on the NAG algorithm. (Unit-V, Q.No.6)
Q5. Discuss in detail on AdaDelta. (Unit-V, Q.No.8)
Q6. Explain Adaptive Moment Estimation in detail. (Unit-V, Q.No.10)
Q7. What are Approximate Second order methods? Explain. (Unit-V, Q.No.13)
Q8. Discuss in detail on Optimization strategies and meta-algorithms. (Unit-V, Q.No.14)
Q9. Discuss on Image classification and Image classification with localization in detail.
Q10. Discuss on Object detection and Object segmentation.
Q11. Explain in detail how natural language processing works.
Q12. List out the benefits and challenges of natural language processing.
UNIT - I
ARTIFICIAL NEURAL NETWORKS

SHORT QUESTIONS
Q1. What is BAM in neural network?
Answer:
Bidirectional associative memory (BAM) is a type of recurrent neural network. BAM was introduced by Bart Kosko in 1988. There are two types of associative memory: auto-associative and hetero-associative.
Bidirectional recurrent neural networks (BRNN) connect two hidden layers of opposite directions to the same output. With this form of generative deep learning, the output layer can get information from past (backwards) and future (forward) states simultaneously.
Q3. How does an ANN work?
Answer:
An artificial neural network is an attempt to simulate the network of neurons that make up a human brain so that the computer will be able to learn things and make decisions in a human-like manner. ANNs are created by programming regular computers to behave as though they are interconnected brain cells.
ANNs are a type of computer program that can be 'taught' to emulate relationships in sets of data. Once the ANN has been 'trained', it can be used to predict the outcome of another new set of input data, e.g. another composite system or a different stress environment.
Q5. What are unsupervised neural networks?
Answer:
Unsupervised learning means you are only exposing the machine to input data. There is no corresponding output data to teach the system the answers it should be arriving at. With unsupervised learning, you train the machine with unlabeled data that offers it no hints about what it is seeing.
Q6. What is an unsupervised learning example?
Answer:
The goal of unsupervised learning is to find the underlying structure of a dataset, group that data according to similarities, and represent the dataset in a compressed format. Example: suppose the unsupervised learning algorithm is given an input dataset containing images of different types of cats and dogs.
Q7. What are the 3 major categories of neural networks?
Answer:
The three major categories of neural networks are:
1. Artificial Neural Networks (ANN)
2. Convolutional Neural Networks (CNN)
3. Recurrent Neural Networks (RNN)
Q9. What are different types of unsupervised learning?
Answer:
Unsupervised machine learning helps you to find all kinds of unknown patterns in data. Clustering and Association are the two types of unsupervised learning. Four types of clustering methods are: 1) Exclusive, 2) Agglomerative, 3) Overlapping and 4) Probabilistic.
Some use cases for unsupervised learning, more specifically clustering, include customer segmentation (understanding different customer groups around which to build marketing or other business strategies) and genetics, for example clustering DNA patterns to analyze evolutionary biology.
Q10. How do neural networks improvise and learn?
Answer:
Neural networks generally perform supervised learning tasks, building knowledge from data sets where the right answer is provided in advance. The networks then learn by tuning themselves to find the right answer on their own, increasing the accuracy of their predictions.
Q11. What is the difference between ANN and DNN?
Answer:
Technically, an artificial neural network (ANN) that has a lot of layers is a Deep Neural Network (DNN). In practice, a deep neural network is simply a neural network with many layers, or a network that uses functions not typically found in an artificial neural network.
Q12. What is back propagation in a neural network?
Answer:
The backpropagation algorithm computes the gradient of the loss function with respect to the weights of the network by the chain rule. It computes the gradient efficiently, one layer at a time, unlike a naive direct computation. It computes the gradient, but it does not define how the gradient is used.
Q13. What are the main steps in the back propagation algorithm?
Answer:
Below are the steps involved in backpropagation:
Step 1: Forward propagation.
Step 2: Backward propagation.
Step 3: Putting all the values together and calculating the updated weight value.
Q14. What are the disadvantages of the back propagation network?
Answer:
Disadvantages of the Back Propagation Algorithm:
- It relies on input to perform on a specific problem.
- It is sensitive to complex/noisy data.
- It needs the derivatives of the activation functions to be known at network design time.
Q15. What are the features of the back propagation algorithm?
Answer:
The backpropagation algorithm is based on generalizing the Widrow-Hoff learning rule. It uses supervised learning, which means that the algorithm is provided with examples of the inputs and outputs that the network should compute, and then the error is calculated.
Q1. Give the History of ANN.
Answer:
History of Artificial Neural Networks
The history of neural networking arguably started in the late 1800s with scientific endeavors to study the activity of the human brain. In 1890, William James published the first work about brain activity patterns. In 1943, McCulloch and Pitts created a model of the neuron that is still used today in artificial neural networks. This model is segmented in two parts:
- A summation over weighted inputs.
- An output function of the sum.
Artificial Neural Network (ANN):
In 1949, Donald Hebb published "The Organization of Behavior," which outlined a law for synaptic neuron learning. This law, later known as Hebbian Learning in honor of Donald Hebb, is one of the most straightforward and simple learning rules for artificial neural networks.
In 1951, Marvin Minsky made the first Artificial Neural Network (ANN) while working at Princeton.
In 1958, "The Computer and the Brain" was published, a year after John von Neumann's death. In that book, von Neumann proposed numerous extreme changes to how analysts had been modeling the brain.
Perceptron:
The perceptron was created in 1958 at Cornell University by Frank Rosenblatt. The perceptron was an endeavor to use neural network procedures for character recognition. The perceptron was a linear system and was suitable for solving problems where the input classes were linearly separable in the input space. In 1960, Rosenblatt published the book Principles of Neurodynamics, containing a bit of his research and ideas about modeling the brain.
Despite the early accomplishment of the perceptron and artificial neural network research, there were many individuals who felt that there was a constrained guarantee in these methods. Among these were Marvin Minsky and Seymour Papert, whose 1969 book Perceptrons was used to discredit ANN research and focus attention on the apparent constraints of ANN work. One of the limitations that Minsky and Papert highlighted was the fact that the perceptron was not capable of distinguishing patterns that are not linearly separable in input space with a linear classification problem.
Regardless of the disappointment of the perceptron in dealing with non-linearly separable data, it was not an inherent failure of the technology, but a matter of scale. It was later shown, by Hecht-Nielsen among others, that a multilayer learning machine was capable of handling non-linear separation problems; nevertheless, Perceptrons ushered in the "quiet years," in which ANN research was at a minimum of interest.
The backpropagation algorithm, initially found by Werbos in 1974, was rediscovered in 1986 with the book Learning Internal Representations by Error Propagation by Rumelhart, Hinton, and Williams. Backpropagation is a type of gradient descent algorithm used with artificial neural networks for error reduction and curve-fitting.
In 1987, the IEEE annual international ANN conference was begun for ANN researchers. In 1987, the International Neural Network Society (INNS) was formed, along with the INNS journal Neural Networks in 1988.
Q2. Give a brief introduction on ANN.
Answer:
An Artificial Neural Network (ANN) is an information processing paradigm that is inspired by the brain. ANNs, like people, learn by examples. An ANN is configured for a specific application, such as pattern recognition or data classification, through a learning process. Learning largely involves adjustments to the synaptic connections that exist between the neurons.
The model of an artificial neural network can be specified by three entities:
- Interconnections
- Activation functions
- Learning rules
Interconnections:
Interconnection can be defined as the way processing elements (neurons) in an ANN are connected to each other. Hence, the arrangement of these processing elements and the geometry of their interconnections are very essential in an ANN.
These arrangements always have two layers that are common to all network architectures, the input and the output layer, where the input layer buffers the input signal and the output layer generates the output of the network. The third layer is the hidden layer, in which the neurons are neither kept in the input layer nor in the output layer. These neurons are hidden from the people who are interfacing with the system and act as a black box to them. On increasing the number of hidden layers, the computational power of the network increases, but training the network becomes more complex.
There exist five basic types of neuron connection architectures:
1. Single-layer feed-forward network
2. Multilayer feed-forward network
3. Single node with its own feedback
4. Single-layer recurrent network
5. Multilayer recurrent network

1. Single-layer feed-forward network
In this type of network, we have only two layers, the input layer and the output layer, but the input layer does not count because no computation is performed in this layer. The output layer is formed when different weights are applied on the input nodes and the cumulative effect per node is taken. After this, the neurons collectively give the output layer to compute the output signals.
2. Multilayer feed-forward network
This network also has a hidden layer that is internal to the network and has no direct contact with the external layer. The existence of one or more hidden layers enables the network to be computationally stronger; the intermediate computations are carried out at the hidden nodes. There are no feedback connections in which outputs of the model are fed back into itself.
3. Single node with its own feedback
When outputs can be directed back as inputs to the same layer or preceding layer nodes, this results in feedback networks. Recurrent networks are feedback networks with a closed loop. The figure shows a single recurrent network having a single neuron with feedback to itself.
4. Single-layer recurrent network
The above network is a single-layer network with a feedback connection, in which the processing element's output can be directed back to itself, to other processing elements, or to both. A recurrent neural network is a class of artificial neural networks where connections between nodes form a directed graph along a sequence. This allows it to exhibit dynamic temporal behavior for a time sequence. Unlike feed-forward neural networks, recurrent networks can use their internal state (memory) to process sequences of inputs.
5. Multilayer recurrent network
In this type of network, the processing element output can be directed to the processing elements in the same layer and in the preceding layer, forming a multilayer recurrent network. They perform the same task for every element of a sequence, with the output being dependent on the previous computations. Inputs are not needed at each time step. The main feature of a recurrent neural network is its hidden state, which captures some information about a sequence.
Q3. Compare Biological and Artificial neurons.
Answer:
Difference between Biological Neurons and Artificial Neurons
1. Major components: a biological neuron has axons, dendrites and synapses; an artificial neuron has nodes, inputs, outputs, weights and bias.
2. Biological neuron: information from other neurons, in the form of electrical impulses, enters the dendrites at connection points called synapses. The information flows from the dendrites to the cell body, where it is processed. The output signal, a train of impulses, is then sent down the axon to the synapses of other neurons.
   Artificial neuron: the arrangements and connections of the neurons make up the network and have three layers. The first layer is called the input layer and is the only layer exposed to external signals. The input layer transmits signals to the neurons in the next layer, which is called a hidden layer. The hidden layer extracts relevant features or patterns from the received signals. Those features or patterns that are considered important are then directed to the output layer, which is the final layer of the network.
3. A synapse is able to increase or decrease the strength of the connection; this is where information is stored. In an artificial neuron, the signals can be changed by weights in a manner similar to the physical changes that occur in the synapses.
4. The brain has approximately 10^11 neurons; artificial networks have on the order of 10^2 to 10^4 neurons with current technology.
Q4. Explain the characteristics and applications of ANN.
Answer:
Characteristics of Artificial Neural Networks
- It contains a huge number of interconnected processing elements called neurons to do all the operations.
- Information stored in the neurons is basically the weighted linkage of neurons.
- The input signals arrive at the processing elements through connections and connecting weights.
- It has the ability to learn, recall and generalize from the given data by suitable assignment and adjustment of weights.
- The collective behavior of the neurons describes its computational power, and no single neuron carries specific information.
Applications of Neural Networks
1. Every new technology needs assistance from the previous one, i.e. data from the previous technology should be gathered and its pros and cons should be studied correctly. All of these things are possible only through the help of a neural network.
2. Neural networks are suitable for research on animal behavior, predator/prey relationships and population cycles.
3. It would be easier to do proper valuation of property, buildings, automobiles, machinery, etc. with the help of a neural network.
4. Neural networks can be used in betting on horse races, sporting events, and most importantly in the stock market.
5. They can be used to predict the correct judgment for any crime by using a large data set of crime details as input and the resulting sentences as output.
6. By analyzing data and determining which of the data has any fault (files diverging from peers), called data mining, cleaning and validation can be achieved through a neural network.
7. Neural networks can be used to predict targets with the help of echo patterns obtained from sonar, radar, seismic and magnetic instruments.
8. They can be used efficiently in employee hiring, so that a company can hire the right employee depending upon the skills the employee has and what his or her productivity should be in future.
9. They have a large application in medical research.
Q5. Explain the difference between the human brain and computers in terms of how information is processed.
Answer:
Difference between the human brain and computers in terms of how information is processed
Human Brain (Biological Neural Network) vs Computers (Artificial Neural Network):
1. The human brain works asynchronously; computers (ANNs) work synchronously.
2. Biological neurons compute slowly (several milliseconds per computation); artificial neurons compute fast (less than a nanosecond per computation).
3. The brain represents information in a distributed way because neurons are unreliable and could die at any time; in computer programs every bit has to function as intended, otherwise the program would crash.
4. Our brain changes its connectivity over time to represent new information and requirements imposed on us; the connectivity between the electronic components in a computer never changes unless we replace its components.
5. Biological neural networks have complicated topologies.
6. Researchers are still trying to find out how the brain actually learns.

1.2 BASIC MODELS OF ANN
Q6. Explain in detail the basic models of ANN.
Answer:
The models of ANN are specified by three basic entities, namely:
1. The model's synaptic interconnections.
2. The training or learning rules adopted for updating and adjusting the connection weights.
3. Their activation functions.
1. Connections
An ANN consists of a set of highly interconnected processing elements (neurons) such that the output of each processing element is connected, through weights, to the other processing elements or to itself. Hence, the arrangement of these processing elements and the geometry of their interconnections are essential for an ANN. The point where the connection originates and terminates should be noted, and the function of each processing element in an ANN should be specified. The arrangement of neurons into layers and the connection pattern within and between the layers is called the network architecture.
There are five basic types of neuron connection architectures:
1. Single-layer feed-forward network
2. Multilayer feed-forward network
3. Single node with its own feedback
4. Single-layer recurrent network
5. Multilayer recurrent network
1. Single-layer feed-forward network
Fig: Single-layer feed-forward network
A layer is formed by taking a processing element and combining it with other processing elements. When a layer of processing nodes is formed, the inputs can be connected to these nodes with various weights, resulting in a series of output signals, one per node. Thus, a single-layer feed-forward network is formed.
2. Multilayer feed-forward network
This network is formed by the interconnection of several layers. The input layer receives the input and buffers the input signal. The output layer generates the output of the network. Any layer that is formed between the input and output layers is called a hidden layer.
3. Single node with its own feedback
If the feedback of the output of the processing elements is directed back as an input to the processing elements in the same layer, then it is called lateral feedback.
Competitive net: the competitive interconnections have fixed weight -ε. This net is called Maxnet, and it is studied under the unsupervised learning network category.
4. Single-layer recurrent network
Fig: Single-layer recurrent network
Recurrent networks are feedback networks with a closed loop.
5. Multilayer recurrent networkyey
be Pe taterlinhition structure
6 Neural Networks and Deep Learning
“Input Pe = .
= Actual Output
Error Signal
Error (D-Y) Generator = o st
oO |
(b) Unsupervised Learning: |
x ANN
—_x_|
WwW
Input i
Learning:
(0 Reinforcement!
Xx) Net Y
it
Input ‘Actual Outpu
. _——
function that has two possible outputs. Tis function returns 1, if the input is postive, and O for any negati?
input.
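As a tiny illustration, this threshold behaviour can be written in Python in a couple of lines (the function name binary_step is ours, not from the text):

def binary_step(x):
    # Returns 1 for positive input and 0 otherwise, as described above.
    return 1 if x > 0 else 0

print(binary_step(0.7))   # 1
print(binary_step(-2.0))  # 0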
Q8. Explain briefly the training algorithm used in the perceptron.
Answer:
Training Algorithm
The perceptron network can be trained for a single output unit as well as for multiple output units.
Training Algorithm for Single Output Unit
Step 1 - Initialize the following to start the training:
- Weights
- Bias
- Learning rate α
For easy calculation and simplicity, weights and bias must be set equal to 0 and the learning rate must be set equal to 1.
Step 2 - Continue steps 3-8 while the stopping condition is not true.
Step 3 - Continue steps 4-6 for every training vector x.
Step 4 - Activate each input unit as follows:
  x_i = s_i  (i = 1 to n)
Step 5 - Now obtain the net input with the following relation:
  y_in = b + Σ x_i w_i  (summing over i = 1 to n)
Step 6 - Apply the following activation function to obtain the final output:
  f(y_in) = 1 if y_in > θ;  0 if -θ ≤ y_in ≤ θ;  -1 if y_in < -θ
Step 7 - Adjust the weight and bias as follows:
Case 1 - if y ≠ t, then
  w_i(new) = w_i(old) + α t x_i
  b(new) = b(old) + α t
Case 2 - if y = t, then
  w_i(new) = w_i(old)
  b(new) = b(old)
Here 'y' is the actual output and 't' is the desired/target output.
Step 8 - Test for the stopping condition, which will happen when there is no change in weight.
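The steps above can be sketched in Python/NumPy. This is only an illustrative sketch of the single-output rule described here (bipolar targets, threshold θ = 0, learning rate α); the function and variable names are ours.

import numpy as np

def train_perceptron(X, t, alpha=1.0, theta=0.0, max_epochs=100):
    """Single-output perceptron training (Steps 1-8 above, sketched)."""
    w = np.zeros(X.shape[1])               # Step 1: weights = 0
    b = 0.0                                # Step 1: bias = 0
    for _ in range(max_epochs):            # Step 2: repeat until no change
        changed = False
        for x, target in zip(X, t):        # Step 3: each training vector
            yin = b + np.dot(x, w)         # Step 5: net input
            # Step 6: activation with threshold theta
            y = 1 if yin > theta else (-1 if yin < -theta else 0)
            if y != target:                # Step 7, Case 1: update on error
                w = w + alpha * target * x
                b = b + alpha * target
                changed = True
        if not changed:                    # Step 8: stopping condition
            break
    return w, b

# Usage example: learn the AND function with bipolar inputs and targets.
X = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]])
t = np.array([1, -1, -1, -1])
w, b = train_perceptron(X, t)
print(w, b)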
Q9. Discuss in brief on Adaptive Linear Neuron.
Answer:
Adaptive Linear Neuron (Adaline)
Adaline, which stands for Adaptive Linear Neuron, is a network having a single linear unit. It was designed by Widrow and Hoff in 1960. Some important points about Adaline are as follows:
- It uses a bipolar activation function.
- It uses the delta rule for training to minimize the Mean Squared Error (MSE) between the actual output and the desired/target output.
- The weights and the bias are adjustable.
Architecture
The basic structure of Adaline is similar to the perceptron, having an extra feedback loop with the help of which the actual output is compared with the desired/target output. After comparison on the basis of the training algorithm, the weights and bias are updated.
Training Algorithm
Step 1 - Initialize the following to start the training:
- Weights
- Bias
- Learning rate α
For easy calculation and simplicity, weights and bias must be set equal to 0 and the learning rate must be set equal to 1.
Step 2 - Continue steps 3-8 while the stopping condition is not true.
Step 3 - Continue steps 4-6 for every bipolar training pair s:t.
Step 4 - Activate each input unit as follows:
  x_i = s_i  (i = 1 to n)
Step 5 - Obtain the net input with the following relation:
  y_in = b + Σ x_i w_i  (summing over i = 1 to n)
Here 'b' is the bias and 'n' is the total number of input neurons.
Step 6 - Apply the following activation function to obtain the final output:
  f(y_in) = 1 if y_in ≥ 0;  -1 if y_in < 0
The following steps involve the hidden (Adaline) units Q_j of a network built from several Adaline units:
Output at the hidden (Adaline) unit: Q_j = f(Q_inj)
Final output of the network: y = f(y_in), where y_in = b_0 + Σ Q_j v_j (summing over j = 1 to m)
Step 7 - Calculate the error and adjust the weights as follows:
Case 1 - if y ≠ t and t = 1, then
  w_ij(new) = w_ij(old) + α (1 - Q_inj) x_i
  b_j(new) = b_j(old) + α (1 - Q_inj)
In this case, the weights would be updated on Q_j, where the net input is close to 0, because t = 1.
Case 2 - if y ≠ t and t = -1, then
  w_ik(new) = w_ik(old) + α (-1 - Q_ink) x_i
  b_k(new) = b_k(old) + α (-1 - Q_ink)
In this case, the weights would be updated on Q_k, where the net input is positive, because t = -1.
Here 'y' is the actual output and 't' is the desired/target output.
Case 3 - if y = t, then there would be no change in weights.
Step 8 - Test for the stopping condition, which will happen when there is no change in weight, or when the highest weight change during training is smaller than a specified tolerance.
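A minimal NumPy sketch of the basic Adaline (delta/LMS) update stated at the start of this answer, i.e. minimising the squared error between the net input and the bipolar target. It does not implement the multi-unit refinements above, and the names and stopping tolerance are our assumptions.

import numpy as np

def train_adaline(X, t, alpha=0.1, max_epochs=100, tol=1e-4):
    """Adaline training: minimise MSE between net input and target."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(max_epochs):
        max_dw = 0.0
        for x, target in zip(X, t):
            yin = b + np.dot(x, w)           # net input of the linear unit
            error = target - yin             # delta-rule error term
            w += alpha * error * x           # w(new) = w(old) + alpha*(t - yin)*x
            b += alpha * error
            max_dw = max(max_dw, np.max(np.abs(alpha * error * x)))
        if max_dw < tol:                     # stop when weight changes are tiny
            break
    return w, b

X = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]])
t = np.array([1, -1, -1, -1])
w, b = train_adaline(X, t)
print(np.sign(b + X @ w))   # predicted bipolar outputs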
Q11. Discuss in brief on Back Propagation Neural networks.
Answer:
Back Propagation Neural Networks
A Back Propagation Neural network (BPN) is a multilayer neural network consisting of the input layer, at least one hidden layer and the output layer. As its name suggests, back propagation takes place in this network. The error, which is calculated at the output layer by comparing the target output and the actual output, is propagated back towards the input layer.
Architecture
As shown in the diagram, the architecture of BPN has three interconnected layers having weights on them. The hidden layer as well as the output layer also has a bias, whose weight is always 1, on them. As is clear from the diagram, the working of BPN is in two phases: one phase sends the signal from the input layer to the output layer, and the other phase back propagates the error from the output layer to the input layer.
Training Algorithm
For training, BPN uses the binary sigmoid activation function. The training of BPN has the following three phases:
- Phase 1 - Feed-forward phase
- Phase 2 - Back propagation of error
- Phase 3 - Updating of weights
All these steps are concluded in the algorithm as follows:
Step 1 - Initialize the following to start the training:
- Weights
- Learning rate α
For easy calculation and simplicity, take some small random values.
jing condition is not true,
Step 2- Continue step 3-11 when the stoppi
Step 3- Continue step 4-10 for every taining pat
Phase 1
Step 4 - Each input unit receives {nput signal x, and sends t tothe hidden unit for alli ="1 ton
Step 5 - Calculate the net input atthe’ hidden unit using the following relation “
ib} pi=Inxiij=1topQin)=bO}+ Zins ‘top
Here b, isthe bias on hidden unt v4
isthe weight on unit ofthe hidden layer coming from i unit ofthe input
TIE mil fe FARE to face LEGAL proceedingSa aaa ae
function
ng
sorts and Deep tearing 8
a ‘by applying the following activation
Neural Ni
~ culate the net output
Nowcal -
Send these output tat the output layer unit using the following relation -
~ Calculate the net inp!
sto j= nC ominkebOKY 5 ]= IpQiue= tom
rant bison ouput nit Me igh on kunt ofthe ouput yer coming om jy
iden layer oe
‘Calculate the not output by appbing the following activation Funct
etiyinkly=flvink)
Phase 2
Step 7 - Compute the error-correcting term, in correspondence with the target pattern received at each output unit, as follows:
  δ_k = (t_k - y_k) f'(y_ink)
On this basis, update the weight and bias as follows:
  Δw_jk = α δ_k Q_j
  Δb_0k = α δ_k
Then send δ_k back to the hidden layer.
Step 8 - Now each hidden unit sums its delta inputs from the output units:
  δ_inj = Σ δ_k w_jk  (summing over k = 1 to m)
The error term can then be calculated as:
  δ_j = δ_inj f'(Q_inj)
On this basis, update the weight and bias as follows:
  Δv_ij = α δ_j x_i
  Δb_0j = α δ_j
Phase 3
Step 9 - Each output unit (y_k, k = 1 to m) updates the weight and bias as follows:
  w_jk(new) = w_jk(old) + Δw_jk
  b_0k(new) = b_0k(old) + Δb_0k
Step 10 - Each hidden unit (Q_j, j = 1 to p) updates the weight and bias as follows:
  v_ij(new) = v_ij(old) + Δv_ij
  b_0j(new) = b_0j(old) + Δb_0j
Step 11 - Check for the stopping condition, which may be either that the number of epochs is reached or that the target output matches the actual output.
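The three phases can be sketched for a single hidden layer as follows. This is an illustrative NumPy implementation under the assumptions stated above (binary sigmoid activation, small random initial weights); the array names and the XOR usage example are ours.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_bpn(X, T, n_hidden=4, alpha=0.5, epochs=5000):
    """Back Propagation Network: feed-forward, back-propagation of error, weight update."""
    rng = np.random.default_rng(0)
    n_in, n_out = X.shape[1], T.shape[1]
    V = rng.uniform(-0.5, 0.5, (n_in, n_hidden))    # input -> hidden weights v_ij
    b_v = np.zeros(n_hidden)
    W = rng.uniform(-0.5, 0.5, (n_hidden, n_out))   # hidden -> output weights w_jk
    b_w = np.zeros(n_out)
    for _ in range(epochs):
        for x, t in zip(X, T):
            # Phase 1: feed-forward
            q = sigmoid(b_v + x @ V)                 # hidden activations Q_j
            y = sigmoid(b_w + q @ W)                 # output activations y_k
            # Phase 2: back-propagation of error
            delta_k = (t - y) * y * (1 - y)          # output error term
            delta_j = (delta_k @ W.T) * q * (1 - q)  # hidden error term
            # Phase 3: weight and bias updates
            W += alpha * np.outer(q, delta_k)
            b_w += alpha * delta_k
            V += alpha * np.outer(x, delta_j)
            b_v += alpha * delta_j
    return V, b_v, W, b_w

# Usage: learn XOR.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)
V, b_v, W, b_w = train_bpn(X, T)
print(np.round(sigmoid(b_w + sigmoid(b_v + X @ V) @ W), 2))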
Q12. Evaluate generalized Delta Learning Rule.
Answer:
Generalized Delta Learning Rule
The delta rule works only for the output layer. The generalized delta rule, also called the back-propagation rule, on the other hand, is a way of creating the desired values of the hidden layer.
Mathematical Formulation
For the activation function y_k = f(y_ink), the net inputs on the output layer and on the hidden layer are
  y_ink = Σ_j z_j w_jk    and    z_inj = Σ_i x_i v_ij
The error which has to be minimized is
  E = (1/2) Σ_k (t_k - y_k)^2
By using the chain rule, we have
  ∂E/∂w_jk = ∂/∂w_jk [ (1/2) Σ_k (t_k - y_k)^2 ] = -(t_k - y_k) f'(y_ink) z_j
Now let us define δ_k = (t_k - y_k) f'(y_ink), so that ∂E/∂w_jk = -δ_k z_j.
For the weights on connections to the hidden unit z_j,
  ∂E/∂v_ij = -[ Σ_k δ_k w_jk ] f'(z_inj) x_i
Putting δ_j = f'(z_inj) Σ_k δ_k w_jk, weight updating can be done as follows:
For the output unit:  Δw_jk = -α ∂E/∂w_jk = α δ_k z_j
For the hidden unit:  Δv_ij = -α ∂E/∂v_ij = α δ_j x_i
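As a quick sanity check of these expressions, the analytic gradient ∂E/∂w_jk = -δ_k z_j can be compared with a finite-difference estimate of E = (1/2)(t_k - y_k)^2 for a single sigmoid output unit. The numbers below are arbitrary illustrative values, not from the text.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

z = np.array([0.2, -0.4, 0.7])      # hidden-layer outputs z_j
w = np.array([0.1, 0.5, -0.3])      # weights w_jk to one output unit k
t = 1.0                              # target t_k

def error(w):
    y = sigmoid(z @ w)               # y_k = f(y_ink)
    return 0.5 * (t - y) ** 2        # E = 1/2 (t_k - y_k)^2

y = sigmoid(z @ w)
delta_k = (t - y) * y * (1 - y)      # delta_k = (t_k - y_k) f'(y_ink)
analytic = -delta_k * z              # dE/dw_jk = -delta_k * z_j

eps = 1e-6
numeric = np.array([(error(w + eps * e) - error(w - eps * e)) / (2 * eps)
                    for e in np.eye(3)])
print(analytic)
print(numeric)   # the two estimates should agree to several decimal places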
Q13. Explain in detail Reinforcement Learning.
Answer:
Unsupervised learning:
Unsupervised learning is used when it is not feasible to augment the training data sets with class identities (labels). This difficulty happens in situations where there is no knowledge of the system, or the cost of obtaining such knowledge is too high. In unsupervised learning, as its name suggests, the ANN is not under the guidance of a "teacher." Instead, it is provided with unlabelled data sets (containing only the input data) and left to discover the patterns in the data and build a new model from them. In this situation, the ANN figures out how to arrange the data by exploiting the separation between clusters within it.
Reinforcement learning:
Reinforcement learning is another type of learning that includes interaction with the system. The weights w_ij (for i = 1, 2, ..., n and j = 1, 2, ..., m) are adjusted on the basis of a reinforcement signal received from the environment, where α is a constant.
Errors and noise:
The input pattern may contain errors and noise, or may be an incomplete version of some previously stored pattern. Nevertheless, when such a corrupted input pattern is presented, the network will retrieve the stored pattern that is closest to the actual input pattern. The existence of noise or errors results only in a slight degradation in the efficiency of the network. Thus, associative memories are robust and fault tolerant because of the many processing elements performing highly parallel and distributed computations.
Performance Measures:
The measures taken for associative memory performance are memory capacity and content addressability. Memory capacity can be defined as the maximum number of associated pattern pairs that can be stored and correctly recovered. Content addressability refers to the ability of the network to recover the correct stored pattern. If the input patterns are mutually orthogonal, perfect recovery is possible. If the stored input patterns are not mutually orthogonal, non-perfect recovery can happen due to intersection among the patterns.
Q15. Explain in detail on Associative memory models.
Answer:
Associative memory models
The linear associator is the simplest and most widely used associative memory model. The Bidirectional Associative Memory (BAM) and the Hopfield model are other popular associative memory networks.
Network architectures of Associative Memory Models:
The neural associative memory models follow various neural network architectures to memorize information. The network comprises either a single layer or two layers of processing units. The linear associator model is a feed-forward type network comprising two layers of processing units, the first serving as the input layer and the other as the output layer. The Hopfield model uses a single layer of processing elements, where each unit is connected with every other unit in the network. The bidirectional associative memory (BAM) model is the same as the linear associator, but the associations are bidirectional.
The neural network architectures of these models and the structure of the corresponding association weight matrix w of the associative memory are depicted below.
Linear Associator model (two layers):
The linear associator model is a feed-forward type network where the output is produced in a single feed-forward computation. The model comprises two layers of processing units, one serving as the input layer and the other as the output layer. The input is directly associated with the outputs through a series of weights. The links carrying weights connect each input to every output. The sum of the products of the weights and the inputs is determined in each neuron node. The architecture of the linear associator is given below.
Fig: Linear associator model
All p input units are associated to all q output units via the associated weight matrix W = [w_ij] of size p x q, where w_ij describes the strength of the unidirectional association of the i-th input unit to the j-th output unit.
The connection weight matrix stores the z different associated pattern pairs {(X_k, Y_k) | k = 1, 2, ..., z}. Constructing an associative memory is building the connection weight matrix w such that if an input pattern is presented, the stored pattern associated with the input pattern is recovered.
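A small sketch of a linear associator built from Hebbian outer products, W = Σ_k X_k Y_k^T, so that presenting a stored input recovers its associated output. It assumes bipolar, mutually orthogonal input patterns, as discussed under the performance measures above; the example patterns are ours.

import numpy as np

# Two orthogonal bipolar input patterns and their associated outputs.
X = np.array([[ 1,  1, -1, -1],
              [ 1, -1,  1, -1]])          # p-dimensional inputs
Y = np.array([[ 1, -1],
              [-1,  1]])                   # q-dimensional outputs

# Connection weight matrix W (p x q) as a sum of outer products.
W = sum(np.outer(x, y) for x, y in zip(X, Y))

# Recall: present a stored input and threshold the weighted sums.
recalled = np.sign(X[0] @ W)
print(recalled)   # matches Y[0] because the stored inputs are orthogonal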
Hopfield Model:
The Hopfield neural network was invented by Dr. John J. Hopfield in 1982. It consists of a single layer containing one or more fully connected recurrent neurons. A Hopfield network works as a content-addressable memory: it can recall complete stored patterns from partial or noisy input, and when presented with such input it eventually settles down to the closest stored pattern. It is commonly used for auto-association and optimization tasks.
In a Hopfield network the neurons are fully interconnected, i.e., every neuron is associated with every other neuron. If there are two neurons i and j, then there is a connectivity weight w_ij between them which is symmetric, w_ij = w_ji, with w_ii = 0. The diagram given below shows a network of three neurons having values X1, X2, X3.
Updating rule:
Consider N neurons i = 1, ..., N with values X_i = ±1.
The update rule applied to node i is given by:
  if h_i ≥ 0 then X_i -> 1, otherwise X_i -> -1,
where h_i = Σ_j W_ij X_j + b_i is called the field at i, with b_i ∈ R a bias.
Thus, X_i -> sgn(h_i), where sgn(r) = 1 if r ≥ 0, and sgn(r) = -1 if r < 0.
We take b_i = 0 so that it makes no difference when training the network with random patterns.
There are two different approaches to updating the nodes:
Synchronously: in this approach, the update of all the nodes takes place simultaneously at each time step.
Asynchronously: in this approach, at each point of time we update one node, chosen randomly or according to some rule.
Q17. Explain in detail on Hopfield network as a Dynamical system.
Answer:
Hopfield Network as a Dynamical system:
Consider the state space X = {-1, 1}^N, so that each state x ∈ X is a vector of N values ±1. We can describe a metric on X by using the Hamming distance between any two states:
  ρ(x, y) = #{ i : x_i ≠ y_i }
Note that ρ is a metric with 0 ≤ ρ(x, y) ≤ N; it is symmetric and satisfies the triangle inequality. In terms of either the synchronous or the asynchronous updating rule, we get a discrete dynamical system: the updating rule Up: X -> X describes a map, and Up: X -> X is trivially continuous.
Example:
Suppose we have only two neurons, N = 2. There are two non-trivial choices for the connectivity: w_12 = w_21 = 1 or w_12 = w_21 = -1. In the first case, there are two attracting fixed points, termed [-1, -1] and [1, 1]; all orbits converge to one of these. For the second case, the fixed points are [-1, 1] and [1, -1], and all orbits are attracted to one of these. For any fixed point, swapping all the signs gives another fixed point.
Synchronous updating:
For the first and second cases, though there are fixed points, none can be attracting, i.e. they are not attracting fixed points. Some orbits oscillate forever.
Energy function evaluation:
Hopfield networks have an energy function that diminishes or is unchanged with asynchronous updating.
For a given state X ∈ {-1, 1}^N of the network, and for any set of association weights W_ij with W_ij = W_ji and W_ii = 0, let
  E = -(1/2) Σ_{i,j} W_ij X_i X_j
Here we update X_m to X'_m, denote the new energy by E', and show that E' - E ≤ 0.
Using the above equation,
  E' - E = -(X'_m - X_m) Σ_{i≠m} W_mi X_i
If X_m = -1 and X'_m = 1, then the updating rule implies Σ_i W_mi X_i ≥ 0 and X'_m - X_m = 2, so E' - E ≤ 0.
Similarly, if X_m = 1 and X'_m = -1, then Σ_i W_mi X_i < 0 and X'_m - X_m = -2, so again E' - E ≤ 0.
Thus, E' - E ≤ 0.
Consider the connection weight W_ij = W_ji between two neurons i and j. If W_ij > 0, the updating rule implies:
- If X_j = 1, then the contribution of j in the weighted sum Σ_j W_ij X_j is positive. Thus the value of X_i is pulled by j towards its value X_j = 1.
- If X_j = -1, then W_ij X_j is negative, and X_i is again pulled by j towards its value X_j = -1.
Thus, if W_ij > 0, the value of X_i is pulled by the value of X_j. By symmetry, the value of X_j is also pulled by the value of X_i. If W_ij < 0, then the value of X_i is pushed away by the value of X_j.
It follows that for a particular set of values X_i ∈ {-1, 1}, i = 1, ..., N, the selection of weights taken as W_ij = X_i X_j for i ≠ j (and W_ii = 0) correlates to the Hebbian rule.
Training the network: one pattern (K = 1)
Suppose the vector x = (x_1, ..., x_N) ∈ {-1, 1}^N is a pattern that we would like to store in the Hopfield network. To build a Hopfield network that recognizes x, we need to select the connection weights W_ij accordingly. If we select W_ij = η x_i x_j for 1 ≤ i, j ≤ N (i ≠ j), where η > 0 is the learning rate, then the value of the network will not change under updating, as we illustrate below. We have
  h_i = Σ_j W_ij x_j = η Σ_{j≠i} x_i x_j x_j = η (N - 1) x_i
It implies that the value of x_i, whether 1 or -1, will not change, so that x is a fixed point.
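Putting these pieces together (Hebbian storage W_ij = x_i x_j with W_ii = 0, the asynchronous update x_i <- sgn(Σ_j W_ij x_j) with b_i = 0, and the energy E = -(1/2) Σ W_ij x_i x_j), a small NumPy sketch with η = 1 follows; the function names are ours.

import numpy as np

def store_pattern(x):
    """Hebbian storage of one bipolar pattern: W_ij = x_i * x_j, with W_ii = 0."""
    W = np.outer(x, x).astype(float)
    np.fill_diagonal(W, 0.0)
    return W

def energy(W, x):
    return -0.5 * x @ W @ x

def recall(W, x, steps=10):
    """Asynchronous updating: visit nodes one at a time and apply sgn(h_i)."""
    x = x.copy()
    for _ in range(steps):
        for i in range(len(x)):
            h = W[i] @ x                   # field at node i (bias b_i = 0)
            x[i] = 1 if h >= 0 else -1
    return x

pattern = np.array([1, -1, 1, -1, 1])
W = store_pattern(pattern)
noisy = pattern.copy()
noisy[0] = -noisy[0]                       # flip one bit
print(energy(W, noisy), energy(W, recall(W, noisy)))  # energy does not increase
print(recall(W, noisy))                    # recovers the stored pattern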
Q18. Discuss in brief on BAM.
Answer:
These kinds of neural networks work on the basis of pattern association, which means they can store different patterns, and at the time of giving an output they can produce one of the stored patterns by matching it with the given input pattern. Such associative memory networks store a number of pattern associations. Two forms are distinguished: auto-associative memory, in which the input training vector and the output target vector are the same, and hetero-associative memory, described later.
Auto Associative Memory
Training Algorithm
For training, this network uses the Hebb or Delta learning rule.
Step 1 - Initialize all the weights to zero as w_ij = 0 (i = 1 to n, j = 1 to n).
Step 2 - Perform steps 3-5 for each input vector.
Step 3 - Activate each input unit as follows: x_i = s_i (i = 1 to n).
Step 4 - Activate each output unit as follows: y_j = s_j (j = 1 to n).
Step 5 - Adjust the weights as follows: w_ij(new) = w_ij(old) + x_i y_j.
Testing Algorithm
Step 1 - Set the weights obtained during training for Hebb's rule.
Step 2 - Perform steps 3-5 for each input vector.
Step 3 - Set the activation of the input units equal to that of the input vector.
Step 4 - Calculate the net input to each output unit (j = 1 to n):
  y_inj = Σ x_i w_ij  (summing over i = 1 to n)
Step 5 - Apply the following activation function to calculate the output:
  y_j = f(y_inj) = +1 if y_inj > 0;  -1 if y_inj ≤ 0
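A compact NumPy sketch of the Hebb-rule training and testing steps above for the auto-associative case; the function names and example pattern are ours.

import numpy as np

def train_auto_associative(patterns):
    """Hebb-rule training (Steps 1-5): w_ij(new) = w_ij(old) + x_i * y_j, with y = s."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for s in patterns:
        W += np.outer(s, s)
    return W

def recall_auto(W, x):
    """Testing: y_inj = sum_i x_i w_ij, then threshold at zero."""
    yin = x @ W
    return np.where(yin > 0, 1, -1)

patterns = np.array([[1, 1, -1, -1]])
W = train_auto_associative(patterns)
print(recall_auto(W, np.array([1, 1, -1, -1])))   # recovers the stored vector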
Hetero Associative Memory
Similar to the Auto Associative Memory network, this is also a single-layer neural network. However, in this network the input training vector and the output target vectors are not the same. The weights are determined so that the network stores a set of patterns. A hetero-associative network is static in nature; hence, there would be no non-linear and delay operations.
Architecture
As shown in the following figure, the architecture of the Hetero Associative Memory network has 'n' input training vector components and 'm' output target vector components.
Training Algorithm
work is using the Hebb or Delta learning rule.
For training, this net
= Iton,j=1tomi= ton, = tom
Step 1 - Initialize all the weights to zero as W,
‘Step 2 - Perform steps 3-4 for each input veotor.
Step 3 - Activate each input unit as follows -
ton)xi=si(i= ton)
Step 5 - Adjust the weights as follows -
vwij(new) =wil(old) + xiyjwij(new) =wij(old) + xiyj
Testing Algorithm
Step 1 - Set the weights obtained during training for Hebb's rule.
Step 3 - Set the activation of the input units equal to that of the input vector,
Step 4 - Calculate the net input to each output unit j= 1 tom;
vinj= i= Inwyini= » i= Lil
‘Step 5 - Apply the following activation fuinction to calculate the output
(yinj) {\| || +10 —lifyik >.Oifyink = Ofyinj <0 2
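The same idea for the hetero-associative case, following the training and testing steps above; the example pattern pairs are arbitrary and the names are ours.

import numpy as np

def train_hetero(S, T):
    """Hebb-rule training: w_ij(new) = w_ij(old) + s_i * t_j for each pattern pair."""
    n, m = S.shape[1], T.shape[1]
    W = np.zeros((n, m))
    for s, t in zip(S, T):
        W += np.outer(s, t)
    return W

def recall_hetero(W, x):
    """y_inj = sum_i x_i w_ij; output +1 if positive, 0 if zero, -1 if negative."""
    yin = x @ W
    return np.sign(yin).astype(int)

# Two 4-dimensional bipolar inputs mapped to 2-dimensional bipolar targets.
S = np.array([[ 1,  1, -1, -1],
              [-1, -1,  1,  1]])
T = np.array([[ 1, -1],
              [-1,  1]])
W = train_hetero(S, T)
print(recall_hetero(W, S[0]), recall_hetero(W, S[1]))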