Priyanrr V. Mourya
2023TRIS

Artificial Neural Network Tutorial
1) Define Artificial Neural Network and explain its basic components.

Ans: An Artificial Neural Network (ANN) is a computational model inspired by the way biological neural networks in the human brain work. It is applied in machine learning and artificial intelligence tasks such as pattern recognition, classification, regression and clustering. Its basic components are:
* Neurons (Nodes): Neurons are the fundamental units of a neural network. Each neuron receives one or more inputs, processes them using a weighted sum, and applies an activation function to produce an output.
* Weights: Weights are associated with the connections between neurons. These weights are adjusted during training to minimize the difference between the predicted output and the actual output.
* Activation Function: An activation function is applied to the weighted sum of inputs in a neuron to introduce non-linearity into the network. Common activation functions include sigmoid, tanh and the rectified linear unit (ReLU).
* Layers: Neurons in a neural network are organized into layers. There are three types of layers: the input layer, hidden layers and the output layer.
* Bias: Each neuron typically has an associated bias term. The bias allows a neuron to adjust its output independently of the inputs.
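The following is a minimal Python sketch of a single neuron built from these components; the input values, weights, bias and the choice of sigmoid activation are illustrative assumptions, not values from the notes.

import math

def sigmoid(x):
    # Sigmoid activation: squashes any real number into (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def neuron_output(inputs, weights, bias):
    # Weighted sum of inputs plus bias, passed through the activation function
    weighted_sum = sum(w * x for w, x in zip(weights, inputs)) + bias
    return sigmoid(weighted_sum)

# Example values (arbitrary, for illustration only)
print(neuron_output(inputs=[0.5, 0.1], weights=[0.4, 0.7], bias=0.2))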
2) Explain the structure of a single-layer perceptron and how it makes decisions. Include a discussion on the activation function.

Ans: A single-layer perceptron (SLP) is the simplest form of a neural network, consisting of a single layer of output nodes without any hidden layers. SLPs are used for binary classification problems, where they learn to classify input patterns based on their features. The layer consists of:
* Weights: Each input feature is associated with a weight. The weights represent the strength of the influence of each input feature on the output. During training, the perceptron adjusts these weights based on the input data to learn the optimal decision boundary.
* Summation Function: The weighted sum of the inputs is calculated by multiplying each input feature by its weight and summing the results.
* Activation Function: The weighted sum is then passed through an activation function. The purpose of the activation function is to introduce non-linearity into the model, and its choice is crucial as it determines the output of the perceptron. Common activation functions used in perceptrons are the step function, which produces a binary output, and the sigmoid function, which squashes the output into values between 0 and 1, allowing the perceptron to output probabilities.
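A minimal sketch of the perceptron decision rule using the step activation described above; the weights, bias and input features are illustrative assumptions.

def step(x):
    # Step activation: outputs 1 if the weighted sum crosses the threshold, else 0
    return 1 if x >= 0 else 0

def perceptron_predict(features, weights, bias):
    # Summation function: weighted sum of input features plus bias
    weighted_sum = sum(w * f for w, f in zip(weights, features)) + bias
    # Activation function turns the sum into a binary class decision
    return step(weighted_sum)

# Illustrative binary classification of a 2-feature input
print(perceptron_predict(features=[1.0, 0.5], weights=[0.6, -0.4], bias=-0.1))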
3) Describe the backpropagation algorithm used for training ANNs. How does it minimize the error in predictions?

Ans: Backpropagation is a supervised learning algorithm used for training artificial neural networks (ANNs). It minimizes the error in predictions by adjusting the weights and biases of the network based on the difference between the predicted output and the actual output (ground truth). Here's how the backpropagation algorithm works:
1) Forward Pass: Input data is fed forward through the network to produce the predicted output. For each neuron, the weighted sum of inputs is calculated and passed through an activation function to produce the neuron's output.
2) Backward Pass: The error gradients are calculated using the chain rule of calculus, working backwards from the output to evaluate the contribution of each neuron's output to the overall error.
3) Weight and Bias Update: The gradients calculated during the backward pass are used to update the weights and biases of the network. The weights and biases are adjusted in the direction opposite to the gradient, aiming to reduce the error.
4) Learning Rate: The learning rate, a hyperparameter, is used to control the size of the weight and bias updates. It prevents the algorithm from overshooting the optimal values.
5) Iteration: These steps are repeated for a predefined number of epochs or until the error converges to a satisfactory level. Over time, the network refines its weights and biases, minimizing the error in its predictions.
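A minimal sketch of these steps for a single sigmoid neuron trained by gradient descent; the training example, learning rate of 0.5 and epoch count are assumptions made for illustration, not values from the notes.

import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy data: one training example (assumed for illustration)
x, target = [0.05, 0.10], 0.9
w, b = [0.4, 0.5], 0.6          # initial weights and bias
lr = 0.5                         # learning rate (hyperparameter)

for epoch in range(50):
    # 1) Forward pass: weighted sum + activation
    net = sum(wi * xi for wi, xi in zip(w, x)) + b
    out = sigmoid(net)
    # Error for this sample (squared error)
    error = 0.5 * (target - out) ** 2
    # 2) Backward pass: chain rule gives the gradient for each parameter
    d_out = -(target - out)          # dE/d(out)
    d_net = out * (1 - out)          # d(out)/d(net), sigmoid derivative
    delta = d_out * d_net
    # 3) Update weights and bias in the direction opposite to the gradient
    w = [wi - lr * delta * xi for wi, xi in zip(w, x)]
    b = b - lr * delta

print(out, error)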
4) Discuss the significance of a loss function in ANN training. What are the commonly used loss functions?

Ans: A loss function is a critical component in training an ANN. Its significance lies in measuring how well the network's predictions match the true target values; training aims to minimize this loss. Commonly used loss functions include:
* Cross-Entropy Loss:
Formula: Cross-Entropy = -(1/n) Σ_i Σ_j y_actual(i,j) × log(y_predicted(i,j))
Cross-entropy loss is used for multiclass classification problems. It calculates the loss for each class and sums them up.
* Hinge Loss (SVM Loss):
Formula: Hinge Loss = (1/n) Σ max(0, 1 - y_actual × y_predicted)
Hinge loss is often used in support vector machines (SVMs) and is suitable for binary classification tasks. It encourages correct classifications to have a margin of at least 1.
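A minimal Python sketch of the two formulas above; the example labels and predictions are invented for illustration.

import math

def cross_entropy(y_actual, y_predicted):
    # -(1/n) * sum over samples and classes of y_actual * log(y_predicted)
    n = len(y_actual)
    total = 0.0
    for actual_row, pred_row in zip(y_actual, y_predicted):
        total += sum(a * math.log(p) for a, p in zip(actual_row, pred_row))
    return -total / n

def hinge_loss(y_actual, y_predicted):
    # (1/n) * sum of max(0, 1 - y_actual * y_predicted), labels in {-1, +1}
    n = len(y_actual)
    return sum(max(0.0, 1.0 - a * p) for a, p in zip(y_actual, y_predicted)) / n

# One-hot targets vs. predicted class probabilities (illustrative values)
print(cross_entropy([[1, 0, 0]], [[0.7, 0.2, 0.1]]))
# Labels in {-1, +1} vs. raw scores (illustrative values)
print(hinge_loss([1, -1], [0.8, -0.3]))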
5) Explain the role of activation functions in ANNs. Provide examples of common activation functions and their key characteristics.

Ans: Activation functions play a crucial role in artificial neural networks (ANNs) by introducing non-linearity into the network. This non-linearity allows ANNs to learn complex patterns in the data and make predictions for a wide range of tasks such as classification and regression. Without activation functions, ANNs would behave like simple linear models regardless of the number of hidden layers, rendering them unable to capture complex patterns in the data.
Common Activation Functions:
* Sigmoid: Squashes the output between 0 and 1, making it useful for binary classification problems where the output represents probabilities.
* Linear: Linear activation functions are mainly used in regression problems where the output is a continuous value.
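A minimal sketch of the activation functions mentioned in these notes (sigmoid, tanh, ReLU and linear), with the test inputs chosen only for illustration.

import math

def sigmoid(x):
    # Squashes the output to (0, 1); useful when the output represents a probability
    return 1.0 / (1.0 + math.exp(-x))

def tanh(x):
    # Squashes the output to (-1, 1), zero-centred
    return math.tanh(x)

def relu(x):
    # Rectified linear unit: 0 for negative inputs, identity for positive inputs
    return max(0.0, x)

def linear(x):
    # Identity; mainly used for regression outputs that are continuous values
    return x

for f in (sigmoid, tanh, relu, linear):
    print(f.__name__, f(0.5), f(-0.5))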
6) Discuss strategies to prevent overfitting, including regularization techniques.

Ans:
1) Cross-Validation: Split the dataset into multiple subsets for training and testing. Cross-validation helps assess the model's performance on different data splits, ensuring that the model's generalization is consistent across various subsets of the data.
2) Early Stopping: Monitor the model's performance on a validation dataset during training. If the validation performance starts to degrade while the training performance continues to improve, stop the training process early. This prevents the model from overfitting to the training data.
3) Regularization Techniques:
* L1 and L2 Regularization (Weight Decay): Add a penalty term to the loss function based on the magnitudes of the weights (L1 uses the absolute values, L2 uses the squared values).
* Dropout: Dropout randomly deactivates a fraction of neurons during training, forcing the network to learn more robust features. It prevents reliance on specific neurons and encourages the network to learn multiple representations of the data.
* Batch Normalization: Normalizing the inputs of each layer can mitigate overfitting by reducing internal covariate shift.
* Early Dropout: Apply dropout to the input layer and possibly the hidden layers during the early epochs of training. This helps prevent the network from memorizing the training data too early in the learning process.
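A minimal Keras sketch combining several of these strategies (L2 weight decay, dropout and early stopping). The layer sizes, penalty strength, patience and the randomly generated placeholder data are all assumptions made for illustration.

import numpy as np
import tensorflow as tf

# Placeholder data (assumed shapes, for illustration only)
x_train, y_train = np.random.rand(200, 20), np.random.randint(0, 2, 200)
x_val, y_val = np.random.rand(50, 20), np.random.randint(0, 2, 50)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    # L2 regularization (weight decay) penalises large weights
    tf.keras.layers.Dense(64, activation='relu',
                          kernel_regularizer=tf.keras.regularizers.l2(0.01)),
    # Dropout randomly deactivates a fraction of neurons during training
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Early stopping halts training when validation loss stops improving
early_stop = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=5,
                                              restore_best_weights=True)
model.fit(x_train, y_train, validation_data=(x_val, y_val),
          epochs=100, callbacks=[early_stop], verbose=0)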
7) Discuss how ANNs can be applied to image recognition tasks, including a brief overview of the steps for image processing.

Ans: Neural networks have been highly successful in image recognition tasks, demonstrating state-of-the-art performance in this domain.

Applications of ANNs in image recognition:
1) Convolutional Neural Networks (CNNs): CNNs are a specialized type of neural network designed for image-related tasks. They consist of convolutional layers that automatically learn hierarchical representations of features in an image.
2) Image Preprocessing:
* Normalization: Scale pixel values to a standard range to ensure numerical stability during training.
* Resizing: Standardize image sizes for consistency in the network.
* Data Augmentation: Apply transformations to artificially increase the diversity of the dataset.
3) Convolutional Layers: These layers use filters to convolve over the input image, capturing local patterns and features.
4) Activation Functions: Non-linear activation functions introduce non-linearity into the model, allowing it to learn complex relationships in the data.
5) Image Segmentation: ANNs can be used for image segmentation, which involves dividing an image into meaningful segments or regions.
6) Image Generation: Generative models such as Generative Adversarial Networks (GANs) can generate realistic images.
7) Face Recognition: Deep neural networks are widely used for face recognition, allowing systems to identify individuals in photos and videos.
Steps in image processing for recognition tasks:
1) Image Acquisition: Obtain the image from its source.
2) Preprocessing: The image may be cropped to the face and converted to grayscale to reduce color information.
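A minimal Keras sketch of a small CNN covering the preprocessing (pixel normalization) and convolutional layers described above. The 28x28 grayscale input size, 10 classes, filter counts and the randomly generated placeholder data are assumptions for illustration.

import numpy as np
import tensorflow as tf

# Placeholder grayscale images and labels (assumed shapes, for illustration)
images = np.random.randint(0, 256, size=(100, 28, 28, 1)).astype('float32')
labels = np.random.randint(0, 10, size=(100,))

# Preprocessing: normalize pixel values to [0, 1] for numerical stability
images = images / 255.0

model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),
    # Convolutional layers: filters convolve over the image, capturing local patterns
    tf.keras.layers.Conv2D(16, kernel_size=3, activation='relu'),
    tf.keras.layers.MaxPooling2D(pool_size=2),
    tf.keras.layers.Conv2D(32, kernel_size=3, activation='relu'),
    tf.keras.layers.Flatten(),
    # Dense layer maps the learned features to class scores
    tf.keras.layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(images, labels, epochs=3, verbose=0)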
8) Explain the structure and unique properties of RNNs. How do they differ from traditional feedforward neural networks?

Ans: RNNs (Recurrent Neural Networks) are a type of neural network designed to handle sequential data by introducing connections between neurons that form directed cycles. Unlike traditional feedforward neural networks, RNNs have feedback loops, allowing them to maintain a hidden state, or memory, of previous inputs. The basic structure of an RNN includes:
* Recurrent Connections: Neurons in an RNN have connections that loop back on themselves, allowing information to persist across different time steps.
* Hidden State (Memory): RNNs maintain a hidden state that evolves with each new input and the previous hidden state.
* Input, Output and Hidden Layers: Similar to feedforward networks, RNNs consist of input, output and hidden layers.
Properties of RNNs:
* Temporal Dependencies: RNNs are well suited to tasks where the order and timing of inputs matter. The ability to maintain a hidden state allows them to capture temporal dependencies in sequential data, making them effective for tasks such as time series forecasting, natural language processing and speech recognition.
* Parameter Sharing: The same set of weights and biases is applied at each time step, allowing the network to learn and generalize patterns across different positions in the sequence.
* Variable-Length Input: RNNs can handle input sequences of variable length, adapting to dynamic and changing data.
* Flexible Architecture: RNNs support several configurations, including one-to-one, one-to-many, many-to-one and many-to-many.

Differences from traditional feedforward networks:
* Memory: RNNs have memory due to their hidden state, retaining information about previous inputs and allowing them to process sequential data. Traditional feedforward networks lack this explicit memory.
* Parameter Sharing over Time: In an RNN, the same set of weights and biases is applied at each time step, so the learned parameters are shared across the whole sequence.
* Handling Sequential Data: RNNs are designed specifically for handling sequential data and capturing temporal dependencies.
* Variable Input Length: RNNs can handle inputs of varying lengths, making them adaptable to tasks involving sequences of different lengths.
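A minimal sketch of the recurrent update described above: a hidden state computed from the previous hidden state and the current input, with the same weights reused at every time step. The sizes, random weights and input sequence are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
input_size, hidden_size = 3, 4

# The same parameters are shared across every time step (parameter sharing)
W_xh = rng.normal(size=(hidden_size, input_size))   # input -> hidden
W_hh = rng.normal(size=(hidden_size, hidden_size))  # hidden -> hidden (recurrent)
b_h = np.zeros(hidden_size)

def rnn_step(x_t, h_prev):
    # New hidden state depends on the current input and the previous hidden state
    return np.tanh(W_xh @ x_t + W_hh @ h_prev + b_h)

# A variable-length input sequence (illustrative data)
sequence = [rng.normal(size=input_size) for _ in range(5)]
h = np.zeros(hidden_size)   # initial hidden state (memory)
for x_t in sequence:
    h = rnn_step(x_t, h)
print(h)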
9) Provide examples of popular neural network simulators and their key features and applications.

Ans:
* TensorFlow: An open-source machine learning framework developed by Google.
* PyTorch: An open-source deep learning framework.
* Keras: A high-level neural network API.
10) Numerical example:
Network structure:
* Input layer: 2 neurons (x1, x2)
* Hidden layer: 2 neurons (h1, h2) with a sigmoid activation function
* Output layer: 1 neuron (o1) with a sigmoid activation function
* Loss function: Mean Squared Error

Forward pass, net input to the output neuron:
net_o1 = (0.593269 × 0.4) + (0.596884 × 0.5) + 0.6
where 0.593269 and 0.596884 are the hidden-layer outputs, 0.4 and 0.5 the hidden-to-output weights, and 0.6 the output bias.
Update the weights and biases using the rule: new value = old value - (η × gradient), with learning rate η = 0.5. For example:
0.6 - (0.5 × 0.138595) ≈ 0.5307
0.35 - (0.5 × 0.01336) ≈ 0.34332
b = b - (η × δ_h) = 0.35 - (0.5 × 0.01667) = 0.341665
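A short Python check of the arithmetic in this worked example, using the values recoverable from the notes; reading 0.593269 and 0.596884 as hidden-layer outputs, 0.4 and 0.5 as hidden-to-output weights and 0.6 as the output bias is an assumption.

# Forward pass to the output neuron (values as given above)
net_o1 = (0.593269 * 0.4) + (0.596884 * 0.5) + 0.6
print(net_o1)                  # ~1.13575

# Generic update rule with learning rate eta = 0.5
eta = 0.5
print(0.6 - eta * 0.138595)    # ~0.5307
print(0.35 - eta * 0.01336)    # ~0.34332
print(0.35 - eta * 0.01667)    # 0.341665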