Neural Networks and Deep Learning

LIST OF IMPORTANT QUESTIONS (WITH KEY)

UNIT - I
Q1. Give the History of ANN. (Unit-I, Q.No.1)
Q2. Compare Biological and Artificial neurons. (Unit-I, Q.No.3)
Q3. Explain the difference between the human brain and computers in terms of how information is processed. (Unit-I, Q.No.5)
Q4. Explain in detail the basic models of ANN. (Unit-I, Q.No.6)
Q5. Discuss in brief on Adaptive Linear Neuron. (Unit-I, Q.No.9)
Q6. Discuss in brief on Multiple Adaptive Linear Neuron.
Q7. Evaluate generalized Delta Learning Rule. (Unit-I, Q.No.13)
Q8. Explain in detail Reinforcement Learning. (Unit-I, Q.No.15)
Q9. Explain in detail on Associative memory model. (Unit-I, Q.No.17)
Q10. Explain in detail on Hopfield network as a Dynamical system. (Unit-I, Q.No.18)
Q11. Discuss in brief on BAM.

UNIT - II
Q1. Give a brief introduction on Unsupervised learning networks. (Unit-II, Q.No.1)
Q2. Give a brief note on fixed weight competitive nets. (Unit-II, Q.No.2)
Q3. Explain in brief on Maxnet. (Unit-II, Q.No.3)
Q4. Explain in detail on Kohonen self-organizing feature map. (Unit-II, Q.No.5)
Q5. Explain the unsupervised learning technique Restricted Boltzmann machine. (Unit-II, Q.No.7)
Q6. Explain in detail on Brain-State-in-a-Box network. (Unit-II, Q.No.8)
Q7. Explain in detail on LVQ architecture. (Unit-II, Q.No.10)
Q8. Explain in detail on Full counterpropagation network. (Unit-II, Q.No.12)
Q9. Explain briefly Forward-only counterpropagation network. (Unit-II, Q.No.13)
Q10. Explain the applications of ART. (Unit-II, Q.No.16)
Q11. Explain the types of networks. (Unit-II, Q.No.17)

UNIT - III
Q1. What are the types of deep learning networks? Explain. (Unit-III, Q.No.2)
Q2. Explain in brief on Deep learning.
Q3. Explain the Feed Forward Process in a Deep Neural Network.
Q4. Explain in detail the introduction of gradient based optimization.
Q5. Give a brief note on Batch gradient descent.
Q6. Discuss in brief on Gradient based learning.
Q7. Explain the difference between Gradient descent and the normal equation. (Unit-III, Q.No.9)
Q8. Explain gradient descent in linear regression. (Unit-III, Q.No.10)
Q9. Explain the Architecture and Learning process in a neural network. (Unit-III, Q.No.12)
Q10. Discuss in detail on Back propagation process in a neural network. (Unit-III, Q.No.13)
Q11. Explain Back propagation through time (RNN). (Unit-III, Q.No.15)

UNIT - IV
Q1. What is the difference between L2 and L1 parameter norm penalties?
Q2. Explain in detail on Penalties as constrained optimization. (Unit-IV, Q.No.2)
Q3. Discuss in brief on Dataset augmentation. (Unit-IV, Q.No.4)
Q4. Explain in brief on Noise robustness.
Q5. Give a brief introduction on Semi-supervised learning.
Q6. Describe in brief on Multitask learning. (Unit-IV, Q.No.10)
Q7. What is meant by exploiting the validation data?
Q8. Explain sparse representation in detail. (Unit-IV, Q.No.15)
Q9. Explain Bagging in detail. (Unit-IV, Q.No.18)
Q10. Explain Adversarial Training in detail. (Unit-IV, Q.No.20)
Q11. Explain the challenges and drawbacks of adversarial training. (Unit-IV, Q.No.21)
Q12. Explain in brief on Tangent Distance. (Unit-IV, Q.No.22)

UNIT - V
Q1. Explain the gradient descent algorithm briefly. (Unit-V, Q.No.2)
Q2. Explain the types of gradient descent and the challenges faced by it. (Unit-V, Q.No.3)
Q3. Explain Mini Batch Stochastic Gradient Descent (MB-SGD). (Unit-V, Q.No.5)
Q4. Discuss in brief on the NAG algorithm. (Unit-V, Q.No.6)
Q5. Discuss in detail on AdaDelta. (Unit-V, Q.No.8)
Q6. Explain Adaptive Moment Estimation in detail. (Unit-V, Q.No.10)
Q7. What are Approximate Second order methods? Explain. (Unit-V, Q.No.13)
Q8. Discuss in detail on Optimization strategies and meta-algorithms. (Unit-V, Q.No.14)
Q9. Discuss on Image classification and Image classification with localization in detail.
Q10. Discuss on Object detection and Object segmentation in detail.
Q11. How does natural language processing work?
Q12. List out the benefits and challenges of natural language processing.

UNIT - I
ARTIFICIAL NEURAL NETWORKS

SHORT QUESTIONS

Q1. What is BAM in a neural network?
Answer:
Bidirectional associative memory (BAM) is a type of recurrent neural network. BAM was introduced by Bart Kosko in 1988. There are two types of associative memory: auto-associative and hetero-associative.
[A worked BAM weight-matrix example appeared here; its entries are not recoverable from the source.]

Q2. What are bidirectional recurrent neural networks?
Answer:
Bidirectional recurrent neural networks (BRNN) connect two hidden layers of opposite directions to the same output. With this form of generative deep learning, the output layer can get information from past (backward) and future (forward) states simultaneously.

Q3. How does an ANN work?
Answer:
An artificial neural network is an attempt to simulate the network of neurons that make up a human brain, so that the computer is able to learn and make decisions in a humanlike manner. ANNs are created by programming regular computers to behave as though they are interconnected brain cells.
ANNs are a type of computer program that can be "taught" to emulate relationships in sets of data. Once the ANN has been "trained", it can be used to predict the outcome of another new set of input data, e.g. another composite system or a different stress environment.

Q5. What are unsupervised neural networks?
Answer:
Unsupervised learning means you are only exposing the machine to input data; there is no corresponding output data to teach the system the answers it should be arriving at. With unsupervised learning, you train the machine with unlabeled data that offers it no hints about what it is seeing.

Q6. What is an unsupervised learning example?
Answer:
The goal of unsupervised learning is to find the underlying structure of a dataset, group that data according to similarities, and represent that dataset in a compressed format. Example: suppose the unsupervised learning algorithm is given an input dataset containing images of different types of cats and dogs (see the clustering sketch after Q11).

Q7. What are the 3 major categories of neural networks?
Answer:
The three major categories of neural networks are:
1. Artificial Neural Networks (ANN)
2. Convolutional Neural Networks (CNN)
3. Recurrent Neural Networks (RNN)

Q8. What are some use cases for unsupervised learning?
Answer:
Some use cases for unsupervised learning, more specifically clustering, include customer segmentation, that is, understanding different groups around which to build strategies, and genetics, for example clustering DNA patterns to analyze evolutionary biology.

Q9. What are different types of unsupervised learning?
Answer:
Unsupervised machine learning helps you to find all kinds of unknown patterns in data. Clustering and Association are two types of unsupervised learning. Four types of clustering methods are: 1) Exclusive, 2) Agglomerative, 3) Overlapping, and 4) Probabilistic.

Q10. How do neural networks improvise and learn?
Answer:
Neural networks generally perform supervised learning tasks, building knowledge from data sets where the right answer is provided in advance. The networks then learn by tuning themselves to find the right answer on their own, increasing the accuracy of their predictions.

Q11. What is the difference between ANN and DNN?
Answer:
Technically, an artificial neural network (ANN) that has a lot of layers is a Deep Neural Network (DNN).
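To make the clustering idea in Q6, Q8, and Q9 concrete, here is a minimal sketch of one clustering method (k-means). It is illustrative only and not from the original notes; the synthetic data array and the kmeans helper are hypothetical stand-ins for unlabeled feature vectors, such as image features of cats and dogs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy unlabeled data: two loose groups of 2-D feature vectors
# (hypothetical stand-ins for, say, cat and dog image features).
data = np.vstack([rng.normal(0.0, 0.5, (20, 2)),
                  rng.normal(3.0, 0.5, (20, 2))])

def kmeans(points, k=2, iters=20):
    """Plain k-means: assign points to the nearest centroid, then recompute."""
    centroids = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # Distance of every point to every centroid -> index of the nearest.
        labels = np.argmin(np.linalg.norm(points[:, None] - centroids[None],
                                          axis=2), axis=1)
        # Move each centroid to the mean of the points assigned to it.
        centroids = np.array([points[labels == j].mean(axis=0)
                              if np.any(labels == j) else centroids[j]
                              for j in range(k)])
    return labels, centroids

labels, centroids = kmeans(data)
print("cluster sizes:", np.bincount(labels, minlength=2))
```

No target labels are used anywhere; the grouping emerges purely from similarity in the input data, which is exactly the point made in Q5 and Q6.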
Q12. What is back propagation in a neural network?
Answer:
The backpropagation algorithm works by computing the gradient of the loss function with respect to each weight by the chain rule, one layer at a time, iterating backward from the last layer. Unlike a naive direct computation, it computes the gradient efficiently; note, however, that it only computes the gradient and does not define how the gradient is used.

Q13. What are the main steps in the back propagation algorithm?
Answer:
Below are the steps involved in backpropagation (see the code sketch after Q15):
Step 1: Forward propagation.
Step 2: Backward propagation.
Step 3: Putting all the values together and calculating the updated weight value.

Q14. What are the disadvantages of a back propagation network?
Answer:
Disadvantages of the back propagation algorithm:
- It relies on input to perform on a specific problem.
- It is sensitive to complex/noisy data.
- It needs the derivatives of the activation functions at network design time.

Q15. What are the features of the back propagation algorithm?
Answer:
The backpropagation algorithm works in the same way through a deep neural network, which is just a normal neural network whatever the number of layers, or through a network that uses functions not typically found in an artificial neural network; it generalizes the Widrow-Hoff learning rule. It uses supervised learning, which means that the algorithm is provided with examples of the inputs and outputs that the network should compute, and then the error is calculated.
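The three steps in Q13 can be sketched on a tiny two-layer network. This is a minimal illustration, not code from the notes; the network size, the XOR-style data, the learning rate, and the epoch count are all assumed values chosen for the demo.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Four 2-D inputs with XOR targets: a task that needs a hidden layer.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(2, 3)), np.zeros(3)   # input -> 3 hidden units
W2, b2 = rng.normal(size=(3, 1)), np.zeros(1)   # hidden -> 1 output unit
lr = 0.5

for epoch in range(10000):
    # Step 1: forward propagation.
    h = sigmoid(X @ W1 + b1)
    y = sigmoid(h @ W2 + b2)

    # Step 2: backward propagation, applying the chain rule layer by layer.
    d_y = (y - t) * y * (1 - y)        # error signal at the output layer
    d_h = (d_y @ W2.T) * h * (1 - h)   # error signal at the hidden layer

    # Step 3: put the values together and update the weights.
    W2 -= lr * h.T @ d_y; b2 -= lr * d_y.sum(axis=0)
    W1 -= lr * X.T @ d_h; b1 -= lr * d_h.sum(axis=0)

print(np.round(y.ravel(), 2))   # typically approaches [0, 1, 1, 0]
```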
ESSAY QUESTIONS

Q1. Give the History of ANN.
Answer:
History of Artificial Neural Networks:
The history of neural networking arguably began in the late 1800s with scientific endeavors to study the activity of the human brain. In 1890, William James published the first work about brain activity patterns. In 1943, McCulloch and Pitts created a model of the neuron that is still used today in artificial neural networks. This model is segmented in two parts:
- A summation over weighted inputs.
- An output function of the sum.

Artificial Neural Network (ANN):
In 1949, Donald Hebb published "The Organization of Behavior", which outlined a law for synaptic neuron learning. This law, later known as Hebbian Learning in honor of Donald Hebb, is one of the most straightforward and simple learning rules for artificial neural networks. In 1951, Marvin Minsky made the first Artificial Neural Network (ANN) while working at Princeton. In 1958, "The Computer and the Brain" was published, a year after John von Neumann's death. In that book, von Neumann proposed numerous extreme changes to how analysts had been modeling the brain.

Perceptron:
The perceptron was created in 1958 at Cornell University by Frank Rosenblatt. The perceptron was an endeavor to use neural network procedures for character recognition. It was a linear system and was valuable for solving issues where the input classes were linearly separable in the input space. In 1960, Rosenblatt published the book Principles of Neurodynamics, containing a bit of his research and ideas about modeling the brain. Despite the early accomplishments of the perceptron and of artificial neural network research, there were many individuals who felt that there was constrained promise in these methods. Among these were Marvin Minsky and Seymour Papert, whose 1969 book Perceptrons was used to discredit ANN research and focus attention on the apparent constraints of ANN work. One of the limitations that Minsky and Papert highlighted was the fact that the perceptron was not capable of distinguishing patterns that are not linearly separable in input space with a linear classification problem.

Regardless of the disappointment of the perceptron at dealing with non-linearly separable data, this was not an inherent failure of the technology but a matter of scale: a multilayer perceptron is capable of tackling non-linear separation problems. The Perceptrons book nevertheless ushered in the "quiet years", a period in which ANN research was at a minimum of interest.

The backpropagation algorithm, initially found by Werbos in 1974, was rediscovered in 1986 with the book "Learning Internal Representation by Error Propagation" by Rumelhart, Hinton, and Williams. Backpropagation is a type of gradient descent algorithm used with artificial neural networks for minimization and curve-fitting. In 1987, the IEEE annual international ANN conference was begun for ANN scientists. In 1987, the International Neural Network Society (INNS) was formed, along with the INNS Neural Networking journal in 1988.

Q2. Give a brief introduction on ANN.
Answer:
An Artificial Neural Network (ANN) is an information processing paradigm that is inspired by the brain. ANNs, like people, learn by examples. An ANN is configured for a specific application, such as pattern recognition or data classification, through a learning process. Learning largely involves adjustments to the synaptic connections that exist between the neurons.
The model of an artificial neural network can be specified by three entities:
- Interconnections
- Activation functions
- Learning rules

Interconnections:
Interconnection can be defined as the way processing elements (neurons) in an ANN are connected to each other. Hence, the arrangement of these processing elements and the geometry of their interconnections are very essential in an ANN.
These arrangements always have two layers that are common to all network architectures: the input layer and the output layer, where the input layer buffers the input signal and the output layer generates the output of the network. The third layer is the hidden layer, whose neurons are kept neither in the input layer nor in the output layer. These neurons are hidden from the people who are interfacing with the system and act as a black box to them. On increasing the hidden layers with neurons, the system's computational and processing power can be increased, but the training of the system gets more complex at the same time.

There exist five basic types of neuron connection architectures (a forward-pass sketch for the two feed-forward types is given after this list):
1. Single-layer feed-forward network
2. Multilayer feed-forward network
3. Single node with its own feedback
4. Single-layer recurrent network
5. Multilayer recurrent network

1. Single-layer feed-forward network:
In this type of network we have only two layers, the input layer and the output layer, but the input layer does not count because no computation is performed in this layer. The output layer is formed when different weights are applied on the input nodes and the cumulative effect per node is taken. After this, the neurons collectively give the output layer to compute the output signals.

2. Multilayer feed-forward network:
This network also has a hidden layer that is internal to the network and has no direct contact with the external layer. The existence of one or more hidden layers enables the network to be computationally stronger. It is a feed-forward network because of the information flow through the input function and the intermediate computations used to determine the output. There are no feedback connections in which outputs of the model are fed back into itself.

3. Single node with its own feedback:
When outputs can be directed back as inputs to the same layer or preceding layer nodes, the result is a feedback network. Recurrent networks are feedback networks with a closed loop. The figure for this case shows a single recurrent network having a single neuron with feedback to itself.

4. Single-layer recurrent network:
This is a single-layer network with a feedback connection in which a processing element's output can be directed back to itself or to other processing elements, or to both. A recurrent neural network is a class of artificial neural networks where connections between nodes form a directed graph along a sequence. This allows it to exhibit dynamic temporal behavior for a time sequence. Recurrent neural networks use their internal state (memory) to process sequences of inputs.

5. Multilayer recurrent network:
In this type of network, a processing element's output can be directed to a processing element in the same layer and in the preceding layer, forming a multilayer recurrent network. Such networks perform the same task for every element of a sequence, with the output being dependent on the previous computations. Inputs are not needed at each time step. The main feature of a recurrent neural network is its hidden state, which captures some information about a sequence.
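As referenced above, here is a minimal forward-pass sketch contrasting the two feed-forward architectures. It is illustrative only; the layer sizes, the tanh activation, and the random weights are assumed choices, not values from the notes.

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.random(3)                        # one input pattern with 3 features

# 1. Single-layer feed-forward net: inputs connect directly to the
#    output neurons; the input layer itself performs no computation.
W_out = rng.normal(size=(3, 2))          # 3 inputs -> 2 output neurons
y_single = np.tanh(x @ W_out)

# 2. Multilayer feed-forward net: a hidden layer sits between input and
#    output; information flows strictly forward (no feedback connections).
W_hidden = rng.normal(size=(3, 4))       # 3 inputs -> 4 hidden neurons
W_out2 = rng.normal(size=(4, 2))         # 4 hidden -> 2 output neurons
y_multi = np.tanh(np.tanh(x @ W_hidden) @ W_out2)

print("single-layer output:", y_single)
print("multilayer output:  ", y_multi)
```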
Q3. Compare Biological and Artificial neurons.
Answer:
Difference between Biological Neurons and Artificial Neurons:

Biological Neurons:
- Major components: axons, dendrites, synapses.
- Information from other neurons, in the form of electrical impulses, enters the dendrites at connection points called synapses. The information flows from the dendrites to the cell body, where it is processed. The output signal, a train of impulses, is then sent down the axon to the synapses of other neurons.
- A synapse is able to increase or decrease the strength of the connection. This is where information is stored.
- Approximately 10^11 neurons.

Artificial Neurons:
- Major components: nodes, inputs, outputs, weights, bias.
- The arrangement and connections of the neurons make up the network, which has three layers. The first layer is called the input layer and is the only layer exposed to external signals. The input layer transmits signals to the neurons in the next layer, which is called a hidden layer. The hidden layer extracts relevant features or patterns from the received signals. Those features or patterns that are considered important are then directed to the output layer, which is the final layer of the network.
- The artificial signals can be changed by weights in a manner similar to the physical changes that occur in the synapses.
- 10^2 to 10^4 neurons with current technology.

Q4. Explain the characteristics and applications of ANN.
Answer:
Characteristics of Artificial Neural Networks:
- It is a neurally implemented mathematical model.
- It contains a huge number of interconnected processing elements called neurons to do all the operations.
- The information stored in the neurons is basically the weighted linkage of neurons (see the single-neuron sketch below).
- The input signals arrive at the processing elements through connections and connecting weights.
- It has the ability to learn, recall, and generalize from the given data by suitable assignment and adjustment of weights.
- The collective behavior of the neurons describes its computational power, and no single neuron carries specific information.
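The "weighted linkage" idea above can be reduced to a few lines of code. This is an illustrative sketch, not from the notes; the input, weight, and bias values are arbitrary, and the step-style output function is one simple choice among many activation functions.

```python
import numpy as np

def artificial_neuron(inputs, weights, bias):
    # Summation over weighted inputs (the role played by dendrites and
    # synapses in the biological neuron of Q3) ...
    net = np.dot(inputs, weights) + bias
    # ... followed by an output function of the sum (the firing decision).
    return 1 if net > 0 else 0

x = np.array([0.5, 0.3])          # incoming signals
w = np.array([0.8, -0.2])         # connection weights (synapse strengths)
print(artificial_neuron(x, w, bias=0.1))   # -> 1
```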
Applications of Neural Networks:
1. Every new technology needs assistance from the previous one, i.e., data from it, and these data and their pros and cons should be studied correctly. All of these things are possible only through the help of a neural network.
2. A neural network is suitable for research on animal behavior, predator/prey relationships, and population cycles.
3. It would be easier to do proper valuation of property, buildings, automobiles, machinery, etc. with the help of a neural network.
4. A neural network can be used in betting on horse races, sporting events, and most importantly in the stock market.
5. It can be used to predict the correct judgment for any crime by using a large set of crime details as input and the resulting sentences as output.
6. By analyzing data and determining which of the data has any fault (files diverging from peers), called data mining, cleaning and validation can be achieved through a neural network.
7. A neural network can be used to predict targets with the help of the echo patterns we get from sonar, radar, seismic and magnetic instruments.
8. It can be used efficiently in employee hiring, so that any company can hire the right employee depending upon the skills the employee has and what its productivity should be in the future.
9. It has a large application in medical research.
10. (The text of this item is illegible in the source.)

Q5. Explain the difference between the human brain and computers in terms of how information is processed.
Answer:
Difference between the human brain (biological neural network) and computers (artificial neural network) in terms of how information is processed:

Human Brain (Biological Neural Network):
- The human brain works asynchronously.
- Biological neurons compute slowly (several milliseconds per computation).
- The brain represents information in a distributed way, because neurons are unreliable and could die at any time.
- Our brain changes its connectivity over time to represent new information and the requirements imposed on us.
- Biological neural networks have complicated topologies.
- Researchers are still to find out how the brain actually learns.

Computers (Artificial Neural Network):
- Artificial neurons compute fast (less than one nanosecond per computation).
- In computer programs, every bit has to function as intended, otherwise the programs would crash.
- The connectivity between the electronic components in a computer never changes unless we replace its components.
- Artificial neural networks are built with regular, predefined topologies.

1.2 BASIC MODELS OF ANN

Q6. Explain in detail the basic models of ANN.
Answer:
The basic models of ANN are specified by the three basic entities, namely:
1. The model's synaptic interconnections.
2. The training or learning rules adopted for updating and adjusting the connection weights.
3. Their activation functions.

1. Connections:
The neurons should be visualized for their arrangements in layers. An ANN consists of a set of highly interconnected processing elements (neurons) such that each processing element's output is found to be connected through weights to other processing elements or to itself. Hence, the arrangement of these processing elements and the geometry of their interconnections are essential for an ANN. The point where the connection originates and terminates should be noted, and the function of each processing element in an ANN should be specified. These arrangements, along with the connections, are called the network architecture.
There are five basic types of neuron connection architectures:
1. Single-layer feed-forward network
2. Multilayer feed-forward network
3. Single node with its own feedback
4. Single-layer recurrent network
5. Multilayer recurrent network

Single-layer feed-forward network:
[Figure: single-layer feed-forward network, with the input layer connected directly to the output layer.]
A layer is formed by taking a processing element and combining it with other processing elements. The input and output are linked with each other: the inputs are connected to the processing nodes with various weights, resulting in a series of outputs, one per node. Thus, a single-layer feed-forward network is formed.
Multilayer feed-forward network:
A multilayer feed-forward network is formed by the interconnection of several layers. The input layer receives the input and buffers the input signal; this layer has no function except buffering the input signal. The output layer generates the output of the network. Any layer that is formed between the input and output layers is called a hidden layer.

Single node with its own feedback:
[Figure: a single node with its own feedback connection.]
If the feedback of the output of the processing elements is directed back as an input to the processing elements in the same layer, then it is called lateral feedback.

Competitive nets:
The competitive interconnections have fixed weights of -ε. This net is called Maxnet, and it will be discussed in the unsupervised learning network category.

Single-layer recurrent network:
[Figure: single-layer recurrent network.]
Recurrent networks are feedback networks with a closed loop.

Multilayer recurrent network:
In this type of network, a processing element's output can be directed back to nodes in the same layer and in preceding layers, forming a multilayer recurrent network. Apart from these architectures, there also exists the lateral inhibition structure.

2. Learning:
Learning in an ANN may be classified as supervised, unsupervised, or reinforcement learning.
[Figure: (a) Supervised learning, in which an error signal generator compares the actual output Y with the desired output D and feeds the error (D - Y) back to the network; (b) Unsupervised learning, in which the ANN receives only the inputs X; (c) Reinforcement learning, in which the net receives the inputs and a reinforcement signal based on the actual output.]

3. Activation functions:
A basic example is the binary step function, a function that has two possible outputs. This function returns 1 if the input is positive, and 0 for any negative input.

Q8. Explain briefly the training algorithm used in the perceptron.
Answer:
Training Algorithm:
The perceptron network can be trained for a single output unit as well as for multiple output units. A code sketch of the single-output case follows this answer.

Training Algorithm for Single Output Unit:
Step 1: Initialize the following to start the training:
- Weights
- Bias
- Learning rate α
For easy calculation and simplicity, the weights and bias must be set equal to 0 and the learning rate must be set equal to 1.
Step 2: Continue steps 3-8 while the stopping condition is not true.
Step 3: Continue steps 4-6 for every training vector x.
Step 4: Activate each input unit as follows:
x_i = s_i (i = 1 to n)
Step 5: Obtain the net input with the following relation:
y_in = b + Σ_{i=1..n} x_i w_i
Here 'b' is the bias and 'n' is the total number of input neurons.
Step 6: Apply the following activation function to obtain the final output:
f(y_in) = 1 if y_in > θ; 0 if -θ ≤ y_in ≤ θ; -1 if y_in < -θ
Step 7: Adjust the weight and bias as follows:
Case 1: if y ≠ t, then
w_i(new) = w_i(old) + α·t·x_i
b(new) = b(old) + α·t
Case 2: if y = t, then
w_i(new) = w_i(old)
b(new) = b(old)
Here 'y' is the actual output and 't' is the desired/target output.
Step 8: Test for the stopping condition, which will happen when there is no change in the weights.
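Here is a compact sketch of the single-output training algorithm above. It is an assumed implementation for illustration; the threshold θ = 0.2 and the bipolar AND training set are example choices not fixed by the notes, while α = 1 and zero starting weights follow Step 1.

```python
import numpy as np

def f(y_in, theta=0.2):
    # Step 6 activation: 1 if y_in > theta, 0 in the dead zone, else -1.
    return 1 if y_in > theta else (-1 if y_in < -theta else 0)

def train_perceptron(X, T, alpha=1.0, max_epochs=100, theta=0.2):
    w = np.zeros(X.shape[1])                 # Step 1: weights = 0
    b = 0.0                                  # Step 1: bias = 0
    for _ in range(max_epochs):              # Step 2: outer loop
        changed = False
        for x, t in zip(X, T):               # Step 3: every training vector
            y_in = b + np.dot(x, w)          # Steps 4-5: net input
            if f(y_in, theta) != t:          # Step 7, Case 1: y != t
                w = w + alpha * t * x        # w(new) = w(old) + alpha*t*x
                b = b + alpha * t            # b(new) = b(old) + alpha*t
                changed = True
        if not changed:                      # Step 8: stop when no weight
            break                            # changed during an epoch
    return w, b

# Bipolar AND function as the training set.
X = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]], dtype=float)
T = np.array([1, -1, -1, -1])
print(train_perceptron(X, T))   # converges to roughly w=[1, 1], b=-1
```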
Q9. Discuss in brief on Adaptive Linear Neuron.
Answer:
Adaptive Linear Neuron (Adaline):
Adaline, which stands for Adaptive Linear Neuron, is a network having a single linear unit. It was developed by Widrow and Hoff in 1960. Some important points about Adaline are as follows:
- It uses a bipolar activation function.
- It uses the delta rule for training, to minimize the Mean Squared Error (MSE) between the actual output and the desired/target output (a delta-rule sketch follows this answer).
- The weights and the bias are adjustable.

Architecture:
The basic structure of Adaline is similar to a perceptron, with an extra feedback loop with whose help the actual output is compared with the desired/target output. After comparison on the basis of the training algorithm, the weights and bias are updated.
[Figure: Adaline architecture, with adjustable weights and bias feeding a summation unit, and an error feedback loop comparing the actual output with the target.]

Training Algorithm:
Step 1: Initialize the following to start the training:
- Weights
- Bias
- Learning rate α
The weights and the bias are set to some small values, and a suitable learning rate is chosen.
Step 2: Continue steps 3-8 while the stopping condition is not true.
Step 3: Continue steps 4-6 for every bipolar training pair s:t.
Step 4: Activate each input unit as follows:
x_i = s_i (i = 1 to n)
Step 5: Obtain the net input with the following relation:
y_in = b + Σ_{i=1..n} x_i w_i
Here 'b' is the bias and 'n' is the total number of input neurons.
Step 6: Apply the following activation function to obtain the final output:
f(y_in) = 1 if y_in ≥ 0; -1 if y_in < 0
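A minimal sketch of the Adaline (Widrow-Hoff delta rule) update follows. It is illustrative, not from the notes; the small starting weights, the learning rate of 0.1 (the LMS rule diverges if α is too large), the epoch count, and the bipolar AND data are all assumed values. Note that, unlike the perceptron, the error t - y_in is taken before the activation function is applied.

```python
import numpy as np

def train_adaline(X, T, alpha=0.1, epochs=50):
    w = np.full(X.shape[1], 0.1)       # small starting weights (assumed)
    b = 0.1                            # small starting bias (assumed)
    for _ in range(epochs):
        for x, t in zip(X, T):
            y_in = b + np.dot(x, w)    # Step 5: net input, no activation yet
            err = t - y_in             # error on the *linear* output
            w = w + alpha * err * x    # Widrow-Hoff (delta) rule: lowers MSE
            b = b + alpha * err
    return w, b

def predict(X, w, b):
    # Step 6: bipolar activation applied to the trained net input.
    return np.where(X @ w + b >= 0, 1, -1)

X = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]], dtype=float)
T = np.array([1, -1, -1, -1])
w, b = train_adaline(X, T)
print(predict(X, w, b))   # -> [ 1 -1 -1 -1]
```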
