
Introduction to Neural Networks
Nandana Prabhu
Introduction
Concept:

➢ Neural networks are information processing systems that are implemented to model the working of the human brain.
➢ Main objective: to develop a computational device that models the brain and performs various computational tasks faster than traditional systems.

22/05/2025 NN 2
Artificial Neural Network (ANN)
• An Artificial Neural Network (ANN) is an information processing system that shares certain characteristics with biological neural networks.
• An ANN consists of a large number of highly interconnected processing elements.
• These processing elements are called neurons, artificial neurons, nodes or units.
• The neurons operate in parallel and are configured in regular architectures.
• Every neuron is connected to other neurons through communication links.
• Each connection is associated with a weight, which carries information about the input signal.
• This information is used by the neural net to solve a particular problem.
Artificial Neural Network (ANN)
➢ Neural networks are inspired by the human brain and the way the neurons in the human brain work.
➢ They are motivated by the human way of learning.
➢ The essential properties of biological neural networks are considered in order to understand the information processing tasks.
➢ This allows us to design abstract models of artificial neural networks which can be simulated and analyzed.
Biological Neuron

• The biological nervous system is the most important part of human beings.
• The brain is the centre of the central nervous system.
• The human brain consists of a large number of neurons (approx. 10¹¹) with numerous interconnections that process information.
Information flow in nervous system

[Figure: information flow in the nervous system, not reproduced]
Biological Neuron

Fig.: Biological neuron [figure not reproduced]
Biological Neuron
The biological neuron consists of three major parts:
1. Soma or cell body: contains the cell nucleus. In general, processing occurs here.
2. Dendrites: branching nerve fibres that protrude from the cell body or soma and are connected to it.
3. Axon: carries the impulses of the neuron, taking information away from the soma to other neurons.
Each strand of an axon splits into fine strands. Each fine strand terminates in a small bulb-like organ called a synapse. It is through the synapse that the neuron introduces its signals to other neurons.
Biological Neuron
Electric impulses are passed between the synapse and the dendrites.

Synapses are of two types:
• Inhibitory: impulses hinder the firing of the receiving cell.
• Excitatory: impulses cause the firing of the receiving cell.

The signal transmission involves a chemical process.

A neuron fires when the total of the weighted inputs it receives exceeds the threshold value within a short time period called the latent summation period.
After carrying a pulse, an axon fibre is in a state of complete non-excitability for a certain time called the refractory period.
Biological Neural Network

Neuron and a sample of Pulse Train

Biological Neural Network
Working of the neuron:
1. Dendrites receive activation signals from other neurons; these determine the internal state of the neuron.
2. The soma processes the incoming activation signals and converts them into an output activation signal.
3. The axon carries the signal away from the neuron and sends it to other neurons.
4. Electric impulses are passed between the synapses and the dendrites. The signal transmission involves chemical messengers called neurotransmitters.
Artificial Neural Networks
Terminology relationships between biological and artificial neurons:

Biological neuron    Artificial neuron
Cell                 Neuron
Dendrites            Weights or interconnections
Soma                 Net input
Axon                 Output
Artificial Neural Networks
Comparison between biological and artificial neurons:

1. Speed: Signals in the human brain move at a speed dependent on the nerve impulse. The biological neuron is slow in processing compared to artificial neural networks, which are modelled to process faster.
2. Processing: The biological neuron can perform massive parallel operations simultaneously; a large number of simple units are organized to solve problems independently but collectively. Artificial neurons also respond in parallel, but do not execute programmed instructions.
Artificial Neural Networks
Comparison between biological and artificial neurons:

3. Size and complexity: The size and complexity of the brain are considerably higher than those of an artificial neural network. The size and complexity of an ANN differ from application to application.
4. Storage capacity: The biological neuron stores information in its interconnections, whereas in an artificial neuron it is stored in memory locations.
Artificial Neural Networks
Comparison between biological and artificial neurons:

5. Tolerance: The biological neuron has fault-tolerant capability, whereas the artificial neuron has none. Biological networks exploit redundancy, whereas artificial neurons cannot.
6. Control mechanism: There is no control unit to monitor the information processed in biological neural networks, whereas in the artificial neuron model all activities are continuously monitored by a control unit.
Artificial Neural Networks
➢ The basic elements of an artificial neural network are:
• input nodes
• weights
• activation function
• output node

➢ Inputs are associated with synaptic weights; they are all summed and passed through an activation function, giving output y.
➢ In other words, the output is the summation of the signals multiplied by their synaptic weights over many input channels.
Artificial Neuron Model

[Figure: artificial neuron model, not reproduced]
Artificial Neural Networks
Characteristics of artificial neural networks:
1. It is a neurally implemented mathematical model.
2. A large number of highly interconnected processing elements, known as neurons, are prominent in an ANN.
3. The interconnections with their weights are associated with neurons and hold the informative knowledge.
4. The input signals arrive at the processing elements through connections and weights.
5. The collective behaviour of ANNs is characterized by their ability to learn, recall and generalize from the given data.
6. The computational power can be demonstrated only by the collective behaviour of neurons; a single neuron carries no specific information.
Artificial Neural Networks
Every neuron model consists of a processing element with synaptic input connections and a single output.
The neuron output is given by the relation o = f(net).
Artificial Neural Networks

The function f is called the activation function.

The variable net is defined as the scalar product of the weight and input vectors: net = wᵀx.

The neuron performs the nonlinear operation f(net) through its activation function.
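The relations above can be sketched in a few lines of Python. This is a minimal illustration only; the weight and input values, and the choice of a step activation, are made-up for the example:

```python
# Minimal sketch of a single artificial neuron: net = w.x, o = f(net).
# The weights, inputs and threshold below are made-up illustrative values.

def neuron_output(weights, inputs, f):
    """Compute net as the scalar product of the weight and input vectors,
    then apply the activation function f to obtain the output."""
    net = sum(w * x for w, x in zip(weights, inputs))
    return f(net)

# A binary step activation with threshold 0, used as the nonlinearity f.
step = lambda net: 1 if net >= 0 else 0

print(neuron_output([0.5, -1.0, 2.0], [1, 1, 1], step))  # net = 1.5 -> 1
```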
Artificial Neural Networks
Basic model of an ANN:

It is specified by three entities, namely:
1. The model's synaptic interconnections
2. The learning rules adopted for updating and adjusting the connection weights
3. Their activation functions


Artificial Neural Networks
Activation Functions:
• An activation function f is applied over the net input to calculate the output of an ANN.
• The information processing of a processing element can be viewed as consisting of two major parts: input and output.
• An integration function (say f) is associated with the input of a processing element.


Artificial Neural Networks
• Purpose of the integration function: to combine activation, information or evidence from an external source or other processing elements into a net input to the processing element.
• A nonlinear activation function is used to ensure that a neuron's response is bounded, i.e. the actual response of the neuron is conditioned or dampened as a result of large or small activating stimuli, and is thus controllable.
• The choice of activation function depends on the type of problem to be solved by the network.


Activation Functions

• Identity function:
It is a linear function, defined as

f(net) = net for all net

The output is the same as the input. The input layer uses the identity function.


Activation Functions
• Binary step function: The function can be defined as

f(net) = 1, if net ≥ θ
f(net) = 0, if net < θ

where θ represents the threshold value.

It is most widely used in single-layer nets to convert the net input to an output that is binary (1 or 0).


Activation Functions
• Bipolar step function: The function can be defined as

f(net) = +1, if net ≥ θ
f(net) = −1, if net < θ

where θ represents the threshold value. It is most widely used in single-layer nets to convert the net input to an output that is bipolar (+1 or −1).
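The two step functions can be sketched directly from their definitions. The threshold θ = 0.5 below is an arbitrary illustrative choice:

```python
# Sketch of the binary and bipolar step activations with threshold theta.
# theta = 0.5 here is an arbitrary illustrative value.

def binary_step(net, theta=0.5):
    """1 if net >= theta, else 0."""
    return 1 if net >= theta else 0

def bipolar_step(net, theta=0.5):
    """+1 if net >= theta, else -1."""
    return 1 if net >= theta else -1

print(binary_step(0.7), binary_step(0.2))    # 1 0
print(bipolar_step(0.7), bipolar_step(0.2))  # 1 -1
```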


Activation Functions
• Sigmoidal functions: These functions are used in back-propagation nets.
• They are widely used in BP nets because of the relationship between the value of the function at a point and the value of its derivative at that point, which reduces the computational burden during training.
They are of two types:
• Binary sigmoid function
• Bipolar sigmoid function


Activation Functions
• Binary sigmoid function: It is also known as the unipolar sigmoid or logistic sigmoid function. Its unipolar continuous form is

f(net) = 1 / (1 + e^(−λ·net))

Here, λ is the steepness parameter. The range of the function is from 0 to 1. (Its hard-limiting counterpart is the unipolar binary activation, i.e. the binary step function.)


Activation Functions

[Figures: unipolar continuous and unipolar binary (binary step) activation functions, not reproduced]


Activation Functions
• Bipolar sigmoid function: Its bipolar continuous form is defined as

f(net) = 2 / (1 + e^(−λ·net)) − 1 = (1 − e^(−λ·net)) / (1 + e^(−λ·net))

Here, λ is the steepness parameter. The range of the function is from −1 to +1. (Its hard-limiting counterpart is the bipolar binary function, i.e. the bipolar step function.)
Activation Functions

[Figure: bipolar continuous activation function, not reproduced]

Activation Functions
• Ramp function: The function can be defined as

f(net) = 1,   if net > 1
f(net) = net, if 0 ≤ net ≤ 1
f(net) = 0,   if net < 0
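The sigmoidal and ramp activations above can be sketched as follows. The default steepness λ = 1 is an arbitrary choice for illustration:

```python
import math

# Sketch of the sigmoidal and ramp activations described above.
# lam is the steepness parameter (lambda); lam = 1 is an arbitrary default.

def binary_sigmoid(net, lam=1.0):
    """Logistic function, range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-lam * net))

def bipolar_sigmoid(net, lam=1.0):
    """Range (-1, +1); equal to tanh(lam * net / 2)."""
    return (1.0 - math.exp(-lam * net)) / (1.0 + math.exp(-lam * net))

def ramp(net):
    """0 below 0, identity on [0, 1], saturates at 1 above."""
    return max(0.0, min(1.0, net))

print(binary_sigmoid(0.0))               # 0.5
print(bipolar_sigmoid(0.0))              # 0.0
print(ramp(2.0), ramp(-1.0), ramp(0.3))  # 1.0 0.0 0.3
```

Note that the bipolar sigmoid is just a rescaled binary sigmoid, which is why the two share the same convenient derivative property during training.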




Connections



Artificial Neural Networks
Connections:

• An ANN consists of a set of highly interconnected neurons, each connected through weights to other processing elements or to itself.
• The arrangement of these processing elements and the geometry of their interconnections are important for an ANN.
• The arrangement of neurons into layers and the connection pattern formed within and between layers is called the network architecture.
• A layer is formed by taking a processing element and combining it with other processing elements.
• A layer implies a stage.
Artificial Neural Networks
Types of neuron connection architecture:
1. Single-layer feed-forward network
2. Multilayer feed-forward network
3. Single node with its own feedback
4. Single-layer recurrent network
5. Multi-layer recurrent network

Artificial Neural Networks
Interconnections

• Feed-forward: single layer, multilayer
• Feedback / Recurrent: single layer, multilayer
Artificial Neural Networks
Single-layer feed-forward network:
➢ It consists of a single layer of network in which the inputs are directly connected to the outputs, one per node, through a series of weights.
Artificial Neural Networks
Multilayer feed-forward network:

[Figure: multilayer feed-forward network, not reproduced]
Artificial Neural Networks
Multilayer feed-forward network:

➢ It is formed by the interconnection of several layers.
➢ Along with the input and output layers, there are hidden layers.
Input layer:
➢ The layer that receives the input.
➢ It has no function except buffering the input signal.
Output layer:
➢ The output layer generates the output of the network.
Artificial Neural Networks
➢ Multilayer feed-forward network:
Hidden layer:
➢ Any layer formed between the input and output layers.
➢ There can be zero to many hidden layers.
➢ The hidden layer is usually internal to the network and has no direct contact with the environment.
➢ The more hidden layers there are, the greater the complexity of the network; this may, however, provide a more efficient output response.
➢ In a fully connected network, every output from one layer is connected to each and every node in the next layer.
Artificial Neural Networks
Feed-forward vs. feedback networks:
Feed-forward network:
A network is said to be feed-forward if no neuron in the output layer is an input to a node in the same layer or in a preceding layer.
Feedback network:
On the other hand, when outputs can be directed back as inputs to nodes in the same or a preceding layer, the result is a feedback network.
Artificial Neural Networks
➢ Single-layer recurrent network:
➢ If the output of a processing element is directed back as input to processing elements in the same layer, this is called lateral feedback.
➢ Recurrent networks are feedback networks with closed loops.
➢ The figure below shows a simple recurrent neural network having a single neuron with feedback to itself.
Artificial Neural Networks
Single-layer recurrent network: A single-layer network with feedback connections in which a processing element's output can be directed back to the processing element itself, to other processing elements, or both.
Artificial Neural Networks
Multilayer recurrent network:
➢ A processing element's output can be directed back to nodes in a preceding layer. In these networks, a processing element's output can also be directed back to the processing element itself and to other processing elements in the same layer.
Artificial Neural Networks
Multilayer recurrent network:

Artificial Neural Networks
Why different architectures?
AND: single-layer feed-forward
XOR: multilayer feed-forward
Supervised learning: dynamic network
Artificial Neural Networks
Single node with its own feedback:

[Figure: single node with feedback, not reproduced]
Artificial Neural Networks
Learning:
The most important property of an ANN is its capability to train or learn.
Learning is a process by which a neural network adapts itself to a stimulus by making proper parameter adjustments, in order to produce a desired response.
Two types of learning:
• Parameter learning: updates the connecting weights in a neural net.
• Structure learning: focuses on changes in the network structure (this includes the number of processing elements as well as the connection types).
These two types of learning can be performed separately or simultaneously.
Artificial Neural Networks
Three categories of learning:
1. Supervised Learning
2. Unsupervised Learning
3. Reinforcement Learning



Artificial Neural Networks
Supervised learning:
The learning is performed with the help of a teacher.
In supervised learning, it is assumed that the correct target output values are known for each input pattern.
A supervisor or teacher is needed for error minimization.
The difference between the actual and desired output vectors is minimized using the error signal, by adjusting the weights until the actual output matches the desired output.
Artificial Neural Networks
Supervised learning:
• The learning is performed with the help of a teacher (e.g. the learning process of a small child).
• During training, the input vector is presented to the network, which produces an output vector: the actual output vector.
• This actual output vector is compared with the desired (target) output vector.
• If there is a difference between the two output vectors, an error signal is generated by the network.
• This error signal is used for adjusting the weights until the actual output matches the desired (target) output.
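The error-driven adjustment described above can be sketched with a perceptron-style update, used here as one concrete example of supervised learning rather than the specific rule in the slides; the training pairs, learning rate and epoch count are made-up:

```python
# Sketch of supervised learning: compare the actual output with the target
# and use the error signal to adjust the weights. A perceptron-style rule
# is one concrete example; the training pairs and alpha are made-up.

def step(net):
    return 1 if net >= 0 else 0

def train(pairs, n_inputs, alpha=0.2, epochs=20):
    w = [0.0] * n_inputs
    b = 0.0
    for _ in range(epochs):
        for x, target in pairs:
            actual = step(sum(wi * xi for wi, xi in zip(w, x)) + b)
            error = target - actual          # error signal
            w = [wi + alpha * error * xi for wi, xi in zip(w, x)]
            b += alpha * error               # bias treated as another weight
    return w, b

# Training pairs (input vector, target output) for the AND function.
pairs = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train(pairs, n_inputs=2)
print([step(sum(wi * xi for wi, xi in zip(w, x)) + b) for x, _ in pairs])
```

After training, the actual outputs match the targets for every training pair, illustrating how the error signal drives the weights toward the desired response.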
Artificial Neural Networks
Supervised learning:

Artificial Neural Networks
Supervised learning: we assume that at each instant of time when the input is applied, the desired response d of the system is provided by the teacher.
The distance ρ[d, o] between the actual and the desired response serves as an error measure and is used to correct the network parameters externally.
Since we assume adjustable weights, the teacher may implement a reward-and-punishment scheme to adapt the network's weight matrix W.
Artificial Neural Networks
Supervised learning:

Artificial Neural Networks
Supervised learning:
For instance, in learning classifications of input patterns or situations with known responses, the error can be used to modify the weights so that the error decreases.
This mode of learning is very pervasive and is used in many situations of natural learning. A set of input and output patterns, called a training set, is required for this learning mode.
Artificial Neural Networks
Supervised learning:

In ANNs following supervised learning, each input vector requires a corresponding target vector, which represents the desired output.
The input vector together with the target vector is called a training pair.
The network is thus informed precisely about what should be emitted as output.
Artificial Neural Networks
Unsupervised learning:
➢ The learning is performed independently, without the help of a teacher or supervisor (e.g. a tadpole, which learns by itself).
➢ In the learning process, input vectors of similar type are grouped together to form clusters.
➢ The desired output is not given to the network.
➢ The system learns on its own from the input patterns.
Artificial Neural Networks
Unsupervised learning:
In ANNs, input vectors of similar type are grouped without the use of training data to specify how a member of each group looks or to which group a member belongs.
In the training process, the network receives the input patterns and organizes them to form clusters.
When a new input pattern is applied, the neural network gives an output response indicating the class to which the input pattern belongs.
If no existing pattern class can be found for an input, a new class is generated.
Artificial Neural Networks
Unsupervised learning:
There is no feedback from the environment to indicate what the outputs should be or whether they are correct.
Unsupervised learning algorithms use patterns that are typically redundant raw data with no labels regarding their class membership or associations.
In this case, the network must itself discover patterns, regularities, features or categories in the input data, and the relations of the input data to the output.
While discovering these features, the network undergoes changes in its parameters.
This process is called self-organizing, in which exact clusters are formed by discovering similarities and dissimilarities among the objects.
Artificial Neural Networks
Unsupervised learning:
The desired response is not known; thus, explicit error information cannot be used to improve network behaviour.
Since no information is available as to the correctness or incorrectness of responses, learning must somehow be accomplished based on observations of responses to inputs about which we have marginal or no knowledge.
For example, unsupervised learning can easily result in finding the boundary between classes of distributed input patterns.
Artificial Neural Networks
Unsupervised learning:

Suitable weight self-adaptation mechanisms have to be embedded in the trained network, because no external instructions regarding potential clusters are available.
One possible network adaptation rule is: a pattern added to a cluster has to be closer to the centre of that cluster than to the centre of any other cluster.
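The adaptation rule above can be sketched as a nearest-centre assignment. This is a toy illustration of the idea only; the cluster centres and patterns are made-up values:

```python
# Sketch of the cluster-assignment rule: a pattern joins the cluster whose
# centre it is closest to. Centres and patterns are made-up toy values.

def squared_distance(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

def nearest_cluster(pattern, centres):
    """Index of the cluster centre closest to the pattern."""
    return min(range(len(centres)),
               key=lambda i: squared_distance(pattern, centres[i]))

centres = [[0.0, 0.0], [5.0, 5.0]]
print(nearest_cluster([0.5, 1.0], centres))  # 0
print(nearest_cluster([4.0, 6.0], centres))  # 1
```

In a self-organizing net, the winning centre would additionally be moved toward the new pattern, which is how the clusters adapt without external instruction.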
Artificial Neural Networks
Unsupervised learning:



Artificial Neural Networks
Reinforcement learning:
Reinforcement learning is a form of supervised learning, since the network receives feedback from its environment.
Here the supervisor does not present the desired output; instead, the network learns through critic information (i.e. the network may be told that its output is, say, 50% correct).
Learning based on this critic information is called reinforcement learning, and the feedback sent is called the reinforcement signal.
Artificial Neural Networks
Reinforcement learning:

[Figure: reinforcement learning scheme, not reproduced]


Artificial Neural Networks
Reinforcement learning:
Reinforcement learning is a form of supervised learning because the network receives some feedback from its environment.
However, the feedback obtained here is only evaluative, not instructive.
The external reinforcement signals are processed in a critic signal generator, and the obtained critic signals are sent to the ANN for proper adjustment of the weights, so as to get better critic feedback in future.
Reinforcement learning is also called learning with a critic, as opposed to learning with a teacher, which denotes supervised learning.
Classification of Learning Algorithms

Terminologies of ANNs
1. Weights
2. Bias
3. Threshold
4. Learning rate
5. Momentum factor
6. Vigilance parameter
Terminologies of ANNs
1. Weights:
A weight is a parameter which contains information about the input signal. This information is used by the net to solve a problem.
In an ANN architecture, every neuron is connected to other neurons by means of directed communication links, and every link is associated with a weight.
The weights can be represented in the form of a matrix; the weight matrix is also called the connection matrix.
If there are 'n' processing elements in the ANN and each processing element has exactly 'm' adaptive weights, then wij is the weight from processing element 'i' (source node) to processing element 'j' (destination node), with i = 1, 2, ..., n and j = 1, 2, ..., m.
Terminologies of ANNs
The weight matrix W is defined by

W = [ w11  w12  ...  w1m
      w21  w22  ...  w2m
      ...
      wn1  wn2  ...  wnm ]
Terminologies of ANNs
1. Weights (continued):
If the weight matrix W contains all the adaptive elements of an ANN, then the set of all possible W matrices determines the set of all possible information processing configurations for this ANN.
The ANN is realized by finding an appropriate matrix W.
Hence the weights encode long-term memory (LTM), while the activation states of the neurons encode short-term memory (STM) in a neural network.
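The weight-matrix view can be sketched with plain Python lists. The matrix and input values are made-up; with wij defined as the weight from source node i to destination node j, the net input to node j is the sum over i of wij·xi:

```python
# Sketch of the weight matrix: W[i][j] is the weight from source node i
# to destination node j; the net input to node j is sum_i W[i][j] * x[i].
# The matrix entries and input values are made-up illustrative numbers.

W = [
    [0.2, -0.5],   # weights leaving source node 1
    [0.7,  0.1],   # weights leaving source node 2
    [-0.3, 0.4],   # weights leaving source node 3
]
x = [1.0, 0.5, 2.0]  # input signals at the n = 3 source nodes

net = [sum(W[i][j] * x[i] for i in range(len(x)))
       for j in range(len(W[0]))]
print(net)
```

Choosing a different W gives a different information processing configuration of the same net, which is the sense in which W realizes the ANN.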
Terminologies of ANNs
2. Bias (b):
• The bias is a constant value included in the network.
• Its impact is seen in calculating the net input.
• The bias is included by adding a component x0 = 1 to the input vector X.
• The bias can be positive or negative.
• A positive bias helps to increase the net input of the network; a negative bias helps to decrease it.
• The bias is treated like another weight, i.e. w0j = bj.
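The last two points can be checked in a short sketch: folding the bias in as weight w0 on a constant input x0 = 1 gives exactly the same net input. The numbers are made-up:

```python
# Sketch of the bias as an extra weight: augment the input with x0 = 1
# and treat b as w0. The weight, input and bias values are made-up.

def net_input(weights, inputs, bias):
    return bias + sum(w * x for w, x in zip(weights, inputs))

def net_input_augmented(weights, inputs, bias):
    """Same result, with the bias folded in as weight w0 on input x0 = 1."""
    aug_w = [bias] + list(weights)
    aug_x = [1.0] + list(inputs)
    return sum(w * x for w, x in zip(aug_w, aug_x))

w, x, b = [0.4, -0.2], [1.0, 3.0], 0.5
print(net_input(w, x, b), net_input_augmented(w, x, b))
```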
Terminologies of ANNs
3. Threshold:
The threshold is a set value based on which the final output of the network is calculated; it is used in the activation function.
A comparison is made between the calculated net input and the threshold to obtain the network output.
In an ANN, the activation functions are defined based on the threshold value, and the output is calculated accordingly.
Terminologies of ANNs
4. Learning rate:
➢ The learning rate is denoted by 'α'.
➢ It is used to control the amount of weight adjustment at each step of training.
➢ The learning rate ranges from 0 to 1.
➢ It determines the rate of learning at each time step.
Terminologies of ANNs
5. Momentum factor:
• Convergence is made faster if a momentum factor is added to the weight updation process.
• This is generally done in the back-propagation network.
• If momentum is to be used, the weights from one or more previous training patterns must be saved.
• Momentum allows the net to make reasonably large weight adjustments as long as the corrections are in the same general direction for several patterns.
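The roles of the learning rate α and the momentum factor can be sketched with a common back-propagation-style update, Δw(t) = −α·grad + μ·Δw(t−1), used here as one standard formulation rather than the slides' specific rule; the gradient values, α and μ are made-up:

```python
# Sketch of weight updation with a learning rate alpha and a momentum
# factor mu: the current weight change reuses part of the previous change,
# so repeated corrections in the same direction build up speed.
# Gradients, alpha and mu are made-up illustrative values.

def momentum_update(w, grad, prev_delta, alpha=0.1, mu=0.9):
    """delta_w(t) = -alpha * grad + mu * delta_w(t-1)."""
    delta = [-alpha * g + mu * d for g, d in zip(grad, prev_delta)]
    new_w = [wi + di for wi, di in zip(w, delta)]
    return new_w, delta

w = [0.5, -0.5]
prev_delta = [0.0, 0.0]
w, prev_delta = momentum_update(w, [1.0, -2.0], prev_delta)  # first step
w, prev_delta = momentum_update(w, [1.0, -2.0], prev_delta)  # momentum adds on
print(w, prev_delta)
```

On the second step the change is larger than on the first, because the previous change (saved in prev_delta) points in the same direction.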
Terminologies of ANNs
6. Vigilance parameter:
The vigilance parameter is denoted by 'ρ'.
It is generally used in Adaptive Resonance Theory (ART).
It is used to control the degree of similarity required for patterns to be assigned to the same cluster unit.
The choice of the vigilance parameter ranges approximately from 0.7 to 1 in order to perform useful work in controlling the number of clusters.
McCulloch-Pitts neuron
• The McCulloch-Pitts neuron, discovered in 1943, was the earliest neural network.
• It is usually called the M-P neuron.
• M-P neurons are connected by directed weighted paths.
• The activation of an M-P neuron is binary: at any time step the neuron may fire or may not fire.
• The weights associated with the communication links may be excitatory (weight is positive) or inhibitory (weight is negative).
• All the excitatory connection weights entering a particular neuron have the same value.
McCulloch-Pitts neuron
The threshold plays a major role in the M-P neuron:
There is a fixed threshold for each neuron.
If the net input to the neuron is greater than the threshold, the neuron fires.
Any nonzero inhibitory input prevents the neuron from firing.
M-P neurons are most widely used for logic functions.
McCulloch-Pitts neuron
• The inputs xi, for i = 1, 2, ..., n, are 0 or 1, depending on the absence or presence of an input impulse at instant k.
• The neuron's output signal is denoted by 1 or 0.
McCulloch-Pitts neuron
The firing rule for this model is defined as

o(k+1) = 1, if Σi wi·xi(k) ≥ T
o(k+1) = 0, if Σi wi·xi(k) < T

Superscripts k = 0, 1, 2, ... denote the discrete time instants.
wi is the multiplicative weight connecting the i-th input with the neuron membrane:
wi = +1 for an excitatory synapse in this model,
wi = −1 for an inhibitory synapse in this model.
T is the neuron's threshold value, which needs to be reached by the weighted sum of signals for the neuron to fire.
McCulloch-Pitts neuron
This neuron model is very simplistic, yet it has substantial computing potential.
It can perform the basic logic operations NOT, OR and AND, provided its weights and threshold values are chosen appropriately.
NOR and NAND can in turn be implemented using OR with NOT, and AND with NOT.
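The logic gates above can be sketched directly from the firing rule. The particular weight and threshold choices below are standard textbook settings for the M-P model:

```python
# Sketch of M-P neurons computing NOT, OR and AND on binary inputs.
# Weights are +1 (excitatory) or -1 (inhibitory); T is the threshold.

def mp_neuron(inputs, weights, T):
    """Fire (output 1) iff the weighted input sum reaches threshold T."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= T else 0

def NOT(x):    return mp_neuron([x], [-1], 0)       # fires only when x = 0
def OR(x, y):  return mp_neuron([x, y], [1, 1], 1)  # at least one input on
def AND(x, y): return mp_neuron([x, y], [1, 1], 2)  # both inputs on

def NAND(x, y): return NOT(AND(x, y))  # NAND built from AND and NOT

print([AND(x, y) for x in (0, 1) for y in (0, 1)])   # [0, 0, 0, 1]
print([NAND(x, y) for x in (0, 1) for y in (0, 1)])  # [1, 1, 1, 0]
```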

Three-input NAND implementation

[Figure: three-input NAND gate built from M-P neurons, not reproduced]
Memory Cell
1. Consider a single neuron with a single input x, with both the weight and threshold values equal to unity.
2. This behaves as a single register cell, able to retain the input for the one period elapsing between two instants.
3. Once a feedback loop is closed round the neuron as shown, we obtain a memory cell.
Memory Cell
4. An excitatory input of 1 initializes firing in this memory cell, and an inhibitory input of 1 initializes a non-firing state.
5. In the absence of inputs, the output value is then sustained indefinitely.
6. This is because an output of 0 fed back to the input does not cause firing at the next instant, while an output of 1 does.
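The memory cell's behaviour can be sketched by stepping the firing rule in time. Modelling the inhibitory input as a weight of −1, with unit feedback weight and unit threshold, is an assumption consistent with the M-P model above:

```python
# Sketch of the M-P memory cell: one neuron with unit weight and unit
# threshold on the feedback loop, an excitatory set input (weight +1)
# and an inhibitory reset input (modelled here with weight -1).

def memory_cell_step(output, set_in, reset_in):
    """Next output: previous output fed back with weight +1, set input
    with weight +1, reset input with weight -1; threshold T = 1."""
    net = output + set_in - reset_in
    return 1 if net >= 1 else 0

out = 0
out = memory_cell_step(out, set_in=1, reset_in=0)  # excitatory 1 -> fires
trace = [out]
for _ in range(3):                                 # no inputs: state sustained
    out = memory_cell_step(out, 0, 0)
    trace.append(out)
out = memory_cell_step(out, 0, 1)                  # inhibitory 1 -> stops firing
trace.append(out)
print(trace)  # [1, 1, 1, 1, 0]
```

The trace shows the set input latching a 1, the 1 sustaining itself through the feedback loop, and the inhibitory input resetting the cell to 0.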
Note:
The domain of the activation function is the set of activation values, i.e. net, of the neuron model; therefore we write f(net).

net = wᵀx, the scalar product of the weight and input vectors.


References
• S.N. Sivanandam and S.N. Deepa, "Principles of Soft Computing", Wiley, 2019, Chapters 2 and 3
• J.S.R. Jang, C.T. Sun and E. Mizutani, "Neuro-Fuzzy and Soft Computing", Prentice Hall of India, 2004
• S. Rajasekaran and G.A. Vijayalakshmi Pai, "Neural Networks, Fuzzy Logic and Genetic Algorithms"
• J.M. Zurada, "Introduction to Artificial Neural Systems", Jaico Publications, Chapter 2
• NPTEL online course "Introduction to Soft Computing" by Dr. Debasis Samanta, IIT Kharagpur
