
Research in neural networks on Radial Basis Function (RBF) networks

By the student
Nada Mahmoud Mahmoud Rokba
Department of Physics and Computer

Under the supervision of Dr. Mona Gharib
Radial basis function (RBF) networks:

Radial basis function (RBF) networks have a fundamentally different architecture from most neural network architectures. Most architectures consist of many layers and introduce nonlinearity by repeatedly applying nonlinear activation functions. An RBF network, on the other hand, consists only of an input layer, a single hidden layer, and an output layer.

RBF Network Architecture:

The typical architecture of a radial basis function neural network consists of an input layer, a hidden layer, and a summation layer.

Input Layer

The input layer consists of one neuron for every predictor variable. The input neurons pass the values on to each neuron in the hidden layer. For a categorical variable, N-1 neurons are used, where N denotes the number of categories. The range of values is standardized by subtracting the median and dividing by the interquartile range.
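
As an illustration of this preprocessing step, here is a minimal Python sketch of median/IQR standardization; NumPy and the function name robust_standardize are assumptions made for this example, not part of the original text.

import numpy as np

def robust_standardize(X):
    # Scale each column by subtracting its median and dividing by its interquartile range (IQR).
    median = np.median(X, axis=0)                  # per-feature median
    q75, q25 = np.percentile(X, [75, 25], axis=0)  # per-feature quartiles
    iqr = q75 - q25                                # interquartile range
    return (X - median) / iqr

# Example: three samples with two continuous predictors
X = np.array([[1.0, 200.0],
              [2.0, 250.0],
              [10.0, 900.0]])
X_std = robust_standardize(X)
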
Hidden Layer

The hidden layer takes the input, in which the patterns might not be linearly separable, and transforms it into a new space that is more linearly separable.

The hidden layer has a higher dimensionality than the input layer, because a pattern that is not linearly separable often needs to be transformed into a higher-dimensional space to become more linearly separable.

This is based on Cover’s theorem on the separability of patterns, which states that a pattern cast nonlinearly into a higher-dimensional space is more likely to be linearly separable; therefore the number of neurons in the hidden layer should be greater than the number of input neurons.

With that said, the number of neurons in the hidden layer should be less than or equal to the number of samples in the training set.

When the number of neurons in the hidden layer equals the number of samples in the training set, the network can interpolate the training data exactly, which usually means it memorizes the training set rather than generalizing from it.

Each neuron in the hidden layer has a prototype vector and a bandwidth, denoted by μ and σ respectively. Each neuron computes the similarity between the input vector and its prototype vector.

The computation in the hidden layer can be mathematically written as follows:

φᵢ(x) = exp(−‖x − μᵢ‖² / (2σᵢ²))

with:

- x as the input vector
- μᵢ as the iᵗʰ neuron’s prototype vector
- σᵢ as the iᵗʰ neuron’s bandwidth
- φᵢ as the iᵗʰ neuron’s output
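
To make this computation concrete, here is a minimal Python sketch of the hidden layer; NumPy and the variable names (prototypes, bandwidths) are assumptions made for illustration.

import numpy as np

def rbf_hidden_layer(x, prototypes, bandwidths):
    # phi_i = exp(-||x - mu_i||^2 / (2 * sigma_i^2)) for every hidden neuron
    # prototypes: (n_hidden, n_features) array of mu_i
    # bandwidths: (n_hidden,) array of sigma_i
    sq_dist = np.sum((prototypes - x) ** 2, axis=1)   # squared Euclidean distance to each prototype
    return np.exp(-sq_dist / (2.0 * bandwidths ** 2))

x = np.array([0.2, 0.7])
prototypes = np.array([[0.0, 0.0], [1.0, 1.0]])
bandwidths = np.array([0.5, 0.5])
phi = rbf_hidden_layer(x, prototypes, bandwidths)     # one activation per hidden neuron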

The computation in the output layer is performed just like in a standard artificial neural network: a linear combination between the hidden-layer output vector and the weight vector. It can be mathematically written as follows:

yₖ = Σᵢ wᵢₖ φᵢ(x)

where wᵢₖ is the weight connecting the iᵗʰ hidden neuron to the kᵗʰ output neuron.

Output Layer or Summation Layer

The value obtained from each hidden neuron is multiplied by a weight associated with that neuron and passed on to the summation. There the weighted values are added up, and the sum is presented as the output of the network. Classification problems have one output per target category, the value being the probability that the case being evaluated belongs to that category.
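
The sketch below shows this summation step in Python; the weight values are assumptions made for illustration, and no bias term is included.

import numpy as np

def rbf_output_layer(phi, weights):
    # Weighted sum of the hidden activations: y_k = sum_i w_ik * phi_i
    # phi:     (n_hidden,) hidden-layer outputs
    # weights: (n_hidden, n_outputs) one weight per hidden neuron and output
    return phi @ weights

phi = np.array([0.8, 0.3])             # example hidden activations, as produced by a hidden layer like the one above
weights = np.array([[1.5], [-0.7]])    # a single output neuron
y = rbf_output_layer(phi, weights)     # the network's output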

Structure of RBFNN vs MLP:

- Structure of RBFNN: (figure)
- Structure of MLP: (figure)
Comparison between RBFNN and MLP:

SIGNAL TRANSMISSION
RBF: Feed-forward.
MLP: Feed-forward.

PROCESS OF BUILDING THE MODEL
RBF: Two different, independent stages. First stage: the probability distribution is established by means of radial basis functions. Second stage: the network between input x and output y is built. (Note: the lag is visible in RBF only in the output layer.)
MLP: One stage.

THRESHOLD
RBF: Yes.
MLP: No.

TYPE OF PARAMETERS
RBF: Location and width of the basis functions, and the weights binding the basis functions to the output.
MLP: Weights and thresholds.

FUNCTIONING TIME
RBF: Slower (bigger memory and size required).
MLP: Faster.

LEARNING TIME
RBF: Faster.
MLP: Slower.


The Similarities between RBF and MLP:

- They are both non-linear feed-forward networks.
- They are both universal approximators.
- They can both be used in similar application areas.

It is not surprising, then, to find that there always exists an RBF network capable of accurately mimicking a specific MLP, and vice versa.

The difference between RBF and MLP:

Multilayer Perceptron (MLP) and Radial Basis Function (RBF) networks are popular neural network architectures known as feed-forward networks.

The main differences between RBF and MLP are:

- MLP consists of one or several hidden layers, while RBF consists of just one hidden layer.

- The RBF network has a faster learning speed than the MLP. In an MLP, training is usually done through backpropagation for every layer, but in an RBF network, training can be done either through backpropagation or through hybrid learning, in which the hidden-layer centres and widths are set first and only the output weights are then fitted, as sketched below.
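
Here is a minimal Python sketch of that two-stage (hybrid) scheme, assuming NumPy; the choice of random training points as centres and the width heuristic dmax/√(2M) are illustrative assumptions, not a prescription from the text above.

import numpy as np

def train_rbf_hybrid(X, y, n_hidden, seed=0):
    # Stage 1: choose the centres without using the targets (here: random training points)
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), size=n_hidden, replace=False)]
    d_max = np.max(np.linalg.norm(centres[:, None] - centres[None, :], axis=-1))
    sigma = d_max / np.sqrt(2.0 * n_hidden)          # common width heuristic: dmax / sqrt(2M)
    # Hidden-layer design matrix: Phi[n, i] = exp(-||x_n - mu_i||^2 / (2 * sigma^2))
    sq_dist = np.sum((X[:, None] - centres[None, :]) ** 2, axis=-1)
    Phi = np.exp(-sq_dist / (2.0 * sigma ** 2))
    # Stage 2: fit the output weights by linear least squares
    W, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return centres, sigma, W

# Example usage: centres, sigma, W = train_rbf_hybrid(X_train, y_train, n_hidden=10)
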
The XOR Problem in RBF Form

Recall that sensible RBFs are M Gaussians φj(x) centred at randomly chosen training data points:

φj(x) = exp(−‖x − μj‖² / (2σj²)),  with σj = dmax / √(2M)

where dmax is the maximum distance between the chosen centres.
To perform the XOR classification in an RBF network, one must begin by deciding how many basis functions are needed. Given that there are four training patterns and two classes, M = 2 seems a reasonable first guess.

Then the basis function centres need to be chosen.

The two separated zero targets seem a good random choice, so μ1 = (0, 0) and μ2 = (1, 1), and the distance between them is dmax = √2. That gives the basis functions:

φ1(x) = exp(−‖x − μ1‖²)  and  φ2(x) = exp(−‖x − μ2‖²)

since σ = dmax / √(2M) = 1/√2, so 2σ² = 1.

Example: The XOR Problem:
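
The sketch below works through this example in Python under the assumptions above (M = 2, centres (0, 0) and (1, 1), so 2σ² = 1); it maps the four XOR patterns into (φ1, φ2) space, where the two classes become linearly separable, and fits the output weights by least squares. NumPy and the added bias column are assumptions made for illustration.

import numpy as np

# The four XOR training patterns and their targets
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([0, 1, 1, 0], dtype=float)

# Centres on the two zero-target patterns; sigma = dmax / sqrt(2M) = 1/sqrt(2), so 2*sigma^2 = 1
mu = np.array([[0.0, 0.0], [1.0, 1.0]])
phi = np.exp(-np.sum((X[:, None] - mu[None, :]) ** 2, axis=-1))   # shape (4, 2)

# In (phi1, phi2) space the patterns (0, 1) and (1, 0) coincide, and a single
# linear output separates the classes; fit the weights (plus a bias) by least squares.
design = np.hstack([phi, np.ones((4, 1))])
w, *_ = np.linalg.lstsq(design, t, rcond=None)
y = design @ w            # outputs for the four XOR patterns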
