[Introduction to Fuzzy/Neural Systems]
[Back propagation algorithm]
Module 7 Exp 7: Back propagation algorithm
Course Learning Outcomes:
C4. Conduct experiments using advanced tools related to fuzzy and neural
systems.
C5. Extend existing knowledge in designing a feedback fuzzy controller and
single/multilayer neural networks based on the given industrial
requirements.
C7. Interpret and evaluate through written and oral forms.
C8. Lead multiple groups and projects with decision making responsibilities.
Topics
Exp 7: Back propagation algorithm
1. Aim/Objective:
State the objective of this experiment.
2. Theory
Provide an explanation of the back-propagation learning rule (a worked form of the weight-update equation is sketched after this list).
3. Procedure:
Give the step-by-step procedure for the implementation.
4. Program
Python code to implement the back-propagation training algorithm.
5. Result
State your inputs and the corresponding outputs, with figures.
6. Conclusion
Summarize what you have learned from the experiment
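For reference in the Theory section, the standard back-propagation weight update can be written as follows (a sketch in generic notation; here eta is the learning rate, t_j the target, o_j the output, a_i the input activation and f the activation function, none of which are defined elsewhere in this module):

E = \tfrac{1}{2}\sum_{j} (t_j - o_j)^2, \qquad
\Delta w_{ij} = -\eta\,\frac{\partial E}{\partial w_{ij}} = \eta\,\delta_j\,a_i,
\qquad
\delta_j =
\begin{cases}
(t_j - o_j)\,f'(\mathrm{net}_j) & \text{for an output unit } j,\\
f'(\mathrm{net}_j)\sum_{k}\delta_k w_{jk} & \text{for a hidden unit } j.
\end{cases}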
Exp 7: Back propagation algorithm
Python code to implement back-propagation training algorithm
import numpy as np

def sigmoid(x):
    return 1.0/(1.0 + np.exp(-x))

def sigmoid_prime(x):
    # x is the sigmoid output value, so the derivative is x*(1 - x)
    return x*(1.0 - x)

def tanh(x):
    return np.tanh(x)

def tanh_prime(x):
    # x is the tanh output value, so the derivative is 1 - x^2
    return 1.0 - x**2
class NeuralNetwork:
    def __init__(self, layers, activation='tanh'):
        if activation == 'sigmoid':
            self.activation = sigmoid
            self.activation_prime = sigmoid_prime
        elif activation == 'tanh':
            self.activation = tanh
            self.activation_prime = tanh_prime
        # Set weights
        self.weights = []
        # layers = [2,2,1]
        # range of weight values (-1,1)
        # input and hidden layers - random((2+1, 2+1)) : 3 x 3
        for i in range(1, len(layers) - 1):
            r = 2*np.random.random((layers[i-1] + 1, layers[i] + 1)) - 1
            self.weights.append(r)
        # output layer - random((2+1, 1)) : 3 x 1
        r = 2*np.random.random((layers[i] + 1, layers[i+1])) - 1
        self.weights.append(r)
    def fit(self, X, y, learning_rate=0.2, epochs=100000):
        # Add a column of ones to X
        # This is to add the bias unit to the input layer
        ones = np.atleast_2d(np.ones(X.shape[0]))
        X = np.concatenate((ones.T, X), axis=1)
        for k in range(epochs):
            if k % 10000 == 0: print('epochs:', k)
            # pick one training sample at random (stochastic updates)
            i = np.random.randint(X.shape[0])
            a = [X[i]]
            # forward pass: store the activation of every layer
            for l in range(len(self.weights)):
                dot_value = np.dot(a[l], self.weights[l])
                activation = self.activation(dot_value)
                a.append(activation)
            # output layer
            error = y[i] - a[-1]
            deltas = [error * self.activation_prime(a[-1])]
            # we need to begin at the second to last layer
            # (a layer before the output layer)
            for l in range(len(a) - 2, 0, -1):
                deltas.append(deltas[-1].dot(self.weights[l].T)*self.activation_prime(a[l]))
            # reverse
            # [level3(output)->level2(hidden)] => [level2(hidden)->level3(output)]
            deltas.reverse()
            # backpropagation
            # 1. Multiply its output delta and input activation
            #    to get the gradient of the weight.
            # 2. Subtract a ratio (percentage) of the gradient from the weight.
            for i in range(len(self.weights)):
                layer = np.atleast_2d(a[i])
                delta = np.atleast_2d(deltas[i])
                self.weights[i] += learning_rate * layer.T.dot(delta)

    def predict(self, x):
        a = np.concatenate((np.array([[1]]), np.array([x])), axis=1)
        for l in range(0, len(self.weights)):
            a = self.activation(np.dot(a, self.weights[l]))
        return a
if __name__ == '__main__':
    nn = NeuralNetwork([2, 2, 1])
    X = np.array([[0, 0],
                  [0, 1],
                  [1, 0],
                  [1, 1]])
    y = np.array([0, 1, 1, 0])
    nn.fit(X, y)
    for e in X:
        print(e, nn.predict(e))
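The same class also accepts activation='sigmoid'. A minimal sketch of how that option could be exercised on the XOR data is given below; X_xor and y_xor are simply renamed copies of the data above, and the 0.5 decision threshold is an assumption for reading the outputs, not part of the original listing.

# Sketch: train the network above with its sigmoid option on the XOR data
# and read the outputs through an assumed 0.5 decision threshold.
X_xor = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y_xor = np.array([0, 1, 1, 0])
nn_sig = NeuralNetwork([2, 2, 1], activation='sigmoid')
nn_sig.fit(X_xor, y_xor, learning_rate=0.2, epochs=100000)
for e in X_xor:
    out = nn_sig.predict(e)
    print(e, out, 'class:', int(out[0, 0] > 0.5))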
MATLAB code to implement a single iteration (one forward and backward pass) of the
back-propagation training algorithm using the sigmoidal activation function
clc;
clear all;
close all;
%% Initialize the parameters
i1=1; i2=4; i3=5;    % inputs
n=3;                 % number of inputs
b1=1; b2=1;          % bias inputs of the hidden and output layers
t1=0.1; t2=0.05;     % target outputs
wb1=0.5; wb2=0.5;    % bias weights
w1=0.1; w2=0.2;      % input-to-hidden weights (w1..w6)
w3=0.3; w4=0.4;
w5=0.5; w6=0.6;
w7=0.7; w8=0.8;      % hidden-to-output weights (w7..w10)
w9=0.9; w10=0.1;
% alpha=(1/1+n)
alpha=0.01;          % learning rate
%% Hidden layer
h1net=sum(i1*w1+i2*w3+i3*w5+b1*wb1);
h1=(1/(1+exp(-h1net)));
h2net=sum(i1*w2+i2*w4+i3*w6+b1*wb1);
h2=(1/(1+exp(-h2net)));
%% output layer
o1net=sum(h1*w7+h2*w9+b2*wb2);
o1=(1/(1+exp(-o1net)));
o2net=sum(h1*w8+h2*w10+b2*wb2);
o2=(1/(1+exp(-o2net)));
%% Error
E1=1/2*((t1-o1)^2);
E2=1/2*((t2-o2)^2);
E=E1+E2;
%% Error derivates
change_in_error_1=-(t1-o1);
change_in_error_2=-(t2-o2);
change_in_o1net=o1*(1-o1);
change_in_o2net=o2*(1-o2);
change_in_h1=change_in_error_1*change_in_o1net*w7+change_in_error_2*change_in_o2net*w8;
change_in_h2=change_in_error_1*change_in_o1net*w9+change_in_error_2*change_in_o2net*w10;
change_in_h1net=h1*(1-h1);
change_in_h2net=h2*(1-h2);
change_in_net_w7=h1*w7^(1-1);
change_in_net_w8=h1*w8^(1-1);
change_in_net_w9=h2*w9^(1-1);
change_in_net_w10=h2*w10^(1-1);
change_in_net_wb2=b2*wb2^(1-1);
change_in_net_w1=i1*w1^(1-1);
change_in_net_w2=i1*w2^(1-1);
change_in_net_w3=i2*w3^(1-1);
change_in_net_w4=i2*w4^(1-1);
change_in_net_w5=i3*w5^(1-1);
change_in_net_w6=i3*w6^(1-1);
change_in_net_wb1=b1*wb1^(1-1);
%% gradient descent Input to Hidden layer
G1=change_in_h1*change_in_h1net*change_in_net_w1;
G2=change_in_h2*change_in_h2net*change_in_net_w2;
G3=change_in_h1*change_in_h1net*change_in_net_w3;
G4=change_in_h2*change_in_h2net*change_in_net_w4;
G5=change_in_h1*change_in_h1net*change_in_net_w5;
G6=change_in_h2*change_in_h2net*change_in_net_w6;
% gradient for the hidden-layer bias weight wb1 (sum of the paths through h1 and h2)
Gb1=change_in_h1*change_in_h1net*change_in_net_wb1+change_in_h2*change_in_h2net*change_in_net_wb1;
%% gradient descent hidden to output layer
G7=change_in_error_1*change_in_o1net*change_in_net_w7;
G8=change_in_error_2*change_in_o2net*change_in_net_w8;
G9=change_in_error_1*change_in_o1net*change_in_net_w9;
G10=change_in_error_2*change_in_o2net*change_in_net_w10;
Gb2=change_in_error_2*change_in_o2net*change_in_net_wb2+change_in_error_1*change_in_o1net*change_in_net_wb2;
%% Updated weights
updated_W1=w1-alpha*G1;
updated_W2=w2-alpha*G2;
updated_W3=w3-alpha*G3;
updated_W4=w4-alpha*G4;
updated_W5=w5-alpha*G5;
updated_W6=w6-alpha*G6;
updated_W7=w7-alpha*G7;
updated_W8=w8-alpha*G8;
updated_W9=w9-alpha*G9;
updated_W10=w10-alpha*G10;
updated_Wb1=wb1-alpha*Gb1;
updated_Wb2=wb2-alpha*Gb2;
%%
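For comparison only (not part of the original listing), the following is a minimal NumPy sketch of the same 3-2-2 network that repeats the update step in a loop instead of stopping after one pass. The names W1, W2, wb1, wb2 are shorthand for the script's w1..w10 and bias weights, and the iteration count of 1000 is an arbitrary assumption.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# inputs, targets, learning rate and initial weights taken from the MATLAB script
x = np.array([1.0, 4.0, 5.0])           # i1, i2, i3
t = np.array([0.1, 0.05])               # t1, t2
alpha = 0.01
b1 = b2 = 1.0
wb1 = wb2 = 0.5                          # shared bias weights, as in the script
W1 = np.array([[0.1, 0.2],               # w1, w2  (i1 -> h1, h2)
               [0.3, 0.4],               # w3, w4  (i2 -> h1, h2)
               [0.5, 0.6]])              # w5, w6  (i3 -> h1, h2)
W2 = np.array([[0.7, 0.8],               # w7, w8  (h1 -> o1, o2)
               [0.9, 0.1]])              # w9, w10 (h2 -> o1, o2)

for step in range(1000):                 # iteration count is an assumption
    h = sigmoid(x @ W1 + b1 * wb1)       # hidden activations h1, h2
    o = sigmoid(h @ W2 + b2 * wb2)       # output activations o1, o2
    E = 0.5 * np.sum((t - o) ** 2)       # total squared error E1 + E2
    delta_o = -(t - o) * o * (1 - o)     # output-layer deltas
    delta_h = (delta_o @ W2.T) * h * (1 - h)   # hidden-layer deltas
    W2 -= alpha * np.outer(h, delta_o)   # mirrors updated_W7..W10
    wb2 -= alpha * b2 * np.sum(delta_o)  # mirrors updated_Wb2
    W1 -= alpha * np.outer(x, delta_h)   # mirrors updated_W1..W6
    wb1 -= alpha * b1 * np.sum(delta_h)  # mirrors updated_Wb1
print('error after training:', E)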
Exercise 1.
Lab Activity: Simulation
Design and develop the neural network system for the following experiment
Experiment 1: Back Propagation algorithm
1. Design and train a neural network system based on the back-propagation
algorithm using the hyperbolic tangent (sigmoidal) activation function (a
starting-point sketch follows this list).
2. Tune the neural network model, minimize the error by updating the
weights, and perform the testing.
3. Run the simulation as a group and explain the working principle of the
algorithm.
4. Interpret the output of the designed neural network system by varying the
inputs.
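As a starting point for item 1, a minimal sketch that reuses the NeuralNetwork class from the Python listing above with its default tanh activation; the two-input AND data set, layer sizes and epoch count are placeholders to be replaced by the group's own design.

# Sketch: reuse the NeuralNetwork class defined earlier (default tanh activation).
# The AND data set below is only a placeholder for the group's own data.
X_train = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y_train = np.array([0, 0, 0, 1])
net = NeuralNetwork([2, 2, 1], activation='tanh')
net.fit(X_train, y_train, learning_rate=0.2, epochs=100000)
for sample in X_train:
    print(sample, net.predict(sample))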
References and Supplementary Materials
Books and Journals
1. Van Rossum, G. (2007). Python Programming Language. In USENIX annual technical
conference (Vol. 41, p. 36).
2. SN, S. (2003). Introduction to artificial neural networks.
3. Rashid, T. (2016). Make your own neural network. CreateSpace Independent Publishing
Platform.
Online Supplementary Reading Materials
1. Chaitanya Singh. (n.d.). How to Install Python. Retrieved 14 May 2020, from
https://beginnersbook.com/2018/01/python-installation/