
Practical No : 8

Name : Shinde Jotiram Navanath


Class : T.E. AI&DS
Roll No. : 25

Title: Write a Python program for creating a Back Propagation Feed-forward neural network

Code:
import numpy as np

# Sigmoid activation function
def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Derivative of the sigmoid, expressed in terms of the sigmoid output
def sigmoid_derivative(x):
    return x * (1 - x)

# XOR input patterns and their target outputs
X = np.array([[0, 0],
              [0, 1],
              [1, 0],
              [1, 1]])
y = np.array([[0],
              [1],
              [1],
              [0]])

np.random.seed(1)

# Network architecture: 2 inputs, 4 hidden neurons, 1 output
input_size = 2
hidden_size = 4
output_size = 1

# Initialise weights randomly in the range [-1, 1)
W1 = 2 * np.random.random((input_size, hidden_size)) - 1
W2 = 2 * np.random.random((hidden_size, output_size)) - 1

learning_rate = 0.9
epochs = 10000

for epoch in range(epochs):
    # Forward pass
    hidden_input = np.dot(X, W1)
    hidden_output = sigmoid(hidden_input)
    final_input = np.dot(hidden_output, W2)
    final_output = sigmoid(final_input)

    # Back propagation of the error
    error = y - final_output
    d_final = error * sigmoid_derivative(final_output)
    error_hidden = d_final.dot(W2.T)
    d_hidden = error_hidden * sigmoid_derivative(hidden_output)

    # Weight updates
    W2 += learning_rate * hidden_output.T.dot(d_final)
    W1 += learning_rate * X.T.dot(d_hidden)

    # Report the mean squared error every 1000 epochs
    if (epoch + 1) % 1000 == 0:
        loss = np.mean(np.square(error))
        print(f"Epoch {epoch + 1}: Loss = {loss:.4f}")

print("\nFinal outputs after training:")
print(final_output)
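
After training, the learned weights W1 and W2 can be reused for inference on new input patterns. The sketch below is an illustrative addition (the predict helper does not appear in the original program); it simply repeats the forward pass from the training loop:

# Illustrative addition: predict the output for a new input pattern
# using the weights W1 and W2 learned above.
def predict(x, W1, W2):
    # Same forward pass as in training
    hidden = sigmoid(np.dot(x, W1))
    return sigmoid(np.dot(hidden, W2))

# Example: the four XOR patterns should map close to 0, 1, 1, 0
print(predict(np.array([[0, 1]]), W1, W2))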
Output:
