Exp no: Date:
IMPLEMENTATION OF UNINFORMED SEARCH ALGORITHMS
BREADTH-FIRST SEARCH
Aim:
To implement Breadth-First Search (BFS) in Python.
Algorithm:
Step 1: Start
Step 2: Start by putting any one of the graph's vertices at the back of the queue.
Step 3: Take the front item of the queue and add it to the visited list.
Step 4: Create a list of that vertex's adjacent nodes. Add those that are not in the visited list to the rear of the queue.
Step 5: Repeat Steps 3 and 4 until the queue is empty.
Step 6: Stop.
Program:
graph = {
    '5': ['3', '7'],
    '3': ['2', '4'],
    '7': ['8'],
    '2': ['3'],
    '4': ['8'],
    '8': []
}

visited = []   # List of visited nodes
queue = []     # Queue of nodes to explore

def bfs(visited, graph, node):   # Function for BFS
    visited.append(node)
    queue.append(node)
    while queue:                 # Loop until every reachable node is visited
        m = queue.pop(0)         # Take the front item of the queue
        print(m, end=" ")
        for neighbour in graph[m]:
            if neighbour not in visited:
                visited.append(neighbour)
                queue.append(neighbour)

# Driver code
print("Following is the Breadth-First Search")
bfs(visited, graph, '5')         # Function call
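Note: queue.pop(0) on a Python list takes O(n) time. As an optional variant that is not part of the prescribed program, the same traversal can be sketched with collections.deque, whose popleft() runs in O(1); the function name bfs_deque below is only illustrative.

from collections import deque

def bfs_deque(graph, start):
    visited = [start]            # Nodes already discovered
    queue = deque([start])       # deque allows O(1) removal from the front
    order = []                   # Visiting order returned to the caller
    while queue:
        m = queue.popleft()
        order.append(m)
        for neighbour in graph[m]:
            if neighbour not in visited:
                visited.append(neighbour)
                queue.append(neighbour)
    return order

print(bfs_deque(graph, '5'))     # Visits nodes in the order 5 3 7 2 4 8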
Result:
Thus the Python program to implement Breadth-First Search was executed and the output was verified.
Exp no: Date:
IMPLEMENTATION OF DEPTH FIRST SEARCH
Aim:
To implement Depth-First Search (DFS) in Python.
Algorithm:
Step 1: Start
Step 2: Start by putting any one of the graph's vertices on top of the stack.
Step 3: Take the top item of the stack and add it to the visited list.
Step 4: Create a list of that vertex's adjacent nodes. Add the ones that are not in the visited list to the top of the stack.
Step 5: Repeat Steps 3 and 4 until the stack is empty.
Step 6: Stop.
Program:
graph = {
    '5': ['3', '7'],
    '3': ['2', '4'],
    '7': ['8'],
    '2': ['3'],
    '4': ['8'],
    '8': []
}

visited = set()   # Set of visited nodes

def dfs(visited, graph, node):   # Recursive DFS
    if node not in visited:
        print(node)
        visited.add(node)
        for neighbour in graph[node]:
            dfs(visited, graph, neighbour)

print("Following is the depth-first search:")
dfs(visited, graph, '5')
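Note: the algorithm steps above describe DFS with an explicit stack, while the program uses recursion (the call stack plays the role of the stack). A minimal iterative sketch using a list as the stack is given below; the name dfs_stack is only illustrative. Neighbours are pushed in reverse so they are popped in the order listed in the graph, which makes the visiting order match the recursive program on this graph.

def dfs_stack(graph, start):
    visited = []
    stack = [start]              # Explicit stack replaces recursion
    while stack:
        node = stack.pop()       # Take the top item of the stack
        if node not in visited:
            visited.append(node)
            for neighbour in reversed(graph[node]):
                if neighbour not in visited:
                    stack.append(neighbour)
    return visited

print(dfs_stack(graph, '5'))     # Visits nodes in the order 5 3 2 4 8 7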
Result:
Thus the above Python program to implement Depth-First Search was executed and the output was verified.
Exp no: Date:
IMPLEMENTATION OF INFORMED SEARCH
A* ALGORITHM
Aim:
To write and implement the A* algorithm in Python.
Algorithm:
Step 1: Start.
Step 2: Place the starting node in the OPEN list and find its f(n) value, where f(n) = g(n) + h(n), g(n) is the cost from the start node to n, and h(n) is the heuristic estimate from n to the goal.
Step 3: Remove the node with the smallest f(n) value from OPEN. If it is the goal node, stop and return success.
Step 4: Else, expand the removed node and find all its successors.
Step 5: Find the f(n) value of all successors. Place the removed node in the CLOSED list.
Step 6: Go to Step 3.
Step 7: Stop.
Program:
class Graph:
    def __init__(self, adjac_list):
        self.adjac_list = adjac_list

    def get_neighbours(self, v):
        return self.adjac_list[v]

    def h(self, n):
        # Heuristic estimates (all 1 here)
        H = {
            'A': 1,
            'B': 1,
            'C': 1,
            'D': 1,
        }
        return H[n]

    def a_star_algorithm(self, start, stop):
        open_lst = set([start])   # Discovered nodes not yet expanded
        closed_lst = set([])      # Nodes already expanded
        poo = {}                  # g(n): cost from the start node to n
        par = {}                  # Parent map used to reconstruct the path
        poo[start] = 0
        par[start] = start
        while len(open_lst) > 0:
            # Choose the open node with the smallest f(n) = g(n) + h(n)
            n = None
            for v in open_lst:
                if n is None or poo[v] + self.h(v) < poo[n] + self.h(n):
                    n = v
            if n is None:
                print('Path does not exist!')
                return None
            if n == stop:
                # Goal reached: follow parents back to the start
                reconst_path = []
                while par[n] != n:
                    reconst_path.append(n)
                    n = par[n]
                reconst_path.append(start)
                reconst_path.reverse()
                print('Path found:', reconst_path)
                return reconst_path
            for m, weight in self.get_neighbours(n):
                if m not in open_lst and m not in closed_lst:
                    open_lst.add(m)
                    par[m] = n
                    poo[m] = poo[n] + weight
                elif poo[m] > poo[n] + weight:
                    # A cheaper path to m was found; update its cost and parent
                    poo[m] = poo[n] + weight
                    par[m] = n
                    if m in closed_lst:
                        closed_lst.remove(m)
                        open_lst.add(m)
            open_lst.remove(n)
            closed_lst.add(n)
        print('Path does not exist!')
        return None

adjac_list = {
    'A': [('B', 1), ('C', 3), ('D', 7)],
    'B': [('D', 5)],
    'C': [('D', 12)]
}
graph1 = Graph(adjac_list)
graph1.a_star_algorithm('A', 'D')
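As a quick check (worked out from the edge weights above, not an output recorded in this record), the call should print the route A, B, D with total cost 6, which is cheaper than the direct edge A to D of cost 7. The same graph object can also be queried for a different goal, for example:

graph1.a_star_algorithm('A', 'C')   # Expected to print: Path found: ['A', 'C']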
Result:
Thus the above Python program to implement the informed search A* algorithm was written and executed successfully.
Exp no: Date:
IMPLEMENTATION OF NAIVE BAYES MODEL
Aim:
To write a Python program to implement the Naive Bayes model.
Algorithm:
Step 1: Start.
Step 2: Use NumPy to convert the data into a suitable format.
Step 3: Split the data into a training set and a test set. Use 80% of the data for training and the remaining 20% for testing.
Step 4: Standardize the features of the dataset.
Step 5: Import GaussianNB from sklearn and train the Naive Bayes model using the training data.
Step 6: Test the model using the testing data and measure the accuracy.
Step 7: Import matplotlib to visualize the accuracy of the model using a bar graph.
Step 8: Make predictions using the model on the testing data.
Step 9: Stop.
Program:
import pandas as pd
import numpy as np
from sklearn import preprocessing
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.naive_bayes import GaussianNB
import matplotlib.pyplot as plt
# Read the dataset
df = pd.read_csv("/content/drive/MyDrive/Datasets/play_tennis.csv")
print(df)
# Label Encoding
le = preprocessing.LabelEncoder()
for i in df.columns:
    if isinstance(df[i][0], str):
        df[i] = le.fit_transform(df[i])
print(df)
# Define features and target
x = df.iloc[:, 1:5]
y = df.iloc[:, -1]
print(x)
print(y)
# Train-test split
x_train, x_test, y_train, y_test = train_test_split(x, y, random_state=1,
test_size=0.2)
print(x_train.shape, y_train.shape, x_test.shape, y_test.shape)
# Standardization
scaler = StandardScaler()
scaler.fit(x_train)
x_train_std = scaler.transform(x_train)
x_test_std = scaler.transform(x_test)
print('Standardized features:')
print('Training data:\n', x_train_std)
print('Testing data:\n', x_test_std)
# Naive Bayes model
model = GaussianNB()
model.fit(x_train_std, y_train)
print('Training Accuracy = {}'.format(model.score(x_train_std, y_train)))
print('Testing Accuracy = {}'.format(model.score(x_test_std, y_test)))
# Visualization
x_labels = np.array(["Training accuracy", "Testing accuracy"])
y_values = np.array([model.score(x_train_std, y_train),
model.score(x_test_std, y_test)])
plt.title('Accuracy - Naive Bayes model', fontsize=24)
plt.ylabel('Accuracy Value', fontsize=14)
plt.xlabel('Accuracy training/testing', fontsize=14)
plt.bar(x_labels, y_values, color="hotpink", width=0.4)
plt.show()
# Predictions
y_predict = model.predict(x_test_std)   # Predict on the standardized test features
print(y_predict)
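The predictions can also be scored directly against the held-out labels. A minimal sketch, assuming the same environment and using sklearn.metrics:

from sklearn.metrics import accuracy_score, confusion_matrix

# Compare the predicted labels with the true test labels
print("Test accuracy:", accuracy_score(y_test, y_predict))
print("Confusion matrix:\n", confusion_matrix(y_test, y_predict))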
Result:
Thus the above Python program to implement the Naive Bayes model was executed and the output was verified successfully.