AIML Lab 3 & 4

The document provides two Python programs demonstrating the use of decision trees and the Naive Bayes algorithm to classify the Iris dataset. The decision tree model achieves an accuracy of 1.0 and includes visualization of the tree, while the Naive Bayes model prints both correct and wrong predictions along with the accuracy and classification report. Both programs utilize sklearn for model training, evaluation, and data handling.


3. Write a program to demonstrate the working of the decision tree. Use an appropriate data set for building the decision tree and apply this knowledge to classify a new sample.

import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, plot_tree
from sklearn.metrics import accuracy_score, classification_report

# Step 1: Load the Iris dataset
data = load_iris()
X = data.data    # Features
y = data.target  # Target (class labels)

# Step 2: Split the dataset into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Step 3: Create and train the decision tree model
model = DecisionTreeClassifier(random_state=42)
model.fit(X_train, y_train)

# Step 4: Make predictions on the test set
y_pred = model.predict(X_test)

# Step 5: Evaluate the model
accuracy = accuracy_score(y_test, y_pred)
print("Accuracy:", accuracy)
print("Classification Report:\n", classification_report(y_test, y_pred))

# Step 6: Visualize the decision tree
plt.figure(figsize=(12, 8))
plot_tree(model, feature_names=data.feature_names, class_names=data.target_names, filled=True)
plt.title('Decision Tree Visualization')
plt.show()

# Step 7: Classify a new sample
new_sample = np.array([[5.1, 3.5, 1.4, 0.2]])  # Example input (sepal length, sepal width, petal length, petal width)
prediction = model.predict(new_sample)
print("New Sample Classification:", data.target_names[prediction[0]])

Output:

Accuracy: 1.0

              precision    recall  f1-score   support

           0       1.00      1.00      1.00        10
           1       1.00      1.00      1.00         9
           2       1.00      1.00      1.00        11

    accuracy                           1.00        30
   macro avg       1.00      1.00      1.00        30
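
As an optional check, the fitted tree can also be printed as plain-text decision rules. The lines below are a minimal sketch using sklearn.tree.export_text and assume that the model and data objects from the program above are still in scope.

from sklearn.tree import export_text

# Print the learned split rules as indented text, using the Iris feature names
rules = export_text(model, feature_names=list(data.feature_names))
print(rules)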


4. Write a program to implement the Naive Bayes algorithm to classify the Iris dataset. Print both correct and wrong predictions.
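
Background note: Gaussian Naive Bayes (the variant used below) applies Bayes' theorem under the assumption that, within each class $y$, every feature $x_i$ follows a normal distribution:

$$P(x_i \mid y) = \frac{1}{\sqrt{2\pi\sigma_y^2}} \exp\!\left(-\frac{(x_i - \mu_y)^2}{2\sigma_y^2}\right)$$

A sample is assigned to the class that maximizes $P(y)\prod_i P(x_i \mid y)$, where $\mu_y$ and $\sigma_y^2$ are the per-class mean and variance estimated from the training data.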

import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score, classification_report

# Step 1: Load the Iris dataset
data = load_iris()
X = data.data    # Features
y = data.target  # Target (class labels)

# Step 2: Split the dataset into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Step 3: Create and train the Naive Bayes model
model = GaussianNB()
model.fit(X_train, y_train)

# Step 4: Make predictions on the test set
y_pred = model.predict(X_test)

# Step 5: Print correct and wrong predictions
correct_predictions = [(X_test[i], y_test[i]) for i in range(len(y_test)) if y_test[i] == y_pred[i]]
wrong_predictions = [(X_test[i], y_test[i], y_pred[i]) for i in range(len(y_test)) if y_test[i] != y_pred[i]]

print("Correct Predictions:")
for sample, label in correct_predictions:
    print(f"Sample: {sample}, Label: {data.target_names[label]}")

print("\nWrong Predictions:")
for sample, true_label, pred_label in wrong_predictions:
    print(f"Sample: {sample}, True Label: {data.target_names[true_label]}, "
          f"Predicted Label: {data.target_names[pred_label]}")

# Step 6: Evaluate the model
accuracy = accuracy_score(y_test, y_pred)
print("\nAccuracy:", accuracy)
print("\nClassification Report:\n",
      classification_report(y_test, y_pred, target_names=data.target_names))
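
For symmetry with program 3, the trained Naive Bayes model can also classify a new sample. The lines below are a minimal sketch that reuses the model and data objects from above with the same illustrative measurements used earlier.

# Step 7 (optional): Classify a new sample with the Naive Bayes model
new_sample = np.array([[5.1, 3.5, 1.4, 0.2]])  # sepal length, sepal width, petal length, petal width
prediction = model.predict(new_sample)
probabilities = model.predict_proba(new_sample)
print("New Sample Classification:", data.target_names[prediction[0]])
print("Class Probabilities:", probabilities[0])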
