
PRATHYUSHA ENGINEERING COLLEGE

DEPARTMENT OF AI&DS

LAB MANUAL

FOR

CCS349-IMAGE AND VIDEO ANALYTICS LABORATORY

(Regulation 2021, VI Semester)

(EVEN Semester)

ACADEMIC YEAR: 2023 - 2024

Exp. No.2
QUADTREE

AIM:
To write a program that derives the quadtree representation of an image
using the homogeneity criterion of equal intensity.

Algorithm:
1. Define a quadtree node structure to represent each node in the
quadtree. Each node should contain the following information:
● Position (x, y): the top-left corner of the node within the image.
● Size: the width and height of the node.
● Color: the dominant color of the node.
● Children: an array or a dictionary to store child nodes.
● Termination condition: a condition that determines when to stop
subdividing.
2. Initialize the quadtree by creating the root node, which represents the
entire image.
3. Define the termination condition, which could be based on a threshold
for color similarity, a maximum depth, or any other criterion (a sketch of a
threshold-based test follows this list). If the termination condition is met,
mark the current node as a leaf node.
4. If the termination condition is not met, subdivide the current node into
four quadrants, each representing a subregion of the image:
● Divide the current node's size by 2.
● Create four child nodes, one for each quadrant.
● Determine the dominant color for each quadrant.
● Recursively apply the quadtree algorithm to each child node.
5. Repeat the subdivision process for each child node until the termination
condition is met for each leaf node.
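The program below implements the strictest version of this criterion: a block
becomes a leaf only when every pixel in it is identical. As an aside (a minimal
sketch, not part of the lab program), a threshold-based homogeneity test as
described in step 3 could look like this; the function name and the threshold
value 5.0 are illustrative assumptions:

import numpy as np

def is_homogeneous(region, threshold=5.0):
    # Treat the region as homogeneous when no colour channel's
    # standard deviation exceeds the threshold.
    return bool((region.std(axis=(0, 1)) < threshold).all())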

PROGRAM:
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
from operator import add
from functools import reduce

img = mpimg.imread(r'C:\Users\lenovo\Pictures\image.jpeg')
print(img.shape)
plt.imshow(img)
plt.show()

def split4(image):
    """Split an image into four quadrants (NW, NE, SW, SE)."""
    half_split = np.array_split(image, 2)
    res = map(lambda x: np.array_split(x, 2, axis=1), half_split)
    return reduce(add, res)

def concatenate4(north_west, north_east, south_west, south_east):
    """Reassemble four quadrants into a single image."""
    top = np.concatenate((north_west, north_east), axis=1)
    bottom = np.concatenate((south_west, south_east), axis=1)
    return np.concatenate((top, bottom), axis=0)

def calculate_mean(img):
    """Mean colour of an image region (one value per channel)."""
    return np.mean(img, axis=(0, 1))

def checkEqual(myList):
    """Return True if every item equals the first (the homogeneity test)."""
    first = myList[0]
    return all((x == first).all() for x in myList)

split_img = split4(img)
print(split_img[0].shape)

fig, axs = plt.subplots(2, 2)
axs[0, 0].imshow(split_img[0])
axs[0, 1].imshow(split_img[1])
axs[1, 0].imshow(split_img[2])
axs[1, 1].imshow(split_img[3])
for ax in axs.flat:
    ax.set_aspect('equal')
plt.show()

class QuadTree:

    def insert(self, img, level=0):
        self.level = level
        # uint8 so matplotlib can render the block colours directly
        self.mean = calculate_mean(img).astype(np.uint8)
        self.resolution = (img.shape[0], img.shape[1])
        self.final = True

        # Subdivide only while the region is not homogeneous
        if not checkEqual(img):
            split_img = split4(img)
            self.final = False
            self.north_west = QuadTree().insert(split_img[0], level + 1)
            self.north_east = QuadTree().insert(split_img[1], level + 1)
            self.south_west = QuadTree().insert(split_img[2], level + 1)
            self.south_east = QuadTree().insert(split_img[3], level + 1)

        return self

    def get_image(self, level):
        # A leaf, or a node at the requested depth, is rendered as a
        # solid block of its mean colour
        if self.final or self.level == level:
            return np.tile(self.mean, (self.resolution[0], self.resolution[1], 1))

        return concatenate4(
            self.north_west.get_image(level),
            self.north_east.get_image(level),
            self.south_west.get_image(level),
            self.south_east.get_image(level))

means = np.array(list(map(calculate_mean,
                          split_img))).astype(np.uint8).reshape(2, 2, 3)
print(means)
plt.imshow(means)
plt.show()

quadtree = QuadTree().insert(img)
plt.imshow(quadtree.get_image(1))
plt.show()
plt.imshow(quadtree.get_image(3))
plt.show()
plt.imshow(quadtree.get_image(7))
plt.show()
plt.imshow(quadtree.get_image(10))
plt.show()

OUTPUT:
(74, 121, 3)

Result:
Thus the program that derives the quadtree representation of
an image using the homogeneity criterion of equal intensity is
executed successfully.

EXP NO : 3

GEOMETRIC TRANSFORMS ON IMAGE

AIM:
To write programs for the following geometric transforms: (a) Rotation (b)
Change of scale (c) Skewing (d) Affine transform calculated from three pairs of
corresponding points (e) Bilinear transform calculated from four pairs of corresponding
points

ALGORITHM:
1) Image Rotation:

Step 1: Load the original image using the PIL library.

Step 2: Rotate the image by 180 degrees using the rotate() method.

Step 3: Rotate the image by 90 degrees clockwise using the transpose() method.

Step 4: Rotate the image by a custom angle, such as 60 degrees, using the rotate()
method (see the note after these steps).

Step 5: Display the rotated images.
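A note on rotate() (an aside, not a step of the manual's algorithm): by default
PIL keeps the original canvas size, so a 60-degree rotation crops the corners
of the image. Passing expand=True grows the canvas to fit the whole rotated
image, as in this minimal sketch:

from PIL import Image

img = Image.open(r"C:\Users\lenovo\Pictures\image.jpeg")
img.rotate(60, expand=True).show()  # canvas enlarged so no corners are lost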

2) Change of Scale:

Step 1: Load the original image using OpenCV.

Step 2: Determine the desired scale percentage (e.g., 40% of the original size).

Step 3: Calculate the new dimensions based on the scale percentage.

Step 4: Resize the image using the resize() function from OpenCV.

Step 5: Display the resized image.

3) Skewing:

Step 1: Load the original image using the skimage library.


Step 2: Determine the skew angle of the image using the determine_skew() function.

Step 3: Deskew the image by rotating it in the opposite direction of the skew angle using
rotate().

Step 4: Save the deskewed image using imsave().

Step 5: Display the deskewed image.

4) Perform an Affine Transformation:

Step 1: Load the original image using OpenCV.

Step 2: Convert the color space of the image to RGB using cvtColor().

Step 3: Define the coordinates of triangular vertices in the source image.

Step 4: Define the coordinates of the corresponding triangular vertices in the output
image.

Step 5: Create an affine transformation matrix using getAffineTransform().

Step 6: Apply the matrix to the image using warpAffine() and display the
input and output images.

5) Perspective Transformation:

Step 1: Load the original image using OpenCV.

Step 2: Define the source points and destination points for perspective transformation.

Step 3: Calculate the perspective transform matrix using getPerspectiveTransform()
(see the sketch after these steps).

Step 4: Apply the perspective transformation to the image using warpPerspective().

Step 5: Display the original and transformed images.
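To make Step 3 concrete: the 3x3 matrix returned by getPerspectiveTransform()
maps a point (x, y) by lifting it to homogeneous coordinates, multiplying, and
dividing by the third component; warpPerspective() simply applies this mapping
to every pixel. A minimal sketch with illustrative point pairs (not the values
used in the program below):

import numpy as np
import cv2

src = np.float32([[0, 0], [100, 0], [0, 100], [100, 100]])
dst = np.float32([[10, 10], [90, 20], [20, 90], [95, 95]])
M = cv2.getPerspectiveTransform(src, dst)

x, y = 50, 50
vec = M @ np.array([x, y, 1.0])   # lift to homogeneous coordinates
mapped = vec[:2] / vec[2]         # perspective divide
print(mapped)                     # where warpPerspective would send (50, 50)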

PROGRAM:

from PIL import Image
import cv2
import numpy as np
from skimage import io
from skimage.color import rgb2gray
from skimage.transform import rotate
import matplotlib.pyplot as plt
from deskew import determine_skew

# (a) Image rotation
Original_Image = Image.open(r"C:\Users\lenovo\Pictures\image.jpeg")
rotated_image1 = Original_Image.rotate(180)                  # rotate by 180 degrees
rotated_image2 = Original_Image.transpose(Image.ROTATE_90)   # rotate by 90 degrees
rotated_image3 = Original_Image.rotate(60)                   # rotate by a custom angle
rotated_image1.show()
rotated_image2.show()
rotated_image3.show()

# (b) Change of scale
img = cv2.imread(r"C:\Users\lenovo\Pictures\image.jpeg", cv2.IMREAD_UNCHANGED)
print('Original Dimensions : ', img.shape)
scale_percent = 40  # percent of original size
width = int(img.shape[1] * scale_percent / 100)
height = int(img.shape[0] * scale_percent / 100)
dim = (width, height)
# resize image
resized = cv2.resize(img, dim, interpolation=cv2.INTER_AREA)
print('Resized Dimensions : ', resized.shape)
cv2.imshow("Resized image", resized)
cv2.waitKey(0)
cv2.destroyAllWindows()

# (c) Skewing
# Step 1: Load the image
image = io.imread(r'C:\Users\lenovo\Pictures\image.jpeg')

# Step 2: Determine the skew angle (determine_skew expects a grayscale image)
grayscale = rgb2gray(image)
angle = determine_skew(grayscale)

# Step 3: Deskew the image by rotating it against the skew angle
deskewed = rotate(image, angle, resize=True) * 255

# Save the deskewed image
io.imsave(r'C:\Users\lenovo\Pictures\deskewed_image.jpeg',
          deskewed.astype(np.uint8))

# (d) Affine transform calculated from three pairs of corresponding points
img = cv2.imread(r"C:\Users\lenovo\Pictures\deskewed_image.jpeg")
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
rows, cols, ch = img.shape

# Coordinates of triangular vertices in the source image
pt1 = np.float32([[50, 50], [200, 50], [50, 200]])

# Coordinates of the corresponding triangular vertices in the output image
pt2 = np.float32([[10, 100], [200, 50], [100, 250]])

# Creating the transformation matrix and applying it
Mat = cv2.getAffineTransform(pt1, pt2)
dst = cv2.warpAffine(img, Mat, (cols, rows))

# Plotting the images
plt.figure(figsize=(10, 10))
plt.subplot(121)
plt.imshow(img)
plt.title('Input')
plt.subplot(122)
plt.imshow(dst)
plt.title('Output')
plt.show()

# (e) Bilinear (perspective) transform from four pairs of corresponding points
image = cv2.imread(r"C:\Users\lenovo\Pictures\image1.jpeg")
src_points = np.float32([[0, 0], [image.shape[1]-1, 0], [0, image.shape[0]-1],
                         [image.shape[1]-1, image.shape[0]-1]])
dst_points = np.float32([[50, 50], [image.shape[1]-101, 50],
                         [50, image.shape[0]-51],
                         [image.shape[1]-101, image.shape[0]-51]])

perspective_matrix = cv2.getPerspectiveTransform(src_points, dst_points)
transformed_image = cv2.warpPerspective(image, perspective_matrix,
                                        (image.shape[1], image.shape[0]))

cv2.imshow('Original Image', image)
cv2.imshow('Transformed Image', transformed_image)
cv2.waitKey(0)
cv2.destroyAllWindows()

OUTPUT:

a) Rotation

b) Change of scale
Original Dimensions : (148, 242, 3)
Resized Dimensions : (59, 96, 3)

c) & d) Skewing and affine transform

e) Bilinear transform calculated from four pairs of corresponding points

RESULT:

Thus the programs for the following geometric transforms: (a) Rotation,
(b) Change of scale, (c) Skewing, (d) Affine transform calculated from
three pairs of corresponding points, and (e) Bilinear transform calculated
from four pairs of corresponding points are executed successfully and the
output is verified.

EXP NO :4

OBJECT DETECTION AND RECOGNITION

AIM : To develop a program to implement Object Detection and Recognition

ALGORITHM:
Step 1: Import Necessary Libraries
● Import the required libraries for image processing, object detection,
and visualization.

Step 2: Define Constants and Functions


● Define any constants needed for visualization (e.g., margin, font size)
and functions to perform tasks such as visualizing detection results.

Step 3: Load Image and Model


● Load the input image using OpenCV.
● Create an ObjectDetector object using MediaPipe.
● Specify options for the ObjectDetector, such as the model path and
score threshold.

Step 4: Detect Objects in the Image


● Convert the loaded image to a format compatible with MediaPipe.
● Use the ObjectDetector to detect objects in the image.
● Retrieve the detection results.

Step 5: Visualize Detection Results


● Create a copy of the input image for visualization purposes.
● Iterate over the detection results.
● Draw bounding boxes around detected objects on the image.
● Display labels and scores for each detected object.
● Convert the annotated image from BGR to RGB format (if necessary).
● Display the annotated image using a suitable method (e.g., OpenCV's
imshow).

PROGRAM :
import cv2
import numpy as np
import mediapipe as mp
from mediapipe.tasks import python
from mediapipe.tasks.python import vision

MARGIN = 10          # pixels
ROW_SIZE = 10        # pixels
FONT_SIZE = 1
FONT_THICKNESS = 1
TEXT_COLOR = (255, 0, 0)  # red

def visualize(image, detection_result) -> np.ndarray:
    """Draws bounding boxes on the input image and returns it.
    Args:
        image: The input RGB image.
        detection_result: The list of all "Detection" entities to be visualized.
    Returns:
        Image with bounding boxes.
    """
    for detection in detection_result.detections:
        # Draw the bounding box
        bbox = detection.bounding_box
        start_point = bbox.origin_x, bbox.origin_y
        end_point = bbox.origin_x + bbox.width, bbox.origin_y + bbox.height
        cv2.rectangle(image, start_point, end_point, TEXT_COLOR, 3)

        # Draw the label and score
        category = detection.categories[0]
        category_name = category.category_name
        probability = round(category.score, 2)
        result_text = category_name + ' (' + str(probability) + ')'
        text_location = (MARGIN + bbox.origin_x,
                         MARGIN + ROW_SIZE + bbox.origin_y)
        cv2.putText(image, result_text, text_location, cv2.FONT_HERSHEY_PLAIN,
                    FONT_SIZE, TEXT_COLOR, FONT_THICKNESS)

    return image

IMAGE_FILE = r'C:\Users\Administrator\Documents\obj1.jpeg'
img = cv2.imread(IMAGE_FILE)

# STEP 2: Create an ObjectDetector object.
base_options = python.BaseOptions(
    model_asset_path=r'C:\Users\Administrator\Downloads\efficientdet_lite0.tflite')
options = vision.ObjectDetectorOptions(base_options=base_options,
                                       score_threshold=0.3)
detector = vision.ObjectDetector.create_from_options(options)

# STEP 3: Load the input image.
image = mp.Image.create_from_file(IMAGE_FILE)

# STEP 4: Detect objects in the input image.
detection_result = detector.detect(image)

# STEP 5: Process the detection result. In this case, visualize it.
image_copy = np.copy(image.numpy_view())
annotated_image = visualize(image_copy, detection_result)
rgb_annotated_image = cv2.cvtColor(annotated_image, cv2.COLOR_BGR2RGB)
cv2.imshow('image', rgb_annotated_image)
cv2.waitKey(0)            # keep the window open until a key is pressed
cv2.destroyAllWindows()

OUTPUT:

RESULT:
Thus the program to implement Object Detection and Recognition is
executed successfully and output is verified.

EXP NO :5

MOTION ANALYSIS USING MOVING EDGES

AIM :
To develop a program for Motion Analysis using Moving Edges

ALGORITHM:
Step 1: Import necessary libraries.

Step 2: Open the video file using OpenCV's VideoCapture function and retrieve
the width and height of the frames.

Step 3: Define the codec and create a VideoWriter object to save the output
video.

Step 4: Read the first two frames from the video.

Step 5: Start a loop to process each frame in the video while the video
remains open.

Step 6: Calculate the absolute difference between consecutive frames
(illustrated after this list), convert the difference image to grayscale,
and apply Gaussian blur to reduce noise.

Step 7: Apply thresholding to create a binary image highlighting the


differences, then dilate the thresholded image to fill gaps in the objects.

Step 8: Find contours in the dilated image to detect moving objects, iterate
over each contour, and filter out small contours based on area.

Step 9: Draw rectangles around detected objects and annotate them with a
text indicating movement.

Step 10: Write the annotated frame to the output video.

Step 11: Display the annotated frame and update the previous frame with the
current frame for the next iteration.

Step 12: Read the next frame from the video.

Step 13: Check for the 'Esc' key press to break out of the loop.

Step 14: Close all OpenCV windows, release the video capture and video writer
objects.
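Steps 6 and 7 above are the heart of the method. A tiny illustration with
synthetic data (not frames from the lab video): cv2.absdiff() highlights the
pixels that changed between two frames, and cv2.threshold() turns the
difference into a binary motion mask.

import numpy as np
import cv2

frame1 = np.zeros((4, 4), dtype=np.uint8)   # "previous" frame, all dark
frame2 = frame1.copy()
frame2[1:3, 1:3] = 200                      # simulate a bright moving object

diff = cv2.absdiff(frame1, frame2)          # per-pixel absolute difference
_, mask = cv2.threshold(diff, 20, 255, cv2.THRESH_BINARY)
print(mask)                                 # 255 where motion occurred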

PROGRAM:

import cv2
import numpy as np

cap = cv2.VideoCapture(r'C:\Users\lenovo\Downloads\vid.mp4')
frame_width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
frame_height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
fourcc = cv2.VideoWriter_fourcc('X', 'V', 'I', 'D')
out = cv2.VideoWriter("output.mp4", fourcc, 5.0, (frame_width, frame_height))

# Read the first two frames
ret, frame1 = cap.read()
ret, frame2 = cap.read()

while cap.isOpened() and ret:
    # The difference between consecutive frames highlights moving edges
    diff = cv2.absdiff(frame1, frame2)
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (5, 5), 0)
    _, thresh = cv2.threshold(blur, 20, 255, cv2.THRESH_BINARY)
    dilated = cv2.dilate(thresh, None, iterations=3)
    contours, _ = cv2.findContours(dilated, cv2.RETR_TREE,
                                   cv2.CHAIN_APPROX_SIMPLE)

    for contour in contours:
        (x, y, w, h) = cv2.boundingRect(contour)
        if cv2.contourArea(contour) < 900:
            continue  # ignore small movements
        cv2.rectangle(frame1, (x, y), (x+w, y+h), (0, 255, 0), 2)
        cv2.putText(frame1, "Status: {}".format('Movement'), (10, 20),
                    cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 3)

    out.write(frame1)
    cv2.imshow('frame', frame1)

    # Advance to the next frame pair
    frame1 = frame2
    ret, frame2 = cap.read()

    if cv2.waitKey(40) == 27:  # 'Esc' key
        break

cv2.destroyAllWindows()
cap.release()
out.release()

OUTPUT:

RESULT:
Thus the program to implement Motion Analysis using Moving Edges is
executed successfully and output is verified.

EXP NO :6

FACE DETECTION AND RECOGNITION


AIM:
To develop a program for Face Detection and Recognition

ALGORITHM:
Step 1: Import Libraries
Import necessary libraries such as face_recognition, sklearn, os, and cv2.

Step 2: Initialize Variables


Create empty lists encodings and names to store face encodings and
corresponding labels.

Step 3: Set Training and Test Directories


Define paths for the training directory and the test image.

Step 4: Loop Through Training Data


Iterate through each person's directory in the training directory.

Step 5: Extract Face Encodings


For each image in the person's directory, load the image, detect the
face, and extract its encoding using face_recognition library.

Step 6: Add Encodings and Labels


If exactly one face is detected in the image, append its encoding to
encodings and its label (person's name) to names.

Step 7: Train the Classifier

Create and train a Support Vector Classifier (SVC) using svm.SVC from
sklearn (a standalone sketch of this step follows the algorithm).

Step 8: Load and Detect Faces in Test Image


Load the test image and detect faces using
face_recognition.face_locations.

Step 9: Draw Rectangles and Labels on Faces


Loop through each detected face in the test image, draw rectangles
around them using cv2.rectangle, and put labels on them using cv2.putText.

Step 10: Display Result


Display the modified image with rectangles and labels using cv2.imshow,
and wait for user input to close the window.
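A standalone sketch of the classification step (Step 7) in isolation:
face_recognition produces a 128-dimensional encoding per face, and the SVC
learns to map encodings to names. The random vectors below are stand-ins for
real encodings, an assumption made purely for illustration:

import numpy as np
from sklearn import svm

rng = np.random.default_rng(0)
encodings = rng.normal(size=(6, 128))  # stand-ins for 128-d face encodings
names = ["alice", "alice", "alice", "bob", "bob", "bob"]

clf = svm.SVC(gamma="scale")
clf.fit(encodings, names)
print(clf.predict(encodings[:1]))      # predicts a name for the first vector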

PROGRAM:
import face_recognition
from sklearn import svm
import os
import cv2

encodings = []
names = []

train_dir = r'C:\Users\lenovo\Desktop\train'
test_image_path = r'C:\Users\lenovo\Desktop\S.jpeg'

for person in os.listdir(train_dir):
    person_dir = os.path.join(train_dir, person)
    pix = os.listdir(person_dir)

    for person_img in pix:
        img_path = os.path.join(person_dir, person_img)
        face = face_recognition.load_image_file(img_path)
        face_bounding_boxes = face_recognition.face_locations(face)

        # Use the image only if it contains exactly one face
        if len(face_bounding_boxes) == 1:
            face_enc = face_recognition.face_encodings(face)[0]
            # Add the face encoding with its corresponding label (name)
            encodings.append(face_enc)
            names.append(person)
        else:
            print(person + "/" + person_img +
                  " was skipped and can't be used for training")

# Train a Support Vector Classifier on the encodings
clf = svm.SVC(gamma='scale')
clf.fit(encodings, names)

# Load the test image with unknown faces into a numpy array
test_image = face_recognition.load_image_file(test_image_path)

# Find all the faces in the test image using the default HOG-based model
face_locations = face_recognition.face_locations(test_image)
no = len(face_locations)
print("Number of faces detected: ", no)
print("Image found")

# Compute one encoding per detected face so each face gets its own prediction
test_image_encs = face_recognition.face_encodings(test_image, face_locations)

image = cv2.imread(test_image_path)

for face_location, face_enc in zip(face_locations, test_image_encs):
    name = clf.predict([face_enc])
    top, right, bottom, left = face_location
    cv2.rectangle(image, (left, top), (right, bottom), (0, 0, 255), 2)
    cv2.putText(image, str(*name), (left, top),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 0, 0), 2)

cv2.imshow('image', image)
cv2.waitKey(0)
cv2.destroyAllWindows()

OUTPUT:

RESULT:
Thus the program to implement Face Detection and Recognition is
executed successfully and output is verified.

EXP NO : 7

EVENT DETECTION IN VIDEO SURVEILLANCE

AIM:

To write a program for event detection in a video surveillance system.

ALGORITHM:
Step 1: Import the OpenCV library (import cv2).

Step 2: Load the pre-trained Haar cascade classifier for full body detection
using cv2.CascadeClassifier.

Step 3: Open the video file for processing using cv2.VideoCapture.

Step 4: Enter a loop to read frames from the video until there are no more
frames using cap.read().

Step 5: Convert each frame to grayscale using cv2.cvtColor.

Step 6: Detect pedestrians in the grayscale frame using the Haar cascade
classifier with pedestrian_cascade.detectMultiScale.

Step 7: Draw rectangles around the detected pedestrians on the original frame
using cv2.rectangle.

Step 8: Display the frame with rectangles showing detected pedestrians using
cv2.imshow.

Step 9: If the 'q' key is pressed, break the loop using cv2.waitKey.

PROGRAM:
import cv2

# Load the pre-trained Haar cascade for full-body detection
pedestrian_cascade = cv2.CascadeClassifier(cv2.data.haarcascades +
                                           'haarcascade_fullbody.xml')
cap = cv2.VideoCapture("./video (1080p).mp4")

while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    pedestrians = pedestrian_cascade.detectMultiScale(gray, scaleFactor=1.1,
                                                      minNeighbors=5,
                                                      minSize=(30, 30))
    for (x, y, w, h) in pedestrians:
        cv2.rectangle(frame, (x, y), (x+w, y+h), (0, 255, 0), 2)
    cv2.imshow('Video', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()

OUTPUT:

RESULT:
Thus the program for event detection in video surveillance is
executed successfully and output is verified.

ADDITIONAL EXPERIMENTS

EXP NO 8:

DETECTING THE EDGES OF AN IMAGE

AIM:
To develop a program to detect the edges of an image

ALGORITHM:

Step 1: Read Image

● Load an image from the specified file path
(`"C:\Users\lenovo\Pictures\image.jpeg"`).

Step 2: Detect Edges

● Apply the Canny edge detection algorithm to the loaded
image with thresholds of 200 and 300.

Step 3: Save Edge Image

● Save the resulting edge-detected image to the specified file
path (`"C:\Users\lenovo\Pictures\moutain.jpeg"`) using the
`cv2.imwrite()` function.

Step 4: Display Original Image

● Show the original image using the `cv2.imshow()` function with
the title "Original image".

Step 5: Display Edge-detected Image

● Show the edge-detected image using the `cv2.imshow()` function
with the title "Detected edges", loading it with `cv2.imread()`
from the file path where it was saved.

PROGRAM:

import cv2

image = cv2.imread(r"C:\Users\lenovo\Pictures\image.jpeg")
# Detect edges with Canny (thresholds 200 and 300) and save the result
cv2.imwrite(r'C:\Users\lenovo\Pictures\moutain.jpeg',
            cv2.Canny(image, 200, 300))

cv2.imshow('Original image', image)
cv2.imshow('Detected edges',
           cv2.imread(r'C:\Users\lenovo\Pictures\moutain.jpeg'))
cv2.waitKey(0)
cv2.destroyAllWindows()
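The thresholds 200 and 300 above are chosen by hand. As a hedged variant (not
part of this exercise), the thresholds can instead be derived from the image's
median intensity, a common heuristic; the function name and sigma=0.33 are
illustrative assumptions:

import cv2
import numpy as np

def auto_canny(image, sigma=0.33):
    # Derive the Canny thresholds from the median pixel intensity
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    v = float(np.median(gray))
    lower = int(max(0, (1.0 - sigma) * v))
    upper = int(min(255, (1.0 + sigma) * v))
    return cv2.Canny(gray, lower, upper)

edges = auto_canny(cv2.imread(r"C:\Users\lenovo\Pictures\image.jpeg"))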

OUTPUT:

RESULT:
Thus the program to detect the edges of an image is executed
successfully and output is verified.

EXP NO 9:

SMOOTHING AND BLURRING

AIM:
To develop a program to apply smoothing and blurring to an image

ALGORITHM:
STEP 1: Read Image
● Load an image from the specified file path.

STEP 2: Create Kernel
● Generate a 5x5 averaging kernel using NumPy to perform blurring.

STEP 3: Apply Filter
● Apply the 2D filter to the loaded image using OpenCV's ‘filter2D()’
function.

STEP 4: Display Original Image
● Show the original image using OpenCV's ‘imshow()’ function.

STEP 5: Display Blurred Image
● Show the image after applying the kernel blur using OpenCV's ‘imshow()’
function.

STEP 6: Wait for User Input
● Wait for any key press to close the displayed images using OpenCV's
‘waitKey()’ function.

STEP 7: Close All Windows
● Close all windows and release resources using OpenCV's
‘destroyAllWindows()’ function.

PROGRAM:
import cv2
import numpy as np

# Reading the image
image = cv2.imread(r"C:\Users\lenovo\Pictures\image.jpeg")

# Creating a 5x5 averaging kernel with numpy
kernel2 = np.ones((5, 5), np.float32) / 25

# Applying the filter (ddepth=-1 keeps the source image depth)
img = cv2.filter2D(src=image, ddepth=-1, kernel=kernel2)

# Showing the images
cv2.imshow('Original', image)
cv2.imshow('Kernel Blur', img)
cv2.waitKey()
cv2.destroyAllWindows()
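For comparison (an aside, not required by the exercise): cv2.blur() with a
5x5 window computes exactly the same average as the hand-built kernel above,
while cv2.GaussianBlur() weights nearby pixels more heavily and usually gives
a smoother result.

import cv2

image = cv2.imread(r"C:\Users\lenovo\Pictures\image.jpeg")
box = cv2.blur(image, (5, 5))               # identical to the 5x5 averaging kernel
gauss = cv2.GaussianBlur(image, (5, 5), 0)  # sigma derived from the kernel size
cv2.imshow('Box blur', box)
cv2.imshow('Gaussian blur', gauss)
cv2.waitKey()
cv2.destroyAllWindows()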

OUTPUT:

RESULT:
Thus the program to apply smoothing and blurring to an image is
executed successfully and output is verified.
