Image Processing Filters Guide


Module 2

Linear Filtering: 1D and 2D convolution, Separable filtering, Examples of linear filters (Moving
average/Box filter, Bilinear, Gaussian, Sobel, Corner filter), Bandpass and steerable filters:
Laplacian of Gaussian filter, Nonlinear filters: Median filter, Bilateral filter, Binary image
processing, Morphology, Fourier Transforms, DCT, Applications: sharpening, blur and noise
removal, interpolation, decimation, multi-resolution.

Linear Filtering: 1D and 2D convolution

Linear filtering is a fundamental process in signal processing and image processing where an
input signal or image is transformed by applying a filter (or kernel). This process can enhance or
suppress certain features, such as edges, noise, or specific frequencies. Convolution is a key
mathematical operation used in linear filtering, particularly in contexts like signal processing,
image smoothing, sharpening, and more.

In 1D convolution, the output signal y[n] is obtained by sliding a filter (or kernel) h[k] over the
input signal x[n] and computing the weighted sum of the overlapping elements. The mathematical
formula is:

y[n] = Σk x[k]⋅h[n−k]

Where:

 x[n] is the input signal.


 h[k] is the filter (kernel).
 y[n] is the output signal after convolution.

Example 1

Given an input signal x[n]=[1,2,3,4] and a filter h[n]=[1,0,−1], perform the 1D convolution.

Solution:
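The worked result (shown as an image in the original) can be reproduced numerically. A minimal NumPy sketch of the full, zero-padded convolution:

```python
import numpy as np

x = np.array([1, 2, 3, 4])    # input signal x[n]
h = np.array([1, 0, -1])      # filter (kernel) h[n]

# full convolution: y[n] = sum_k x[k] * h[n - k]
y = np.convolve(x, h)
print(y)  # [ 1  2  2  2 -3 -4]
```

The output has length 4 + 3 − 1 = 6, since the kernel slides over every position where it overlaps the signal.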
2D Convolution

Mathematical Formulation:

In 2D convolution, the output image Y[i,j] is obtained by sliding a 2D filter (or kernel) H[k,l] over
the input image X[i,j] and computing the weighted sum of the overlapping elements. The
mathematical formula is:

Y[i,j] = Σk Σl X[i−k, j−l]⋅H[k,l]

Where:

 X[i,j] is the input image.


 H[k,l] is the filter (kernel).
 Y[i,j] is the output image after convolution.

Example 2

Given a 3x3 input image X and a 3x3 kernel H:


perform the 2D convolution.

Solution:

Calculate Y[1,1]:

Y[1,1]=(1×1)+(2×0)+(1×−1)+(2×1)+(3×0)+(2×−1)+(1×1)+(2×0)+(1×−1)=1−1+2−2+1−1=0
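The product expansion above implies X = [[1,2,1],[2,3,2],[1,2,1]] and H = [[1,0,−1],[1,0,−1],[1,0,−1]] (an assumption, since the matrices themselves were not reproduced). Following the worked example, which multiplies aligned elements without flipping the kernel, the single output sample is just an elementwise product-sum:

```python
import numpy as np

X = np.array([[1, 2, 1],
              [2, 3, 2],
              [1, 2, 1]])
H = np.array([[1, 0, -1],
              [1, 0, -1],
              [1, 0, -1]])

# weighted sum of the overlapping elements at the centre position
Y_11 = int(np.sum(X * H))
print(Y_11)  # 0
```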

Example 3

Perform 1D convolution on the signal x[n]=[1,2,3,4] with the filter h[n]=[1,−1] using zero-
padding.
Example 4

Compute the 1D convolution of the signal x[n]=[3,1,2] with the filter h[n]=[2,1] without padding.

Example 5

Apply a 3x3 identity kernel H on the image X and perform the 2D convolution.
Separable filtering

Separable filtering is an efficient technique used in image processing where a 2D filter can be
broken down into two 1D filters. This decomposition significantly reduces the computational
complexity of applying the filter to an image. Instead of directly convolving the image with a 2D
kernel, you convolve it first with one 1D filter (in one direction) and then with the other 1D filter
(in the perpendicular direction).

For instance, if you have a 2D filter of size m×n, applying it directly requires m×n multiplications
for each pixel. However, if the filter is separable, you can apply two 1D filters of sizes m×1 and
1×n, which only requires m+n multiplications for each pixel, making the process much more
efficient.
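The m+n versus m×n saving can be verified directly: convolving with the row filter and then the column filter gives exactly the same result as one pass with their outer-product 2D kernel. A sketch, assuming SciPy is available:

```python
import numpy as np
from scipy.signal import convolve2d

g = np.array([1.0, 2.0, 1.0]) / 4.0   # 1D smoothing filter
K = np.outer(g, g)                    # equivalent 3x3 2D kernel

rng = np.random.default_rng(0)
img = rng.random((5, 5))

direct = convolve2d(img, K, mode="full")          # 9 multiplies per output sample
rows = convolve2d(img, g[None, :], mode="full")   # 3 multiplies per sample
sep = convolve2d(rows, g[:, None], mode="full")   # + 3 more per sample

print(np.allclose(direct, sep))  # True
```

The equality holds because convolution is associative: img * (row * col) = (img * row) * col.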

Example 1

Apply a separable Gaussian filter to a small 3x3 grayscale image using the following 1D Gaussian
filter:

The same filter is used in both the horizontal and vertical directions.

Given Image:

Solution:

1. Apply the 1D Gaussian Filter Horizontally:

We convolve the image I with the 1D filter Gx(x) horizontally:

o Convolution with the first row:


2. Apply the 1D Gaussian Filter Vertically:

Now, convolve the horizontally filtered image Ih with the 1D filter Gy(y) vertically:

 Convolution with the first column:


Moving average/ Box filter

A Box Filter is a simple and commonly used linear filter in image processing. It operates by
averaging the pixels within a defined neighborhood (window) around each pixel in an image.
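For a 3x3 box filter, each output value is simply the mean of the 3x3 window. A sketch for one interior pixel, using a hypothetical image (the example matrices below were not reproduced in the original):

```python
import numpy as np

img = np.array([[10, 20, 30],
                [40, 50, 60],
                [70, 80, 90]], dtype=float)   # hypothetical 3x3 image

# box-filtered value at the centre: average of the whole 3x3 window
centre = img[0:3, 0:3].mean()
print(centre)  # 50.0
```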

Example 1

Apply a 3x3 box filter to the following image matrix X:

Solution:
Example 2

Given an image X represented as:

Apply a 3x3 box filter and compute the output for the central pixels.

Solution

Calculate the New Value for Pixel at Position (2, 2):

 The 3x3 neighborhood around pixel (2, 2) is:


Filtered Pixel Values:

The filtered values for the central pixels we calculated are:

1. Pixel at (2,2): 25
2. Pixel at (2,3): 35
3. Pixel at (3,2): 30
4. Pixel at (3,3): 40

Example 3

Given a noisy image X. Apply a 3x3 box filter to reduce the noise.
Example 4

Apply a 3x3 box filter to the following image and explain how you handle the borders:
Bilinear filter

Bilinear filtering is a technique used in image processing to perform interpolation between pixels.
It’s commonly applied when resizing images, performing texture mapping in graphics, or other
tasks requiring smooth transitions between pixel values. The bilinear filter uses the values of the
four nearest pixel neighbors to estimate a new pixel value in a continuous space.

Mathematical Formulation

Given a pixel position (x,y) where x and y are floating-point values, the value of the new pixel
f(x,y) is interpolated based on the four closest integer pixel values f(x1,y1), f(x2,y1), f(x1,y2), and
f(x2,y2) where:

The bilinear interpolation formula is given by:


f(x,y)=(1−(x−x1))⋅(1−(y−y1))⋅f(x1,y1)+(x−x1)⋅(1−(y−y1))⋅f(x2,y1)+(1−(x−x1))⋅(y−y1)⋅f(x1,

y2)+(x−x1)⋅(y−y1)⋅f(x2,y2)

Where:

 (x1,y1) and (x2,y2) are the coordinates of the top-left and bottom-right corners of the pixel
square surrounding (x,y).

Example 1

Given a 2x2 pixel grayscale image with the following intensity values:

Estimate the intensity at position (1.5,1.5).

Solution:

1. Identify the Coordinates and Values: The four surrounding pixel values are:
o f(1,1)=100
o f(2,1)=150
o f(1,2)=200
o f(2,2)=250

The position (1.5,1.5) lies halfway between these four points.

2. Calculate the Weights:


o x1=1, y1=1
o x2=2, y2=2
o x=1.5, y=1.5
o x−x1=0.5, y−y1=0.5

3. Apply the Bilinear Interpolation Formula:

f(1.5,1.5)=(1−0.5)⋅(1−0.5)⋅100+0.5⋅(1−0.5)⋅150+(1−0.5)⋅0.5⋅200+0.5⋅0.5⋅250

Simplifying:

f(1.5,1.5)=0.25⋅100+0.25⋅150+0.25⋅200+0.25⋅250= 25 + 37.5 + 50 + 62.5 = 175


So, the interpolated value at (1.5,1.5) is 175.
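The four-term formula can be checked in code. A small sketch reproducing Example 1, with the pixel values stored by (x, y) coordinate:

```python
import numpy as np

def bilinear(f, x, y):
    """Bilinear interpolation on a unit grid indexed from 1, as in the example."""
    x1, y1 = int(np.floor(x)), int(np.floor(y))
    x2, y2 = x1 + 1, y1 + 1
    dx, dy = x - x1, y - y1
    return ((1 - dx) * (1 - dy) * f[(x1, y1)]
            + dx * (1 - dy) * f[(x2, y1)]
            + (1 - dx) * dy * f[(x1, y2)]
            + dx * dy * f[(x2, y2)])

# the four pixel values from Example 1
f = {(1, 1): 100, (2, 1): 150, (1, 2): 200, (2, 2): 250}
print(bilinear(f, 1.5, 1.5))  # 175.0
```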

Example 2

Consider a 2x2 RGB image:

Estimate the color at (1.25,1.25).


Example 3

Given a 4x4 grayscale image:

Calculate the intensity at position (2.5,2.5) using bilinear interpolation.

Solution:

1. Identify Coordinates: x1=2, y1=2, x2=3, y2=3.

The corresponding pixel values are:

o f(2,2)=140
o f(3,2)=150
o f(2,3)=180
o f(3,3)=190
2. Compute Interpolated Value:

f(2.5,2.5)=(1−0.5)⋅(1−0.5)⋅140+0.5⋅(1−0.5)⋅150+(1−0.5)⋅0.5⋅180+0.5⋅0.5⋅190

Simplifying:

f(2.5,2.5)=0.25⋅140+0.25⋅150+0.25⋅180+0.25⋅190
f(2.5,2.5)=35+37.5+45+47.5=165

The interpolated intensity value is 165.

Sobel filter

The Sobel filter is an edge detection technique used in image processing and computer vision to
identify gradients in an image. It works by convolving the image with two 3x3 kernels to
approximate the derivatives of the image intensity. These derivatives help highlight regions of
high spatial frequency, such as edges. The Sobel filter is particularly effective for detecting edges
in both horizontal and vertical directions.

The Sobel filter combines the concepts of convolution and differentiation to enhance the transition
points between different image regions, thus making it easier to identify boundaries between
objects.
Mathematical Formulation

The Sobel filter uses two convolution kernels: one for detecting changes in the horizontal
direction (Gx) and another for detecting changes in the vertical direction (Gy).

Given an image I(x,y), the gradient approximations in the x and y directions are calculated as
follows

Example 1
Perform edge detection on the image X using the Sobel horizontal filter H.

Solution:

1. Apply the Sobel Filter:

Y[1,1]=(1×1)+(1×0)+(1×−1)+(0×2)+(0×0)+(0×−2)+(−1×1)+(−1×0)+(−1×−1)=0

Result:

Example 2

Given the following 3x3 grayscale image:

Apply the Sobel filter to detect the edges.

Solution:

1. Apply the Horizontal Kernel (Gx):


Convolve I with Gx:

Gx(2,2)=(−1×100+0×100+1×100)+(−2×150+0×150+2×150)+(−1×200+0×200+1×200)

=(0)+(0)+(0)=0

Apply the Vertical Kernel (Gy):

Convolve I with Gy:

Gy(2,2)=(−1×100+−2×100+−1×100)+(0×150+0×150+0×150)+(1×200+2×200+1×200)

=(−400)+(0)+(800)=400

Calculate the Gradient Magnitude:

With Gx=0 and Gy=400, the gradient magnitude is |G|=√(0²+400²)=400, indicating a strong
horizontal edge at this pixel (the intensity increases steadily from top to bottom).

Example 3

Consider the following 3x3 grayscale image:

Apply the Sobel filter to detect the edges.

Solution:

1. Apply the Horizontal Kernel (Gx):

Gx(2,2)=(−1×50+0×50+1×50)+(−2×50+0×100+2×50)+(−1×50+0×50+1×50)

=(0)+(0)+(0)=0
2. Apply the Vertical Kernel (Gy):
Gy(2,2)=(−1×50+−2×50+−1×50)+(0×50+0×100+0×50)+(1×50+2×50+1×50)
=(−200)+(0)+(200)=0
3. Calculate the Gradient Magnitude:

No edge is detected at this pixel.
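The products in Example 3 imply the image I = [[50,50,50],[50,100,50],[50,50,50]] (an assumption; the matrix itself was not reproduced). Applying both Sobel kernels at the centre, element-aligned as in the worked steps:

```python
import numpy as np

I = np.array([[50,  50, 50],
              [50, 100, 50],
              [50,  50, 50]], dtype=float)

Gx = np.array([[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]], dtype=float)   # horizontal Sobel kernel
Gy = np.array([[-1, -2, -1],
               [ 0,  0,  0],
               [ 1,  2,  1]], dtype=float)  # vertical Sobel kernel

gx = np.sum(I * Gx)       # horizontal gradient at the centre
gy = np.sum(I * Gy)       # vertical gradient at the centre
mag = np.hypot(gx, gy)    # gradient magnitude
print(gx, gy, mag)  # 0.0 0.0 0.0 -> no edge at this pixel
```

The symmetric neighbourhood cancels both kernels, which is why no edge is reported.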

Example 4

Consider a 5x5 image:

Apply the Sobel filter to detect the edges at the center pixel.

Solution:

1. Horizontal Gradient at the Center (3, 3):

Gx(3,3)=(−1×50+0×50+1×50)+(−2×50+0×100+2×50)+(−1×50+0×50+1×50)

=(0)+(0)+(0)=0

2. Vertical Gradient at the Center (3, 3):

Gy(3,3)=(−1×50+−2×50+−1×50)+(0×50+0×100+0×50)+(1×50+2×50+1×50)

=(−200)+(0)+(200)=0

3. Calculate the Gradient Magnitude:


Corner Filter

A corner filter is used in image processing to detect corner points in an image, which are points
where two edges meet. Corners are key features in many computer vision tasks such as object
recognition, image matching, and motion tracking. Corners are characterized by having high
intensity variation in both directions (horizontal and vertical).

There are various methods for detecting corners, but one of the most common is the Harris
Corner Detector, which is based on the analysis of local gradients. This method uses the intensity
gradient of the image to detect regions where the intensity changes significantly in all directions.

Mathematical Formulation

The Harris Corner Detector involves the following steps:

1. Compute the Image Gradients: Given an image I(x,y), compute the gradients in the x
and y directions using Sobel operators:
4. Threshold the Response: A threshold is applied to the response R to identify strong
corners. If R is greater than a certain threshold, the pixel is classified as a corner.
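The steps above can be sketched in NumPy. This is a minimal, unvectorised version; the window radius and the constant k = 0.04 are typical choices rather than values from the text, and the response R = det M − k·(trace M)² is the standard Harris formulation (the formulas were not reproduced in the original):

```python
import numpy as np

def harris_response(img, k=0.04, win=1):
    # Step 1: image gradients via central differences
    Iy, Ix = np.gradient(img.astype(float))
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy
    R = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for i in range(win, h - win):
        for j in range(win, w - win):
            # Step 2: structure tensor M summed over the (2*win+1)^2 window
            Sxx = Ixx[i-win:i+win+1, j-win:j+win+1].sum()
            Syy = Iyy[i-win:i+win+1, j-win:j+win+1].sum()
            Sxy = Ixy[i-win:i+win+1, j-win:j+win+1].sum()
            # Step 3: corner response R = det(M) - k * trace(M)^2
            R[i, j] = (Sxx * Syy - Sxy ** 2) - k * (Sxx + Syy) ** 2
    return R

# Step 4: threshold R (a uniform image gives R = 0 everywhere, so no corners)
print(harris_response(np.full((5, 5), 7.0)).max())  # 0.0
```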

Example 1

Given the following 3x3 grayscale image:

Apply the Harris Corner Detector to find corners.

Solution:

1. Compute the Image Gradients:

Using Sobel operators:


In this case, the gradients are zero because the image has uniform intensity around the center
pixel.

2. Compute the Structure Tensor:

The structure tensor M for the center pixel is:

Example 2

Consider the following 3x3 grayscale image:

Apply the Harris Corner Detector to find corners.


Example 3

Consider a 5x5 grayscale image:

Apply the Harris Corner Detector to find corners.


Gaussian filtering

Gaussian filtering is like a "softening" tool for images. Imagine you have a slightly noisy or sharp
image, and you want to make it look smoother. Gaussian filtering does this by gently blending
each pixel with its neighbors, giving more weight to the closer ones, just like how a blurry spot
around a light source fades out smoothly.

This "blending" helps reduce noise and smooth out details, making the image look cleaner without
losing important features. It's widely used in photography, medical imaging, and computer vision
to make images more visually pleasing or easier to analyze.

How Gaussian Filtering Works

1. Gaussian Function: The Gaussian filter is based on the Gaussian function, which in two
dimensions is given by:

G(x,y) = (1/(2πσ²))⋅e^(−(x²+y²)/(2σ²))
 Here, σ is the standard deviation of the Gaussian distribution. The Gaussian function
defines the shape of the filter, with larger values of σ resulting in more blurring.
 Convolution: To apply Gaussian filtering to an image, the Gaussian function is used as a
kernel (a small matrix) that is convolved with the image. Convolution involves sliding the
kernel across the image and computing the weighted sum of the pixels within the kernel's
footprint.
 Smoothing Effect: The effect of Gaussian filtering is to smooth the image by averaging
the intensity of pixels with their neighbors, with more emphasis on closer pixels (as
defined by the Gaussian kernel).

Example 1

Consider the following 3x3 image:

We will use a 3x3 Gaussian kernel with the following values:

Solution

1. Center Pixel Calculation:


o We will start by applying the Gaussian filter to the center pixel (50 in the image).

New value at center pixel=


So, after applying the Gaussian filter, the center pixel's value remains 50, but the surrounding
pixels would also be smoothed similarly, leading to an overall softened image.

Example 2
Consider the following 5x5 image:

Apply the Gaussian filter to the center pixel (40 in the image) located at position (3,3).

Solution

Let's apply the Gaussian filter to the center pixel (40 in the image) located at position (3,3).

1. Identify the neighborhood around the center pixel:

Apply the Gaussian kernel to this neighborhood:

New value at center pixel = 1/16 × (1×25+2×35+1×45+2×30+4×40+2×50+1×35+2×45+1×55)


3. Calculate the weighted sum:

 1×25=25
 2×35=70
 1×45=45
 2×30=60
 4×40=16
 2×50=100
 1×35=35
 2×45=90
 1×55=55

Total Sum = 25+70+45+60+160+100+35+90+55=640

4. Apply the Gaussian weight:

New value at center pixel = 1/16 × (640) = 40

Example 3
Consider the following 3x3 image:

Apply the Gaussian filter to the center pixel (30 in the image) located at position (2,2).

Solution
2. Gaussian Filter Application:

New value at center pixel = 1/16 × (1×5+2×10+1×15+2×20+4×30+2×40+1×50+2×60+1×70)

3. Calculating the Weighted Sum:

 1×5=5
 2×10=20
 1×15=15
 2×20=40
 4×30=120
 2×40=80
 1×50=50
 2×60=120
 1×70=70

Total Sum = 5+20+15+40+120+80+50+120+70=520

4. Apply the Gaussian Weight:

New value at center pixel=1/16×(520)=32.5
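The weighted sum in Example 3 can be reproduced with the standard 3x3 Gaussian kernel:

```python
import numpy as np

K = np.array([[1, 2, 1],
              [2, 4, 2],
              [1, 2, 1]], dtype=float) / 16.0   # 3x3 Gaussian kernel

N = np.array([[ 5, 10, 15],
              [20, 30, 40],
              [50, 60, 70]], dtype=float)       # neighbourhood from Example 3

new_centre = np.sum(K * N)   # weighted sum, already divided by 16
print(new_centre)  # 32.5
```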

Example 4

Apply a 3x3 Gaussian smoothing filter H to the image X

Solution:
Apply the Gaussian kernel to this neighborhood:

New value at center pixel = 1/16 × (1×10+2×20+1×30+2×40+4×50+2×60+1×70+2×80+1×90)

3. Calculate the weighted sum:

 1×10=10
 2×20=40
 1×30=30
 2×40=80
 4×50=200
 2×60=120
 1×70=70
 2×80=160
 1×90=90

Total Sum = 10+40+30+80+200+120+70+160+90=800

Apply the Gaussian weight:

New value at center pixel=1/16×(800)=50

Applications of Gaussian Filtering

1. Noise Reduction:
o Gaussian filtering is often used to reduce noise in an image while preserving
important edges and features.

2. Pre-processing for Edge Detection:


o Before applying edge detection algorithms like the Sobel filter or Canny edge
detector, Gaussian filtering is used to smooth the image, which helps in reducing
false edges.

3. Blurring:
o Gaussian filtering is used to blur images intentionally, such as in artistic effects or
to reduce the impact of detail in texture analysis.

Bandpass and steerable filters


Bandpass Filters: In image processing, bandpass filters are used to allow a specific range of
frequencies to pass through while blocking frequencies outside this range. They are particularly
useful for enhancing certain features in an image, such as edges or textures, by isolating and
emphasizing specific frequency components.

Steerable Filters: Steerable filters are a type of filter in image processing that can be oriented in
any direction by combining a set of basis filters. They are commonly used for edge detection,
texture analysis, and orientation estimation. Steerable filters allow for the efficient detection of
features at different orientations without the need to apply multiple filters at various angles.

Mathematical Formulation
Nonlinear filters: Median filter

Median filtering is a nonlinear filtering technique used in image processing to remove noise,
especially "salt-and-pepper" noise, while preserving edges. Unlike linear filters like the mean
filter, which can blur edges, the median filter is more effective at preserving edges while reducing
noise.

How It Works

The median filter works by moving a window (typically of size 3x3, 5x5, etc.) over the image
pixel by pixel. For each pixel in the image, the filter:

1. Extracts the Neighboring Pixels: The filter looks at the pixel values in the neighborhood
of the current pixel, defined by the window size.
2. Sorts the Pixel Values: It sorts these neighboring pixel values in ascending order.
3. Replaces the Central Pixel: The central pixel is then replaced by the median value of
these sorted pixels.

Mathematical Formulation

Let's consider an image I and a window W of size m×n centered at a pixel (i,j). The median filter
can be expressed mathematically as:

Ifiltered(i,j) = median{ I(i+k, j+l) : (k,l) ∈ W }
Example 1
A grayscale image is corrupted by salt-and-pepper noise. Consider the following 3x3 pixel
neighborhood around a pixel in the noisy image:

Apply a median filter to calculate the new intensity of the central pixel.
Solution:
1. List all the pixel values in the 3x3 neighborhood:
The pixel values are:
{255,0,255,0,125,255,255,0,0}
2. Sort the pixel values:
After sorting, the values are:
{0,0,0,0,125,255,255,255,255}
3. Find the median value:
The median value is the middle value in the sorted list. Since we have 9 values, the median is the
5th value:
Median=125

After applying the median filter, the new intensity of the central pixel is 125. This process helps in
reducing salt-and-pepper noise by replacing the central pixel with the median value of its
neighborhood.
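The sort-and-pick-middle step is exactly what np.median computes; a one-line check of Example 1:

```python
import numpy as np

# the noisy 3x3 neighbourhood from Example 1, flattened
window = np.array([255, 0, 255,
                   0, 125, 255,
                   255, 0, 0])

new_centre = int(np.median(window))   # middle of the 9 sorted values
print(new_centre)  # 125
```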

Example 2

Consider the following 3x3 grayscale image:

Apply a 3x3 median filter to remove the noise.


Example 3

Apply a 3x3 median filter to a noisy 5x5 image to reduce noise and preserve edges. Consider the
image:
Bilateral filter

The bilateral filter is a non-linear, edge-preserving, and noise-reducing smoothing filter used in
image processing. It is unique because it combines both spatial and intensity information,
allowing it to smooth images while preserving edges, which is something that traditional filters
(like Gaussian filters) struggle with.

How It Works

The bilateral filter smooths the image by averaging nearby pixels, but unlike a simple Gaussian
filter, it takes into account both the spatial proximity and the intensity difference between the
center pixel and the surrounding pixels. This means that pixels with similar intensities (even if
they are spatially distant) will influence each other more than pixels with different intensities
(even if they are spatially close).

Mathematical Formulation

Given an image I and a pixel at position (i,j), the bilateral filter Ifiltered(i,j) can be defined as:

Ifiltered(i,j) = (1/Wp) Σ(k,l) I(k,l) ⋅ exp(−((i−k)²+(j−l)²)/(2σs²)) ⋅ exp(−(I(i,j)−I(k,l))²/(2σr²))

where the sum runs over a window centered at (i,j), Wp is the normalizing sum of the two
exponential weights, σs controls the spatial extent, and σr controls the intensity sensitivity.

Interpretation of the Formula

1. Spatial Gaussian Filter (First Exponential Term):


This term ensures that only pixels close to the center pixel (in terms of spatial distance) are
considered for smoothing.
2. Intensity Gaussian Filter (Second Exponential Term):
This term ensures that only pixels with similar intensities to the center pixel are considered
for smoothing, preserving edges.
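The two weighting terms can be combined in a direct, unvectorised sketch. The sigma values below are illustrative assumptions; on a constant image the output equals the input, and near an edge the intensity term suppresses contributions from the far side:

```python
import numpy as np

def bilateral(img, sigma_s=1.0, sigma_r=20.0, radius=1):
    img = img.astype(float)
    h, w = img.shape
    out = np.zeros_like(img)
    for i in range(h):
        for j in range(w):
            num = den = 0.0
            for k in range(max(0, i - radius), min(h, i + radius + 1)):
                for l in range(max(0, j - radius), min(w, j + radius + 1)):
                    # spatial Gaussian: closeness in pixel coordinates
                    ws = np.exp(-((i - k)**2 + (j - l)**2) / (2 * sigma_s**2))
                    # intensity Gaussian: closeness in pixel values
                    wr = np.exp(-(img[i, j] - img[k, l])**2 / (2 * sigma_r**2))
                    num += ws * wr * img[k, l]
                    den += ws * wr
            out[i, j] = num / den
    return out

print(bilateral(np.full((4, 4), 10.0))[0, 0])  # 10.0 (constant input is unchanged)
```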

Binary Image processing

Binary image processing is a subset of image processing techniques specifically designed to
work with binary images. A binary image is a digital image that has only two possible values for
each pixel: typically 0 (representing black) and 1 (representing white). Binary image processing is
fundamental in many applications such as object detection, edge detection, and image
segmentation.

Common Operations in Binary Image Processing:

1. Thresholding: Converts a grayscale image to a binary image by assigning pixel values to
0 or 1 based on a threshold value.
2. Morphological Operations: Modify the structure of objects in the binary image using
operations like erosion, dilation, opening, and closing.
3. Connected Component Labeling: Identifies and labels connected groups of pixels with
the same binary value.
4. Blob Detection: Identifies and measures objects within the binary image.
5. Edge Detection: Finds the boundaries of objects within the image.

Mathematical Formulation

1. Thresholding: Given a grayscale image I(x,y) and a threshold value T, the binary image
B(x,y) is defined as:

B(x,y) = 1 if I(x,y) ≥ T, and B(x,y) = 0 otherwise.

This operation converts the grayscale image into a binary image.
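Thresholding is a single vectorised comparison in NumPy. The matrix below is hypothetical (the example matrices were not reproduced in the original):

```python
import numpy as np

I = np.array([[ 50, 120, 200],
              [ 90, 100, 110],
              [ 30, 180,  60]])   # hypothetical grayscale values
T = 100

B = (I >= T).astype(np.uint8)    # 1 where intensity >= T, else 0
print(B)
# [[0 1 1]
#  [0 1 1]
#  [0 1 0]]
```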


2. Connected Component Labeling:

 Identify connected regions in the binary image using algorithms like the flood-fill
algorithm or union-find algorithm.

3. Blob Detection:

 Measure properties like area, perimeter, and centroid of objects in the binary image.

4. Edge Detection:

 Simple edge detection in a binary image can be done by subtracting the eroded image
from the original: Edges = B − (B ⊖ S).

Example 1

Given a grayscale image I(x,y) represented by the following 3x3 matrix:


Apply a threshold T=100 to convert the grayscale image into a binary image.

Morphology

Morphological image processing is a set of operations that process images based on shapes.
These operations rely on the relative ordering of pixel values, not on their numerical values,
making them particularly useful for binary images but also applicable to grayscale images. The
primary purpose of morphological operations is to extract meaningful structures from images,
such as boundaries, skeletons, and regions of interest, by modifying the geometrical structure of
the objects within the image.

Basic Morphological Operations:

1. Erosion: Shrinks objects in a binary image.


2. Dilation: Expands objects in a binary image.
3. Opening: Removes small objects from an image while preserving the shape and size of
larger objects.
4. Closing: Fills small holes and gaps within objects.

Mathematical Formulation

1. Structuring Element (SE): The fundamental tool in morphological operations is the
structuring element. A structuring element S is a small binary matrix (usually 3x3) that is
used to probe and modify the input image.
2. Erosion: The erosion of a binary image B(x,y) by a structuring element S is defined as:

Beroded = B ⊖ S = {z : S shifted to z fits entirely inside B}

Erosion removes pixels on object boundaries.

3. Dilation: The dilation of a binary image B(x,y) by a structuring element S is defined as:

Bdilated = B ⊕ S = {z : S shifted to z overlaps B in at least one pixel}

Dilation adds pixels to the boundaries of objects.

4. Opening: Opening is defined as erosion followed by dilation:

Bopened=(B⊖S)⊕S

This operation is useful for removing small objects or noise from an image.

5. Closing: Closing is defined as dilation followed by erosion:

Bclosed=(B⊕S)⊖S

This operation is useful for filling small holes and gaps within objects.
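The operations above are available in SciPy's ndimage module. A sketch showing that opening removes an isolated noise pixel while keeping a larger object (the test image is a made-up example):

```python
import numpy as np
from scipy import ndimage

B = np.zeros((7, 7), dtype=bool)
B[2:5, 2:5] = True      # a 3x3 square object
B[0, 0] = True          # an isolated noise pixel

opened = ndimage.binary_opening(B)   # erosion followed by dilation
closed = ndimage.binary_closing(B)   # dilation followed by erosion

print(opened[0, 0])   # False -> the noise pixel is removed
print(opened[3, 3])   # True  -> the object's centre survives
```

With the default cross-shaped structuring element the opened square is also slightly reshaped, which illustrates that opening is not a pure noise filter.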

Fourier Transforms

The Fourier Transform (FT) is a mathematical technique used to transform signals from their
original domain (often time or space) into the frequency domain. In the frequency domain, a
signal is represented as a sum of sinusoids with different frequencies and amplitudes, which
makes it easier to analyze and process, especially for signals with periodic or oscillatory behavior.

The Fourier Transform is a crucial tool in various fields, including signal processing, image
processing, communication systems, and more. It is used to analyze the frequency components of
signals, filter unwanted frequencies, compress data, and solve differential equations.

Mathematical Formulation

1. Continuous Fourier Transform (CFT): The Fourier Transform of a continuous-time
signal x(t) is defined as:

X(f) = ∫ x(t)⋅e^(−j2πft) dt, integrated over all time t.
Here:

 X(f) is the Fourier Transform of x(t), representing the signal in the frequency domain.
 f is the frequency variable.
 j is the imaginary unit.

The inverse Fourier Transform, which converts the signal back from the frequency domain to the
time domain, is given by:

x(t) = ∫ X(f)⋅e^(j2πft) df, integrated over all frequencies f.

Discrete Fourier Transform (DFT): The Discrete Fourier Transform is used for signals that are discrete and
of finite duration. It is defined as:

X[k] = Σ(n=0 to N−1) x[n]⋅e^(−j2πkn/N), for k = 0, 1, …, N−1

Here:

 X[k] is the DFT of x[n], representing the signal in the frequency domain.
 k is the frequency index.
 N is the total number of samples.

The inverse Discrete Fourier Transform (IDFT) is given by:

x[n] = (1/N) Σ(k=0 to N−1) X[k]⋅e^(j2πkn/N)

2D Fourier Transform: For 2D signals like images, the 2D Fourier Transform is used. For an image I(x,y)
the 2D Fourier Transform is defined as:

F(u,v) = Σx Σy I(x,y)⋅e^(−j2π(ux/M + vy/N)), for an M×N image.
Here:

 F(u,v) is the frequency representation of the image.


 u and v are the frequency variables corresponding to the image dimensions x and y.

Example 1

Compute the DFT of the sequence x[n]={1,2,3,4}.
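NumPy's FFT computes the DFT directly, so Example 1 can be solved in one call:

```python
import numpy as np

x = np.array([1, 2, 3, 4])
X = np.fft.fft(x)          # DFT: X[k] = sum_n x[n] e^{-j 2 pi k n / N}
print(X)                   # [10.+0.j  -2.+2.j  -2.+0.j  -2.-2.j]

x_back = np.fft.ifft(X)    # the inverse DFT recovers the sequence
print(np.allclose(x_back.real, x))  # True
```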

Example 2

Find the 2D Fourier Transform of a simple 2x2 image I(x,y) where:


Example 3

Given the DFT coefficients X[k]={10,−2−6j,−2,−2+6j}, find the original sequence x[n].
Example 4

Find the Fourier Transform of the sinusoidal function x(t) = sin(2πf0t).
DCT

The Discrete Cosine Transform (DCT) is a widely used transform in signal processing,
particularly for compressing images and videos. It converts a sequence of values, such as pixels in
an image, into a sum of cosine functions oscillating at different frequencies. The DCT is known
for its energy compaction properties, meaning it concentrates the energy of a signal into a few
coefficients, making it efficient for compression.

The most common application of DCT is in image and video compression standards like JPEG,
MPEG, and H.264. It helps reduce the amount of data needed to represent an image while
preserving important visual information.

Mathematical Formulation

The 1D DCT (type-II) of a sequence x[n] of length N is:

X[k] = α(k) ⋅ Σ(n=0 to N−1) x[n]⋅cos[π(2n+1)k/(2N)], where α(0)=√(1/N) and α(k)=√(2/N) for k≥1.
Example 1

Compute the DCT of the sequence x[n]={1,2,3,4}.

Example 2

Given the DCT coefficients X[k]={5,−3.54,0,−0.29}, reconstruct the original sequence x[n].

Example 3
Compute the 2D DCT of a simple 2x2 image I(x,y) where:

Example 4

Show that most of the energy in a simple signal x[n]={2,2,2,2} is concentrated in a few DCT
coefficients.
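Example 4's energy-compaction claim can be checked with SciPy's orthonormal DCT-II: a constant signal maps entirely onto the first coefficient.

```python
import numpy as np
from scipy.fft import dct

x = np.array([2.0, 2.0, 2.0, 2.0])
X = dct(x, norm="ortho")     # orthonormal DCT-II
print(X)                     # [4. 0. 0. 0.] -> all energy in X[0]
```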
Applications: Sharpening

 Purpose: Enhances the edges and fine details in an image, making it appear clearer and more
defined.
 Techniques:

 Unsharp Masking: Subtracts a blurred version of the image from the original to highlight
edges.
 Laplacian Filter: Emphasizes regions of rapid intensity change (edges).

 Applications:

 Medical imaging (e.g., enhancing details in MRI or CT scans).


 Image enhancement for photography.

Blur

 Purpose: Smooths an image by reducing the high-frequency components, often used to reduce
noise or create a depth-of-field effect.
 Techniques:

 Gaussian Blur: Applies a Gaussian function to weigh neighboring pixels, leading to a


smooth, natural blur.
 Box Filter: A simple averaging filter that can also produce a blur effect.

 Applications:

 Artistic effects in photography.


 Preprocessing in computer vision tasks to reduce noise.
Noise Removal

 Purpose: Reduces random variations (noise) in an image, which can be caused by various
factors such as low light, high ISO settings, or sensor imperfections.
 Techniques:

 Median Filter: A nonlinear filter that replaces each pixel's value with the median of
neighboring pixel values, effectively reducing "salt and pepper" noise.
 Bilateral Filter: Smooths images while preserving edges, useful in removing Gaussian
noise without blurring the edges.

 Applications:

 Preprocessing in medical imaging to enhance image quality.


 Improving the quality of digital photographs.

Interpolation

 Purpose: Estimates unknown pixel values in an image, often used in resizing or transforming
images.
 Techniques:

 Bilinear Interpolation: Uses the values of the four nearest pixels to estimate a new pixel
value, commonly used in image resizing.
 Bicubic Interpolation: Uses the values of the sixteen nearest pixels, resulting in smoother
images compared to bilinear interpolation.

 Applications:

 Image scaling (e.g., upscaling low-resolution images).


 Geometric transformations like rotation or translation.

Decimation

 Purpose: Reduces the number of pixels in an image, effectively decreasing its resolution.
 Techniques:

 Downsampling: Reduces the size of an image by removing pixels, often combined with
low-pass filtering to prevent aliasing.

 Applications:

 Reducing the size of images for storage or transmission.


 Creating image pyramids in multi-resolution analysis.

Example 1

Downsample a 4x4 image to a 2x2 image by averaging 2x2 blocks. The original image is:
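The original 4x4 matrix is not reproduced here, so the sketch below uses a stand-in image; the 2x2 block averaging itself is a single reshape-and-mean in NumPy:

```python
import numpy as np

img = np.arange(16, dtype=float).reshape(4, 4)   # stand-in 4x4 image

# split into non-overlapping 2x2 blocks: (block_row, in_row, block_col, in_col)
small = img.reshape(2, 2, 2, 2).mean(axis=(1, 3))
print(small)
# [[ 2.5  4.5]
#  [10.5 12.5]]
```

Averaging before discarding samples acts as the low-pass step that prevents aliasing.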

Multi-Resolution

 Purpose: Analyzes images at different scales or resolutions to capture both coarse and fine
details.
 Techniques:

 Wavelet Transform: Decomposes an image into multiple scales, each representing a


different frequency band.
 Gaussian Pyramid: Successive levels of an image, each downsampled and blurred, used
in applications like image blending.

 Applications:
 Image compression (e.g., JPEG2000).
 Feature extraction in computer vision (e.g., detecting objects at different scales).

Image Pyramids

Image pyramids are a type of multi-resolution representation in image processing where an image
is repeatedly downsampled to create a series of images at different scales. This approach is useful
in various tasks such as image compression, image analysis, object detection, and image blending.
The concept is inspired by the observation that humans and animals often perceive objects at
multiple scales.

There are two main types of image pyramids:

1. Gaussian Pyramid: Each level in the pyramid is a blurred and downsampled version of
the previous level. This pyramid is primarily used for image smoothing and multi-
resolution analysis.
2. Laplacian Pyramid: It is a set of band-pass filtered images, where each level represents
the difference between two adjacent levels in the Gaussian pyramid. This pyramid is used
for image compression and reconstructing images.
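A Gaussian pyramid is just repeated blur-then-downsample. A minimal sketch, assuming SciPy is available (sigma = 1.0 is an illustrative choice):

```python
import numpy as np
from scipy import ndimage

def gaussian_pyramid(img, levels=3, sigma=1.0):
    """Each level: blur the previous one, then keep every second sample."""
    pyramid = [img]
    for _ in range(levels - 1):
        blurred = ndimage.gaussian_filter(pyramid[-1], sigma)
        pyramid.append(blurred[::2, ::2])
    return pyramid

levels = gaussian_pyramid(np.random.default_rng(0).random((8, 8)))
print([lvl.shape for lvl in levels])  # [(8, 8), (4, 4), (2, 2)]
```

Subtracting each level from an upsampled copy of the next would give the corresponding Laplacian pyramid.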

Mathematical Formulation
