Digital Image Processing

Computer science engineering previous year question solved

2. (a) What are image sampling and quantization? Are they related to each
other? Explain in short with an example.
Image sampling and quantization are crucial processes in digital image
processing, particularly when converting an analog image to a digital format.
1. Image Sampling: This refers to the process of converting a continuous
image into a discrete image by selecting specific points from the image.
Essentially, it involves measuring the image at regular intervals. For example, if
you have a high-resolution photograph, sampling might involve taking a specific
number of pixels to represent the image, reducing its resolution but making it
manageable for digital processing.
2. Image Quantization: This process involves mapping the sampled values to
a finite set of values. In digital images, colors are typically represented with a
limited number of bits. Quantization reduces the range of colors or grayscale
levels, leading to a finite number of colors or shades. For example, an image that
originally could have millions of colors might be quantized to just 256 colors for
simplicity.
Relation: Sampling and quantization are related because they both convert a
continuous image into a digital format. Sampling determines which points of the
image are measured, and quantization determines how those measurements are
represented digitally.

Example: Consider a high-resolution photograph of a sunset:


● Sampling: You might select every 10th pixel in each row and column to
reduce the image size.
● Quantization: You might limit the color range from potentially millions to
just 256 colors to reduce file size and make it easier to store and process.
Both processes aim to balance between image quality and file size or processing
complexity.
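The two steps can be sketched with NumPy on a synthetic grayscale gradient (the image and the sampling/quantization factors here are illustrative assumptions):

```python
import numpy as np

# Synthetic 100x100 grayscale image: a horizontal intensity gradient
img = np.tile(np.linspace(0, 255, 100), (100, 1))

# Sampling: keep every 10th pixel in each row and column
sampled = img[::10, ::10]          # 100x100 -> 10x10

# Quantization: map intensities to 4 levels (2 bits) instead of 256 (8 bits)
step = 256 // 4
quantized = (sampled // step) * step

print(sampled.shape)               # (10, 10)
print(np.unique(quantized).size)   # at most 4 distinct gray levels
```

Sampling reduces spatial resolution (fewer pixels); quantization reduces intensity resolution (fewer gray levels). Both discard information in exchange for a smaller digital representation.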

A common measure of transmission for digital data is the baud rate defined
as the number of bits transmitted per second. Generally, transmission is
accomplished in packets consisting of a start bit, a byte (8 bits) of
information, and a stop bit. Using these facts, answer the following:
(i) How many minutes would it take to transmit a 1024×1024 image with 256
gray levels using a 56 K baud modem?
(ii) What would the time be at 750 K baud, a representative speed of a
phone DSL (digital subscriber line) connection?
Solutions:
https://youtu.be/3umfH3WRnE0?si=fVPWY0k4PTL4jYsX

https://www.numerade.com/ask/question/a-common-measure-of-transmission-for-digital-data-is-th
e-baud-rate-defined-as-the-number-of-bits-transmitted-per-second-generally-transmission-is-acco
mplished-in-packets-consisting-of-a-star-98162/?utm_medium=social&utm_source=youtube&utm_c
ampaign=apr_may24
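The arithmetic behind both parts can be checked directly: each pixel needs 8 bits (256 gray levels), and each packet adds a start and stop bit, giving 10 bits per pixel:

```python
pixels = 1024 * 1024            # image dimensions
bits_per_packet = 10            # start bit + 8-bit pixel (256 gray levels) + stop bit
total_bits = pixels * bits_per_packet

t_56k = total_bits / 56_000     # seconds at 56K baud
t_750k = total_bits / 750_000   # seconds at 750K baud

print(f"{t_56k / 60:.2f} min")  # about 3.12 minutes
print(f"{t_750k:.2f} s")        # about 13.98 seconds
```

So the 1024×1024 image takes roughly 3.12 minutes at 56K baud and about 14 seconds at 750K baud.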

3. (a) Give a 3x3 kernel for performing unsharp masking in a single pass
through an image. Assume that the average image is obtained using a box
filter of size 3x3.
Unsharp masking enhances the edges by subtracting a blurred version of the
image from the original image. Given that the average image is obtained using
a 3x3 box filter, the kernel for unsharp masking can be derived as follows:
1. Box Filter (3x3): This is essentially an averaging filter:

Box Filter (3x3) = (1/9) ×
[ 1  1  1 ]
[ 1  1  1 ]
[ 1  1  1 ]

This filter smooths the image by averaging the pixel values with their neighbors.
2. Unsharp Masking Kernel:
To perform unsharp masking in a single pass, the detail mask (original minus
blurred) is added back to the original, i.e. sharpened = 2·(original) −
(blurred). The combined kernel, assuming a 3x3 box filter for the blurring, is:

Unsharp Kernel = 2·(identity) − Box Filter = (1/9) ×
[ -1  -1  -1 ]
[ -1  17  -1 ]
[ -1  -1  -1 ]

This kernel enhances the edges by amplifying the center pixel and subtracting
the surrounding blurred values.
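A quick check of this kernel with NumPy and SciPy (a sketch; the single-pass kernel is formed as 2·identity − box):

```python
import numpy as np
from scipy import ndimage

box = np.ones((3, 3)) / 9.0     # 3x3 box (averaging) filter
identity = np.zeros((3, 3))
identity[1, 1] = 1.0            # delta kernel: passes the image through unchanged

# sharpened = original + (original - blurred) = 2*original - blurred
kernel = 2 * identity - box     # (1/9) * [[-1,-1,-1],[-1,17,-1],[-1,-1,-1]]

flat = np.full((5, 5), 10.0)    # a constant region...
out = ndimage.convolve(flat, kernel, mode='nearest')
print(np.allclose(out, 10.0))   # ...is left unchanged (kernel coefficients sum to 1)
```

Because the coefficients sum to 1, flat regions pass through unchanged while intensity transitions are amplified.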
(b) Discuss with example the effect of order of the median filter on the
smoothing of image.
The median filter is a non-linear filter commonly used to reduce noise in an
image, particularly “salt and pepper” noise, while preserving edges. The order of
the median filter refers to the size of the neighborhood from which the median
value is calculated.
Lower Order Median Filter (e.g., 3x3):
● This filter considers a smaller neighborhood, which makes it effective in
removing small noise while preserving fine details in the image.
● Example: A 3x3 median filter will remove isolated noise pixels effectively
while largely preserving small features, though details narrower than half
the window (e.g., one-pixel-wide lines) can still be altered.
Higher Order Median Filter (e.g., 5x5, 7x7):
● This filter considers a larger neighborhood, which increases its ability to
remove larger noise spots but can also lead to blurring of fine details.
● Example: A 7x7 median filter will remove larger noise spots, but it may also
smooth out small features and details, potentially making the image look
less sharp.
In summary, the order of the median filter affects the balance between noise
reduction and detail preservation. A higher order increases smoothing but at the
expense of detail, while a lower order maintains more detail but may leave some
noise.
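The effect of window size can be seen with `scipy.ndimage.median_filter` on a small synthetic image (the image values are illustrative):

```python
import numpy as np
from scipy import ndimage

img = np.full((9, 9), 100.0)
img[2, 2] = 255.0              # a "salt" noise pixel
img[6, 6] = 0.0                # a "pepper" noise pixel
img[3:6, 3:6] = 200.0          # a small 3x3 bright feature

out3 = ndimage.median_filter(img, size=3)
out7 = ndimage.median_filter(img, size=7)

print(out3[2, 2], out3[6, 6])  # 100.0 100.0 -> isolated noise removed
print(out3[4, 4])              # 200.0 -> small feature survives the 3x3 filter
print(out7[4, 4])              # 100.0 -> the 7x7 window smooths it away
```

Both orders remove the isolated noise, but only the larger window also erases the small legitimate feature, which is exactly the trade-off described above.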
(c) What are the requirements for multi-resolution analysis with respect to
the scaling function?
1. Two-Scale Relation: The scaling function must satisfy a refinement
equation, allowing it to be expressed as a sum of scaled and shifted
versions of itself.
2. Orthogonality: The scaling function and its integer translations should be
orthogonal to each other, ensuring no overlap and redundancy in the
decomposition.
3. Compact Support: The scaling function should ideally have compact
support, meaning it is non-zero over a finite interval for efficient
computation.
4. Smoothness: The scaling function should be sufficiently smooth to allow for
the accurate reconstruction of signals at different resolutions.
5. Vanishing Moments: The associated wavelet function, derived from the
scaling function, should have vanishing moments, enabling it to capture
different levels of detail in the signal.
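The first two requirements are often written compactly; a sketch in standard notation (here h[n] denotes the scaling filter coefficients):

```latex
% Two-scale (refinement) relation
\varphi(t) = \sum_{n} h[n]\,\sqrt{2}\,\varphi(2t - n)

% Orthogonality of integer translates
\langle \varphi(t),\, \varphi(t - k) \rangle = \delta_{k,0}
```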
4(c). What is the drawback of an ideal low-pass filter? How can the
drawbacks be removed?
The primary drawback of an ideal low-pass filter is its non-causal nature and
infinite impulse response due to the sinc function. This means it cannot be
implemented in real-time systems and requires an infinite amount of processing
time. Additionally, the ideal filter introduces the Gibbs phenomenon, causing
oscillations or "ringing" near sharp transitions or edges in the signal, leading to
distortion.

To mitigate these issues, several approaches are used:

1. Windowing the Sinc Function: Applying a window function (like Hamming or
Hann) to the sinc function makes the impulse response finite and causal,
reducing ringing and enabling practical implementation.

2. Using Practical Filters: Filters like Butterworth, Chebyshev, or Bessel
provide a smoother transition between passband and stopband, minimizing the
Gibbs phenomenon.

3. FIR Filters: Finite Impulse Response filters are designed to have a finite,
causal response, approximating the ideal filter while avoiding its drawbacks.

4. IIR Filters: Infinite Impulse Response filters, such as Butterworth or
Chebyshev, offer a controlled frequency response with less sharp transitions,
reducing ringing effects.

These methods provide practical approximations to the ideal filter with
manageable trade-offs.
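Approach 1 (windowing the sinc) can be sketched in a few lines of NumPy; the filter length and cutoff here are illustrative choices:

```python
import numpy as np

N = 31                                  # filter length (odd, for symmetry)
fc = 0.2                                # cutoff as a fraction of the sampling rate
n = np.arange(N) - (N - 1) / 2          # tap indices centered around zero

h_ideal = 2 * fc * np.sinc(2 * fc * n)  # truncated ideal (sinc) impulse response
h = h_ideal * np.hamming(N)             # Hamming window tames the ringing
h /= h.sum()                            # normalize for unity gain at DC

print(np.allclose(h, h[::-1]))          # True: symmetric taps -> linear phase
```

Truncating the sinc alone would reintroduce Gibbs ringing at the band edge; the window tapers the truncation, trading a wider transition band for much smaller ripples.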
7.(b). Discuss the working of opening and closing morphology operations
and its uses in image processing along with its properties.

Opening (A ∘ B = (A ⊖ B) ⊕ B) is erosion by a structuring element B followed
by dilation by B. It removes objects and protrusions smaller than B, breaks
thin connections, and smooths object contours from the outside. Closing
(A • B = (A ⊕ B) ⊖ B) is dilation followed by erosion. It fills holes and gaps
smaller than B, joins narrow breaks, and smooths contours from the inside.
Properties: both operations are idempotent (repeating them changes nothing),
increasing (they preserve set inclusion), and translation-invariant. Opening is
anti-extensive (A ∘ B ⊆ A), closing is extensive (A ⊆ A • B), and the two are
duals of each other with respect to set complementation.
Uses: opening suppresses salt noise and small spurious objects; closing
suppresses pepper noise and fills small holes; together they form morphological
smoothing filters used in preprocessing for segmentation and object recognition.
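A minimal sketch of opening and closing with `scipy.ndimage` on small binary images (the shapes are illustrative):

```python
import numpy as np
from scipy import ndimage

B = np.ones((3, 3), dtype=bool)        # 3x3 structuring element

# Opening removes objects smaller than the structuring element
A1 = np.zeros((9, 9), dtype=bool)
A1[2:7, 2:7] = True                    # a 5x5 square (survives opening)
A1[0, 0] = True                        # isolated "salt" pixel (removed)
opened = ndimage.binary_opening(A1, structure=B)

# Closing fills holes and gaps smaller than the structuring element
A2 = np.zeros((9, 9), dtype=bool)
A2[2:7, 2:7] = True
A2[4, 4] = False                       # one-pixel hole (filled)
closed = ndimage.binary_closing(A2, structure=B)

print(opened[0, 0], opened[4, 4])      # False True: noise gone, square kept
print(closed[4, 4])                    # True: hole filled
```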
9. Write short notes on each of the following:

(a) Hole filling algorithm


The hole filling algorithm is used to fill holes in binary images, where a hole is
defined as a background region entirely surrounded by object pixels. The process
starts by identifying a seed point within the hole, setting it as the initial marker.
The algorithm iteratively applies morphological dilation to this marker image
and intersects it with the complement of the original image. This process
continues until no further changes occur, meaning the hole is completely filled.
The final filled image is obtained by combining the original image with the fully
expanded marker. This method is commonly used in applications like medical
imaging, where it’s necessary to fill gaps or voids in segmented structures,
ensuring the completeness of the object representation.
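The iterative dilate-and-intersect loop described above can be sketched directly (the image and seed location are illustrative assumptions):

```python
import numpy as np
from scipy import ndimage

# Binary image A: a square ring (object) enclosing a 3x3 hole
A = np.zeros((7, 7), dtype=bool)
A[1:6, 1:6] = True
A[2:5, 2:5] = False               # the hole

X = np.zeros_like(A)
X[3, 3] = True                    # seed point inside the hole

# X_k = (X_{k-1} dilated) intersected with the complement of A
while True:
    X_next = ndimage.binary_dilation(X) & ~A
    if np.array_equal(X_next, X):
        break                     # converged: the hole is fully covered
    X = X_next

filled = A | X                    # union of object and grown marker
print(filled[1:6, 1:6].all())     # True: the 5x5 region is now solid
```

In practice, `scipy.ndimage.binary_fill_holes` implements the same idea without requiring an explicit seed point.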

(b) Difference between the LOG and power transforms


LOG Transform and Power Transform are two techniques used to adjust the
intensity of images, but they serve different purposes. LOG Transform is
primarily used to compress the dynamic range of an image, making details in
dark regions more visible while suppressing bright regions. The formula for LOG
transform is S = c•log(1 + R) , where R is the input pixel intensity and c is a
constant. On the other hand, Power Transform, often referred to as Gamma
Correction, is used to adjust the brightness of an image. The formula for Power
Transform is S = c•R^(γ) , where γ controls the degree of correction; values of γ
less than 1 make the image brighter, while values greater than 1 make it darker.
LOG Transform is suitable for enhancing images with a large range of intensity
values, while Power Transform is more commonly used for display adjustment
and contrast enhancement.
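Both mappings are one-liners over an 8-bit intensity range; the constants are chosen so that input 255 maps to output 255, and γ = 0.5 is an illustrative value:

```python
import numpy as np

R = np.arange(256, dtype=np.float64)   # input gray levels 0..255

# Log transform: stretches dark intensities, compresses bright ones
c_log = 255.0 / np.log(1.0 + 255.0)
S_log = c_log * np.log(1.0 + R)

# Power (gamma) transform: gamma < 1 brightens, gamma > 1 darkens
gamma = 0.5
c_pow = 255.0 / 255.0 ** gamma
S_pow = c_pow * R ** gamma

print(round(S_log[64]))   # 192: the log curve pushes dark input 64 far up
print(round(S_pow[64]))   # 128: gamma 0.5 brightens midtones more gently
```

The comparison at input 64 shows the difference in character: the log curve compresses the dynamic range aggressively, while the gamma curve gives a tunable brightness adjustment.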

(c) Boundary extraction algorithm


The boundary extraction algorithm is a technique used in image processing to
identify the edges or boundaries of objects within a binary image. The algorithm
works by subtracting the eroded version of the image from the original image.
Erosion is performed using a structuring element, typically a small matrix that
defines the neighborhood for pixel comparison. After erosion, the resulting
image is subtracted from the original image, leaving only the boundary
pixels—those that were removed by the erosion process but are present in the
original image. This method is crucial in applications such as object recognition,
where defining the exact edges of an object is important for further analysis,
such as shape characterization or segmentation. Boundary extraction is
particularly useful in medical imaging, remote sensing, and any field where
object delineation is critical.
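The subtraction β(A) = A − (A ⊖ B) is a few lines with SciPy (a sketch on a synthetic square object):

```python
import numpy as np
from scipy import ndimage

A = np.zeros((7, 7), dtype=bool)
A[1:6, 1:6] = True                  # solid 5x5 square object

B = np.ones((3, 3), dtype=bool)     # 3x3 structuring element
eroded = ndimage.binary_erosion(A, structure=B)

boundary = A & ~eroded              # pixels removed by erosion = the boundary

print(boundary.sum())               # 16: the one-pixel-wide outline of the square
print(boundary[3, 3])               # False: interior pixels are not boundary
```

Erosion with the 3x3 element peels exactly one pixel layer off the object, so the set difference leaves a one-pixel-wide outline.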
