Digital Image Processing (2023)
Q.1 Write the answer of the following (any seven questions only):
(a) In a …… image we notice that the components of the histogram are concentrated on the
higher side of the intensity scale.
(i) bright
(ii) dark
(iii) colorful
(iv) all of these
(b) In which step of processing are the images subdivided successively into smaller
regions?
(i) Image enhancement
(ii) Image acquisition
(iii) Segmentation
(iv) Wavelets
(c) Ideal filters can be
(i) LPF
(ii) HPF
(iii) BPF
(iv) all of these
(d) Each element of an image matrix is called
(i) dots
(ii) coordinates
(iii) pixel
(iv) value
(e) Which of the following is the first and foremost step in image processing?
(i) Image acquisition
(ii) Segmentation
(iii) Image enhancement
(iv) Image restoration
(f) The method in which images are input and attributes are output is called
(i) low level processes
(ii) high level processes
(iii) mid level processes
(iv) edge level processes
(g) Primary colours for reflecting sources are
(i) red, green, blue
(ii) cyan, magenta, yellow
(iii) both (i) and (ii)
(iv) none of these
(h) Which of the following is the abbreviation of JPEG?
(i) Joint photographic experts group
(ii) Joint photographs expansion group
(iii) Joint photographic expanded group
(iv) Joint photographic expansion group
(i) What is the general form of representation of log transformation?
(i) s = c * log10 (1 / r)
(ii) s = c * log10 (1 + r)
(iii) s = c * log10 (1*r)
(iv) s = c * log10 (1 - r)
(j) What is the image processing technique which is used to improve the quality of an image
for human viewing?
(i) Compression
(ii) Enhancement
(iii) Restoration
(iv) Analysis
2. (a) Explain the components of a general purpose Digital Image Processing System with a
neat block diagram.
An image processing system is the combination of the different elements involved in digital
image processing. Digital image processing is the processing of an image by means of a digital
computer, using different computer algorithms to perform image processing on digital images.
It consists of the following components:
• Image Sensors:
Image sensors sense the intensity, amplitude, coordinates and other features of the
image and pass the result to the image processing hardware. This component also
covers the problem domain.
• Image Processing Hardware:
Image processing hardware is dedicated hardware that processes the data obtained
from the image sensors at high speed and passes the result to the general-purpose
computer.
• Computer:
The computer used in the image processing system is a general-purpose computer, like
the ones used in daily life.
• Image Processing Software:
Image processing software consists of the software modules that implement the
mechanisms and algorithms used in the image processing system.
• Mass Storage:
Mass storage stores the pixels of the images during the processing.
• Hard Copy Device:
Once the image has been processed, it can be recorded with a hard-copy device such as
a printer or film recorder, or archived on external storage such as a pen drive or optical disc.
• Image Display:
It includes the monitor or display screen that displays the processed images.
• Network:
Network is the connection of all the above elements of the image processing
system.
(b) What are the applications of Digital Image Processing System?
Applications of Digital Image Processing Systems
1. Medical Imaging:
o X-Ray Imaging: Enhancing X-ray images to improve the detection of fractures,
tumors, and other anomalies.
o MRI and CT Scans: Processing MRI and CT images for clearer visualization
and analysis of internal body structures.
o Ultrasound Imaging: Enhancing ultrasound images for better diagnostics.
o Digital Pathology: Analyzing tissue samples and cells to detect diseases like
cancer.
2. Remote Sensing:
o Satellite Imaging: Analyzing satellite images for environmental monitoring,
agriculture, and forestry.
o Geographic Information Systems (GIS): Processing spatial data for mapping
and urban planning.
o Disaster Management: Monitoring and assessing natural disasters like floods
and earthquakes.
3. Industrial Inspection:
o Quality Control: Automated inspection of products for defects during
manufacturing processes.
o Pattern Recognition: Identifying patterns in materials or products to ensure
consistency and quality.
o Machine Vision: Guiding robots in manufacturing and assembly processes.
4. Security and Surveillance:
o Face Recognition: Identifying individuals for security and access control.
o License Plate Recognition: Recognizing vehicle license plates for traffic
management and law enforcement.
o Motion Detection: Detecting unauthorized movement in surveillance systems.
5. Entertainment:
o Image and Video Compression: Reducing the size of images and videos for
storage and transmission.
o Special Effects: Creating visual effects in movies and video games.
o Restoration and Enhancement: Restoring old photographs and enhancing the
quality of digital images and videos.
6. Robotics and Automation:
o Autonomous Vehicles: Processing images from cameras and sensors for
navigation and decision-making.
o Object Recognition: Identifying and categorizing objects in the environment for
interaction and manipulation.
o Navigation Systems: Assisting robots in navigating through complex
environments.
7. Communication Systems:
o Image Transmission: Enhancing the transmission of images over various
communication channels.
o Video Conferencing: Improving the quality of video streams in real-time
communication.
o Telemedicine: Enabling remote diagnosis and consultation by transmitting high-
quality medical images.
8. Document Processing:
o Optical Character Recognition (OCR): Converting printed or handwritten text
into digital format.
o Document Analysis: Extracting and processing information from documents.
o Digital Archiving: Enhancing and preserving historical documents and
manuscripts.
9. Scientific Research:
o Astronomy: Analyzing images from telescopes to study celestial objects.
o Microscopy: Processing images from microscopes for studying biological and
material samples.
o Climate Studies: Analyzing satellite images and other data to study climate
change and weather patterns.
10. Art and Culture:
o Restoration: Digitally restoring artworks and photographs.
o Visualization: Creating virtual models of historical sites and artifacts.
o Content Creation: Assisting artists and designers in creating digital content and
animations.
(c) What are the disadvantages of Digital Image Processing System?
While digital image processing systems have numerous advantages, they also come with several
disadvantages. Here are some key drawbacks:
1. High Computational Requirements:
o Processing Power: Advanced image processing algorithms often require
significant computational resources, making them dependent on powerful
hardware.
o Time-Consuming: Complex processing tasks can be time-consuming, especially
for large images or real-time applications.
2. High Storage Requirements:
o Large File Sizes: High-resolution images and intermediate processing results can
take up substantial storage space.
o Archiving: Storing and managing large volumes of image data over time can be
challenging and expensive.
3. Complexity:
o Algorithm Development: Developing and fine-tuning image processing
algorithms can be complex and requires specialized knowledge.
o Implementation: Implementing these algorithms in hardware or software
systems can be challenging and resource-intensive.
4. Noise Sensitivity:
o Impact of Noise: Digital images can be susceptible to various types of noise (e.g.,
Gaussian noise, salt-and-pepper noise), which can degrade image quality and
affect processing results.
o Noise Reduction: Effective noise reduction techniques can be complex and may
not always be able to fully restore the original image quality.
5. Quality Dependence:
o Input Image Quality: The quality of the output heavily depends on the quality of
the input image. Poor quality input can lead to poor quality output.
o Compression Artifacts: Lossy compression techniques used to save storage
space can introduce artifacts that degrade image quality.
6. Cost:
o Initial Setup: High initial costs for acquiring and setting up advanced imaging
hardware and software.
o Maintenance: Ongoing costs for maintenance, updates, and scaling of the system
as data volumes and processing requirements grow.
7. Ethical and Privacy Concerns:
o Surveillance and Privacy: The use of image processing in surveillance can raise
significant privacy issues and ethical concerns regarding constant monitoring.
o Misuse of Technology: There is potential for misuse in areas such as deepfakes,
which can create realistic but fake images or videos for malicious purposes.
8. Data Integrity:
o Manipulation Risks: Digital images can be easily manipulated, which can lead to
questions about the authenticity and integrity of the image data.
o Tampering Detection: Detecting and preventing tampering with digital images
can be challenging.
9. Environmental Impact:
o Energy Consumption: The high computational and storage requirements can
lead to significant energy consumption, contributing to a larger environmental
footprint.
10. Accessibility:
o Technical Expertise: Requires specialized technical knowledge, which can limit
accessibility for non-experts.
o Software Costs: Commercial image processing software can be expensive,
limiting accessibility for individuals or organizations with limited budgets.
3. (a) Explain the following terms
(i) Log transformation
Log transformation
The log transformations can be defined by this formula
s = c log(r + 1).
where s and r are the pixel values of the output and input images and c is a constant. The
value 1 is added to each pixel value of the input image because, if a pixel intensity of 0 occurs
in the image, log(0) is undefined; adding 1 makes the minimum argument at least 1.
During log transformation, the dark pixel values of an image are expanded while the higher
pixel values are compressed. The overall effect is a brightening of the darker regions of the
image. The value of c in the log transform adjusts the degree of enhancement obtained.
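As a minimal sketch (assuming an 8-bit NumPy grayscale array, and choosing c so that the output spans the full 0–255 range), the transform can be applied as follows:

```python
import numpy as np

def log_transform(img):
    """Apply s = c*log(1 + r) to an 8-bit grayscale image."""
    r = img.astype(np.float64)
    c = 255.0 / np.log(1.0 + r.max())   # scale so the output fills [0, 255]
    s = c * np.log(1.0 + r)
    return s.astype(np.uint8)
```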
(ii) Intensity level slicing
Intensity level slicing means highlighting a specific range of intensities in an image. In other
words, we segment certain gray level regions from the rest of the image.
Suppose that in an image the region of interest always takes values between, say, 80 and 150.
Intensity level slicing highlights this range so that, instead of looking at the whole image, one
can focus on the highlighted region of interest.
Since it can be thought of as a piecewise-linear transformation function, it can be implemented
in several ways. The two basic types of slicing that are used most often are:
• In the first type, we display the desired range of intensities in white and suppress all other
intensities to black, or vice versa. This results in a binary image.
• In the second type, we brighten or darken the desired range of intensities (a to b) and leave
all other intensities unchanged, or vice versa.
Applications: Mostly used for enhancing features in satellite and X-ray images.
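A minimal sketch of the two slicing types, assuming an 8-bit NumPy grayscale image and the 80–150 range used in the example above:

```python
import numpy as np

def slice_binary(img, a=80, b=150):
    """Type 1: intensities in [a, b] become white, everything else black."""
    return np.where((img >= a) & (img <= b), 255, 0).astype(np.uint8)

def slice_preserve(img, a=80, b=150, level=255):
    """Type 2: intensities in [a, b] are brightened to `level`; others unchanged."""
    out = img.copy()
    out[(img >= a) & (img <= b)] = level
    return out
```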
(b) With block diagram, explain the fundamental step in Digital Image Processing System.
Following are Fundamental Steps of Digital Image Processing:
1. Image Acquisition
Image acquisition is the first of the fundamental steps of DIP. In this stage, an image is acquired
in digital form. Generally, pre-processing such as scaling is also done at this stage.
2. Image Enhancement
Image enhancement is the simplest and most appealing area of DIP. In this stage, details that are
obscured, or simply features of interest in an image, are highlighted, for example by adjusting
brightness, contrast, etc.
3. Image Restoration
Image restoration is the stage in which the appearance of an image is improved, typically using objective, mathematical models of the degradation process.
4. Color Image Processing
Color image processing has gained in importance because of the increased use of digital images
on the internet. It includes color modeling, processing in the digital domain, etc.
5. Wavelets and Multi-Resolution Processing
In this stage, an image is represented at various degrees of resolution. The image is subdivided into
smaller regions for data compression and for pyramidal representation.
6. Compression
Compression is a technique used to reduce the storage required for an image, or the bandwidth
required to transmit it. It is a very important stage because compressing data is necessary for internet use.
7. Morphological Processing
This stage deals with tools for extracting image components that are useful in the representation
and description of shape.
8. Segmentation
In this stage, an image is partitioned into its constituent objects. Segmentation is one of the most
difficult tasks in DIP; a rugged segmentation procedure takes the process a long way toward the
successful solution of imaging problems that require objects to be identified individually.
9. Representation and Description
Representation and description follow the output of the segmentation stage. That output is raw
pixel data comprising all the points of a region. Representation converts the raw data into a form
suitable for computer processing, whereas description extracts attributes that differentiate one
class of objects from another.
10. Object recognition
In this stage, a label is assigned to an object based on its descriptors.
11. Knowledge Base
The knowledge base is the last element of DIP. It encodes prior information about the image, such
as the regions where the information of interest is known to be located, which limits the search
that must be conducted. The knowledge base can be very complex, for example when the image
database contains high-resolution satellite images.
4. (a) Explain any two of the following properties of 2D-DFT with suitable equation:
(i) Convolution
(ii) Correlation
(iii) Separability
(iv) Translation
(b) Explain following image transform:
(i) Hadamard Transform
Hadamard Transform:
The Hadamard transform is a type of linear transformation that operates on images or signals
represented as matrices. It transforms the image into a domain where each coefficient represents
a different frequency or pattern in the image. Here’s a brief explanation:
• Definition: The Hadamard transform of an N×N image matrix I is defined as H⋅I⋅H,
where H is the Hadamard matrix.
• Mathematical Representation: For a given image matrix I and its Hadamard
transform I′, the transformation can be represented as:
I′ = H⋅I⋅H
where H is the N×N Hadamard matrix.
• Properties:
o It is invertible, meaning the original image can be reconstructed from its
Hadamard transform.
o Unlike the Fourier transform, which uses complex exponentials, the Hadamard
transform uses binary operations (addition and subtraction), making it
computationally simpler in some contexts.
o It is often used in areas where binary representation or computational simplicity is
advantageous, such as in certain types of data compression or error correction.
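As a rough sketch (assuming a square image whose side N is a power of two, and using SciPy's Hadamard matrix of ±1 entries), the forward and inverse transforms can be written as:

```python
import numpy as np
from scipy.linalg import hadamard   # Hadamard matrix of order 2^k

def hadamard_transform(img):
    """Forward transform I' = H·I·H for an N×N image, N a power of two."""
    N = img.shape[0]
    H = hadamard(N)                  # entries are +1 / -1
    return H @ img @ H

def inverse_hadamard(coeffs):
    """Inverse: since H·H = N·I, the original image is (1/N^2)·H·I'·H."""
    N = coeffs.shape[0]
    H = hadamard(N)
    return (H @ coeffs @ H) / (N * N)

# quick sanity check on a random 8×8 block
block = np.random.randint(0, 256, (8, 8)).astype(float)
assert np.allclose(inverse_hadamard(hadamard_transform(block)), block)
```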
(ii) Fourier Transform
Fourier Transform:
The Fourier transform is a fundamental tool in signal processing and image analysis. It
decomposes a signal or image into its constituent frequencies. Here’s a breakdown:
• Definition: For an M×N image f(x, y), the 2-D discrete Fourier transform is
F(u, v) = Σ Σ f(x, y) e^(−j2π(ux/M + vy/N)), where the sums run over x = 0, …, M−1
and y = 0, …, N−1; the inverse transform recovers f(x, y) from F(u, v).
• Properties:
o It converts an image from the spatial domain (pixel values) to the frequency
domain (amplitudes and phases of different frequencies).
o Provides a way to analyze and manipulate the frequency components of an image,
useful for tasks like filtering, compression, and pattern recognition.
o The Fourier transform is used extensively in fields such as image processing,
telecommunications, and scientific computing due to its ability to separate
complex signals into simpler components.
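A minimal NumPy sketch of the ideas above (the filter radius is an arbitrary illustration, not a prescribed value), showing the log-magnitude spectrum and a simple ideal low-pass filter applied in the frequency domain:

```python
import numpy as np

def fourier_spectrum(img):
    """Log-magnitude spectrum of the 2-D DFT, zero frequency shifted to the centre."""
    F = np.fft.fftshift(np.fft.fft2(img))
    return np.log1p(np.abs(F))

def ideal_lowpass(img, radius=30):
    """Ideal low-pass filtering in the frequency domain (illustrative only)."""
    M, N = img.shape
    u = np.arange(M) - M // 2
    v = np.arange(N) - N // 2
    D = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)    # distance from the centre
    H = (D <= radius).astype(float)                   # ideal LPF transfer function
    F = np.fft.fftshift(np.fft.fft2(img))
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * H)))
```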
5. (a) Define connectivity. What is the difference between 8-connectivity and m-connectivity?
Connectivity between pixels
It is an important concept in digital image processing.
It is used for establishing boundaries of objects and components of regions in an image.
Two pixels are said to be connected:
• if they are adjacent in some sense (neighbouring pixels, 4/8/m-adjacency)
• if their gray levels satisfy a specified criterion of similarity (equal intensity levels)
There are three types of connectivity on the basis of adjacency. They are:
a) 4-connectivity: Two or more pixels are said to be 4-connected if they are 4-adjacent to each
other.
b) 8-connectivity: Two or more pixels are said to be 8-connected if they are 8-adjacent to each
other.
c) m-connectivity: Two or more pixels are said to be m-connected if they are m-adjacent to each
other.
Comparison of 8-connectivity and m-connectivity (mixed connectivity) in tabular form:

| Feature | 8-Connectivity | m-Connectivity (mixed connectivity) |
| --- | --- | --- |
| Definition | A pixel is connected to its 8 adjacent neighbours: horizontally, vertically, and diagonally. | A pixel q is m-adjacent to p if q is in N4(p), or q is in ND(p) and N4(p) ∩ N4(q) contains no pixel whose value is in the similarity set V. |
| Neighborhood | The 8 pixels at (x−1,y−1), (x,y−1), (x+1,y−1), (x−1,y), (x+1,y), (x−1,y+1), (x,y+1), (x+1,y+1). | A subset of the 8-neighbourhood; a diagonal neighbour is accepted only when no shared 4-neighbour already provides a connection. |
| Usage | Commonly used where all nearby pixels are equally relevant, e.g., in image segmentation or boundary detection. | A modification of 8-connectivity used to eliminate the ambiguity of multiple connection paths that 8-connectivity can create. |
| Flexibility | Rigid: neighbours are defined by a fixed pattern. | Conditional: diagonal neighbours are used only when a 4-connected path does not already exist. |
| Examples | Standard in many image processing algorithms for adjacency determination. | Preferred where unique paths and boundaries are required, e.g., boundary following and counting connected components. |
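A small sketch of these adjacency tests for interior pixels of a binary NumPy image (boundary checks are omitted for brevity; V is the set of intensity values that defines the foreground):

```python
import numpy as np

def n4(p):
    """4-neighbours of pixel p = (row, col)."""
    x, y = p
    return {(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)}

def nd(p):
    """Diagonal neighbours of p."""
    x, y = p
    return {(x - 1, y - 1), (x - 1, y + 1), (x + 1, y - 1), (x + 1, y + 1)}

def is_8_adjacent(p, q):
    return q in (n4(p) | nd(p))

def is_m_adjacent(img, p, q, V={1}):
    """q is m-adjacent to p if q is in N4(p), or q is in ND(p) and
    N4(p) ∩ N4(q) contains no pixel whose value is in V."""
    if img[p] not in V or img[q] not in V:
        return False
    if q in n4(p):
        return True
    if q in nd(p):
        return all(img[r] not in V for r in (n4(p) & n4(q)))
    return False
```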
(b) If an image I is of 8-bit and has 1500 rows and 1300 columns, then find the following:
(i) Calculate how many mega-pixels this image has.
(ii) Calculate the size of the image.
(iii) Calculate how many pixels are required per inch, if the screen size is 5 inches.
Let's calculate each part based on the information provided:
Given:
• Image dimensions: 1500 rows × 1300 columns
• Bit depth: 8 bits per pixel (grayscale)
• Screen size: 5 inches
(i) Megapixel calculation:
Total pixels = 1500 × 1300 = 1,950,000 pixels ≈ 1.95 megapixels.
(ii) Size of the image:
Size = 1500 × 1300 × 8 bits = 15,600,000 bits = 1,950,000 bytes ≈ 1.86 MB.
(iii) Pixels per inch calculation:
Taking the 5 inches as the equivalent linear dimension of the image,
PPI = √(1,950,000) / 5 ≈ 1396.4 / 5 ≈ 279.3.
Therefore, approximately 279 pixels per inch are required to display the entire image on a 5-inch
screen.
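The same arithmetic, as a short Python check:

```python
import math

rows, cols, bits_per_pixel = 1500, 1300, 8
screen_inches = 5

total_pixels = rows * cols                      # 1,950,000 pixels
megapixels   = total_pixels / 1e6               # 1.95 MP

size_bits  = total_pixels * bits_per_pixel      # 15,600,000 bits
size_bytes = size_bits / 8                      # 1,950,000 bytes
size_mb    = size_bytes / (1024 * 1024)         # ≈ 1.86 MB

ppi = math.sqrt(total_pixels) / screen_inches   # ≈ 279.3 pixels per inch

print(f"{megapixels:.2f} MP, {size_mb:.2f} MB, {ppi:.1f} PPI")
```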
6. (a) What is meant by image subtraction? Discuss various areas of application of
image subtraction.
Image subtraction refers to a fundamental operation in digital image processing where pixel
values of one image are subtracted from corresponding pixel values of another image on a pixel-
by-pixel basis. This process results in a new image where each pixel represents the difference
between the corresponding pixels in the original images.
Process of Image Subtraction:
1. Mathematical Representation: Given two images A and B of the same size, the
subtraction of B from A results in a new image C:
C(x,y)=A(x,y)−B(x,y)
where A(x,y) and B(x,y) are the intensity values of pixels at coordinates (x,y) in images
A and B, respectively.
2. Result Interpretation:
o If C(x,y)>0: Indicates regions where A has higher intensity than B.
o If C(x,y)<0: Indicates regions where B has higher intensity than A.
o C(x,y)=0: Indicates no difference between corresponding pixels of A and B.
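A minimal sketch, assuming two registered 8-bit NumPy images of the same size; the signed intermediate type avoids the wrap-around that uint8 subtraction would cause:

```python
import numpy as np

def subtract_images(A, B):
    """C(x, y) = A(x, y) - B(x, y); a signed type preserves negative differences."""
    return A.astype(np.int16) - B.astype(np.int16)

def difference_for_display(A, B):
    """Absolute difference rescaled to [0, 255] for viewing."""
    C = np.abs(A.astype(np.int16) - B.astype(np.int16))
    return (255 * C / max(int(C.max()), 1)).astype(np.uint8)
```

(For comparison, OpenCV's cv2.subtract saturates negative results to zero, while cv2.absdiff keeps the magnitude of the difference.)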
Applications of Image Subtraction:
Image subtraction finds applications in various fields of image processing and computer vision
due to its ability to highlight differences between images. Some key areas of application include:
1. Medical Imaging:
o Differential Diagnosis: Used in medical imaging (e.g., X-ray, MRI) to highlight
changes or abnormalities between successive scans. It helps in identifying subtle
differences between healthy and diseased tissues or tracking changes over time.
o Angiography: Used to visualize the contrast enhancement in blood vessels,
revealing their structure and abnormalities.
2. Remote Sensing:
o Change Detection: Used in satellite imagery to detect changes on the Earth's
surface over time. It helps in monitoring environmental changes, urban expansion,
deforestation, etc.
o Vegetation Analysis: Helps in monitoring vegetation health by detecting changes
in vegetation cover and health over different time periods.
3. Object Tracking and Motion Detection:
o Foreground Extraction: Used in video processing for detecting moving objects
by subtracting consecutive frames. It helps in applications like surveillance, traffic
monitoring, and human-computer interaction.
o Background Subtraction: Used to segment moving objects from a static or
dynamic background in real-time video analysis.
4. Quality Control and Inspection:
o Defect Detection: Used in manufacturing to detect defects or anomalies in
products by comparing images of defective and non-defective items.
o Surface Inspection: Used to identify flaws, scratches, or surface irregularities in
materials or products.
5. Photogrammetry:
o Surface Reconstruction: Used to generate 3D models by comparing multiple
images taken from different viewpoints. Image subtraction helps in identifying
disparities and calculating depth information.
6. Digital Image Forensics:
o Tampering Detection: Used to detect image tampering or forgery by comparing
original and suspect images. Subtraction highlights discrepancies or alterations in
pixel values.
(b) What are the elements of visual perception? Explain in brief.
The field of digital image processing is built on a foundation of mathematical and
probabilistic formulations, but human intuition and analysis play the central role in choosing
between the various techniques, and that choice is often based on subjective, visual
judgements.
In human visual perception, the eyes act as the sensor or camera, neurons act as the connecting
cable and the brain acts as the processor.
The basic elements of visual perception are:
1. Structure of Eye
2. Image Formation in the Eye
3. Brightness Adaptation and Discrimination
Structure of Eye:
The human eye is a slightly asymmetrical sphere with an average diameter of about 20 mm
to 25 mm and a volume of about 6.5 cc. The eye works much like a camera: the external
object is seen just as a camera takes a picture of an object. Light enters the eye through a
small opening called the pupil, a black-looking aperture that contracts when the eye is
exposed to bright light, and is focused on the retina, which acts like the camera's film.
The lens, iris, and cornea are nourished by a clear fluid known as the aqueous humour, which
fills the anterior chamber. The fluid flows from the ciliary body to the pupil and is absorbed
through the channels in the angle of the anterior chamber. The delicate balance of aqueous
production and absorption controls the pressure within the eye.
The cones in the eye number between 6 and 7 million and are highly sensitive to colour.
Humans perceive coloured images in daylight because of these cones. Cone vision is also
called photopic or bright-light vision.
The rods are far more numerous, between 75 and 150 million, and are distributed over the
retinal surface. Rods are not involved in colour vision and are sensitive to low levels of
illumination.
Image Formation in the Eye:
The image is formed when the lens of the eye focuses an image of the outside world onto a
light-sensitive membrane at the back of the eye called the retina. The lens focuses light on
the photoreceptive cells of the retina, which detect the photons of light and respond by
producing neural impulses.
The distance between the lens and the retina is about 17mm and the focal length is
approximately 14mm to 17mm.
Brightness Adaptation and Discrimination:
Digital images are displayed as a discrete set of intensities. The eye's ability to discriminate
between black and white at different intensity levels is an important consideration in
presenting image processing results.
The range of light intensity levels to which the human visual system can adapt is of the order
of 10^10, from the scotopic threshold to the glare limit. In photopic vision alone, the range is
about 10^6.
7. (a) Define the Discrete Fourier Transform and its inverse.
(b) State the distributivity and scaling properties.
8. (a) Enumerate the differences between image enhancement and image restoration.
Comparison of image enhancement and image restoration in tabular form:

| Feature | Image Enhancement | Image Restoration |
| --- | --- | --- |
| Objective | Improves the visual quality or perception of an image | Recovers an original or improved version of a degraded image |
| Purpose | Enhance certain aspects of an image for better visualization or analysis | Restore image quality by removing degradation or noise |
| Process Focus | Often non-linear, focuses on specific features or aesthetics | Typically involves mathematical models and algorithms for reconstruction |
| Techniques | Histogram equalization, contrast adjustment, sharpening | Deconvolution, noise reduction, inpainting |
| Outcome | May alter the image to highlight specific features or improve overall appearance | Aims to recover details, reduce artifacts, or improve fidelity |
| Applications | Photography, artistic effects, feature enhancement | Medical imaging, forensic analysis, archival restoration |
| Challenges | Subjective evaluation, risk of introducing artifacts | Mathematical complexity, accuracy of restoration |
| Examples | Brightness adjustment, color correction, edge enhancement | Removing blur from a photograph, denoising MRI scans |
(b) What are the derivative operators useful in image segmentation? Explain their role
in segmentation ?
In image segmentation, derivative operators play a crucial role in detecting edges and boundaries
between different regions or objects within an image. They help highlight abrupt changes in
intensity, which often correspond to edges or boundaries. Here are some common derivative
operators used in image segmentation and their roles:
Common Derivative Operators:
1. First-Order Derivatives (Gradient Operators): The gradient ∇f = [∂f/∂x, ∂f/∂y] is
approximated with small convolution masks such as the Roberts, Prewitt, and Sobel operators.
The gradient magnitude is large at edges, and its direction is perpendicular to the edge.
2. Second-Order Derivatives (Laplacian): The Laplacian ∇²f = ∂²f/∂x² + ∂²f/∂y² produces a
double response at intensity changes, with a zero-crossing at the edge location; it is usually
combined with smoothing (e.g., the Laplacian of Gaussian) because it is very sensitive to noise.
Role in Segmentation:
• Edge Detection: Derivative operators are primarily used for edge detection by
identifying significant changes in pixel intensities, which often correspond to object
boundaries in an image.
• Feature Extraction: Gradient magnitudes and directions provide valuable information
about the structure and orientation of edges, aiding in feature extraction for segmentation
algorithms.
• Boundary Localization: The Laplacian and Hessian operators help localize boundaries
by detecting abrupt intensity changes and describing the spatial characteristics of edges
(e.g., sharpness, curvature).
• Segmentation Initialization: Derivative-based edge maps can serve as initial inputs or
guiding cues for more complex segmentation algorithms, such as region-based or
contour-based methods.
• Noise Sensitivity: Derivative operators can be sensitive to noise, so preprocessing
techniques like Gaussian smoothing or median filtering are often applied to enhance
segmentation accuracy and reduce false detections.
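A brief sketch of both operator families using SciPy convolution (the 3×3 Sobel and Laplacian masks shown are the standard textbook ones):

```python
import numpy as np
from scipy.ndimage import convolve

def sobel_gradient_magnitude(img):
    """First-order derivatives: gradient magnitude from the Sobel masks."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    gx = convolve(img.astype(float), kx)
    gy = convolve(img.astype(float), ky)
    return np.hypot(gx, gy)          # large values indicate edges

def laplacian(img):
    """Second-order derivative: zero-crossings of the response mark edge locations."""
    k = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)
    return convolve(img.astype(float), k)
```

In practice the image is usually smoothed (for example with a Gaussian filter) before either operator is applied, for the noise-sensitivity reason noted above.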
9. Write short notes on the following
(a) Image Subtraction
(b) Power Transform
Power-law transformations
Power-law transformations include two further transformations: the nth power and the nth root
transformations. These transformations can be given by the expression:
s=cr^γ
This symbol γ is called gamma, due to which this transformation is also known as gamma
transformation.
Varying the value of γ varies the enhancement of the image. Different display devices/monitors
have their own gamma, which is why they display the same image at different intensities.
This type of transformation is used for enhancing images for different types of display device.
The gamma of different display devices is different; for example, the gamma of a CRT lies between
1.8 and 2.5, which means the image displayed on a CRT appears dark.
Gamma correction therefore applies the inverse exponent:
s = c·r^γ with γ = 1/2.5, i.e. s = c·r^(1/2.5)
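A minimal sketch, assuming an 8-bit NumPy image normalised to [0, 1] before applying s = c·r^γ (with c = 1):

```python
import numpy as np

def gamma_correct(img, gamma, c=1.0):
    """Power-law transform s = c * r**gamma on an 8-bit grayscale image."""
    r = img.astype(np.float64) / 255.0          # normalise to [0, 1]
    s = c * np.power(r, gamma)
    return np.clip(s * 255.0, 0, 255).astype(np.uint8)

# gamma < 1 brightens dark regions; gamma > 1 darkens the image.
# To pre-correct for a CRT with display gamma ≈ 2.5, use gamma = 1/2.5:
ramp = np.arange(256, dtype=np.uint8).reshape(16, 16)
corrected = gamma_correct(ramp, 1 / 2.5)
```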
(c) Image averaging
Image averaging, also known as mean filtering, is a simple yet effective technique
in image processing used primarily for noise reduction and smoothing. It involves calculating the
average pixel value from a neighborhood of pixels centered around each pixel in the original
image. Here's how image averaging works and its applications:
How Image Averaging Works:
1. Neighborhood Selection:
o Choose a window or kernel of a specified size (typically square or rectangular)
centered around each pixel in the image.
2. Averaging Calculation:
o Compute the average intensity value of all pixels within the chosen neighborhood.
o Replace the original pixel value with this average value.
3. Edge Handling:
o For pixels near the image boundary where the neighborhood extends beyond the
image boundaries, various techniques such as zero-padding, reflection padding, or
wrapping around can be used.
4. Iterative Process:
o Repeat the averaging process for every pixel in the image to generate a smoothed
version of the original image.
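A short sketch of the averaging step with SciPy: the uniform filter computes the neighbourhood mean, and reflection padding handles the border pixels mentioned in step 3 (the window size and the random test image are illustrative assumptions):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def mean_filter(img, size=3):
    """Replace each pixel with the mean of its size×size neighbourhood.
    Border pixels are handled by reflecting the image (mode='reflect')."""
    return uniform_filter(img.astype(float), size=size, mode='reflect')

# Example: smooth a noisy 8-bit image with a 5×5 averaging window.
noisy = np.random.randint(0, 256, (64, 64)).astype(np.uint8)
smoothed = mean_filter(noisy, size=5).astype(np.uint8)
```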
Applications of Image Averaging:
• Noise Reduction: Averaging is effective in reducing additive noise (e.g., Gaussian noise)
by smoothing out irregular variations in pixel intensity across the image.
• Image Smoothing: It removes high-frequency components such as fine details and
textures, resulting in a blurred or smoothed version of the image.
• Preprocessing: Used as a preprocessing step in various image analysis tasks to improve
the performance of subsequent algorithms, such as edge detection or segmentation.
• Motion Blur Simulation: In computer graphics and animation, averaging is used to
simulate motion blur by averaging consecutive frames of an animated sequence.