CV answers
1. Image Degradation/Restoration Model:
Imagine taking a photograph through a foggy window or with a shaky camera. The resulting image is not a
perfect representation of the scene—it’s degraded. In digital image processing, we model this degradation
using a mathematical formula so we can attempt to undo it and restore the original image.
The model is:
g(x, y) = h(x, y) ∗ f(x, y) + η(x, y)
Let’s break this down:
f(x, y): The original image (what we want to recover).
h(x, y): The degradation function. This represents things like blurring from motion or an out-of-focus
lens. Think of it as a "smudge" function that distorts the image.
∗: Convolution, meaning the degradation function is applied across the whole image (each point of f is spread out according to h).
η(x, y): Additive noise—random variations caused by things like electronic sensor imperfections,
transmission interference, or atmospheric conditions.
g(x, y): The final degraded image—the one we actually observe.
The goal of image restoration is to use knowledge or estimates of h(x, y) and η(x, y) to reconstruct an
approximation of f(x, y). In other words, we want to remove or reverse the effects of both the blur and the
noise to retrieve a clean image.
In the frequency domain (which deals with the image in terms of its frequency components rather than pixel
intensities), this becomes:
G(u, v) = H(u, v) · F(u, v) + N(u, v)
Here, the capital letters represent the Fourier transforms of the respective spatial domain functions. This
frequency-domain representation is particularly useful because convolution in the spatial domain becomes
multiplication in the frequency domain, which is easier to handle computationally.
Key takeaway: The more accurately we know h(x, y) and η(x, y), the better we can estimate f(x, y). This
process is objective and based on modeling physical causes of degradation.
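To make the model concrete, here is a minimal Python sketch (NumPy and SciPy assumed). The degradation function h is modeled as a Gaussian blur and η as zero-mean Gaussian noise; both are illustrative choices, since the real h and η depend on the physical cause. The last lines verify that spatial convolution matches frequency-domain multiplication:

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)

# Toy original image f(x, y): a bright square on a dark background
f = np.zeros((64, 64))
f[24:40, 24:40] = 1.0

# Degrade: g(x, y) = h(x, y) * f(x, y) + eta(x, y)
# mode='wrap' makes the blur circular, so it matches FFT multiplication exactly
blurred = ndimage.gaussian_filter(f, sigma=2.0, mode='wrap')  # h * f
eta = rng.normal(0.0, 0.02, f.shape)                          # additive noise
g = blurred + eta                                             # observed image

# Frequency-domain check: G(u, v) = H(u, v) . F(u, v) + N(u, v)
# H is the transform of the blur's impulse response (point spread function)
impulse = np.zeros_like(f)
impulse[0, 0] = 1.0
H = np.fft.fft2(ndimage.gaussian_filter(impulse, sigma=2.0, mode='wrap'))
G = H * np.fft.fft2(f) + np.fft.fft2(eta)
g_from_freq = np.real(np.fft.ifft2(G))
print(np.allclose(g, g_from_freq))  # True: convolution <-> multiplication
```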
2. (4) Convert RGB to HSI Model
To convert a pixel from RGB to HSI, where R, G, B ∈ [0, 1], the following steps are used:
Step 1: Normalize R, G, B values to the range [0, 1] if they aren't already.
Step 2: Compute Intensity (I):
I = (R + G + B) / 3
Step 3: Compute Saturation (S):
This measures the degree to which a color is diluted with white:
S = 1 − [3 / (R + G + B)] · min(R, G, B)
Step 4: Compute Hue (H):
Use the intermediate angle:
θ = cos⁻¹ { ½[(R − G) + (R − B)] / √[(R − G)² + (R − B)(G − B)] }
Then,
If B ≤ G: H = θ
If B > G: H = 360° − θ
Convert H to range [0,1] by dividing by 360 if needed.
Step 5: HSI values are then:
H ∈ [0,1] (or in degrees 0° to 360°)
S ∈ [0,1]
I ∈ [0,1]
Note:
If R = G = B, the color is gray (S = 0), and H is undefined or set to 0.
These computations are performed on a per-pixel basis.
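As a sketch, the per-pixel conversion can be written in Python (NumPy assumed; the function name rgb_to_hsi is our own):

```python
import numpy as np

def rgb_to_hsi(r, g, b):
    """Convert one pixel from RGB to HSI; r, g, b assumed in [0, 1]."""
    i = (r + g + b) / 3.0                       # Step 2: intensity
    if r == g == b:                             # gray: S = 0, H undefined -> 0
        return 0.0, 0.0, i
    s = 1.0 - 3.0 * min(r, g, b) / (r + g + b)  # Step 3: saturation
    num = 0.5 * ((r - g) + (r - b))             # Step 4: hue via theta
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b))
    theta = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    h = theta if b <= g else 360.0 - theta
    return h / 360.0, s, i                      # H scaled to [0, 1]

print(rgb_to_hsi(1.0, 0.0, 0.0))  # pure red -> (0.0, 1.0, 0.333...)
```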
3. (6) Explain Edge Detection and Gradient-Based Method:
Edge detection is a fundamental image segmentation technique used to identify points in a digital image
where brightness changes sharply. These points typically represent object boundaries.
Gradient-Based Edge Detection:
This method is based on detecting sharp changes in intensity by computing the gradient of the image. The
key ideas are:
a) Image Gradient:
The gradient of an image f(x, y) at a point gives the direction and rate of fastest increase in intensity. It is a
vector:
∇f(x, y) = [∂f/∂x, ∂f/∂y]
b) Gradient Magnitude and Direction:
The strength (magnitude) of the edge is given by:
M(x, y) = √[(∂f/∂x)² + (∂f/∂y)²]
The direction is given by:
α(x, y) = tan⁻¹[(∂f/∂y) / (∂f/∂x)]
These values indicate how strong and in which direction the intensity changes, which helps detect edges.
c) Operators for Gradient Approximation:
Several filters are used to approximate derivatives:
Sobel Operator: Emphasizes central pixels, smooths noise.
Prewitt Operator: Similar to Sobel but with uniform weights, making it simpler though slightly more noise-sensitive.
Roberts Operator: Uses 2×2 diagonal filters.
Kirsch Operator: Emphasizes edges in particular compass directions.
d) Smoothing and Thresholding:
To reduce noise before gradient calculation, smoothing (e.g., averaging filters) is applied. After computing
the gradient, thresholding is used to suppress weak edges (usually caused by noise), enhancing strong,
meaningful edges.
Summary:
Gradient-based edge detection detects intensity changes using the first derivative of the image. It’s sensitive
to noise, so smoothing and thresholding are commonly used. The gradient magnitude shows edge strength,
and its direction indicates orientation.
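Below is a short Python sketch of this pipeline (NumPy and SciPy assumed; the helper name gradient_edges and the 0.2 relative threshold are illustrative choices, not a standard):

```python
import numpy as np
from scipy import ndimage

def gradient_edges(f, smooth_sigma=1.0, rel_threshold=0.2):
    """Smooth, compute the Sobel gradient, then threshold weak responses."""
    f = ndimage.gaussian_filter(f.astype(float), smooth_sigma)  # reduce noise
    gx = ndimage.sobel(f, axis=1)                 # approximates df/dx
    gy = ndimage.sobel(f, axis=0)                 # approximates df/dy
    magnitude = np.hypot(gx, gy)                  # M(x, y): edge strength
    direction = np.arctan2(gy, gx)                # alpha(x, y), in radians
    edges = magnitude > rel_threshold * magnitude.max()  # keep strong edges
    return magnitude, direction, edges

# Edges of a bright square show up along its border
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
_, _, e = gradient_edges(img)
```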
4. (8) Explain the Hit-or-Miss Transform
The Hit-or-Miss Transform (HMT) is a morphological operation used primarily for shape detection in
binary images. Unlike basic morphological operations like dilation or erosion, HMT utilizes two
structuring elements: one to probe the foreground (object) and one to probe the background.
Definition:
Let I be the binary image composed of:
Foreground: A
Background: Aᶜ (the complement of A)
Let B1 be a structuring element for detecting a specific foreground pattern, and B2 for detecting a
background pattern.
The HMT is defined as:
I ⊛ (B1, B2) = (A ⊖ B1) ∩ (Aᶜ ⊖ B2)
This means:
B1 must fit within the foreground (A)
B2 must simultaneously fit within the background (Aᶜ)
The transform identifies locations where both conditions are true
Use Case:
Detecting specific shapes or patterns (like corners, holes, or particular object outlines) in binary
images.
Example:
If you want to find a 3×3 cross shape within an image, you can define B1 to match the cross in the
foreground and B2 to match the surrounding background. The intersection of the two erosions pinpoints
where the pattern occurs.
Advantages:
Highly selective; useful for exact shape detection.
Can differentiate between similar structures based on foreground and background geometry.
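SciPy implements exactly this pair-of-erosions definition as binary_hit_or_miss; below is a small sketch using the 3×3 cross example from above (the toy image is our own):

```python
import numpy as np
from scipy import ndimage

# Toy binary image containing one 3x3 cross centered at (2, 2)
A = np.zeros((7, 7), dtype=bool)
A[2, 1:4] = True
A[1:4, 2] = True

# B1 probes the foreground (the cross shape), B2 the background (its corners)
B1 = np.array([[0, 1, 0],
               [1, 1, 1],
               [0, 1, 0]], dtype=bool)
B2 = ~B1

# (A eroded by B1) AND (A-complement eroded by B2)
hits = ndimage.binary_hit_or_miss(A, structure1=B1, structure2=B2)
print(np.argwhere(hits))  # [[2 2]]: the cross's center
```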
5. (10) Difference Between RGB and CMY Models
| Feature | RGB Color Model | CMY Color Model |
|---|---|---|
| Type | Additive color model | Subtractive color model |
| Primary Components | Red, Green, Blue | Cyan, Magenta, Yellow |
| Color Generation Method | Colors are created by adding light of R, G, B | Colors are created by subtracting light from white |
| Basis | Used in devices that emit light (e.g., monitors, cameras) | Used in devices that reflect light (e.g., printers, copiers) |
| Secondary Colors | Cyan (G + B), Magenta (R + B), Yellow (R + G) | Red (M + Y), Green (C + Y), Blue (C + M) |
| Black Representation | (0, 0, 0): absence of light | (1, 1, 1): full absorption of R, G, B |
| White Representation | (1, 1, 1): full presence of light | (0, 0, 0): no absorption; full reflection of R, G, B |
| Mathematical Relation | RGB → CMY: C = 1 − R, M = 1 − G, Y = 1 − B | CMY → RGB: R = 1 − C, G = 1 − M, B = 1 − Y |
| Application | Used in displays, scanners, digital cameras | Used in color printing and inkjet devices |
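The Mathematical Relation row in the table amounts to a one-line conversion each way; here is a minimal Python sketch (NumPy assumed; the function names are our own):

```python
import numpy as np

def rgb_to_cmy(rgb):
    """RGB -> CMY for values normalized to [0, 1]."""
    return 1.0 - np.asarray(rgb, dtype=float)

def cmy_to_rgb(cmy):
    """CMY -> RGB: the same relation inverts itself."""
    return 1.0 - np.asarray(cmy, dtype=float)

print(rgb_to_cmy([1.0, 0.0, 0.0]))  # pure red -> [0. 1. 1.] (magenta + yellow)
```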
Where They Are Used:
RGB Model:
- Used in hardware that emits light, such as computer monitors, televisions, digital cameras, and smartphones.
- It is the standard model for image acquisition and display systems.
- Ideal for image generation and screen-based applications.
CMY (and CMYK) Model:
- Used in color reproduction systems that rely on pigments or dyes, such as inkjet printers, photocopiers, and press printing (CMYK adds black, K, to compensate for imperfect color inks).
- CMY is derived from RGB and is used for converting screen colors to printable output.