IP Unit 1 Fundamentals

Chapter 2 discusses the fundamentals of digital images, focusing on the structure and function of the human eye, including the roles of cones and rods in vision. It also covers image formation, brightness adaptation, and the processes of image acquisition using various sensing elements and arrays. Additionally, the chapter explains concepts such as image sampling, quantization, and the representation of digital images.

Chapter 2

Digital Image Fundamentals

Structure of Human Eye


• Spherical in shape
• Average diameter about 20 mm
• Three membranes:
– Cornea and sclera
– Choroid
– Retina
• Light receptors over the retina:
– Cones
– Rods
C.Gireesh, Assistant Professor, Vasavi College of Engineering, Hyderabad

Cones and Rods

Cones:
• Concentrated in the central portion of the retina, called the fovea, and sparsely distributed over the other regions of the retina.
• 6 to 7 million per eye.
• Each cone is connected to its own nerve end.
• Highly sensitive to color.
• Responsible for photopic vision (bright-light or daylight vision).

Rods:
• Distributed over the entire retinal surface.
• 75 to 150 million per eye.
• Several rods are connected to a single nerve end.
• Give an overall picture of the field of view.
• Responsible for scotopic vision (dim-light or moonlight vision).
Image formation in the Eye

• Let ‘h’ denote the height (in mm) of the retinal image of an object 15 m high viewed from a distance of 100 m.
• Since the distance from the lens center to the retina is about 17 mm, the geometry of the figure yields 15/100 = h/17.
• So h = 2.55 mm.
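The similar-triangles calculation above can be sketched in a few lines; the 17 mm lens-to-retina distance is the value used on the slide.

```python
# Similar-triangles model of image formation in the eye (sketch).
# H / D = h / f, where H is object height, D viewing distance,
# and f ~ 17 mm is the lens-to-retina distance.
def retinal_image_height(object_height_m, object_distance_m, focal_mm=17.0):
    """Height of the retinal image in millimetres."""
    return object_height_m / object_distance_m * focal_mm

h = retinal_image_height(15, 100)
print(f"{h:.2f} mm")  # 2.55 mm
```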


• Estimate the diameter of the smallest printed dot that the eye can discern if the page on which the dot is printed is 0.2 m away from the eyes.
• Assume that the visual system ceases to detect the dot when the image of the dot on the fovea becomes smaller than the diameter of one receptor (cone) in that area of the retina.
• Assume further that the fovea can be modelled as a square array of dimensions 1.5 mm × 1.5 mm, and that the cones and the spaces between the cones are distributed uniformly throughout the array.
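A sketch of one way to work this problem. The slide does not give the number of cones in the fovea; the figure of 337,000 used below is the value quoted in Gonzalez & Woods and is an assumption here.

```python
import math

# Smallest-discernible-dot estimate (sketch).
# ASSUMPTION: ~337,000 cones in the 1.5 mm x 1.5 mm fovea
# (Gonzalez & Woods figure; not stated on the slide).
FOVEA_SIDE_MM = 1.5
CONES_IN_FOVEA = 337_000
FOCAL_MM = 17.0           # lens-to-retina distance
PAGE_DISTANCE_MM = 200.0  # the page is 0.2 m from the eye

cones_per_side = math.sqrt(CONES_IN_FOVEA)         # ~580 cones per row
receptor_size_mm = FOVEA_SIDE_MM / cones_per_side  # one cone plus spacing
# Similar triangles: dot_diameter / distance = receptor_size / focal length
dot_diameter_mm = receptor_size_mm * PAGE_DISTANCE_MM / FOCAL_MM
print(f"{dot_diameter_mm:.3f} mm")  # about 0.030 mm
```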


Brightness Adaptation and Discrimination


• The range of light intensity levels to which the human visual system can adapt extends from the scotopic threshold to the glare limit.
• Subjective brightness is a logarithmic function of the light intensity incident on the eye.
• The long solid curve represents the range of intensities to which the visual system can adapt.

• The transition from scotopic to photopic vision is gradual over the approximate range from 0.001 to 0.1 millilambert (−3 to −1 mL on the log scale).
• The total range of distinct intensity levels the visual system can discriminate simultaneously is small compared with the total adaptation range.


• For any given set of conditions, Ba is the brightness adaptation level.
• The range of subjective brightness that the eye can perceive when adapted to this level extends from Bb up to Ba.
• Bb is the level at which all stimuli are perceived as indistinguishable blacks.


• Wavelength (λ) and frequency (ν) are related by the expression λ = c/ν, where c is the speed of light.

Monochromatic light
• Light that is void of color is called monochromatic (or achromatic) light.
• The only attribute of monochromatic light is its intensity.
• It varies from black to grays and finally to white.


Chromatic Light
• Light having colors
• Three basic quantities are used to describe the
quality of a chromatic light
– Radiance: the total amount of energy that flows from the light source, measured in watts (W)
– Luminance: the amount of energy an observer perceives from a light source, measured in lumens (lm)
– Brightness: a subjective descriptor of light perception that is practically impossible to measure

Image Sensing and Acquisition


IMAGE ACQUISITION USING A SINGLE SENSING ELEMENT


• Figure 2.12(a) shows the components of a single sensing element.
• A familiar sensor of this type is the photodiode, whose output is a voltage
proportional to light intensity.
• In order to generate a 2-D image using a single sensing element, there has to be relative displacement in both the x- and y-directions between the sensor and the area to be imaged.
• Figure 2.13 shows an arrangement used in high-precision scanning, where a film
negative is mounted onto a drum whose mechanical rotation provides displacement
in one dimension.
• The sensor is mounted on a lead screw that provides motion in the perpendicular
direction.
• A light source is contained inside the drum.
• As the light passes through the film, its intensity is modified by the film density
before it is captured by the sensor.
• This "modulation" of the light intensity causes corresponding variations in the
sensor voltage, which are ultimately converted to image intensity levels by
digitization.

IMAGE ACQUISITION USING SENSOR STRIPS


• The strip provides imaging elements in one direction. Motion perpendicular to the
strip provides imaging in the other direction, as shown in Fig. 2.14(a).
• In-line sensors are used routinely in airborne imaging applications, in which the
imaging system is mounted on an aircraft that flies at a constant altitude and speed
over the geographical area to be imaged.
• An imaging strip gives one line of an image at a time, and the motion of the strip
relative to the scene completes the other dimension of a 2-D image.
• Sensor strips in a ring configuration are used in medical and industrial imaging to
obtain cross-sectional (“slice”) images of 3-D objects, as Fig. 2.14(b) shows.
• A rotating X-ray source provides illumination, and X-ray sensitive sensors opposite
the source collect the energy that passes through the object.
• This is the basis for medical and industrial computerized axial tomography (CAT)
imaging.
• The output of the sensors is processed by reconstruction algorithms whose objective
is to transform the sensed data into meaningful cross sectional images.


IMAGE ACQUISITION USING SENSOR ARRAYS


• Figure 2.12(c) shows individual sensing elements arranged in the form of a 2-D
array.
• This is also the predominant arrangement found in digital cameras.
• A typical sensor for these cameras is a CCD (charge-coupled device) array, which can be manufactured with a broad range of sensing properties and can be packaged in rugged arrays of 4000 × 4000 elements or more.
• The response of each sensor is proportional to the integral of the light energy
projected onto the surface of the sensor, a property that is used in astronomical and
other applications requiring low noise images.
• Noise reduction is achieved by letting the sensor integrate the input light signal over
minutes or even hours.
• Because the sensor array in Fig. 2.12(c) is two dimensional, its key advantage is
that a complete image can be obtained by focusing the energy pattern onto the
surface of the array.
• Motion obviously is not necessary, as is the case with the sensor arrangements
discussed in the preceding two sections.


• Figure 2.15 shows the principal manner in which array sensors are used.
• This figure shows the energy from an
illumination source being reflected from a
scene.
• The first function performed by the imaging
system in Fig.2.15(c) is to collect the
incoming energy and focus it onto an image
plane.
• If the illumination is light, the front end of
the imaging system is an optical lens that
projects the viewed scene onto the focal
plane of the lens, as Fig. 2.15(d) shows.
• The sensor array, which is coincident with
the focal plane, produces outputs
proportional to the integral of the light
received at each sensor.
• Digital and analog circuitry sweep these
outputs and convert them to an analog signal,
which is then digitized by another section of
the imaging system.
• The output is a digital image, as shown
diagrammatically in Fig. 2.15(e).


A Simple Image Formation Model


• We denote images by two-dimensional functions of the form f (x, y).
• The value of f at spatial coordinates (x, y) is a scalar quantity.
• As a consequence, f (x, y) must be nonnegative and finite; that is,
0 ≤ f (x, y) < ∞
• Function f (x, y) is characterized by illumination and reflectance components;
that is,
f (x, y) = i(x, y)r(x, y)
where 0 ≤ i(x, y) < ∞
and 0 ≤ r(x, y) ≤ 1
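The illumination–reflectance model above can be sketched numerically; the array sizes and values below are illustrative, not from the slides.

```python
import numpy as np

# Sketch of the image formation model f(x, y) = i(x, y) * r(x, y).
rng = np.random.default_rng(0)
illum = np.full((4, 4), 100.0)        # i(x, y): 0 <= i < infinity
refl = rng.uniform(0.0, 1.0, (4, 4))  # r(x, y): 0 <= r <= 1

f = illum * refl  # f(x, y) is therefore nonnegative and finite

assert (f >= 0).all() and np.isfinite(f).all()
print(f.min(), f.max())
```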


Image Sampling and Quantization


• An image may be
continuous with respect
to the x- and y-
coordinates, and also in
amplitude.
• To digitize it, we have to
sample the function in
both coordinates and also
in amplitude.
• Digitizing the coordinate
values is called
sampling.
• Digitizing the amplitude
values is called
quantization.

• The one-dimensional function in Fig. 2.16(b) is a plot of amplitude (intensity level) values of the continuous image along the line segment AB in Fig. 2.16(a).
• The random variations are due to
image noise.
• To sample this function, we take
equally spaced samples along line
AB, as shown in Fig. 2.16(c).
• The samples are shown as small
dark squares superimposed on the
function, and their (discrete)
spatial locations are indicated by
corresponding tick marks in the
bottom of the figure.


• The set of dark squares constitutes the sampled function. However, the values of the samples still span (vertically) a continuous range of intensity values.
• In order to form a digital function,
the intensity values also must be
converted (quantized) into discrete
quantities.
• The vertical gray bar in Fig.
2.16(c) depicts the intensity scale
divided into eight discrete
intervals, ranging from black to
white.
• The vertical tick marks indicate
the specific value assigned to each
of the eight intensity intervals.

• The continuous intensity levels are quantized by assigning one of the eight values to each sample, depending on the vertical proximity of a sample to a vertical tick mark.
• The digital samples resulting
from both sampling and
quantization are shown as
white squares in Fig. 2.16(d).
Starting at the top of the
continuous image and carrying
out this procedure downward,
line by line, produces a two-
dimensional digital image
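The sampling-then-quantization procedure described above can be sketched for one scan line; the "continuous" signal below is a made-up stand-in for the profile along AB.

```python
import numpy as np

# Sketch: sample a scan line at equal spacing, then snap each sample
# to the nearest of eight discrete intensity levels (as in Fig. 2.16).
x = np.linspace(0, 1, 500)
continuous = 0.5 + 0.4 * np.sin(2 * np.pi * 2 * x)  # stand-in scan line

samples = continuous[::50]        # sampling: 10 equally spaced points
levels = np.linspace(0, 1, 8)     # eight intensity intervals, black..white
idx = np.argmin(np.abs(samples[:, None] - levels[None, :]), axis=1)
quantized = levels[idx]           # quantization: nearest discrete level

print(sorted(set(np.round(quantized, 3).tolist())))
```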


Representing digital image


• The number, b, of bits required to store a digitized image is
b = M × N × k
where M is the number of rows, N is the number of columns in the image, and k is the number of bits used to store a pixel.
• The number of intensity levels in the image is L = 2^k.
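The two formulas can be checked directly:

```python
# b = M * N * k and L = 2**k (the slide's storage formulas).
def image_storage_bits(M, N, k):
    """Bits needed for an M x N image with k bits per pixel."""
    return M * N * k

def intensity_levels(k):
    """Number of distinct intensity levels with k bits per pixel."""
    return 2 ** k

# Example: a 1024 x 1024 image with 8 bits per pixel
print(image_storage_bits(1024, 1024, 8))  # 8388608 bits
print(intensity_levels(8))                # 256
```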


• Generally, image transmission is accomplished in packets consisting of a start bit, a byte of information, and a stop bit. How many minutes would it take to transmit a 1024 × 1024 image with 256 intensity levels using a 56K baud modem?
Sol:
The total amount of data (including the start and stop bits) in an 8-bit, 1024 × 1024 image is
(1024 × 1024) × [8 + 2] bits.
The total time required to transmit this image over a 56K baud link is
(1024 × 1024) × [8 + 2] / 56000 ≈ 187.25 sec, or about 3.1 min.
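The arithmetic in the solution checks out:

```python
# Transmission-time check: start bit + 8 data bits + stop bit per pixel,
# sent over a 56,000 baud link.
bits = 1024 * 1024 * (8 + 2)     # 10,485,760 bits in total
seconds = bits / 56_000
print(f"{seconds:.2f} s  (~{seconds / 60:.1f} min)")  # 187.25 s (~3.1 min)
```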


• The difference in
intensity between
the highest and
lowest intensity
levels in an image is
called image
contrast.
• The contrast ratio is
the ratio of these
two quantities.


Spatial Resolution
• It is a measure of the smallest discernable
detail in an image.
• Quantitatively
– Line pairs per unit distance
– Dots (pixels) per unit distance
• Printing and publishing industry uses dpi (dots
per inch)


• The original image is of size 3692 × 2812 pixels.
• It is shown at 1250, 300, 150, and 72 dpi respectively.
• The 72 dpi image is an array of size 213 × 162 pixels.
• 3692/height = 1250/72, so the height at 72 dpi is 213.
• Similarly for the width.
• All the images are zoomed back to their original size for comparison purposes.
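The dpi-rescaling proportion above amounts to:

```python
# Rescaling a pixel count when the print resolution changes, keeping
# physical size fixed: pixels_new / dpi_new = pixels_old / dpi_old.
def size_at_dpi(pixels, original_dpi, target_dpi):
    return round(pixels * target_dpi / original_dpi)

height = size_at_dpi(3692, 1250, 72)
width = size_at_dpi(2812, 1250, 72)
print(height, width)  # 213 162
```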


Intensity resolution
• It is the smallest discernible change in
intensity level.
• For hardware reasons, the number of intensity levels is usually an integer power of two.
• The most common number is 8 bits.
• 16 bits is used in some applications where
enhancement of some intensity ranges is
necessary.
• 32 bits is rare

Image interpolation
• Used in tasks like zooming, shrinking, rotation, and geometrically correcting digital
images.
• Interpolation is the process of using known data to estimate values at unknown locations.
• Nearest neighbour interpolation
• Bilinear interpolation – four nearest neighbours
v(x,y) = ax + by + cxy + d
• where the four coefficients are determined from the four equations in four unknowns that
can be written using the four nearest neighbours of point (x, y).
• Bilinear interpolation gives much better results than nearest neighbour interpolation, with a modest increase in computational burden.
• Bicubic interpolation – sixteen nearest neighbours


Neighbors of a pixel
• The 4-neighbors of a pixel p at (x, y) are (x+1, y), (x−1, y), (x, y+1), (x, y−1).
• The four diagonal neighbors of p are (x+1, y+1), (x+1, y−1), (x−1, y+1), (x−1, y−1).
• Together these are called the 8-neighbors of the pixel.
• For a pixel at the border of an image, some of its neighbors fall outside the image.
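The three neighbourhoods can be sketched as functions that drop coordinates falling outside an M × N image:

```python
# N4, ND, and N8 neighbourhoods of a pixel p at (x, y) in an M x N image.
def n4(x, y, M, N):
    cand = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [(i, j) for i, j in cand if 0 <= i < N and 0 <= j < M]

def nd(x, y, M, N):  # the four diagonal neighbours
    cand = [(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)]
    return [(i, j) for i, j in cand if 0 <= i < N and 0 <= j < M]

def n8(x, y, M, N):
    return n4(x, y, M, N) + nd(x, y, M, N)

# A corner pixel keeps only 3 of its 8 neighbours
print(len(n8(0, 0, 5, 5)))  # 3
```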

Adjacency
• Let V be the set of intensity values used to define adjacency.
• In a binary image, V = {1} if we are referring to adjacency of pixels with value 1.
• In a grayscale image, the idea is the same, but set V typically contains more
elements.
• For example, if we are dealing with the adjacency of pixels whose values are in the
range 0 to 255, set V could be any subset of these 256 values.
• We consider three types of adjacency for pixels p and q with values from V:
– 4-adjacency: q is in the set N4(p).
– 8-adjacency: q is in the set N8(p).
– m-adjacency (mixed adjacency): q is in N4(p), or q is in ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from V.
• Mixed adjacency is a modification of 8-adjacency, introduced to eliminate the ambiguities that may result from using 8-adjacency.

Path, Connectivity
• We can define 4-, 8-, or m- paths depending on
the type of adjacency specified
• Let S represent a subset of pixels in an image.
• Two pixels are said to be connected in S if
there exists a path between them consisting
entirely of pixels in S.
• For any pixel p in S, the set of pixels that are
connected to it in S is called a connected
component of S.
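The connected-component definition above can be sketched as a breadth-first search; 4-adjacency is used here for simplicity, and S is represented as a set of (x, y) pixel coordinates.

```python
from collections import deque

# Connected component of pixel p in S under 4-adjacency (sketch).
def connected_component(S, p):
    comp, frontier = {p}, deque([p])
    while frontier:
        x, y = frontier.popleft()
        # Visit the 4-neighbors that belong to S and are not yet seen
        for q in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if q in S and q not in comp:
                comp.add(q)
                frontier.append(q)
    return comp

S = {(0, 0), (0, 1), (1, 1), (3, 3)}  # (3, 3) is isolated from the rest
print(sorted(connected_component(S, (0, 0))))  # [(0, 0), (0, 1), (1, 1)]
```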

4- , 8-, or m-paths, adjacent regions

• (a) An arrangement of pixels in a binary image
• (b) Pixels that are 8-adjacent
• (c) m-adjacency
• (d) Two regions that are 8-adjacent


Region
• If S has only one connected component, then S is called a connected set.
• Let R be a subset of pixels in an image.
• We call R a region of the image, if R is a
connected set.
• Two regions, Ri and Rj are said to be adjacent
if their union forms a connected set.
