Chapter II, Part II
Digital Image
Acquisition Process
BRIGHTNESS ADAPTATION AND DISCRIMINATION
Because digital images are displayed as sets of discrete intensities, the eye’s ability to discriminate
between different intensity levels is an important consideration in presenting image processing
results.
Two phenomena demonstrate that perceived brightness is not a simple function of intensity.
The first is based on the fact that the visual system tends to undershoot
or overshoot around the boundary of regions of different intensities.
Although the intensity of the stripes is constant, we actually perceive a brightness pattern that is
strongly scalloped near the boundaries.
These perceived scalloped bands are called Mach bands, after Ernst Mach, who first described the
phenomenon in 1865.
BRIGHTNESS ADAPTATION AND DISCRIMINATION
Illustration of the Mach band effect.
• Perceived intensity is not a simple function of actual intensity.
What color do you see?
BRIGHTNESS ADAPTATION AND DISCRIMINATION
The second phenomenon, called simultaneous contrast, is that a region’s perceived brightness does not depend only on its intensity.
All the inner squares have the same intensity, but they appear progressively darker as the
background becomes lighter.
Optical illusions:
Other examples of human perception phenomena are optical illusions, in which the eye fills in non-existing details or wrongly perceives the geometrical properties of objects.
Optical illusions: Fig. (a) Fig. (b)
In Fig. (a), the outline of a square is seen clearly,
despite the fact that no lines defining such a figure
are part of the image.
The same effect, this time with a circle, can be seen
in Fig. (b);
The two horizontal line segments in Fig. (c) are of
the same length, but one appears shorter than the
other.
Finally, all long lines in Fig. (d) are equidistant and parallel. Yet, the crosshatching creates the illusion that those lines are far from being parallel.
Fig. (c) Fig. (d)
Image Sensing and Acquisition
Images are generated by the combination of an “illumination”
source and the reflection or absorption of energy from that source.
• Depending on the source, illumination energy is reflected from, or transmitted through, objects.
Example in the first category: Light reflected from a planar surface.
Example in the second category: When X-rays pass through a
patient's body for generating a diagnostic X-ray film.
Image Sensing and Acquisition
Imaging sensors are used to transform the illumination energy into
digital images.
Incoming energy is transformed into a voltage by the combination
of input electrical power and sensor material that is responsive to
the particular type of energy being detected.
Image Sensing and Acquisition
Three principal sensor arrangements are used to transform illumination energy into digital images:
• Single Imaging Sensor
• Line Sensor
• Array Sensor
• Image Acquisition Using a Single Sensor:
• Figure shows the components of a single imaging sensor.
• The most familiar sensor of this type is the photodiode.
• To generate a 2-D image using a single sensor, there must be relative displacement in both the x- and y-directions.
• Image Acquisition Using a Single Sensor:
• A film negative is mounted onto a drum whose mechanical rotation provides displacement in one dimension.
• The single sensor is mounted on a lead screw that provides motion in the perpendicular direction.
• Because mechanical motion can be controlled with high precision, this method is an inexpensive (but slow) way to obtain high-resolution images.
Combining a single sensor with motion to generate a 2-D image.
2. Image Acquisition Using Sensor Strips:
Line sensor
• A geometry that is used much more frequently than single sensors consists of an in-line arrangement of sensors in the form of a sensor strip, as shown in the figure.
• The strip provides imaging elements in one direction.
2. Image Acquisition Using Sensor Strips:
• Motion perpendicular to the strip provides imaging in the other direction, as shown in the figure.
• This is the type of arrangement used in most flatbed scanners. Sensing devices with 4,000 or more in-line sensors are possible.
• The imaging strip gives one line of an image at a time, and the motion of the strip completes the other dimension of a two-dimensional image.
Image acquisition using linear sensor strips.
2. Image Acquisition Using Sensor Strips:
• Sensor strips mounted in a ring configuration are used in medical and industrial imaging to obtain cross-sectional (“slice”) images of 3-D objects, as shown in the figure.
• A rotating X-ray source provides illumination, and the sensors opposite the source collect the X-ray energy that passes through the object.
• This technique is used in medical and industrial computerized axial tomography (CAT) imaging.
3. Image Acquisition Using Sensor Arrays:
• Figure shows individual sensors arranged in the form of a 2-D array (array sensor).
• Numerous electromagnetic and some ultrasonic sensing devices are arranged in array format.
• This arrangement is found in digital cameras.
• The response of each sensor is proportional to the integral of the light energy projected onto the surface of the sensor.
• Since the sensor array is two-dimensional, its key advantage is that a complete image can be obtained by focusing the energy pattern onto the surface of the array.
Simple Image Formation Model:
• An image can be modeled as f(x, y) = i(x, y) r(x, y), the product of an illumination component i(x, y) and a reflectance component r(x, y).
IMAGE SAMPLING AND QUANTIZATION
•There are numerous ways to acquire images, but our objective is to generate digital
images from sensed data.
•The output of most sensors is a continuous voltage waveform whose amplitude and spatial
behavior are related to the physical phenomenon being sensed.
•To create a digital image, we need to convert the continuous sensed
data into digital form.
• This involves two processes: sampling and quantization.
IMAGE SAMPLING AND QUANTIZATION
• (a) Continuous image.
• (b) A scan line showing
intensity variations along line AB
in the continuous image.
• (c) Sampling and quantization.
• (d) Digital scan line. (The black
border in (a) is included for
clarity. It is not part of the
image).
IMAGE SAMPLING AND QUANTIZATION
• Figure shows a continuous image f that we want to convert to
digital form.
• An image may be continuous with respect to the x- and y-
coordinates, and also in amplitude.
• To convert it to digital form, we have to sample the function
in both coordinates and in amplitude.
• Digitizing the coordinate values is called sampling.
• Digitizing the amplitude values is called quantization.
IMAGE SAMPLING AND QUANTIZATION
• The one-dimensional function in figure is a plot
of amplitude (intensity level) values of the
continuous image along the line segment AB.
• The random variations are due to image noise.
A scan line from A to B in the
continuous image
IMAGE SAMPLING AND QUANTIZATION
• The sample points are indicated by vertical tick marks at the bottom of the figure.
• The samples are shown as small white squares
superimposed on the function.
• The set of these discrete locations gives the
sampled function.
• To form a digital function, the intensity values must be converted into discrete
quantities.
• The right side of figure shows the intensity scale divided into eight discrete intervals,
ranging from black to white.
IMAGE SAMPLING AND QUANTIZATION
• The continuous intensity levels are quantized by
assigning one of the eight values to each
sample.
• The assignment is made depending on the
vertical proximity of a sample to a vertical tick
mark.
• The digital samples resulting from both sampling and quantization are shown in the figure.
Digital scan line
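The two steps above can be sketched in NumPy; the smooth 1-D profile and the choice of 16 samples and 8 levels below are illustrative assumptions standing in for the scan line and the figure's intensity scale:

```python
import numpy as np

def sample_and_quantize(f, n_samples, n_levels):
    """Sample a continuous 1-D intensity profile f on [0, 1] and
    quantize each sample to the nearest of n_levels gray levels."""
    x = np.linspace(0.0, 1.0, n_samples)        # sampling (tick marks)
    values = f(x)                                # sampled amplitudes
    levels = np.linspace(0.0, 1.0, n_levels)     # discrete intensity scale
    # Quantization: assign each sample to the closest discrete level.
    idx = np.abs(values[:, None] - levels[None, :]).argmin(axis=1)
    return levels[idx]

def profile(x):
    # A smooth stand-in for the scan line from A to B.
    return 0.5 + 0.4 * np.sin(2 * np.pi * x)

digital_line = sample_and_quantize(profile, n_samples=16, n_levels=8)
```

Increasing `n_samples` and `n_levels` improves the fidelity of the digital scan line, which is exactly the quality trade-off discussed in this section.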
IMAGE SAMPLING AND QUANTIZATION
• First figure shows a continuous image projected onto the plane of an array sensor.
• Second figure shows the image after sampling and quantization.
• The quality of a digital image is determined to a large degree by the number of
samples and discrete intensity levels used in sampling and quantization.
Continuous image projected onto a sensor array. Result of image sampling and quantization.
Representation of Different Image Types
• Digital images are interpreted as 2D or 3D matrices by a computer, where each value
or pixel in the matrix represents the amplitude, known as the “intensity” of the pixel.
Representation of Different Image Types
• Computers deal with different “types” of images based on their function representations.
1. Binary Image: Images that have only two unique values of pixel intensity, 0 (representing black) and 1 (representing white), are called binary images.
• Such images are generally used to highlight a discriminating portion
of a colored image.
• For example, it is commonly used for image segmentation, as shown
below.
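A binary image is typically produced by thresholding a grayscale image; a minimal sketch, where the 3×3 image and the threshold of 128 are made-up values for illustration:

```python
import numpy as np

# A hypothetical 8-bit grayscale image (values 0-255).
gray = np.array([[ 10, 200,  30],
                 [250,  40, 220],
                 [ 60, 240,  20]], dtype=np.uint8)

# Thresholding yields a binary image: 1 where the pixel is brighter
# than the threshold, 0 elsewhere (a crude form of segmentation).
threshold = 128
binary = (gray > threshold).astype(np.uint8)
```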
Representation of Different Image Types
2. Grayscale Image: Grayscale or 8-bit images are composed of 256 unique intensity levels, where a pixel intensity of 0 represents black and a pixel intensity of 255 represents white.
• All the other 254 values in between are different shades of gray.
An example of an RGB image converted to its grayscale version is shown below.
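One common convention for such an RGB-to-grayscale conversion is a weighted sum of the three channels; the sketch below uses the ITU-R BT.601 luma weights, and the 2×2 image is a made-up example:

```python
import numpy as np

# A hypothetical RGB image (height x width x 3), 8-bit channels.
rgb = np.zeros((2, 2, 3), dtype=np.uint8)
rgb[0, 0] = (255, 255, 255)   # white pixel
rgb[1, 1] = (255, 0, 0)       # pure red pixel

# Weighted sum of R, G, B -> single 8-bit intensity per pixel.
weights = np.array([0.299, 0.587, 0.114])   # BT.601 luma weights
gray = (rgb.astype(float) @ weights).round().astype(np.uint8)
```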
Representation of Different Image Types
3. Color Image: The images we are used to in the modern world are RGB or color images, which computers store as multi-bit matrices; a 16-bit representation, for example, allows 65,536 different colors per pixel.
The two most common ways of storing color image contents are:
• RGB representation: in which each pixel is usually represented by a 24-bit
number containing the amount of its red (R), green (G), and blue (B)
components.
• Indexed representation: where a 2D array contains indices to a color palette
(or lookup table (LUT)).
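The indexed representation can be sketched as follows: the image stores small integer indices, and the palette (LUT) maps each index to an RGB color. The palette and index values below are illustrative:

```python
import numpy as np

# A tiny color palette (lookup table): one RGB row per index.
palette = np.array([[  0,   0,   0],    # index 0: black
                    [255,   0,   0],    # index 1: red
                    [  0,   0, 255]],   # index 2: blue
                   dtype=np.uint8)

# The indexed image stores palette indices, not colors.
indexed = np.array([[0, 1],
                    [2, 1]], dtype=np.uint8)

# Expanding the indices through the LUT recovers the RGB image.
rgb = palette[indexed]
```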
Representation of Different Image Types
RGB Image
Representation of Different Image Types
Indexed Image
Mathematical Tools for Image Processing
Array (Elementwise) versus Matrix Operations: Images are viewed as matrices, but in this series on DIP we use array operations.
There is a difference between matrix and array operations: in an array operation, the operation is carried out pixel by pixel on the image.
Given two images, the array (elementwise) product multiplies corresponding pixels, whereas the matrix product follows the rules of matrix multiplication.
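The difference shows up directly in NumPy, where `*` is the elementwise (array) product and `@` is the matrix product; the two 2×2 "images" are made-up values:

```python
import numpy as np

a = np.array([[1, 2],
              [3, 4]])
b = np.array([[5, 6],
              [7, 8]])

elementwise = a * b   # pixel-by-pixel product of the two images
matrix = a @ b        # ordinary matrix multiplication
```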
Mathematical Tools for Image Processing
Linear versus Non-Linear Operations: Linear operations, such as addition, subtraction, and scaling of an image by a constant, satisfy H(af + bg) = aH(f) + bH(g).
Non-linear operations, such as the max, min, and median of an image, do not; they are often used for image enhancement.
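The linearity criterion can be checked on a small example (the two 2×2 "images" are made-up values): summing pixels distributes over image addition, while taking the maximum does not:

```python
import numpy as np

f = np.array([[1, 5],
              [3, 2]])
g = np.array([[4, 0],
              [2, 6]])

# Linear: the sum operator satisfies H(f + g) = H(f) + H(g).
sum_is_linear = (f + g).sum() == f.sum() + g.sum()

# Non-linear: the max operator violates the same identity.
max_of_sum = (f + g).max()        # max of [[5, 5], [5, 8]]
sum_of_maxes = f.max() + g.max()  # 5 + 6
```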
ARITHMETIC OPERATIONS:
• Addition Operation: Let s(x,y) be the corrupted image formed by adding noise g(x,y) to the original image f(x,y): s(x,y) = f(x,y) + g(x,y).
• Adding a constant to the image makes the image brighter, i.e., s(x,y) = f(x,y) + constant.
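Brightening by a constant can be sketched as below; with 8-bit images the sum must be clipped at 255, since uint8 arithmetic otherwise wraps around (the pixel values are illustrative):

```python
import numpy as np

f = np.array([[100, 200],
              [ 50, 250]], dtype=np.uint8)

# s(x,y) = f(x,y) + constant, clipped to the valid 8-bit range.
c = 30
s = np.clip(f.astype(int) + c, 0, 255).astype(np.uint8)
```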
Mathematical Tools for Image Processing
Addition Operation: (a) Image 1; (b) Image 2; (c) Image1 + Image2; (d) Image1 + constant.
Subtraction Operation: (a) Image 1; (b) Image 2; (c) Image1 − Image2; (d) Image1 − constant.
Mathematical Tools for Image Processing
Multiplication Operation:
h(x,y) = f(x,y) * g(x,y): we can multiply an image by an image.
We can also multiply an image by a constant, e.g., h(x,y) = f(x,y) * constant.
Used for shading correction (illumination correction).
(a) Image 1, the original image; (b) Image 1 multiplied by 1.25, which increases the contrast of the image.
Mathematical Tools for Image Processing
Division Operation:
h(x,y) = f(x,y) / g(x,y): we can divide an image by an image.
We can also divide an image by a constant, e.g., h(x,y) = f(x,y) / constant.
Used for shading correction (illumination correction).
(a) Image1 / 1.25; (b) Image 2; (c) Image 3; (d) Image4 = Image2 * Image3; (e) Image5 = Image4 / Image2.
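The shading-correction idea behind the Image4/Image5 example can be sketched like this (the scene and shading values are made up): the acquired image is the true scene multiplied by a shading pattern, and dividing by that pattern recovers the scene:

```python
import numpy as np

scene = np.array([[100.0, 100.0],
                  [100.0, 100.0]])   # the "true" uniform scene

shading = np.array([[1.0, 0.5],
                    [0.8, 1.0]])     # multiplicative shading pattern

acquired = scene * shading           # what the sensor records
corrected = acquired / shading       # division undoes the shading
```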
Mathematical Tools for Image Processing
LOGICAL OPERATIONS:
Logical operations include AND, OR, NOT, and XOR.
(a) image1
(b) image2
(c) image1 AND image2
(d) image1 OR image2
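The four logical operations act pixel by pixel on binary images; in NumPy they map to the boolean operators `&`, `|`, `~`, and `^` (the two 2×3 binary images are made-up examples):

```python
import numpy as np

img1 = np.array([[1, 1, 0],
                 [0, 1, 0]], dtype=bool)
img2 = np.array([[1, 0, 0],
                 [0, 1, 1]], dtype=bool)

# Pixel-by-pixel logical operations on the two binary images.
and_img = img1 & img2   # AND
or_img  = img1 | img2   # OR
not_img = ~img1         # NOT
xor_img = img1 ^ img2   # XOR
```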
Mathematical Tools for Image Processing
Geometric Spatial Transformations
END
Questions?