Module 1
• GAMMA-RAY IMAGING
1. Nuclear medicine – Bone Scan and Positron Emission Tomography
(PET) using Gamma Rays
2. Astronomical Observations – the Cygnus Loop (the remnant of a supernova
explosion) and gamma radiation from a valve in a nuclear reactor
• X-RAY IMAGING
• Medical Diagnosis
1. Bone X-Ray
2. Angiography
3. Computed Axial Tomography (CAT)
• Industrial Scanning & Testing
• Astronomy
• IMAGING IN THE ULTRAVIOLET BAND
1. Industrial Inspection
2. Microscopy (Fluorescence)
3. Lasers
4. Biological Imaging
5. Astronomical Observations
• IMAGING IN THE VISIBLE AND INFRARED BANDS
1. Light Microscopy
2. Remote Sensing
3. Weather Observation /Prediction
4. Automated Visual Inspection
5. Fingerprinting
6. Iris Recognition
• Thematic Bands in NASA’s LANDSAT satellites
The primary function of LANDSAT is to obtain and transmit images of
the Earth from space, for the purpose of monitoring environmental
conditions on the planet.
• IMAGING IN THE MICROWAVE BAND
The principal application of imaging in the microwave band is RADAR
• IMAGING IN THE RADIO BAND
Medicine: MRI
Astronomy
• OTHER IMAGING MODALITIES
Imaging using “sound” finds application in geological exploration, industry, and
medicine
Ultrasound images are generated using the following basic procedure:
1. The ultrasound system (a computer, an ultrasound probe consisting of a source
and receiver, and a display) transmits high-frequency (1 to 5 MHz) sound pulses
into the body.
2. The sound waves travel into the body and hit a boundary between tissues (e.g.,
between fluid and soft tissue, soft tissue and bone). Some of the sound waves are
reflected back to the probe, while some travel on further until they reach another
boundary and are reflected.
3. The reflected waves are picked up by the probe and relayed to the computer.
4. The machine calculates the distance from the probe to the tissue or organ
boundaries using the speed of sound in tissue (1540 m/s) and the time of each
echo’s return (see the sketch after this list).
5. The system displays the distances and intensities of the echoes on the screen,
forming a two-dimensional image.
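The distance computation in step 4 is simple arithmetic: each pulse travels to the boundary and back, so the one-way depth is half the round-trip distance. A minimal sketch in Python (the 1540 m/s speed comes from the list above; the echo time is an assumed illustrative value):

```python
# Depth of a tissue boundary from the round-trip time of an ultrasound echo.
SPEED_OF_SOUND_TISSUE = 1540.0  # m/s, speed of sound in soft tissue (from the text)

def echo_depth(echo_time_s: float) -> float:
    """Return the one-way depth in metres for a given round-trip echo time."""
    # The pulse travels to the boundary and back, so halve the total distance.
    return SPEED_OF_SOUND_TISSUE * echo_time_s / 2.0

# Example: an echo returning after 65 microseconds (assumed value)
print(f"depth = {echo_depth(65e-6) * 100:.1f} cm")  # ~5.0 cm
```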
• A transmission electron microscope (TEM) works much like a slide projector.
• A projector transmits a beam of light through a slide; as the light passes
through the slide, it is modulated by the contents of the slide. This
transmitted beam is then projected onto the viewing screen.
• A scanning electron microscope (SEM), on the other hand, actually scans the
electron beam and records the interaction of beam and sample at each location.
Fundamental Steps in Digital Image Processing
• Image Acquisition: Acquisition could be as simple as being given an
image that is already in digital form.
It involves pre-processing, such as scaling.
• Image Enhancement: The process of manipulating an image so that the
result is more suitable than the original for a specific application
(see the sketch after this list).
Enhancement is based on human subjective preferences regarding what
constitutes a “good” enhancement result.
• Image Restoration: An area that also deals with improving the
appearance of an image.
Unlike enhancement, image restoration is objective, in the sense that
restoration techniques tend to be based on mathematical or probabilistic
models of image degradation.
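As a small illustration of enhancement, here is a minimal contrast-stretching sketch in Python with NumPy (the input values are assumed; this is one of many possible enhancement operations, not one prescribed by the text):

```python
import numpy as np

def contrast_stretch(img: np.ndarray) -> np.ndarray:
    """Linearly stretch intensities to the full 0-255 range."""
    lo, hi = img.min(), img.max()
    # Guard against division by zero for a constant (flat) image.
    if hi == lo:
        return np.zeros_like(img, dtype=np.uint8)
    # Map the observed range [lo, hi] onto [0, 255].
    stretched = (img.astype(np.float64) - lo) * 255.0 / (hi - lo)
    return stretched.astype(np.uint8)

# A dim, low-contrast 2x2 image (assumed values)
dim = np.array([[50, 60], [70, 80]], dtype=np.uint8)
print(contrast_stretch(dim))
# [[  0  85]
#  [170 255]]
```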
• Colour image processing: covers fundamental concepts in color models
and basic color processing in a digital domain.
Color is used also as the basis for extracting features of interest in an
image.
• Wavelets: are the foundation for representing images in various degrees of
resolution
• Compression: As the name implies, deals with techniques for reducing the
storage required to save an image, or the bandwidth required to transmit it
• Morphological Processing: deals with tools for extracting image
components that are useful in the representation and description of shape
• Segmentation: Partitions an image into its constituent parts or objects
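As an illustration of segmentation, here is a minimal global-thresholding sketch in Python with NumPy (the image values and threshold are assumed; thresholding is only the simplest of many segmentation techniques):

```python
import numpy as np

def threshold_segment(img: np.ndarray, thresh: int) -> np.ndarray:
    """Partition an image into object (1) and background (0) pixels."""
    return (img > thresh).astype(np.uint8)

# A bright object on a dark background (assumed values)
img = np.array([[10, 200, 210],
                [12, 220, 205],
                [ 9,  11,  14]], dtype=np.uint8)
print(threshold_segment(img, 128))
# [[0 1 1]
#  [0 1 1]
#  [0 0 0]]
```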
• Feature extraction: Consists of Feature detection and Feature
description.
Feature detection refers to finding the features in an image, region, or
boundary.
Feature description assigns quantitative attributes to the detected
features.
• Image pattern classification: Is the process that assigns a label (e.g.,
“vehicle”) to an object based on its feature descriptors
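A minimal sketch tying the two steps together, in Python with NumPy (the feature vectors, class prototypes, and nearest-prototype rule are all assumed for illustration; the text does not prescribe a particular classifier):

```python
import numpy as np

# Assumed class prototypes: mean feature vectors (area, aspect ratio)
PROTOTYPES = {"vehicle": np.array([900.0, 2.5]),
              "person":  np.array([300.0, 0.4])}

def classify(features: np.ndarray) -> str:
    """Assign the label whose prototype is nearest to the feature vector."""
    return min(PROTOTYPES, key=lambda k: np.linalg.norm(features - PROTOTYPES[k]))

# Feature descriptors extracted from some detected object (assumed values)
print(classify(np.array([850.0, 2.2])))  # vehicle
```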
Components of an Image Processing System
• Image Sensor: Two subsystems are required to acquire digital images.
The first is a physical sensor that responds to the energy radiated by
the object we wish to image.
The second, called a digitizer, is a device for converting the output of
the physical sensing device into digital form
• Specialized image processing hardware: Consists of the digitizer just
mentioned, plus hardware that performs other primitive operations, such
as an arithmetic logic unit (ALU) that performs arithmetic and logical
operations in parallel on entire images.
• This type of hardware sometimes is called a front-end subsystem, and
its most distinguishing characteristic is speed
• The computer in an image processing system is a general-purpose
computer and can range from a PC to a supercomputer
• Software for image processing consists of specialized modules that
perform specific tasks
Commercially available image processing software, such as the well-known
MATLAB® Image Processing Toolbox, is an example.
• Mass storage is a must in image processing applications
Digital storage for image processing applications falls into three principal
categories:
1. Short-term storage for use during processing, e.g., computer memory,
frame buffers
2. On-line storage for relatively fast recall, e.g., servers
3. Archival storage, characterized by infrequent access, e.g., magnetic
disks, optical disks
• Storage is measured in bytes (eight bits), Kbytes (10^3 bytes), Mbytes
(10^6 bytes), Gbytes (10^9 bytes), and Tbytes (10^12 bytes).
• Image displays in use today are mainly color, flat screen monitors.
Monitors are driven by the outputs of image and graphics display
cards that are an integral part of the computer system
• Hardcopy devices for recording images include laser printers, film
cameras, heat-sensitive devices, ink-jet units, and digital units such
as optical and CD-ROM disks
• Networking and cloud communication provide connectivity to remote
sites via the internet
Elements of Visual Perception
• STRUCTURE OF THE HUMAN EYE
• Diameter of the eye: about 20 mm
• Enclosed by three membranes:
the cornea and sclera outer
cover; the choroid; and the
retina.
• The cornea is a tough,
transparent tissue that covers
the anterior surface of the eye
• Continuous with the cornea, the
sclera is an opaque membrane
that encloses the remainder of
the optic globe
• The choroid lies directly below the sclera. This membrane contains a
network of blood vessels that serve as the major source of nutrition
to the eye
• The choroid is divided into the ciliary body and the iris.
• The iris contracts or expands to control the amount of light that
enters the eye
• The central opening of the iris (the pupil) varies in diameter from
approximately 2 to 8 mm
• The front of the iris contains the visible pigment of the eye, whereas
the back contains a black pigment
• The lens consists of concentric layers of fibrous cells and is suspended
by fibers that attach to the ciliary body
• The lens absorbs approximately 8% of the visible light spectrum, with
higher absorption at shorter wavelengths
• The innermost membrane of the eye is the retina, which lines the inside of
the wall’s entire posterior portion
• When the eye is focused, light from an object is imaged on the retina
• There are two types of receptors: cones and rods
• There are between 6 and 7 million cones and 75 to 150 million rods in each eye
• Cone vision is called photopic or bright-light vision; cones are highly
sensitive to color
• Rods capture an overall image of the field of view. They are not involved in
color vision, and are sensitive to low levels of illumination
• The fovea itself is a circular indentation in the retina of about 1.5 mm in
diameter, so it has an area of approximately 1.77 mm2
IMAGE FORMATION IN THE EYE
• In an ordinary photographic camera the lens has a fixed focal length.
Focusing at various distances is achieved by varying the distance
between the lens and the imaging plane, where the film is located
• In the human eye, the converse is true; the distance between the
center of the lens and the imaging sensor (the retina) is fixed, and the
focal length needed to achieve proper focus is obtained by varying
the shape of the lens
• The fibers in the ciliary body accomplish this by flattening or
thickening the lens for distant or near objects, respectively.
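A classic worked example (from Gonzalez & Woods, assuming the standard distance of about 17 mm between the center of the lens and the retina): for an observer looking at a tree 15 m high at a distance of 100 m, the height h of the retinal image follows from the geometry of similar triangles:

\[
\frac{15}{100} = \frac{h}{17}
\quad\Longrightarrow\quad
h = \frac{15 \times 17}{100}\ \text{mm} \approx 2.55\ \text{mm}
\]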
• A simple image formation model: the intensity of a monochrome image is
the product of illumination i and reflectance r, L = i · r
• The range of intensity values is therefore [L_min, L_max], where
L_min = i_min · r_min and L_max = i_max · r_max
Image Sampling and Quantization
• To create a digital image, we need to convert the continuous sensed
data into a digital format. This requires two processes: sampling and
quantization
• The diagram shows a continuous image f that we want to convert to
digital form
• Image is continuous with respect to the x-
and y-coordinates, and also in amplitude
• To digitize it, we have to sample the function
in both coordinates and also in amplitude
• Digitizing the coordinate values is called
sampling
• Digitizing the amplitude values is called
quantization
• The one-dimensional function is a
plot of amplitude (intensity level)
values of the continuous image
along the line segment AB
• To sample this function, we take
equally spaced samples along line
AB
• The set of dark squares constitutes
the sampled function
• However, the values of the
samples still span (vertically) a
continuous range of intensity
values. In order to form a digital
function, the intensity values also
must be converted (quantized)
into discrete quantities.
• The vertical gray bar depicts the intensity scale divided into eight
discrete intervals, ranging from black to white.
• The vertical tick marks indicate the specific value assigned to each of
the eight intensity intervals.
• The continuous intensity levels are quantized by assigning one of the
eight values to each sample, depending on the vertical proximity of a
sample to a vertical tick mark
• Starting at the top of the continuous image and carrying out this
procedure downward, line by line, produces a two-dimensional digital
image
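A minimal sketch of these two operations on a one-dimensional signal, in Python with NumPy (the signal shape, sample spacing, and eight-level scale mirror the description above, but all specific values are assumed for illustration):

```python
import numpy as np

# A "continuous" intensity profile along line AB, approximated densely
t = np.linspace(0.0, 1.0, 1000)
signal = 0.5 + 0.5 * np.sin(2 * np.pi * t)   # amplitudes in [0, 1]

# Sampling: take equally spaced samples along AB
samples = signal[::100]                       # every 100th point

# Quantization: snap each sample to the nearest of 8 intensity levels
levels = np.linspace(0.0, 1.0, 8)             # 8 values from black to white
quantized = levels[np.argmin(np.abs(samples[:, None] - levels[None, :]), axis=1)]

print(np.round(quantized, 3))  # ten samples, each on one of the 8 levels
```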
REPRESENTING DIGITAL IMAGES
• Let f(s,t) represent a continuous image function of two continuous
variables, s and t
• Convert this function f(s,t) into a digital image f(x,y) by sampling and
quantization where (x,y) are discrete coordinates
x ∈ {0, 1, 2, …, M−1} and y ∈ {0, 1, 2, …, N−1}
• There are three ways to represent digital image f(x,y)
First Type
1. f(x,y) is a plot of the function, with two axes determining
spatial location and the third axis being the values of f as
a function of x and y.
2. This representation is useful when working with grayscale
sets whose elements are expressed as triplets of the form
(x,y,z) , where x and y are spatial coordinates and z is the
value of f at coordinates (x,y).
Second Type
• Intensity of each point in the
display is proportional to the
value of f at that point.
• There are only three equally
spaced intensity values 0, 0.5
and 1.
• A monitor or printer converts
these three values to black, gray,
or white, respectively
Third Type
• The third representation is an array (matrix) composed of the numerical values of f(x,y).
• This is the representation used for computer processing.
• In equation form, we write the representation of an M×N numerical
array as

f(x,y) = [ f(0,0)      f(0,1)     …  f(0,N−1)
           f(1,0)      f(1,1)     …  f(1,N−1)
           ⋮           ⋮              ⋮
           f(M−1,0)    f(M−1,1)   …  f(M−1,N−1) ]

• The center of the image, using 1-based indexing, is at
(Xc,Yc) = (floor(M/2)+1, floor(N/2)+1)
LINEAR VS. COORDINATE INDEXING
• The convention discussed in the previous section, in which the
location of a pixel is given by its 2-D coordinates, is referred to as
coordinate indexing, or subscript indexing.
• Another type of indexing used extensively in programming image
processing algorithms is linear indexing
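A minimal sketch contrasting the two schemes, in Python with NumPy (NumPy stores arrays in row-major order; MATLAB is column-major, so its linear indices differ — the array values here are assumed for illustration):

```python
import numpy as np

img = np.arange(12).reshape(3, 4)   # a 3x4 image, values 0..11 (assumed)

# Coordinate (subscript) indexing: pixel at row x = 1, column y = 2
x, y = 1, 2
print(img[x, y])                    # 6

# Linear indexing: the same pixel via a single index into the flat array
# For row-major storage, linear index = x * N + y, where N = column count
N = img.shape[1]
idx = x * N + y
print(img.flat[idx])                     # 6
print(np.unravel_index(idx, img.shape))  # (1, 2): back to coordinates
```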
• Exercise
1. Prove Sum Operator is Linear
2. Prove Max Operator is Non-Linear
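As a hint for the exercise: an operator H is linear if H(a f1 + b f2) = a H(f1) + b H(f2) for all inputs and constants. A quick numerical check in Python (the vectors and weights are assumed; a single counterexample is enough to show that max is non-linear, while the sum identity must be proved for all inputs):

```python
import numpy as np

f1, f2 = np.array([0.0, 2.0]), np.array([6.0, 4.0])
a, b = 1.0, -1.0

# Sum operator: both sides agree (and do so for any f1, f2, a, b)
print(np.sum(a * f1 + b * f2), a * np.sum(f1) + b * np.sum(f2))  # -8.0 -8.0

# Max operator: the two sides differ, so max is non-linear
print(np.max(a * f1 + b * f2), a * np.max(f1) + b * np.max(f2))  # -2.0 -4.0
```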