Image Processing
Lecture
Process F: Interpretation and
Analysis
A: Energy Source or
Illumination
B: Radiation and the
Atmosphere
C: Interaction with
the Target
D: Recording of
Energy by the Sensor
E: Transmission,
Reception, and
Processing
F: Interpretation and
Analysis
G: Application
Definition
Image processing
A set of processes used to geometrically correct, enhance, filter, combine and perform statistical and/or mathematical operations on image data
Digital Image Processing
The aim is to manipulate the digital image in
order to allow the user to extract as much
information as possible from it.
Because the data are in numeric form and the
data sets are enormous (e.g., one LANDSAT TM
image contains 290.91 x 10^6 = 290 910 000
pixels), satellite images can only be processed
according to mathematical principles, using
powerful computers.
E.g., the digital numbers of corresponding
ground resolution cells could be added,
subtracted, divided or multiplied to get a ‘new’
digital number for each specific pixel.
These new numbers are then stored in the
computer in the form of a new digital matrix.
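The per-pixel arithmetic described above can be sketched with NumPy; the band values and array shapes here are made up purely for illustration:

```python
import numpy as np

# Two hypothetical 3x3 bands of digital numbers (DNs) for the same
# ground resolution cells; real bands would come from a satellite image.
band_a = np.array([[10, 20, 30],
                   [40, 50, 60],
                   [70, 80, 90]], dtype=np.int32)
band_b = np.array([[ 5, 10, 15],
                   [20, 25, 30],
                   [35, 40, 45]], dtype=np.int32)

# Per-pixel arithmetic produces a 'new' DN for each cell,
# stored as a new digital matrix of the same shape.
difference = band_a - band_b
total = band_a + band_b

print(difference[0, 0])  # new DN of the top-left pixel
```

The result is itself a digital matrix, so it can be displayed or fed into further processing steps, exactly as the slide describes.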
This new image can then be
converted to pictorial form or else it
can undergo further processing by
additional computer programs.
The number of possible
manipulations that can be carried out
on a digital image is virtually
unlimited.
However, virtually all of these fall into a few
main categories.
Digital Image Processing
The 4 main categories of Digital Image Processing:
1. Preprocessing
2. Image Enhancement
3. Image Transformation / Data Merge
4. Image Classification
Preprocessing
Preprocessing precedes all other image
processing techniques and is one of the most
important phases of the entire process.
It is the correction and ‘clean-up’ of the data
set to remove the negative and ‘unnecessary’
influences that affected the scanning process
during the collection of the data.
Disturbances in Imagery
No ‘ideal’ RS system exists
The Earth is very complex
Constraints exist: spectral / spatial / temporal /
radiometric resolution
Therefore there is error in the data acquisition
process, which leads to:
degradation in the quality of RS data
reduced accuracy of image analysis
Therefore we NEED to Preprocess data prior to
analyzing it
Radiometric Correction
Variation in solar illumination and
atmospheric conditions would influence
the brightness value of picture elements.
Atmospheric scattering causes a haze in
the image which would have to be
removed by radiometric correction.
Sensors could themselves cause image
distortion if some of the electronic
components became defective or went
out of phase with the rest.
In an attempt to neutralize these variations,
radiometric corrections are applied to the data.
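One classic radiometric correction for atmospheric haze is dark-object subtraction; it is not named on this slide, so take this as one illustrative technique rather than the method the lecture prescribes. The assumption is that the darkest pixel in a band should have a DN of 0, so any band-wide minimum is treated as an additive haze offset:

```python
import numpy as np

def dark_object_subtraction(band):
    """Simple haze correction: treat the band-wide minimum DN as an
    additive atmospheric offset and subtract it from every pixel."""
    offset = band.min()
    return band - offset

# Hypothetical band with a uniform haze offset of 12 DN.
band = np.array([[12, 40, 80],
                 [25, 60, 90],
                 [12, 33, 77]], dtype=np.int32)

corrected = dark_object_subtraction(band)
print(corrected.min())  # darkest pixel is now 0
```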
Geometric Correction
All images are subject to geometric
distortions, which are due to several
factors: perspective of the sensor optics,
motion of the platform, rotation of the
earth (skew distortion), terrain relief etc.
Aims to compensate for these distortions
and give the image the scale and
projection properties of a map.
Registration
Gives the image true ground positions
Why is Geometry Important?
Image Comparison
• Multi-spectral
• Multi-Sensor
• Multi-Level
• Multi-Temporal
Combining with
other data (maps,
GIS) for:
• Interpretation
• Mapping
• Building databases
Resampling
Resampling: the final stage in Geometric
Rectification, this process determines the
digital values to place in the new pixel
locations of the corrected output image.
Nearest neighbour: uses the value of the
nearest input cell
Bilinear interpolation: weighted average of
the four nearest input cells
Cubic convolution: distance-weighted average
of the 16 closest input cells
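Nearest-neighbour resampling, the simplest of the three methods, can be sketched in a few lines of NumPy; the 2×2 input grid is invented for illustration:

```python
import numpy as np

def nearest_neighbour(image, out_rows, out_cols):
    """Resample by copying the DN of the nearest input cell into
    each cell of the new output grid."""
    rows, cols = image.shape
    r_idx = (np.arange(out_rows) * rows / out_rows).astype(int)
    c_idx = (np.arange(out_cols) * cols / out_cols).astype(int)
    return image[np.ix_(r_idx, c_idx)]

img = np.array([[1, 2],
                [3, 4]])
big = nearest_neighbour(img, 4, 4)
print(big)
```

Bilinear interpolation would instead average the four surrounding input cells (weighted by distance), and cubic convolution the sixteen closest cells; both smooth the output at the cost of altering the original DNs, which is why nearest neighbour is often preferred before classification.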
Image Enhancement
Images are enhanced to make them
easier for visual interpretation and to
increase the understanding of the
imagery
For each application and for each
image it is usually necessary to
custom adjust the range and
distribution of brightness values
• Subtle differences in brightness value can be
highlighted either by:
o Contrast modification or
o by assigning quite different colors to those levels
(density slicing)
• Point operations change the value of each
individual pixel independent of all other pixels
• Local operations change the value of individual
pixels in the context of the values of
neighboring pixels
Visualization
Color spaces for visualization – three approaches:
• Red-Green-Blue (RGB) space – based on the
additive principle of colors
o the way TV and computer screens operate
o 3 channels (R, G, B)
• Intensity-Hue-Saturation (IHS) space
• Yellow-Magenta-Cyan (YMC) space – based on the
subtractive principle of colors
Image Enhancement
Techniques
Image Reduction
Image magnification
Transect extraction
Contrast Stretching (linear & non-linear)
Threshold / Density Slice
Spatial Filters
Hue-Intensity-Saturation Transforms
Image Reduction
• Many digital image processing systems today are
unable to display a full image at the normal commercial
pixel scale (>3000 rows and 3000 columns).
• Image reduction allows the analyst to view a subset of
an image at one time on the screen by reducing the
original image dataset down to a smaller dataset.
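A simple way to reduce an image is to keep only every m-th row and column; this systematic subsampling is one common approach, sketched here with an invented 6×6 array:

```python
import numpy as np

def reduce_image(image, m):
    """Keep every m-th row and column, shrinking the dataset
    by a factor of m*m for on-screen display."""
    return image[::m, ::m]

img = np.arange(36).reshape(6, 6)
small = reduce_image(img, 2)
print(small.shape)  # (3, 3)
```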
Linear Contrast Stretch
The minimum (84) and maximum (153)
brightness values of the image are identified
and expanded uniformly to cover the full range
of screen values available (0–255). This
enhances the contrast in the image, with light
areas appearing lighter and dark areas
appearing darker.
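The linear stretch above maps the observed DN range onto the full screen range; a minimal NumPy sketch, reusing the 84–153 example values (and assuming the band is not perfectly flat, so max > min):

```python
import numpy as np

def linear_stretch(band, out_min=0, out_max=255):
    """Expand the band's DN range uniformly onto the full
    output range (assumes band.max() > band.min())."""
    lo, hi = band.min(), band.max()
    out = (band - lo) / (hi - lo) * (out_max - out_min) + out_min
    return out.astype(np.uint8)

# Hypothetical band whose DNs span only 84-153, as in the slide.
band = np.array([[84, 100],
                 [120, 153]], dtype=np.float64)
stretched = linear_stretch(band)
print(stretched)
```

After the stretch, 84 maps to 0 and 153 to 255, so subtle brightness differences occupy many more display levels.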
Threshold - Amsterdam Water (TM Band 4)
Density Slice - Mozambique (TM
Band 3)
Horizontal Gradient Filter:(East / West
Edges)
TM Band 5 original image · Horizontal gradient filter
Laplace filter
TM Band 5 original image · Laplace filter + original image
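Spatial filters like those in the figures are local operations: each output pixel is a weighted sum of its 3×3 neighbourhood. A minimal sketch follows; the kernel coefficients are common textbook choices, not necessarily the exact ones used for the figures:

```python
import numpy as np

# Horizontal (east/west) gradient and Laplace kernels.
gradient_ew = np.array([[-1, 0, 1],
                        [-1, 0, 1],
                        [-1, 0, 1]])
laplace = np.array([[ 0, -1,  0],
                    [-1,  4, -1],
                    [ 0, -1,  0]])

def convolve_valid(image, kernel):
    """Apply a 3x3 filter to every interior pixel (no padding):
    each output value is the sum of the kernel times the window."""
    out = np.zeros((image.shape[0] - 2, image.shape[1] - 2))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(image[r:r+3, c:c+3] * kernel)
    return out

# A vertical edge: DNs jump from 10 to 50 between columns.
img = np.array([[10, 10, 50, 50]] * 4, dtype=np.float64)
edges = convolve_valid(img, gradient_ew)
print(edges)
```

The gradient filter responds strongly wherever DNs change from west to east, which is exactly how east/west edges are highlighted in the figure.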
Hue-Intensity-Saturation (HIS) Image of Rossing
321 RGB Composite · 321 HIS Composite
Digital Image Processing
There are 4 main categories of
Digital Image Processing
Preprocessing
Image Enhancement
Image Transformation
Image Classification
Image Transformations:
Image transformations typically involve
the manipulation of multiple bands of
data: from a single multispectral image,
or multi-temporal data of the same area.
1. Basic Arithmetic Image
Transformations
2. Normalized Indices / Vegetation
Indices (NDVI)
3. Principal Components Analysis
(PCA)
Basic Arithmetic Image
Transformations
• Image Subtraction
• often used to identify changes that have occurred between
images collected on different dates
• Image division
• highlight subtle variations in the spectral responses of
various surface covers
• the resultant image enhances variations in the slopes of the
spectral reflectance curves between the two different
spectral ranges
• More complex ratios involving the sums of and
differences between spectral bands for various sensors,
have been developed for monitoring vegetation
conditions
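Image division (ratioing) is easy to sketch per pixel; the band values below are invented, and a small epsilon is added to sidestep division by zero:

```python
import numpy as np

def band_ratio(numerator, denominator):
    """Per-pixel division of two bands; a small epsilon in the
    denominator avoids dividing by zero over dark pixels."""
    return numerator / (denominator + 1e-6)

# Hypothetical 3/1-style ratio: bright ratio values would flag
# surfaces whose reflectance rises steeply between the two bands.
band3 = np.array([[60.0, 90.0],
                  [30.0, 45.0]])
band1 = np.array([[20.0, 30.0],
                  [30.0, 15.0]])
ratios = band_ratio(band3, band1)
print(ratios)
```

Because the ratio depends on the slope of the spectral curve rather than overall brightness, it also suppresses illumination differences caused by topography.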
Ratio Images
Ratio 3/1: bright
areas highlight
iron rich surface
material, as found
in many alteration
zones
1/7 = B, 4/2 = G, 3/1 = R
Yellows & reds
denote areas of rock
R = 3/1, G = 4/3, B = 5/7
Vegetation =
bright blue-
green
Iron-stained =
pink to orange
Other rock & soil materials = other subtle colours
Normalized Difference
Vegetation Index (NDVI)
• numerical indicator that uses the
visible and near-infrared bands to
measure the “greenness” or
photosynthetic activity of vegetation
• Photosynthetically active vegetation,
in particular, absorbs most of the
red light while reflecting much of the
near infrared light.
• NDVI is calculated on a per-pixel basis as the
normalized difference between the red and
near infrared bands from an image;
• NDVI = (NIR − Red) / (NIR + Red)
or, in Landsat TM bands, NDVI = (Band 4 − Band 3) / (Band 4 + Band 3)
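The per-pixel NDVI formula translates directly into NumPy; the band values below are invented for illustration:

```python
import numpy as np

def ndvi(nir, red):
    """Per-pixel normalized difference of the NIR and red bands;
    values near +1 indicate vigorous vegetation."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red)

# Hypothetical DNs: healthy vegetation reflects much NIR and
# absorbs red, so its NDVI approaches +1.
nir = np.array([[50, 30],
                [40, 10]])
red = np.array([[10, 30],
                [10, 40]])
result = ndvi(nir, red)
print(result)
```

The normalization by (NIR + Red) keeps the index within −1 to +1 and makes it comparable across scenes with different overall brightness.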
Healthy
vegetation
(left) absorbs
most of the
visible light
that hits it, and
reflects a large
portion of the
near-infrared
light.
Unhealthy or
sparse
vegetation
(right) reflects
more visible
light and less
near-infrared
light.
NDVI
NDVI is used to monitor vegetation conditions
at all scales.
The NOAA AVHRR
produces a
standard NDVI
used to monitor
vegetation
condition on
continental and
global scales.
Change Detection with NDVI
Principal Component Analysis (PCA)
• Image transformation techniques based on
complex processing of the statistical
characteristics of multi-band data sets can be
used to reduce data redundancy and
correlation between bands.
• PCA is a technique applied to multispectral and
hyperspectral remotely sensed data that
transforms an original correlated dataset into
a substantially smaller set of uncorrelated
variables representing most of the
information present in the original dataset.
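The PCA transform can be sketched from the band covariance matrix using NumPy's eigendecomposition; the three tiny, deliberately correlated bands below are synthetic:

```python
import numpy as np

def pca_transform(bands):
    """bands: (n_bands, rows, cols). Returns principal-component
    images ordered by decreasing variance, plus those variances,
    computed from the n_bands x n_bands covariance matrix."""
    n, r, c = bands.shape
    X = bands.reshape(n, -1).astype(np.float64)
    X -= X.mean(axis=1, keepdims=True)        # centre each band
    cov = np.cov(X)                           # band covariance
    eigvals, eigvecs = np.linalg.eigh(cov)    # ascending order
    order = np.argsort(eigvals)[::-1]         # largest variance first
    pcs = eigvecs[:, order].T @ X             # project onto components
    return pcs.reshape(n, r, c), eigvals[order]

# Three hypothetical, highly correlated 2x2 bands.
rng = np.random.default_rng(0)
base = rng.normal(size=(2, 2))
bands = np.stack([base,
                  base * 2 + 0.01 * rng.normal(size=(2, 2)),
                  base + 0.01 * rng.normal(size=(2, 2))])
pcs, variances = pca_transform(bands)
print(variances[0] / variances.sum())  # PC1's share of total variance
```

For strongly correlated bands, PC1 captures nearly all the variance and the last component is mostly noise, mirroring the PC1 (83%) versus PC7 (0.05%) figures that follow.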
PCA CB Data - PC1 & PC2
PC1 - 83% PC2 - 11%
PCA CB Data - PC7
PC7 -
0.05%
Mostly
Noise
PCA CB Data Composites
CB
321
PC
321
E.g. Original Data
SPOT HRV 321
composite
65m x 65m
SPOT panchromatic
32m x 32m
E.g. Merged Dataset
Digital Image Processing
There are 4 main categories of
Digital Image Processing
Preprocessing
Image Enhancement
Image Transformation
Image Classification
Classification
Defn = ordering, discrimination, assigning labels
Classification is used to “smooth” out small
variations in an image and to simplify it into a
land cover “map”
One of the most common methods of
extracting information from image data
Uses Multi-spectral data which has been pre-
processed, especially in terms of geometric
rectification
Image Classification
Digital image classification uses the spectral
information represented by the digital numbers
in many spectral bands, and attempts to
classify each individual pixel based on this
information. This is termed spectral pattern
recognition.
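One simple form of spectral pattern recognition is the minimum-distance-to-means classifier: each pixel's spectral vector is assigned to the class whose mean signature is closest. The slide does not name a specific algorithm, so treat this as one illustrative example with invented class means:

```python
import numpy as np

def minimum_distance_classify(pixels, class_means):
    """Assign each pixel's spectral vector (one DN per band) to the
    class whose mean signature is nearest in Euclidean distance.
    pixels: (n_pixels, n_bands); class_means: (n_classes, n_bands)."""
    dists = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :],
                           axis=2)
    return dists.argmin(axis=1)

# Hypothetical mean signatures in two bands (red, NIR):
class_means = np.array([[20.0, 10.0],    # class 0: water (dark in NIR)
                        [30.0, 90.0]])   # class 1: vegetation (bright NIR)
pixels = np.array([[22.0, 12.0],
                   [28.0, 85.0]])
labels = minimum_distance_classify(pixels, class_means)
print(labels)
```

Applied to every pixel of a multi-band image, this produces exactly the kind of simplified land cover "map" the slides describe.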