
Module 1: Digital Image Processing - Image Restoration

The raw image acquired from a sensor needs to be corrected for geometric and radiometric distortions
before it can be considered a faithful representation of the original scene. This module starts with a
discussion of the necessity of applying geometric corrections to raw images. The various types of errors,
both systematic and non-systematic, which are normally encountered during image capture are discussed,
and methods to correct them are briefly described. The concept of ground control points (GCP) is
introduced. Image registration methodology is explained along with various interpolation techniques.
These processes depend on the properties of the sensor used to capture the images. The different sources of
radiometric errors seen in images are discussed along with means of correcting them. The module ends
with an explanation of atmospheric corrections.

Module Objectives
At the end of this module, the learner should be able to:
1. Identify the purpose of applying geometric and radiometric corrections to raw images;
2. Explain the different types of geometric and radiometric errors;
3. Explain how image registration is done using GCPs;
4. Discuss the geometric correction of satellite images;
5. Discuss the radiometric correction of satellite images.

Module 1 – Lecture Notes 1

Geometric Corrections – (3 Hours)

1. Introduction

The radiance flux registered by a remote sensing system ideally represents the radiant energy leaving
surface features of the earth such as vegetation, urban land, water bodies, etc. Unfortunately, this energy
flux is interspersed with errors, both internal and external, which exist as noise within the data. The
internal errors, also known as systematic errors, are created by the sensor itself and hence are quite
predictable. The external errors are largely due to perturbations of the platform or atmospheric scene
characteristics. Image preprocessing is the set of techniques used to correct this degradation/noise,
thereby producing a corrected image which replicates the surface characteristics as closely as possible.
The transformation of a remotely sensed image so that it possesses the scale and projection properties of a
given map projection is called geometric correction or georeferencing. A related technique, essential for
georeferencing, is registration, which deals with fitting the coordinate system of one image to that of
a second image of the same area.

2. Systematic Errors

The sources of systematic errors in a remote sensing system are explained below:

a. Scan skew:

This happens when the ground swath is not normal to the ground track but is skewed due to the forward
motion of the platform during the time of scan. For example, images from Landsat satellites will usually
be skewed with respect to the earth's N-S axis. The skew angle can be expressed as:

Sin ( e )
  90  Cos 1
Cos ( L)

2
Where  e = Direction of forward motion of the satellite

L = Latitude at which skew angle is to be determined
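As a quick numerical illustration, the relation can be evaluated directly. The Python sketch below uses hypothetical values for $\theta_e$ and $L$ (the text gives no worked example), chosen so that the ratio stays within the domain of the inverse cosine:

```python
import math

def skew_angle(theta_e_deg, latitude_deg):
    """Skew angle theta = 90 - acos(sin(theta_e)/cos(L)), in degrees."""
    ratio = math.sin(math.radians(theta_e_deg)) / math.cos(math.radians(latitude_deg))
    return 90.0 - math.degrees(math.acos(ratio))

# Hypothetical inputs: theta_e = 8 deg, L = 30 deg
print(skew_angle(8.0, 30.0))   # ~9.2 deg
```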

b. Platform velocity:

Caused by a change in the speed of the platform, resulting in along-track scale distortion.

c. Earth rotation:

It is due to the rotation of the earth during the scan period, resulting in along-scan distortion (Figure 1).
When a satellite moving along its orbital track scans the earth, which rotates with a surface velocity
proportional to the cosine of the latitude of the nadir point, successive scan lines are displaced, so that
the last scan line in the image is shifted relative to the first. This can be corrected provided we know the
distance travelled by the satellite and its velocity.

For example, consider the case in which the line joining the centres of the first and last scan lines is
aligned along a line of longitude and not skewed (as approximated by SPOT, Terra and other sun-synchronous
satellites). Landsat satellites take 103.27 minutes for one full revolution.

Expressing the distance and velocity in angular units, the satellite's angular velocity is

$\omega_{sat} = \dfrac{2\pi}{103.27 \times 60} \;\text{rad/s}$

If the angular distance moved by a Landsat satellite during the capture of one image is 0.029 radians, the
time required for the satellite to traverse this distance is

$t = \dfrac{0.029 \times (103.27 \times 60)}{2\pi} \approx 28.6 \;\text{seconds}$

Once we know this relative time difference, the aim is to determine the distance traversed by the centre
of the last scan line during these 28.6 seconds. The earth takes 23 hours, 56 minutes and 4 seconds
(86,164 s) to complete one full rotation; its angular velocity is therefore

$\omega_{earth} = \dfrac{2\pi}{86164} \;\text{rad/s}$

The surface velocity of the earth at latitude 51° can then be estimated as 292.7 m/s, and the distance
traversed is $292.7 \times 28.6 \approx 8371 \;\text{m}$.
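The arithmetic above is easy to reproduce. A short Python sketch of the same calculation, assuming an equatorial radius of 6378 km (a value not given in the text):

```python
import math

T_sat = 103.27 * 60                       # Landsat orbital period, s
t_image = 0.029 * T_sat / (2 * math.pi)   # time to sweep 0.029 rad: ~28.6 s

T_earth = 23 * 3600 + 56 * 60 + 4         # sidereal day: 86164 s
omega_earth = 2 * math.pi / T_earth       # earth's angular velocity, rad/s

R = 6.378e6                               # equatorial radius, m (assumed)
v_surface = omega_earth * R * math.cos(math.radians(51))  # ~292.7 m/s
print(t_image, v_surface, v_surface * t_image)            # ~8371 m shift
```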

Figure 1: Distortion due to the earth's rotation

d. Mirror scan velocity:

Caused when the rate of scanning is not constant, resulting in along-scan geometric distortion.

e. Aspect ratio:

Sensors like the MSS of Landsat produce images whose pixels are not square. The instantaneous field of
view of the MSS is 79 m, while the spacing between pixels along each scan line is 56 m. This results in
pixels which are not square, due to oversampling in the across-track direction. For image processing
purposes, square pixels are preferred. Therefore we make use of a transformation matrix so that the final
aspect ratio becomes 1:1.

In the case of MSS, the transformation matrix will be as given below:

$A = \begin{bmatrix} 0.0 & 1.41 \\ 1.0 & 0.00 \end{bmatrix}$

This is because the Landsat MSS scan lines are nominally 79 m apart while the pixels along each scan line
are spaced 56 m apart. Since users generally prefer square pixels to rectangular ones, we can resample to
either 79 m or 56 m to remove the unequal scale in the x and y directions. In the case of Landsat MSS, the
along-scan (across-track) direction is oversampled, so it is more reasonable to choose 79 m square pixels.
The aspect ratio, the ratio of the x and y pixel dimensions, is 56:79, which amounts to 1:1.41.
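A minimal sketch of applying this transformation to a pixel coordinate, assuming coordinates are expressed as (row, column) vectors; the pixel values are illustrative only:

```python
import numpy as np

# Aspect-ratio transformation matrix for Landsat MSS (from the text)
A = np.array([[0.0, 1.41],
              [1.0, 0.00]])

pixel = np.array([100.0, 50.0])   # hypothetical (row, column) in the raw image
print(A @ pixel)                  # coordinate in the 1:1 aspect-ratio grid
```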

3. Non-Systematic Errors

A schematic showing systematic and non-systematic errors is presented in Fig. 2. The sources of
non-systematic errors are explained below:

a. Altitude: Caused by departures of the sensor platform from its nominal altitude, resulting in changes of scale.

b. Attitude:

Errors due to attitude variations can be attributed to the roll, pitch and yaw of the satellite. A schematic
showing the attitude distortions pertaining to an aircraft is depicted in Fig. 3. Some of these errors can be
corrected with knowledge of the platform ephemeris, ground control points, sensor characteristics and
spacecraft velocity.

Figure 2: Schematic representation of the systematic and non-systematic distortions

Figure 3: Attitude Distortions of an aircraft

Module 1 – Lecture Notes 2

Ground Control Points and Co-Registration – (3 Hours)

1. Introduction

Remotely sensed images obtained raw from satellites contain both systematic and non-systematic
geometric errors. Some of these errors can be rectified using additional information about the satellite
ephemeris, sensor characteristics, etc. Others can be corrected by using ground control points (GCP):
well defined points on the surface of the earth whose coordinates can be estimated easily on a map as
well as on the image.

2. Properties of GCP

For geometric rectification of an image from a map or from another registered image, selection of GCPs is
the prime step. Hence, proper caution must be exercised while choosing the points. Some of the
properties which GCPs should possess are outlined below:

a. They should represent a prominent feature which is not likely to change for a long duration of
time. For example, a highway intersection or the corner of a steel bridge is more appropriate
as a GCP than a tree or a meandering part of a river. This makes the points easy to identify on
the image/map, since permanent features do not disappear over time.
b. GCPs should be well distributed. Rather than concentrating on points lying close to each other,
points farther apart should be given priority. This enables the selected points to be fully
representative of the area, which is essential for proper geometric registration. This point is
discussed further in sections 3 and 4.
c. An optimum number of GCPs should be selected, depending on the area to be represented.
The greater the number of carefully selected, well separated points, the better the accuracy of
the registration.

Figure 1. (a) Insufficient distribution of GCP (b) Poor distribution of GCP (c) Well distributed GCP

GCPs selected from a map lead to image-to-map rectification, whereas GCPs chosen from another image
result in image-to-image rectification.

3. Geometric rectification

This process affixes the projection details of a map/image onto an image to make the image planimetric
in nature. The image to be rectified, represented by pixels arranged in rows and columns, can be
considered equivalent to a matrix of digital number (DN) values accessed by row and column numbers
(R, C). Similarly, the map/image (correct) coordinates of the same point can be represented by their
geolocation (X, Y). The relationship between (R, C) and (X, Y) needs to be established so that each pixel
in the image is properly positioned in the rectified output image. Let F1 and F2 be the coordinate
transformation functions used to interrelate the geometrically correct coordinates and the distorted
image coordinates, with (R, C) = distorted image coordinates and (X, Y) = correct map coordinates. Then

R = F1(X, Y) and C = F2(X, Y)

The concept is similar to affixing an empty array of geometrically correct cells over the original distorted
cells of the unrectified image and then filling in the value of each empty cell using the values of the
distorted image. Usually, the transformation functions used are polynomials. The unrectified image is tied
down to a map or a rectified image using a selected number of GCPs, the polynomial coefficients are
calculated, and the unrectified image is then transformed using the polynomial equations.

To illustrate this method, let us assume that a sample of x and y values is available, where x and y are
any two variables of interest such as the row number of an image pixel and the easting coordinate of the
same point on the corresponding map. The method of least squares enables the estimation of values of x
given the corresponding value of y using a function of the form (first order shown here):

$x = a_0 + a_1 y$

Here, as only a single predictor variable y is used, the expression is termed a univariate least-squares
equation. The number of predictor variables can be greater than one. It should be noted that a first-order
function can only accomplish scaling, rotation, shearing and reflection; it will not account for
warping effects. A higher-order function can model such distortions, though in practice
polynomials of order greater than three are rarely used for medium resolution satellite imagery.
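The least-squares fit itself is routine. A sketch in Python with invented sample pairs (np.polyfit handles the first-order case shown here and higher orders alike):

```python
import numpy as np

# Hypothetical GCP samples: easting (map) vs. row number (image)
easting = np.array([3500.0, 3720.0, 3950.0, 4180.0, 4400.0])
row     = np.array([ 120.0,  208.0,  301.0,  393.0,  480.0])

coeffs = np.polyfit(easting, row, deg=1)   # fits x = a0 + a1*y
print(np.polyval(coeffs, 4000.0))          # predicted row for easting 4000
```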

3.1 Co-Registration

Errors generated due to satellite attitude variations like roll, pitch and yaw are generally unsystematic
in nature and are best removed by identifying GCPs in the original imagery and on the reference map,
followed by mathematical modeling of the geometric distortion present. Image-to-map rectification
requires that polynomial equations be fit to the GCP data using least-squares criteria in order to model
the corrections directly in the image domain, without identifying the source of the distortion. The order
of the polynomial equations will vary depending on the image distortion, the degree of topographic relief
displacement, etc. In general, for moderate distortions in a relatively small area of an image, a first-order,
six-parameter affine transformation is sufficient to rectify the imagery. It is capable of modeling six kinds
of distortion in the remote sensor data, which combined into a single expression become:

$x' = a_0 + a_1 x + a_2 y$
$y' = b_0 + b_1 x + b_2 y$

Here, (x, y) denotes a position in the output rectified image or map and (x', y') denotes the corresponding
position in the original input image.
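The six affine parameters follow directly from the GCPs by least squares. A self-contained sketch (the GCP coordinates below are invented for illustration):

```python
import numpy as np

# Map coordinates (x, y) and matching image coordinates (x', y') of 4 GCPs
map_xy = np.array([[10.0, 20.0], [250.0, 30.0], [40.0, 300.0], [260.0, 280.0]])
img_xy = np.array([[12.5, 18.0], [255.1, 35.2], [38.9, 305.5], [258.7, 290.1]])

G = np.column_stack([np.ones(len(map_xy)), map_xy])   # design matrix [1, x, y]
a = np.linalg.lstsq(G, img_xy[:, 0], rcond=None)[0]   # (a0, a1, a2)
b = np.linalg.lstsq(G, img_xy[:, 1], rcond=None)[0]   # (b0, b1, b2)
print(a, b)   # x' = a0 + a1*x + a2*y ;  y' = b0 + b1*x + b2*y
```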

3.2 Image Resampling

Resampling is the process used to estimate the pixel value (DN) to fill an empty grid cell from the
original distorted image. A number of resampling techniques are available, such as nearest neighbour,
bilinear and cubic. In the nearest neighbour technique, the DN for an empty grid cell is taken from the
nearest pixel of the overlapping unrectified image. This offers computational simplicity and avoids
altering the original DN values. However, it suffers from the disadvantage of offsetting pixel values
spatially, giving a rather disjointed appearance to the rectified image. The bilinear interpolation technique
takes a distance-weighted average of the DN values of the nearest four pixels; as this process is the 2D
equivalent of linear interpolation, it is called 'bilinear'. The resulting image looks smoother, at the cost of
altering the original DN values. Cubic interpolation is an improved version of bilinear resampling in
which the 16 pixels surrounding a given position are analyzed to produce a synthetic DN. Cubic
resampling also tends to alter the original DN values. In image processing studies, where the DN
represents the spectral radiance emanating from the region within the sensor's field of view, alteration of
DN values may cause problems in spectral pattern analysis. This is the main reason why image
classification is sometimes performed before image resampling.
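The two simpler resampling rules can be written compactly. A sketch, assuming the mapped position (xp, yp) falls inside the image bounds:

```python
import numpy as np

def resample(image, xp, yp, method="nearest"):
    """Estimate the DN at a fractional image position (column xp, row yp)."""
    if method == "nearest":
        return image[int(round(yp)), int(round(xp))]
    # bilinear: distance-weighted average of the four surrounding pixels
    x0, y0 = int(np.floor(xp)), int(np.floor(yp))
    dx, dy = xp - x0, yp - y0
    return ((1 - dx) * (1 - dy) * image[y0, x0]     + dx * (1 - dy) * image[y0, x0 + 1]
          + (1 - dx) * dy       * image[y0 + 1, x0] + dx * dy       * image[y0 + 1, x0 + 1])

img = np.arange(16.0).reshape(4, 4)
print(resample(img, 1.3, 2.6), resample(img, 1.3, 2.6, "bilinear"))
```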

Figure 2: Polynomial transformation for geometric correction of images

Figure 3: Image resampling by overlay of a rectified empty matrix over the unrectified image

Module 1 – Lecture Notes 3

Atmospheric Corrections – (3 Hours)

1. Introduction

The energy registered by the sensor will not be exactly equal to that emitted or reflected from the terrain
surface, due to radiometric and geometric errors. These are the commonly encountered errors that alter
the original data. Of these, the geometric error types and their methods of correction were discussed in
the previous lecture. Radiometric errors can be sensor driven or due to atmospheric attenuation. Before
analysis of remote sensing images, it is essential that these errors are identified and removed to avoid
error propagation.

2. Sensor-driven errors

Such errors occur due to improper functioning of the sensor system. Some commonly encountered
errors due to sensor malfunction are discussed below:

a. Line drop out: This error occurs in transverse scanning systems when one or two of the multiple
detectors used fail to function properly. The Landsat MSS, for example, has six detectors per band;
if even one detector fails, zero DN is recorded for every pixel along the corresponding scan lines.
Such images are smeared with black lines.

Figure 1: Sequence of lines read by detectors in a transverse scanning system

There is no exact methodology to restore the DN values of such images. However, to improve their
interpretability, the average of the DN values of the preceding and succeeding lines is sometimes used
as the corrected DN value. The justification for this procedure stems from the geographical continuity
of terrain.
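A sketch of this averaging correction, assuming a dropped line records zero DN across the whole row:

```python
import numpy as np

def fix_line_dropout(image):
    """Replace all-zero scan lines with the mean of the adjacent lines."""
    fixed = image.astype(float).copy()
    for r in range(1, image.shape[0] - 1):
        if np.all(image[r] == 0):                    # dropped scan line
            fixed[r] = (fixed[r - 1] + fixed[r + 1]) / 2.0
    return fixed
```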

b. Line banding: Some detectors generate noise as a function of the relative gain/offset
differences between the detectors within a band, which results in banding. Such errors can be
corrected using a histogram-based approach. For example, a histogram can be produced for each
detector in each band. Assuming that each detector has sensed a representative sample of all
the surface classes within the scene, the histograms will be similar (i.e., have the same
mean and standard deviation) if the detectors are matched and calibrated. However, if one
detector is no longer producing readings consistent with the other detectors, its histogram
will be different. An average histogram can be generated using the DN values from all the
detectors except the faulty one. The DNs produced by all the detectors are then altered so that their
histograms match the average one. When this procedure is completed, the imbalance between
the detectors is eliminated and the image is said to have been de-striped. This procedure changes
the DNs for all the lines, though the relative change for the properly functioning detectors is
smaller in systems having more detectors: a defective detector on the Landsat MSS forms one-sixth
of the input to the average histogram, whereas a defective detector for a reflected TM band
contributes only one-sixteenth of the input. Fig. 2 shows the histograms of each detector,
visually depicting the line banding effect in detector 4. Fig. 3 shows the corrected histogram
for the faulty detector 4.

Figure 2: Histogram of each detector of a hypothetical band

Figure 3: Line banding corrections
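A simplified de-striping sketch, assuming line r of the image was read by detector r mod n, and matching each detector's mean and standard deviation (rather than the full histogram) to the band average:

```python
import numpy as np

def destripe(image, n_detectors=6):
    """Balance detector gains/offsets by matching first-order statistics."""
    out = image.astype(float).copy()
    means = [out[d::n_detectors].mean() for d in range(n_detectors)]
    stds  = [out[d::n_detectors].std()  for d in range(n_detectors)]
    m_avg, s_avg = np.mean(means), np.mean(stds)
    for d in range(n_detectors):
        out[d::n_detectors] = (out[d::n_detectors] - means[d]) / stds[d] * s_avg + m_avg
    return out
```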

3. Atmospheric Corrections

The DN measured or registered by a sensor is composed of two components: the actual radiance of
the pixel, which we wish to record, and an atmospheric component. The magnitude of the radiance
leaving the ground is attenuated by atmospheric absorption, and its directional properties are altered by
scattering. Other sources of error are the varying illumination geometry, which depends on the sun's
azimuth and elevation angles, and the ground terrain. As atmospheric properties vary from time to time,
it is essential to correct radiance values for atmospheric effects. However, due to the highly dynamic and
complex atmospheric system, it is practically impossible to fully model the interactions between the
atmosphere and electromagnetic radiation. Fig. 4 shows schematically the DN measured by a remote
sensing sensor. The relationship between the radiance received at the sensor and the radiance leaving the
ground can be summarized by the following relation:

Ls  T *  * D  L p

Where, D is the total downwelling radiance, ρ is the target reflectance, T is the atmospheric
transmittance. The atmospheric path radiance and radiance leaving ground is given by L P and LS. The
second term represents scattered path radiance, which introduces “haze” in the imagery.
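Given estimates of the atmospheric terms, the relation can be inverted for the target reflectance; all numbers below are assumed for illustration:

```python
Ls = 120.0   # radiance received at the sensor (assumed)
Lp = 15.0    # atmospheric path radiance (assumed)
T  = 0.85    # atmospheric transmittance (assumed)
D  = 160.0   # total downwelling radiance (assumed)

rho = (Ls - Lp) / (T * D)   # invert Ls = T*rho*D + Lp
print(rho)                  # ~0.77
```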

Figure 4: Atmospheric correction to DN measured by remote sensing sensors

The means of correcting for atmospheric attenuation are discussed below:

a. Based on images: Some simple techniques are based on the histogram-minimum method and on
regression. The extent to which the atmosphere alters the true DN is best seen by examining the
DN histograms of the various bands. Many scenes contain very dark pixels (such as those in deep
shadow), and it may be assumed that these should have a DN of zero. A first-order atmospheric
correction can therefore be applied to remotely sensed data by assuming that the observed offsets
are due solely to atmospheric effects and subtracting the offset from each DN.
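A sketch of this dark-object (histogram-minimum) subtraction:

```python
import numpy as np

def dark_object_subtraction(band):
    """Subtract the band's minimum DN, treating it as the haze offset."""
    return band - band.min()

band = np.array([[12, 40], [13, 95]])   # hypothetical DNs; darkest pixel = 12
print(dark_object_subtraction(band))    # offset removed: [[0, 28], [1, 83]]
```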

b. Radiative transfer model: Several numerical radiative transfer models are available, such as
LOWTRAN, ATREM and 5S/6S, which make use of different assumptions to model the complex
and dynamic atmospheric system. The use of these models requires large amounts of input data.
Sometimes, due to the high cost of data collection, standard atmospheres such as 'mid-latitude
summer' are relied upon instead.

c. Empirical method: This method relies on a priori knowledge of the reflectance of two targets,
one light and one dark. The radiances recorded by the sensor are calculated from the image DNs,
and the line joining the two target points is used to determine the intercept, which represents the
atmospheric radiance.
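A sketch of this empirical line, with invented target reflectances and DNs; the fitted intercept plays the role of the atmospheric term:

```python
import numpy as np

rho = np.array([0.05, 0.60])     # known reflectances of dark and light targets
dn  = np.array([22.0, 180.0])    # DNs observed at those targets (assumed)

gain, offset = np.polyfit(rho, dn, deg=1)    # line DN = gain*rho + offset
print(offset)                                # intercept ~ atmospheric radiance
print((135.0 - offset) / gain)               # reflectance of a pixel with DN 135
```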

Though the above methods are available to rectify errors due to atmospheric attenuation of the radiant
energy flux, several studies avoid this step, arguing that when the training data and the data to be
classified are both measured on the same relative scale, the atmospheric attenuation in the two sources
tends to cancel out.

4. Solar Illumination Corrections

Satellite-recorded radiance depends on several factors: the reflectance properties of the target, the view
angle of the sensor, the solar elevation angle, terrain surface characteristics like slope and aspect, and
atmospheric attenuation. As shown in Fig. 5, corrections need to be applied to the DN to take account of
the different illumination angles. The reflectance of any target varies with the illumination angle and the
sensor's view angle; the bidirectional reflectance distribution function (BRDF) relates the magnitude of
the upwelling radiance of a target to these two angles. Images obtained at different times of the year are
acquired under different illumination conditions; the solar illumination angle, as measured from the
horizontal, is greater in summer than in winter.

As the earth's surface is not flat, terrain slope and aspect introduce radiometric distortion. Among the
different means of correcting terrain effects, the cosine correction is discussed here. Assuming a
Lambertian surface, a constant earth-sun distance and a constant amount of solar energy illuminating the
earth, the magnitude of the irradiance reaching a pixel on a slope is directly proportional to the cosine of
the incidence angle. This can be written as:

Cos o
LH  LT
Cosi
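A sketch of applying the cosine correction to a single pixel, with assumed angles:

```python
import math

def cosine_correction(L_T, theta_o_deg, theta_i_deg):
    """L_H = L_T * cos(theta_o)/cos(theta_i), assuming a Lambertian surface."""
    return L_T * math.cos(math.radians(theta_o_deg)) / math.cos(math.radians(theta_i_deg))

# Hypothetical pixel: terrain radiance 90, solar zenith 35 deg, incidence 50 deg
print(cosine_correction(90.0, 35.0, 50.0))   # ~114.7
```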

Figure 5: (a) Correction applied to the measured DN to take account of different illumination angles.
(b) Effect of varying aspect with respect to illumination on the measured DN.

Bibliography / Further Readings

1. Paul M. Mather, 2004, Computer Processing of Remotely-Sensed Images, Wiley & Sons.

2. Lillesand T. M. & Kiefer R. W., 2000, Remote Sensing and Image Interpretation, 4th ed., Wiley & Sons.

3. John R. Jensen, 1996, Introductory Digital Image Processing, Prentice Hall.
