
6

Target Classification, Discrimination, and Identification

6.1 INTRODUCTION
This chapter covers the concepts of target classification, discrimination, and identification; [2, 3, 5] are excellent sources of background on this subject. The topics covered in this chapter include:

• Introduction to the target classification problem
• Radar-measured target features
• Waveforms and signal processing
• Feature extraction
• Classifiers:
  – Bayes’
  – Dempster-Shafer
  – Decision trees
  – Others


• Classification of air targets:
  – Noncooperative target recognition
  – Target identification (ID)
• Classification of ballistic missile targets:
  – Discrimination
• Hit or kill assessment.

The target classification, discrimination, and identification topic completes the overview of fundamental radar theory that forms the basis of designing and analyzing phased-array radars. As noted in Chapter 5, the concept of parameter estimation is at the core of target classification; here it is referred to by the special name of target feature extraction.
This chapter focuses on the target classification problem that exists for air and missile targets. As will be seen, the detection and tracking of targets are prerequisites for performing this function. For air targets, as described in the previous chapter, the terms target classification and noncooperative target recognition (NCTR) are used synonymously. Another term, identification (ID), is used for a refined type of classification or NCTR. Although not discussed explicitly in the chapter, ship target classification is very similar to the air target case.
For ballistic missile targets, the terms classification and discrimination are fre-
quently used ambiguously and inconsistently. In this book, target classification
means to categorize targets by class, such as tactical ballistic missiles (TBMs), in-
tercontinental ballistic missiles (ICBM), intermediate-range ballistic missiles
(IRBMs), and so on. Discrimination, on the other hand, refines classification to object types. The term used in this book for the complete set of possible categorizations is classification, discrimination, and identification (CDI).
The last part of the chapter addresses the topic of hit or kill assessment. It is included because air and ballistic missile defense fire control radars usually need to assess the success of a threat intercept when there is adequate time to take a second shot if the first attempt misses. This function is very similar to the target classification problem, using its own unique features to decide on a hit, kill, or miss.

6.2 THE TARGET CLASSIFICATION PROBLEM


In its simplest form, the target classification problem asks the question: What
kind of target is being tracked? Since the decision will be based on data or fea-
tures collected by the radar, it is best expressed mathematically:

find i such that p(H_i | f) is maximum and p(H_i | f) ≥ p_min,   (6.1)

where H_i, f, and p_min are the ith target class hypothesis, the target feature vector, and the minimum desired probability of declaring a target class, respectively. The conditional probability in equation (6.1) is referred to as the a posteriori or posterior probability, that is, the likelihood that the target is in class i given that feature vector f was measured by the radar.
The test against a minimum probability is optional; however, it is good practice to apply this type of test to ensure that only reasonably probable class declarations are accepted. In many applications, the minimum probability is supplied to the radar by, or the test is performed within, the command and control, battle management, and communications (C2BMC) system or the ship combat system controlling the fire control system.
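The decision rule in equation (6.1) can be sketched in a few lines. The class names and posterior values below are illustrative assumptions, not values from the text:

```python
# Sketch of the class-declaration test in equation (6.1): pick the class
# hypothesis with the largest posterior probability, but only declare it
# if that posterior clears the minimum threshold p_min.

def declare_class(posteriors, p_min):
    """posteriors: dict mapping class hypothesis H_i -> p(H_i | f)."""
    best_class = max(posteriors, key=posteriors.get)
    if posteriors[best_class] >= p_min:
        return best_class
    return None  # defer: no sufficiently probable class

# Hypothetical posteriors for three air-target hypotheses.
posteriors = {"aircraft": 0.72, "cruise missile": 0.20, "UAV": 0.08}
print(declare_class(posteriors, p_min=0.6))   # declares "aircraft"
print(declare_class(posteriors, p_min=0.9))   # defers (None)
```

Deferring when no posterior clears p_min is exactly the optional test described above; the controlling C2BMC or combat system would then wait for more feature data.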
The remainder of this chapter addresses the target features, the radar wave-
forms to collect them, and the classifiers used to implement equation (6.1).

6.3 RADAR-MEASURED TARGET FEATURES


The feature vector f in equation (6.1) represents the set of all target features collected to perform the target classification function. Possible features include:

• Kinematics (i.e., track-based features)
• Signatures
• Pattern-based features.

The first two are physics-based features. Possible target kinematics features in-
clude:

• Speed
• Acceleration or deceleration
• Altitude and altitude-rate.

Similarly, signature features can consist of:

• Radar cross section (RCS)
• Target size
• Target shape
• Phase measurements.

Pattern-based features describe the distribution of objects. At the macroscopic level, an example is targets in a certain formation.
All three classes of target features are useful in classifying, discriminating, and
identifying air, missile, and ship targets.

6.4 WAVEFORMS AND SIGNAL PROCESSING

6.4.1 Classification, Discrimination, and Identification Waveforms


Target features are collected by the radar to perform the CDI functions. The waveforms used to enable feature measurements vary with the type of features to be collected. The kinematics features listed in Section 6.3 are usually available from the waveforms used for tracking targets. In general, these are relatively narrowband waveforms. Since most target tracking (with the exception of the track-while-scan approach) uses update rates of 1 Hz or higher, in normal operation no additional waveforms need to be scheduled for CDI purposes to collect kinematics features. For low-altitude operation, moving target indicator (MTI) or pulse-Doppler waveforms may be required for detection and tracking. In these cases, kinematics-type features can be extracted from the pulse train.
For signature features, a wide range of waveform bandwidths can be employed, from narrowband to wideband. Again, where multipulse waveforms are used for clutter mitigation or to measure range-rate, features are extracted from the pulse train returns.

6.4.2 Signal Processing


In the case of narrowband waveforms, such as those used for target tracking,
no special signal processing is required, whether single or multiple-pulse
waveforms are employed. For the former waveforms, typical signal processing
will consist of all-range digital pulse compression (for linear-frequency modu-
lation [FM] waveforms), followed by range and amplitude interpolation and
peak-detection. In the latter situation, when multiple-pulse waveforms are em-
ployed, pulse matched filtering will be followed by Doppler processing and
the above post-detection sequence.
Wideband waveform processing is dependent on the bandwidths used for
feature collection. For bandwidths less than 100 MHz or so, current analog-to-
digital converter (A/D) technology allows digital pulse compression. How-
ever, at bandwidths above 100 MHz, some form of “stretch” or spectrum
analysis-type processing will usually be required for matched filtering. For
wideband multipulse waveforms, these pulse matched filters will be followed
by Doppler processing.
Again, range and amplitude interpolation and peak detection are necessary
for wideband waveforms, as well as fine phase measurement for certain feature
extraction purposes.
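As a rough illustration of the digital pulse compression step described above (a sketch, not an implementation from the text), the code below correlates a received sample stream against a replica of a linear-FM chirp; the pulse length, fractional bandwidth, and echo delay are arbitrary assumptions:

```python
import cmath

# Illustrative matched filtering of a linear-FM (LFM) pulse: the compressed
# output peaks at the delay of the target echo. All parameter values here
# are arbitrary assumptions for demonstration.

def lfm_chirp(n_samples, bandwidth_frac):
    """Complex LFM chirp sweeping up to bandwidth_frac cycles/sample."""
    return [cmath.exp(1j * cmath.pi * bandwidth_frac * (n * n) / n_samples)
            for n in range(n_samples)]

def pulse_compress(received, replica):
    """Matched filter: cross-correlate received samples with the replica."""
    n_lags = len(received) - len(replica) + 1
    return [abs(sum(received[k + m] * replica[m].conjugate()
                    for m in range(len(replica))))
            for k in range(n_lags)]

chirp = lfm_chirp(64, 0.4)
delay = 30
received = [0j] * delay + chirp + [0j] * 30   # target echo delayed 30 samples
output = pulse_compress(received, chirp)
peak_index = max(range(len(output)), key=output.__getitem__)
print(peak_index)   # compressed peak falls at the echo delay: 30
```

In practice the correlation would be done with FFTs (or "stretch" processing at wide bandwidths, as noted above), but the matched-filter principle is the same.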

6.5 FEATURE EXTRACTION

The term feature extraction, as used in this chapter, covers a broad family of radar measurement processing. For the features described in Section 6.3, possible feature extraction might entail:

• Standard track filter processing for:
  – Target speed and acceleration
  – Target altitude and altitude-rate (which may require conversion of state vector data)
  – Target rotation-rate and acceleration (depending on state vector composition)
• Computation and smoothing of target RCS
• Computation and smoothing of target size
• Computation and smoothing of fine phase measurements.



Since the tracking filter performs smoothing as a part of its normal processing,
no additional smoothing is required for the kinematics feature extraction listed
above.
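As one hedged example of the signature-smoothing steps listed above, a single-pole (exponential) filter applied to scan-to-scan RCS estimates might look like the following; the smoothing constant and measurement values are illustrative assumptions, not values from the text:

```python
# Exponential smoothing of a scan-to-scan RCS feature. The smoothing
# constant alpha (0 < alpha <= 1) trades responsiveness against noise
# reduction; 0.3 is an arbitrary assumption.

def smooth_rcs(measurements, alpha=0.3):
    """Return the exponentially smoothed history of RCS measurements (m^2)."""
    smoothed = measurements[0]
    history = [smoothed]
    for z in measurements[1:]:
        smoothed = alpha * z + (1.0 - alpha) * smoothed
        history.append(smoothed)
    return history

rcs = [1.0, 3.0, 0.5, 2.0, 1.5]   # noisy scan-to-scan RCS values, m^2
print(smooth_rcs(rcs)[-1])        # smoothed estimate after five scans
```

The same filter structure could smooth target size or fine phase features before they are passed to a classifier.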

6.6 CLASSIFIERS
As stated in other parts of this book, target classifiers can be categorized as Bayesian or non-Bayesian. In other words, either formal Bayes' rule-type classifiers are employed, or classifiers that use other means to decide on target class. The latter category can be probability-based or not, depending on the specific decision processing implemented.

6.6.1 Bayes’ Classifier


The Bayes' classifier is an implementation of Bayes' rule of conditional probability [2, 3]:

P(c_j | f_i) = P(f_i | c_j) P(c_j) / Σ_{k=1}^{J} P(f_i | c_k) P(c_k),   (6.2)

where P(c_j | f_i) is the probability of target class j given that feature i is measured, P(f_i | c_j) is the conditional probability of feature i occurring given that c_j is the underlying target class, and P(c_j) is the class prior probability (i.e., the probability of class j occurring out of all J classes).

The two conditional probabilities in equation (6.2) are also referred to as the a posteriori (or posterior) and feature probabilities, respectively. The J posterior probabilities are the classifier's outputs, and the feature means and class probabilities are elements of the classifier database. The feature probabilities are computed from the underlying probability density, the feature means, and the error covariance matrix. The feature mean value is defined as:

μ_ij = E{ f_i | c_j },   (6.3)

where f_i and c_j are the ith feature and jth target class, respectively, and the feature error covariance matrix M is:

M = E{ f f^T } =
    [ σ_1^2          ρ_12 σ_1 σ_2   ...   ρ_1N σ_1 σ_N ]
    [ ρ_12 σ_1 σ_2   σ_2^2          ...   ...           ]
    [ ...            ...            ...   ...           ]
    [ ρ_1N σ_1 σ_N   ...            ...   σ_N^2         ],   (6.4)

resulting in the feature probability:

P(f | c_j) = (2π)^{-N/2} |M|^{-1/2} exp[ -(f - μ)^T M^{-1} (f - μ) / 2 ].   (6.5)

In equation (6.5), f is the measured feature vector, μ is the feature mean vector, and M is the feature error covariance matrix. When all features are independent and uncorrelated, equation (6.5) simplifies to:

P(f_i | c_j) = (1 / (√(2π) σ_ij)) exp[ -(f_i - μ_ij)^2 / (2 σ_ij^2) ].   (6.6)

In real-world systems, a battle manager, command and control system, or combat system will establish a minimum threshold test on the posterior probabilities to declare a target class. Equation (6.2) can be implemented recursively, where posterior probabilities are used as prior probabilities on successive iterations. When the posterior probabilities do not clearly indicate a single-class decision, the battle manager or combat system can defer its decision.
One necessary requirement of a Bayes’ classifier is that all possible target
classes must be identified in the classifier database. This is required since the
Bayes’ classifier will always compute posterior probabilities even if the correct
class is not one of the target hypotheses (and at least one posterior probability
will always be the largest). This is the reason for the minimum probability test
implied in equation (6.1). An incomplete classifier database (i.e., with unrepre-
sented target hypotheses) can lead to spurious and erroneous results when only
the largest posterior probability is used as the metric to declare target classes.
One solution for this inherent problem with the Bayes’ classifier is to define an
unknown or “strange” class to accommodate nonidentified target classes. When
this method is used, the strange class posterior probability can be used to assess
the reasonableness of the apparent target class indicated by other posterior prob-
abilities. Such an approach is very important in effectively using Bayes’ classifi-
ers and is analogous to adding process noise to a Kalman filter to compensate for
unmodeled target dynamics or states.
Given this limitation, the Bayes’ classifier using equation (6.5) is the optimal
linear classifier when assumed feature probability distributions match the true
underlying statistics. When underlying probability densities are known a priori
or can be estimated from measurements, these can be optimally used by the
Bayes’ classifier.
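A minimal sketch of equations (6.2) and (6.6) for a single scalar feature, including the recursive reuse of posteriors as priors described above. The class names, feature statistics, and speed measurements are illustrative assumptions; the "unknown" class discussed above would simply be one more hypothesis in the dictionary:

```python
import math

# Gaussian Bayes' classifier for one scalar feature with independent
# statistics. Class feature means and sigmas are hypothetical values.

def gaussian_likelihood(f, mu, sigma):
    """P(f | c_j) for a scalar Gaussian feature, per equation (6.6)."""
    return (math.exp(-(f - mu) ** 2 / (2.0 * sigma ** 2))
            / (math.sqrt(2.0 * math.pi) * sigma))

def bayes_posteriors(f, classes, priors):
    """Equation (6.2): posterior P(c_j | f) for each class hypothesis."""
    weighted = {c: gaussian_likelihood(f, mu, sigma) * priors[c]
                for c, (mu, sigma) in classes.items()}
    total = sum(weighted.values())
    return {c: w / total for c, w in weighted.items()}

# Hypothetical speed feature (m/s) statistics for two target classes.
classes = {"helicopter": (60.0, 20.0), "jet": (250.0, 50.0)}
priors = {"helicopter": 0.5, "jet": 0.5}

# Recursive use: posteriors from one measurement become priors for the next.
for speed in (220.0, 240.0):
    priors = bayes_posteriors(speed, classes, priors)
print(max(priors, key=priors.get))   # prints "jet"
```

Note how the denominator in `bayes_posteriors` normalizes the posteriors to sum to one, which is why an unrepresented true class silently inflates the wrong hypothesis; this is the motivation for the "strange" class described above.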

6.6.2 Dempster-Shafer (D-S) Classifier


The Dempster-Shafer (D-S) classifier is a non-Bayesian statistical classifier that
uses the concepts of “evidence,” “plausibility,” and “probability masses” upon
which to base target class decisions.
Analogous to the conditional probability used by the Bayes' classifier, the conditional probability mass of class A given features v_1 and v_2 can be expressed as:

m(A | v_1, v_2) = [m(A | v_1) m(A | v_2) + m(A∨B | v_1) m(A | v_2) + m(A | v_1) m(A∨B | v_2)] / D.   (6.7)

The probability mass m(B | v_1, v_2) can be expressed in a similar fashion to equation (6.7). Now consider the probability mass associated with class A or B conditioned on the features:

m(A∨B | v_1, v_2) = [m(A∨B | v_1) m(A∨B | v_2)] / D,   (6.8)

where D equals the sum of the numerators and A∨B means class A or B.

After all evidence has been considered, the D-S classifier needs a decision rule, such as the plausibility of A given by:

P(A) = [m(A) + m(A∨B)] / [m(A) + m(A∨B) + m(B) + m(A∨B)].   (6.9)



As described in [1, 6], the evidence leading to a decision consists of the probability masses associated with the candidate target hypotheses. Using the mass combination rules, such as those represented by equations (6.7) and (6.8), the plausibility of the underlying target classes can be computed.
Key differences between D-S and Bayes’ are the use of unnormalized probabil-
ities (i.e., the probability “masses”), and a probability distribution-free approach
compared with the Bayes’ classifier, which often assumes an underlying Gauss-
ian probability distribution. Another important difference is the ability to handle
correlated features. Bayes’ theory incorporates feature correlation information
via the feature error covariance matrix, and specifically by the off-diagonal
terms. D-S theory does not account for feature interdependencies. For radar applications, this can be a deficiency of the D-S classifier compared with Bayes' methods. Although the D-S classifier can be modified to account for correlated features, these adjustments are ad hoc in nature and suboptimal compared to the Bayes' classifier. For this reason, Bayes' classifiers are often the prevalent choice for radar-based CDI.
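Equations (6.7) through (6.9) can be sketched directly for the two-class frame {A, B, A∨B}; the probability masses assigned by the two features below are illustrative assumptions:

```python
# Dempster-Shafer combination for two classes per equations (6.7)-(6.9).
# "AvB" denotes the compound hypothesis A-or-B. The input masses are
# hypothetical evidence values from two features v1 and v2.

def ds_combine(m1, m2):
    """Combine two mass assignments over hypotheses 'A', 'B', 'AvB'."""
    a = m1["A"] * m2["A"] + m1["AvB"] * m2["A"] + m1["A"] * m2["AvB"]   # eq (6.7) numerator
    b = m1["B"] * m2["B"] + m1["AvB"] * m2["B"] + m1["B"] * m2["AvB"]
    ab = m1["AvB"] * m2["AvB"]                                          # eq (6.8) numerator
    d = a + b + ab                                                      # D: sum of the numerators
    return {"A": a / d, "B": b / d, "AvB": ab / d}

def plausibility_A(m):
    """Decision metric of equation (6.9)."""
    return (m["A"] + m["AvB"]) / (m["A"] + m["AvB"] + m["B"] + m["AvB"])

m_v1 = {"A": 0.6, "B": 0.1, "AvB": 0.3}   # evidence from feature v1
m_v2 = {"A": 0.5, "B": 0.2, "AvB": 0.3}   # evidence from feature v2
combined = ds_combine(m_v1, m_v2)
print(round(plausibility_A(combined), 3))   # 0.783
```

Note that the features enter the combination symmetrically with no cross-correlation term, which is the feature-independence limitation discussed above.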

6.6.3 Decision Tree Classifiers


One of the simpler target classifiers is a decision tree with a fixed structure and decision rules. Decision trees are desirable when minimizing computer throughput is a strong consideration in classifier selection and feature statistics are not available or cannot be quantified. Decision trees can employ nonquantitative features and concepts such as “slow targets” versus “fast targets,” “short targets” versus “long targets,” “manned targets” versus “unmanned targets,” and similar “fuzzy” target-related attributes.
A key rule in designing decision trees is to employ the highest quality features, or those with the greatest discriminating capability, early in the decision process, and lower quality or less discriminating features later in the tree. Figure 6.1 depicts a simple decision tree that uses target total energy (i.e., potential plus kinetic energy) to separate tactical ballistic missiles from air-breathing targets (ABTs).
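A sketch of a Figure 6.1-style energy test, assuming specific total energy (kinetic plus potential per unit mass) as the feature; the threshold value is an illustrative assumption, not a number from the figure:

```python
# Hard two-way decision on specific total energy: high-energy targets are
# declared tactical ballistic missiles (TBMs), low-energy targets are
# air-breathing targets (ABTs). The 2 MJ/kg threshold is hypothetical.

G = 9.81  # gravitational acceleration, m/s^2

def specific_energy(speed_mps, altitude_m):
    """Total energy per unit mass: v^2/2 + g*h (J/kg)."""
    return 0.5 * speed_mps ** 2 + G * altitude_m

def classify_by_energy(speed_mps, altitude_m, threshold=2.0e6):
    if specific_energy(speed_mps, altitude_m) > threshold:
        return "TBM"
    return "ABT"

print(classify_by_energy(250.0, 10000.0))    # airliner-like track: "ABT"
print(classify_by_energy(2000.0, 80000.0))   # exo-atmospheric track: "TBM"
```

A fielded tree would add further nodes below each branch (for example, refining the ABT branch by speed and altitude), following the rule above of using the most discriminating feature first.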

Figure 6.1 A Typical Decision Tree Classifier

6.6.4 Rule-Based Classifiers

Rule-based algorithms can be used as target classifiers. These may or may not use quantitative features and can make “hard” or “soft” decisions, unlike simple decision trees that only make “hard” decisions (e.g., a target is in class A or class B, not in both classes). These rules are usually logical functions such as “if-then-else.” Examples of rule-based classifier constructs are:

If {speed is slow}
Then {target is a helicopter or unmanned aerial vehicle (UAV)}   (6.10)
Else {target is a tank or ship}

or,

If {speed > v_p}
Then {target is jet-powered}   (6.11)
Else {target is propeller-powered}.

As can be seen, either qualitative or quantitative rules can be used in such a classifier. This also provides the ability to use so-called fuzzy logic or neural-like processes. The primary drawback of these classifiers is from an algorithm-training and analysis perspective, since there is no systematic analytical method for designing or analyzing them.
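Constructs (6.10) and (6.11) translate directly into code; the speed breakpoints below are illustrative assumptions, since the text leaves the rules qualitative:

```python
# If-then-else rules (6.10) and (6.11) as code. V_SLOW and V_P are
# hypothetical breakpoints standing in for the qualitative "slow" and
# the propeller/jet speed v_p of the text.

V_SLOW = 60.0   # m/s, hypothetical "slow" breakpoint
V_P = 130.0     # m/s, hypothetical propeller/jet breakpoint

def rule_6_10(speed_mps):
    if speed_mps < V_SLOW:
        return "helicopter or UAV"
    return "tank or ship"

def rule_6_11(speed_mps):
    if speed_mps > V_P:
        return "jet-powered"
    return "propeller-powered"

print(rule_6_10(40.0))    # prints "helicopter or UAV"
print(rule_6_11(300.0))   # prints "jet-powered"
```

A "soft" variant would return a membership grade between 0 and 1 near each breakpoint instead of a hard label, which is where the fuzzy-logic extension mentioned above comes in.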

6.6.5 Compound Classifiers


The classifiers discussed in the preceding sections are some commonly used
ones; there are many more defined in [2, 3]. Another possible classifier type is
based on the combination of one or more of these (and other) algorithms to cre-
ate a “compound” classifier.
Often, target classification processing might use a decision tree at a high level as the overall classifier structure. Each node can then employ a different type of classifier, such as a Bayes', D-S, or rule-based approach. Since each of the classifiers discussed in this chapter has strengths and weaknesses, the best approach is to select the classifier best suited to separating or classifying targets at the particular stage of classification processing, based on performance, efficiency, and so on. This approach can yield a very powerful solution technique for the target classification, discrimination, and identification problems encountered in radar applications.

6.7 CLASSIFICATION OF AIR TARGETS


Air target classification can use any or all of the classifier types described in Section 6.6. The desired end result is to categorize the tracked targets into distinct classes or types so that subsequent radar processing can be performed, such as interceptor support in the case of fire control systems.

In addition to the kinematics, signature, and context-based features described in Section 6.3, additional data are often available for classifying air targets. One type of data is Identification, Friend or Foe (IFF), which is available from cooperative targets that carry an IFF transponder. This allows easier classification of friendly aircraft and ships. Another specific context-like feature is procedural in nature. Operational rules such as flight corridors can be defined to control and identify friendly aircraft by requiring that they fly within these corridors (an exception being the case of damaged aircraft that have limited flight capability or where safety overrules the use of corridors). These types of techniques, along with IFF, represent powerful target ID features.
Air-breathing targets exhibit a number of specific kinematics and signature
features, including:

• Speed, acceleration, altitude, and altitude-rate
• Observed maneuver capability
• RCS
• Estimated size
• Target shape.

The above features can be used with the Bayesian and non-Bayesian classifiers described in Section 6.6 to decide on the likely target class, such as:
• Aircraft
• UAV
• Helicopter
• Cruise missile
• Other.

Moreover, these techniques can also be used to refine categories to types, or to perform the ID function, such as determining airframe type.
The target classes and types listed above, along with their associated posterior
probabilities (when using Bayes-type classifiers) can be provided to the fire con-
trol system.

6.8 CLASSIFICATION OF BALLISTIC MISSILE TARGETS


The classification of ballistic missiles (BMs) is much different from that of air targets. Although some similar target features are employed, their values and specific usage differ. BM targets can exhibit features such as:

• Speed, acceleration, altitude, and altitude-rate
• Observed maneuver capability
• RCS
• Size.

These and other features can be used, preferably by the Bayesian classifiers described in Section 6.6, to decide on the likely target class, such as:
• Theater or tactical BMs
• Intermediate-range BMs
• Intercontinental BMs.

Discrimination techniques can then be used to further refine categories to types.
The target classes and types listed above, along with their associated posterior
probabilities (when using Bayes-type classifiers) are provided to the C2BMC or
ship combat system for use in computing intercept solutions.

6.9 HIT OR KILL ASSESSMENT


For systems that allow a shoot-look-shoot firing doctrine when battle space and
timeline permit, hit or kill assessment is a valuable radar function. Successful de-
termination of the effectiveness of an intercept can avoid wasting expensive in-
terceptors, or can improve the probability of negation by allocating additional
interceptors when available and feasible.
Hit or kill assessment (KA) is much the same as target classification, except
that here the classes of interest are:

• Hit
• Kill
• Miss.

Like the target classification problem, hit or kill assessment can employ any of the classifiers described in Section 6.6.

6.10 PERFORMANCE PREDICTION


Back-of-the-envelope calculations of classification performance are valuable for validating correct operation of a target classifier. One method used for estimating classification performance is the K-factor, defined as:

K = (μ_2 − μ_1) / √[ (σ_1^2 + σ_2^2) / 2 ],   (6.12)

where μ_1, μ_2, σ_1^2, and σ_2^2 are the means and variances of the feature under target hypotheses 1 and 2, respectively. Since the K-factor is a normalized statistical distance, if
the underlying probability densities for the feature distributions are Gaussian, then the probability of correct classification can easily be calculated using the appropriate K-factor from equation (6.12) or (6.13), as described in the following paragraph.
When multiple statistically independent features are used by a classifier, an aggregate K-factor can be calculated:

K_TOTAL^2 = K_1^2 + K_2^2 + K_3^2 + ... + K_M^2,   (6.13)

where K_1 through K_M are the individual K-factors for the M features, each calculated using equation (6.12).
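Equations (6.12) and (6.13) can be exercised as follows. The feature statistics are illustrative assumptions, and the final line converts the aggregate K-factor to a probability of correct classification under the additional assumptions of equal-variance Gaussian features and an equal-prior midpoint threshold (a common rule of thumb, not stated in the text):

```python
import math

# K-factor calculations per equations (6.12) and (6.13), with hypothetical
# speed and size statistics for two target hypotheses.

def k_factor(mu1, mu2, var1, var2):
    """Equation (6.12): normalized statistical distance between hypotheses."""
    return abs(mu2 - mu1) / math.sqrt(0.5 * (var1 + var2))

def k_total(k_factors):
    """Equation (6.13): aggregate K over statistically independent features."""
    return math.sqrt(sum(k * k for k in k_factors))

k_speed = k_factor(200.0, 300.0, 50.0 ** 2, 50.0 ** 2)   # K = 2.0
k_size = k_factor(10.0, 16.0, 4.0 ** 2, 4.0 ** 2)        # K = 1.5
k_agg = k_total([k_speed, k_size])
print(round(k_agg, 2))   # sqrt(4 + 2.25) = 2.5

# Equal-variance, equal-prior Gaussian case: Pc ~ Phi(K/2), where Phi is
# the standard normal CDF (an assumed rule of thumb, not from the text).
p_correct = 0.5 * (1.0 + math.erf(k_agg / (2.0 * math.sqrt(2.0))))
print(round(p_correct, 3))   # 0.894
```

An aggregate K around 2.5 thus corresponds to roughly 90 percent correct classification under these assumptions, which is the kind of sanity check the back-of-the-envelope method above is intended to provide.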

6.11 REFERENCES
[1] A. P. Dempster, et al., Classic Works of the Dempster-Shafer Theory of Belief Functions, Springer, 2007.
[2] R. Duda, et al., Pattern Classification, 2nd Edition, Wiley-Interscience, 2000.
[3] K. Fukunaga, Introduction to Statistical Pattern Recognition, 2nd Edition, Academic Press, 1990.
[4] A. Gelb, Applied Optimal Estimation, MIT Press, 1974.
[5] S. Theodoridis & K. Koutroumbas, Pattern Recognition, 2nd Edition, Academic Press, 2003.
[6] G. Shafer, A Mathematical Theory of Evidence, Princeton University Press, 1976.
