Data Mining:
Concepts and Techniques
— Slides for Textbook —
— Chapter 7 —
©Jiawei Han and Micheline Kamber
Department of Computer Science
University of Illinois at Urbana-Champaign
www.cs.uiuc.edu/~hanj
Chapter 7. Classification and Prediction
What is classification? What is prediction?
Issues regarding classification and prediction
Classification by decision tree induction
Bayesian Classification
Classification by Neural Networks
Classification by Support Vector Machines (SVM)
Classification based on concepts from association rule
mining
Other Classification Methods
Prediction
Classification accuracy
Summary
Classification vs. Prediction
Classification:
predicts categorical class labels (discrete or nominal)
classifies data (constructs a model) based on the training
set and the values (class labels) in a classifying attribute
and uses it in classifying new data
Prediction:
models continuous-valued functions, i.e., predicts
unknown or missing values
Typical Applications
credit approval
target marketing
medical diagnosis
treatment effectiveness analysis
Classification—A Two-Step Process
Model construction: describing a set of predetermined classes
Each tuple/sample is assumed to belong to a predefined class,
as determined by the class label attribute
The set of tuples used for model construction is the training set
The model is represented as classification rules, decision trees,
or mathematical formulae
Model usage: for classifying future or unknown objects
Estimate accuracy of the model
The known label of a test sample is compared with the
classified result from the model
Accuracy rate is the percentage of test set samples that are
correctly classified by the model
Test set is independent of training set, otherwise over-fitting
will occur
If the accuracy is acceptable, use the model to classify data
tuples whose class labels are not known
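A minimal sketch of these two steps, assuming scikit-learn as the toolkit (the slides do not prescribe one); the tuples and the numeric attribute encoding are illustrative:

```python
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X = [[3, 0], [7, 0], [2, 1], [7, 1], [6, 0], [3, 1]]   # e.g., (years, rank code)
y = ["no", "yes", "yes", "yes", "no", "no"]            # class label attribute

# Step 1: model construction on a training set independent of the test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33)
model = DecisionTreeClassifier().fit(X_train, y_train)

# Step 2: estimate accuracy on the held-out test set, then classify new data
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
print("new tuple:", model.predict([[4, 1]]))
```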
Classification Process (1): Model Construction

Training data are fed to a classification algorithm, which outputs the classifier (model):

NAME  RANK            YEARS  TENURED
Mike  Assistant Prof  3      no
Mary  Assistant Prof  7      yes
Bill  Professor       2      yes
Jim   Associate Prof  7      yes
Dave  Assistant Prof  6      no
Anne  Associate Prof  3      no

Learned model (as a rule):
IF rank = 'professor' OR years > 6 THEN tenured = 'yes'
Classification Process (2): Use the Model in Prediction

The classifier is evaluated on testing data, then applied to unseen data, e.g., (Jeff, Professor, 4) -> Tenured?

NAME     RANK            YEARS  TENURED
Tom      Assistant Prof  2      no
Merlisa  Associate Prof  7      no
George   Professor       5      yes
Joseph   Assistant Prof  7      yes
Supervised vs. Unsupervised
Learning
Supervised learning (classification)
Supervision: The training data (observations,
measurements, etc.) are accompanied by labels
indicating the class of the observations
New data is classified based on the training set
Unsupervised learning (clustering)
The class labels of training data are unknown
Given a set of measurements, observations, etc. with
the aim of establishing the existence of classes or
clusters in the data
Chapter 7. Classification and Prediction
What is classification? What is prediction?
Issues regarding classification and prediction
Classification by decision tree induction
Bayesian Classification
Classification by Neural Networks
Classification by Support Vector Machines (SVM)
Classification based on concepts from association rule
mining
Other Classification Methods
Prediction
Classification accuracy
Summary
Issues Regarding Classification and Prediction
(1): Data Preparation
Data cleaning
Preprocess data in order to reduce noise and handle
missing values
Relevance analysis (feature selection)
Remove the irrelevant or redundant attributes
Data transformation
Generalize and/or normalize data
Issues regarding classification and prediction
(2): Evaluating Classification Methods
Predictive accuracy
Speed and scalability
time to construct the model
time to use the model
Robustness
handling noise and missing values
Scalability
efficiency in disk-resident databases
Interpretability:
understanding and insight provided by the model
Goodness of rules
decision tree size
compactness of classification rules
Chapter 7. Classification and Prediction
What is classification? What is prediction?
Issues regarding classification and prediction
Classification by decision tree induction
Bayesian Classification
Classification by Neural Networks
Classification by Support Vector Machines (SVM)
Classification based on concepts from association rule
mining
Other Classification Methods
Prediction
Classification accuracy
Summary
Training Dataset

This follows an example from Quinlan's ID3.

age    income  student  credit_rating  buys_computer
<=30   high    no       fair           no
<=30   high    no       excellent      no
31…40  high    no       fair           yes
>40    medium  no       fair           yes
>40    low     yes      fair           yes
>40    low     yes      excellent      no
31…40  low     yes      excellent      yes
<=30   medium  no       fair           no
<=30   low     yes      fair           yes
>40    medium  yes      fair           yes
<=30   medium  yes      excellent      yes
31…40  medium  no       excellent      yes
31…40  high    yes      fair           yes
>40    medium  no       excellent      no
Output: A Decision Tree for "buys_computer"

age?
  <=30  -> student?
             no  -> no
             yes -> yes
  31…40 -> yes
  >40   -> credit_rating?
             excellent -> no
             fair      -> yes
Algorithm for Decision Tree Induction
Basic algorithm (a greedy algorithm)
Tree is constructed in a top-down recursive divide-and-conquer
manner
At start, all the training examples are at the root
Attributes are categorical (if continuous-valued, they are
discretized in advance)
Examples are partitioned recursively based on selected attributes
Test attributes are selected on the basis of a heuristic or
statistical measure (e.g., information gain)
Conditions for stopping partitioning
All samples for a given node belong to the same class
There are no remaining attributes for further partitioning –
majority voting is employed for classifying the leaf
There are no samples left
Attribute Selection Measure:
Information Gain (ID3/C4.5)
Select the attribute with the highest information gain
S contains $s_i$ tuples of class $C_i$ for $i = 1, \dots, m$
Information required to classify an arbitrary tuple:

$I(s_1, s_2, \dots, s_m) = -\sum_{i=1}^{m} \frac{s_i}{s} \log_2 \frac{s_i}{s}$

Entropy of attribute A with values $\{a_1, a_2, \dots, a_v\}$:

$E(A) = \sum_{j=1}^{v} \frac{s_{1j} + \dots + s_{mj}}{s} \, I(s_{1j}, \dots, s_{mj})$

Information gained by branching on attribute A:

$Gain(A) = I(s_1, s_2, \dots, s_m) - E(A)$
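A short pure-Python check of these formulas (a sketch; the class counts come from the buys_computer training set above):

```python
from math import log2

def info(counts):
    """I(s1,...,sm) = -sum (si/s) * log2(si/s), the expected information."""
    s = sum(counts)
    return -sum(si / s * log2(si / s) for si in counts if si > 0)

# buys_computer: 9 "yes", 5 "no"  ->  I(9,5) = 0.940
print(round(info([9, 5]), 3))

# Entropy of "age": partitions (<=30: 2 yes/3 no), (31..40: 4/0), (>40: 3/2)
partitions = [[2, 3], [4, 0], [3, 2]]
s = sum(sum(p) for p in partitions)
e_age = sum(sum(p) / s * info(p) for p in partitions)
print(round(info([9, 5]) - e_age, 3))   # Gain(age) = 0.246
```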
Attribute Selection by Information
Gain Computation
Class P: buys_computer = "yes" (p = 9 tuples)
Class N: buys_computer = "no" (n = 5 tuples)

$I(p, n) = I(9, 5) = 0.940$

Compute the entropy for age:

age    p_i  n_i  I(p_i, n_i)
<=30   2    3    0.971
31…40  4    0    0
>40    3    2    0.971

$E(age) = \frac{5}{14} I(2,3) + \frac{4}{14} I(4,0) + \frac{5}{14} I(3,2) = 0.694$

Here $\frac{5}{14} I(2,3)$ means "age <= 30" covers 5 of the 14 samples, with 2 yes's and 3 no's. Hence

$Gain(age) = I(p, n) - E(age) = 0.246$

Similarly,

$Gain(income) = 0.029$
$Gain(student) = 0.151$
$Gain(credit\_rating) = 0.048$
Other Attribute Selection Measures
Gini index (CART, IBM IntelligentMiner)
All attributes are assumed continuous-valued
Assume there exist several possible split values for
each attribute
May need other tools, such as clustering, to get the
possible split values
Can be modified for categorical attributes
Gini Index (IBM IntelligentMiner)
If a data set T contains examples from n classes, the gini index gini(T) is defined as

$gini(T) = 1 - \sum_{j=1}^{n} p_j^2$

where $p_j$ is the relative frequency of class j in T.
If T is split into two subsets T1 and T2 with sizes N1 and N2 respectively, the gini index of the split data is defined as

$gini_{split}(T) = \frac{N_1}{N} gini(T_1) + \frac{N_2}{N} gini(T_2)$

The attribute that provides the smallest $gini_{split}(T)$ is chosen to split the node (need to enumerate all possible splitting points for each attribute).
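A minimal sketch of evaluating one candidate binary split with these formulas (pure Python; the labels are illustrative):

```python
def gini(labels):
    """gini(T) = 1 - sum_j p_j^2, with p_j the relative frequency of class j in T."""
    n = len(labels)
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def gini_split(t1, t2):
    """Size-weighted gini of a binary split; pick the split minimizing this."""
    n = len(t1) + len(t2)
    return len(t1) / n * gini(t1) + len(t2) / n * gini(t2)

left, right = ["yes", "yes", "no"], ["no", "no", "no", "yes"]
print(round(gini_split(left, right), 3))
```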
Extracting Classification Rules from Trees
Represent the knowledge in the form of IF-THEN rules
One rule is created for each path from the root to a leaf
Each attribute-value pair along a path forms a conjunction
The leaf node holds the class prediction
Rules are easier for humans to understand
Example
IF age = "<=30" AND student = "no" THEN buys_computer = "no"
IF age = "<=30" AND student = "yes" THEN buys_computer = "yes"
IF age = "31…40" THEN buys_computer = "yes"
IF age = ">40" AND credit_rating = "excellent" THEN buys_computer = "no"
IF age = ">40" AND credit_rating = "fair" THEN buys_computer = "yes"
Avoid Overfitting in Classification
Overfitting: An induced tree may overfit the training data
Too many branches, some may reflect anomalies due
to noise or outliers
Poor accuracy for unseen samples
Two approaches to avoid overfitting
Prepruning: Halt tree construction early—do not split a
node if this would result in the goodness measure
falling below a threshold
Difficult to choose an appropriate threshold
Postpruning: Remove branches from a “fully grown”
tree—get a sequence of progressively pruned trees
Use a set of data different from the training data to
decide which is the “best pruned tree”
Approaches to Determine the Final
Tree Size
Separate training (2/3) and testing (1/3) sets
Use cross validation, e.g., 10-fold cross validation
Use all the data for training
but apply a statistical test (e.g., chi-square) to
estimate whether expanding or pruning a node may
improve the entire distribution
Use minimum description length (MDL) principle
halting growth of the tree when the encoding is
minimized
Enhancements to basic decision
tree induction
Allow for continuous-valued attributes
Dynamically define new discrete-valued attributes that
partition the continuous attribute value into a discrete
set of intervals
Handle missing attribute values
Assign the most common value of the attribute
Assign probability to each of the possible values
Attribute construction
Create new attributes based on existing ones that are
sparsely represented
This reduces fragmentation, repetition, and replication
Classification in Large Databases
Classification—a classical problem extensively studied by
statisticians and machine learning researchers
Scalability: Classifying data sets with millions of examples
and hundreds of attributes with reasonable speed
Why decision tree induction in data mining?
relatively faster learning speed (than other classification
methods)
convertible to simple and easy to understand
classification rules
can use SQL queries for accessing databases
comparable classification accuracy with other methods
Scalable Decision Tree Induction
Methods in Data Mining Studies
SLIQ (EDBT’96 — Mehta et al.)
builds an index for each attribute and only class list and
the current attribute list reside in memory
SPRINT (VLDB’96 — J. Shafer et al.)
constructs an attribute list data structure
PUBLIC (VLDB’98 — Rastogi & Shim)
integrates tree splitting and tree pruning: stop growing
the tree earlier
RainForest (VLDB’98 — Gehrke, Ramakrishnan & Ganti)
separates the scalability aspects from the criteria that
determine the quality of the tree
builds an AVC-list (attribute, value, class label)
Data Cube-Based Decision-Tree
Induction
Integration of generalization with decision-tree induction
(Kamber et al’97).
Classification at primitive concept levels
E.g., precise temperature, humidity, outlook, etc.
Low-level concepts, scattered classes, bushy
classification-trees
Semantic interpretation problems.
Cube-based multi-level classification
Relevance analysis at multi-levels.
Information-gain analysis with dimension + level.
Presentation of Classification Results
Visualization of a Decision Tree in
SGI/MineSet 3.0
Interactive Visual Mining by
Perception-Based Classification (PBC)
Chapter 7. Classification and Prediction
What is classification? What is prediction?
Issues regarding classification and prediction
Classification by decision tree induction
Bayesian Classification
Classification by Neural Networks
Classification by Support Vector Machines (SVM)
Classification based on concepts from association rule
mining
Other Classification Methods
Prediction
Classification accuracy
Summary
Bayesian Classification: Why?
Probabilistic learning: Calculate explicit probabilities for
hypotheses; among the most practical approaches to certain
types of learning problems
Incremental: Each training example can incrementally
increase/decrease the probability that a hypothesis is
correct. Prior knowledge can be combined with observed
data.
Probabilistic prediction: Predict multiple hypotheses,
weighted by their probabilities
Standard: Even when Bayesian methods are computationally
intractable, they can provide a standard of optimal decision
making against which other methods can be measured
Bayes' Theorem: Basics
Let X be a data sample whose class label is unknown
Let H be a hypothesis that X belongs to class C
For classification problems, determine P(H|X): the
probability that the hypothesis holds given the observed
data sample X
P(H): prior probability of hypothesis H (i.e. the initial
probability before we observe any data, reflects the
background knowledge)
P(X): probability that sample data is observed
P(X|H) : probability of observing the sample X, given that
the hypothesis holds
Bayes' Theorem

Given training data X, the posterior probability of a hypothesis H, P(H|X), follows Bayes' theorem:

$P(H|X) = \frac{P(X|H)\, P(H)}{P(X)}$

Informally, this can be written as: posterior = likelihood × prior / evidence
MAP (maximum a posteriori) hypothesis:

$h_{MAP} = \arg\max_{h \in H} P(h|D) = \arg\max_{h \in H} P(D|h)\, P(h)$
Practical difficulty: require initial knowledge of many
probabilities, significant computational cost
Naïve Bayes Classifier
A simplified assumption: attributes are conditionally independent:

$P(X|C_i) = \prod_{k=1}^{n} P(x_k | C_i)$

The probability of observing, say, two attribute values x1 and x2 given class Ci is the product of the probabilities of each value taken separately, given the same class: $P([x_1, x_2] | C_i) = P(x_1 | C_i) \cdot P(x_2 | C_i)$
No dependence relation between attributes
Greatly reduces the computation cost: only count the class distribution
Once the probability P(X|Ci) is known, assign X to the class with maximum P(X|Ci) · P(Ci)
Training dataset

Classes: C1: buys_computer = "yes"; C2: buys_computer = "no"

Data sample to classify:
X = (age <= 30, income = medium, student = yes, credit_rating = fair)

age    income  student  credit_rating  buys_computer
<=30   high    no       fair           no
<=30   high    no       excellent      no
31…40  high    no       fair           yes
>40    medium  no       fair           yes
>40    low     yes      fair           yes
>40    low     yes      excellent      no
31…40  low     yes      excellent      yes
<=30   medium  no       fair           no
<=30   low     yes      fair           yes
>40    medium  yes      fair           yes
<=30   medium  yes      excellent      yes
31…40  medium  no       excellent      yes
31…40  high    yes      fair           yes
>40    medium  no       excellent      no
Naïve Bayesian Classifier: Example

Compute P(X|Ci) for each class
P(age="<=30" | buys_computer="yes") = 2/9 = 0.222
P(age="<=30" | buys_computer="no") = 3/5 = 0.6
P(income="medium" | buys_computer="yes") = 4/9 = 0.444
P(income="medium" | buys_computer="no") = 2/5 = 0.4
P(student="yes" | buys_computer="yes") = 6/9 = 0.667
P(student="yes" | buys_computer="no") = 1/5 = 0.2
P(credit_rating="fair" | buys_computer="yes") = 6/9 = 0.667
P(credit_rating="fair" | buys_computer="no") = 2/5 = 0.4

X = (age <= 30, income = medium, student = yes, credit_rating = fair)

P(X|Ci): P(X | buys_computer="yes") = 0.222 × 0.444 × 0.667 × 0.667 = 0.044
         P(X | buys_computer="no") = 0.6 × 0.4 × 0.2 × 0.4 = 0.019
P(X|Ci)·P(Ci): P(X | buys_computer="yes") · P(buys_computer="yes") = 0.044 × 9/14 = 0.028
               P(X | buys_computer="no") · P(buys_computer="no") = 0.019 × 5/14 = 0.007

X belongs to class "buys_computer = yes"
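The same computation as a pure-Python sketch (the priors 9/14 and 5/14 come from the training set above):

```python
# Reproducing the hand computation; no library assumed.
likelihood_yes = 0.222 * 0.444 * 0.667 * 0.667   # P(X|yes) = 0.044
likelihood_no  = 0.600 * 0.400 * 0.200 * 0.400   # P(X|no)  = 0.019
prior_yes, prior_no = 9 / 14, 5 / 14             # class frequencies

post_yes = likelihood_yes * prior_yes            # 0.028
post_no  = likelihood_no  * prior_no             # 0.007
print("yes" if post_yes > post_no else "no")     # -> "yes"
```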
Naïve Bayesian Classifier: Comments
Advantages :
Easy to implement
Good results obtained in most of the cases
Disadvantages
Assumption: class conditional independence, which can cost accuracy
In practice, dependencies exist among variables
E.g., hospital patient data: profile attributes (age, family history, etc.), symptoms (fever, cough, etc.), and diseases (lung cancer, diabetes, etc.) are interdependent
Dependencies among these cannot be modeled by a Naïve Bayesian classifier
How to deal with these dependencies?
Bayesian Belief Networks
Bayesian Networks
A Bayesian belief network allows a subset of the variables to be conditionally independent
A graphical model of causal relationships
Represents dependency among the variables
Gives a specification of the joint probability distribution
Nodes: random variables
Links: dependency
[figure: X and Y are the parents of Z, and Y is the parent of P; there is no direct dependency between Z and P]
The graph has no loops or cycles
Bayesian Belief Network: An Example

[figure: a DAG over the variables FamilyHistory, Smoker, LungCancer, Emphysema, PositiveXRay, and Dyspnea]

The conditional probability table for the variable LungCancer shows the conditional probability for each possible combination of its parents (FamilyHistory, Smoker):

      (FH, S)  (FH, ~S)  (~FH, S)  (~FH, ~S)
LC    0.8      0.5       0.7       0.1
~LC   0.2      0.5       0.3       0.9

A Bayesian belief network specifies the joint probability as

$P(z_1, \dots, z_n) = \prod_{i=1}^{n} P(z_i \mid Parents(Z_i))$
Learning Bayesian Networks
Several cases
Given both the network structure and all variables
observable: learn only the CPTs
Network structure known, some hidden variables:
method of gradient descent, analogous to neural
network learning
Network structure unknown, all variables observable:
search through the model space to reconstruct graph
topology
Unknown structure, all hidden variables: no good
algorithms known for this purpose
D. Heckerman, Bayesian networks for data mining
Chapter 7. Classification and Prediction
What is classification? What is prediction?
Issues regarding classification and prediction
Classification by decision tree induction
Bayesian Classification
Classification by Neural Networks
Classification by Support Vector Machines (SVM)
Classification based on concepts from association rule
mining
Other Classification Methods
Prediction
Classification accuracy
Summary
Classification
Classification:
predicts categorical class labels
Typical Applications
{credit history, salary}-> credit approval ( Yes/No)
{Temp, Humidity} --> Rain (Yes/No)
Mathematically: $x \in X = \{0,1\}^n$, $y \in Y = \{0,1\}$, and the classifier is a function $h: X \to Y$ with $y = h(x)$
Linear Classification

Binary classification problem
[figure: a red line separating points of class 'x' (above) from points of class 'o' (below)]
The data above the red line belongs to class 'x'; the data below the red line belongs to class 'o'
Examples: SVM, Perceptron, Probabilistic Classifiers
Discriminative Classifiers
Advantages
prediction accuracy is generally high
(as compared to Bayesian methods – in general)
robust, works when training examples contain errors
fast evaluation of the learned target function
(Bayesian networks are normally slow)
Criticism
long training time
difficult to understand the learned function (weights)
(Bayesian networks can be used easily for pattern discovery)
not easy to incorporate domain knowledge
(easy in the form of priors on the data or distributions)
Neural Networks
Analogy to Biological Systems (Indeed a great example
of a good learning system)
Massive Parallelism allowing for computational
efficiency
The first learning algorithm came in 1959 (Rosenblatt),
who suggested that if a target output value is provided
for a single neuron with fixed inputs, one can
incrementally change weights to learn to produce these
outputs using the perceptron learning rule
A Neuron

[figure: inputs x_0, x_1, …, x_n with weights w_0, w_1, …, w_n feed a weighted sum with bias -μ_k, followed by an activation function f that produces the output y]

The n-dimensional input vector x is mapped into variable y by means of the scalar product and a nonlinear function mapping. For example:

$y = \mathrm{sign}\left(\sum_{i=0}^{n} w_i x_i - \mu_k\right)$
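A minimal sketch of this neuron in Python (the weights, inputs, and bias μ_k are illustrative):

```python
def neuron(x, w, mu):
    """y = sign(sum_i w_i * x_i - mu); returns +1 or -1."""
    s = sum(wi * xi for wi, xi in zip(w, x)) - mu
    return 1 if s >= 0 else -1

print(neuron(x=[1.0, 0.5, -0.2], w=[0.4, 0.9, 0.3], mu=0.5))   # -> 1
```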
Multi-Layer Perceptron

[figure: input vector x_i feeds the input nodes; weights w_ij connect input, hidden, and output nodes to produce the output vector]

For each unit j:

Net input: $I_j = \sum_i w_{ij} O_i + \theta_j$

Output (sigmoid): $O_j = \frac{1}{1 + e^{-I_j}}$

Error at an output node: $Err_j = O_j (1 - O_j)(T_j - O_j)$

Error at a hidden node: $Err_j = O_j (1 - O_j) \sum_k Err_k \, w_{jk}$

Updates with learning rate (l): $w_{ij} \leftarrow w_{ij} + (l)\, Err_j O_i$ and $\theta_j \leftarrow \theta_j + (l)\, Err_j$
Network Training
The ultimate objective of training
obtain a set of weights that makes almost all the
tuples in the training data classified correctly
Steps
Initialize weights with random values
Feed the input tuples into the network one by one
For each unit
Compute the net input to the unit as a linear combination
of all the inputs to the unit
Compute the output value using the activation function
Compute the error
Update the weights and the bias
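A sketch of one pass over the tuples using the update rules from the multi-layer perceptron slide; NumPy, the 2-2-1 architecture, the XOR-style data, and the learning rate l are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([[0.], [1.], [1.], [0.]])

W1, b1 = rng.normal(size=(2, 2)), np.zeros(2)   # input -> hidden (random init)
W2, b2 = rng.normal(size=(2, 1)), np.zeros(1)   # hidden -> output
l = 0.5
sigmoid = lambda I: 1.0 / (1.0 + np.exp(-I))

for x, t in zip(X, T):                 # feed the input tuples in one by one
    Oh = sigmoid(x @ W1 + b1)          # hidden outputs via activation function
    Oo = sigmoid(Oh @ W2 + b2)         # network output
    err_o = Oo * (1 - Oo) * (t - Oo)   # Err_j at the output node
    err_h = Oh * (1 - Oh) * (W2 @ err_o)           # Err_j backpropagated
    W2 += l * np.outer(Oh, err_o); b2 += l * err_o # update weights and bias
    W1 += l * np.outer(x, err_h);  b1 += l * err_h
```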
Chapter 7. Classification and Prediction
What is classification? What is prediction?
Issues regarding classification and prediction
Classification by decision tree induction
Bayesian Classification
Classification by Neural Networks
Classification by Support Vector Machines (SVM)
Classification based on concepts from association rule
mining
Other Classification Methods
Prediction
Classification accuracy
Summary
SVM – Support Vector Machines

[figure: two linear separators for the same two-class data, one with a small margin and one with a large margin; the training points lying on the margin are the support vectors]
SVM – Cont.
Linear Support Vector Machine
Given a set of points $x_i \in \mathbb{R}^n$ with labels $y_i \in \{-1, 1\}$,
the SVM finds a hyperplane defined by the pair (w, b),
where w is the normal to the plane and b is the distance from the origin, s.t.

$y_i (x_i \cdot w + b) \geq 1, \quad i = 1, \dots, N$

x – feature vector, b – bias, y – class label, 2/||w|| – margin width
SVM – Cont.
What if the data is not linearly separable?
Project the data to high dimensional space where it is
linearly separable and then we can use linear SVM –
(Using Kernels)
[figure: three 1-D points at -1, 0, +1 with labels -, +, - are not linearly separable; after projection to 2-D they become separable by a line]
Non-Linear SVM

Classification using SVM (w, b): predict the class of $x_i$ from the sign of

$x_i \cdot w + b$

In the non-linear case we can see this as the sign of

$K(x_i, w) + b$

Kernel: can be thought of as doing a dot product in some high-dimensional space
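A sketch of this idea with scikit-learn's SVC (an assumed library choice): an RBF kernel handles the 1-D example above, which no linear classifier in the input space can separate:

```python
import numpy as np
from sklearn.svm import SVC

X = np.array([[-1.0], [0.0], [1.0]])   # the 1-D example above
y = np.array([-1, 1, -1])              # the middle point differs in class

clf = SVC(kernel="rbf").fit(X, y)      # implicit high-dimensional dot product
print(clf.predict([[0.1], [-0.9]]))    # points near 0 vs. near -1
```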
Example of Non-linear SVM
Results
SVM vs. Neural Network

SVM:
Relatively new concept
Nice generalization properties
Hard to learn – learned in batch mode using quadratic programming techniques
Using kernels can learn very complex functions

Neural Network:
Quite old
Generalizes well but doesn't have a strong mathematical foundation
Can easily be learned in incremental fashion
To learn complex functions – use multilayer perceptron (not that trivial)
SVM Related Links

http://svm.dcs.rhbnc.ac.uk/
http://www.kernel-machines.org/
C. J. C. Burges. A Tutorial on Support Vector Machines for Pattern Recognition. Data Mining and Knowledge Discovery, 2(2), 1998.
SVMlight – software (in C): http://ais.gmd.de/~thorsten/svm_light
Book: An Introduction to Support Vector Machines. N. Cristianini and J. Shawe-Taylor. Cambridge University Press.
Chapter 7. Classification and Prediction
What is classification? What is prediction?
Issues regarding classification and prediction
Classification by decision tree induction
Bayesian Classification
Classification by Neural Networks
Classification by Support Vector Machines (SVM)
Classification based on concepts from association rule
mining
Other Classification Methods
Prediction
Classification accuracy
Summary
Association-Based Classification
Several methods for association-based classification
ARCS: Quantitative association mining and clustering
of association rules (Lent et al’97)
It beats C4.5 in (mainly) scalability and also accuracy
Associative classification: (Liu et al’98)
It mines high support and high confidence rules in the form of
“cond_set => y”, where y is a class label
CAEP (Classification by aggregating emerging patterns)
(Dong et al’99)
Emerging patterns (EPs): the itemsets whose support
increases significantly from one class to another
Mine EPs based on minimum support and growth rate
Chapter 7. Classification and Prediction
What is classification? What is prediction?
Issues regarding classification and prediction
Classification by decision tree induction
Bayesian Classification
Classification by Neural Networks
Classification by Support Vector Machines (SVM)
Classification based on concepts from association rule
mining
Other Classification Methods
Prediction
Classification accuracy
Summary
Other Classification Methods
k-nearest neighbor classifier
case-based reasoning
Genetic algorithm
Rough set approach
Fuzzy set approaches
Instance-Based Methods
Instance-based learning:
Store training examples and delay the processing
(“lazy evaluation”) until a new instance must be
classified
Typical approaches
k-nearest neighbor approach
Instances represented as points in a Euclidean
space.
Locally weighted regression
Constructs local approximation
Case-based reasoning
Uses symbolic representations and knowledge-
based inference
The k-Nearest Neighbor Algorithm
All instances correspond to points in the n-D space
The nearest neighbors are defined in terms of Euclidean distance
The target function could be discrete- or real-valued
For discrete-valued functions, k-NN returns the most common value among the k training examples nearest to x_q
Voronoi diagram: the decision surface induced by 1-NN for a typical set of training examples
[figure: a query point x_q among '+' and '-' training examples, with the piecewise-linear 1-NN decision surface]
Discussion on the k-NN Algorithm

The k-NN algorithm for continuous-valued target functions
Calculate the mean value of the k nearest neighbors
Distance-weighted nearest neighbor algorithm
Weight the contribution of each of the k neighbors according to its distance to the query point x_q, giving greater weight to closer neighbors:

$w \equiv \frac{1}{d(x_q, x_i)^2}$

Similarly for real-valued target functions
Robust to noisy data by averaging the k nearest neighbors
Curse of dimensionality: distance between neighbors could be dominated by irrelevant attributes
To overcome it, stretch the axes or eliminate the least relevant attributes
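A minimal sketch of distance-weighted k-NN for a discrete-valued target (pure Python; the toy points are illustrative):

```python
from collections import defaultdict

def knn_predict(query, data, k=3):
    """data: list of (point, label) pairs; each neighbor votes with weight 1/d^2."""
    dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5
    neighbors = sorted(data, key=lambda p: dist(query, p[0]))[:k]
    votes = defaultdict(float)
    for x, label in neighbors:
        d = dist(query, x)
        votes[label] += float("inf") if d == 0 else 1.0 / d ** 2
    return max(votes, key=votes.get)

data = [((0, 0), "-"), ((1, 0), "-"), ((3, 3), "+"), ((4, 3), "+"), ((3, 4), "+")]
print(knn_predict((2.5, 2.5), data))   # -> "+"
```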
Case-Based Reasoning
Also uses: lazy evaluation + analyze similar instances
Difference: Instances are not “points in a Euclidean space”
Example: Water faucet problem in CADET (Sycara et al’92)
Methodology
Instances represented by rich symbolic descriptions
(e.g., function graphs)
Multiple retrieved cases may be combined
Tight coupling between case retrieval, knowledge-based
reasoning, and problem solving
Research issues
Indexing based on syntactic similarity measure, and
when failure, backtracking, and adapting to additional
cases
Remarks on Lazy vs. Eager Learning
Instance-based learning: lazy evaluation
Decision-tree and Bayesian classification: eager evaluation
Key differences
Lazy method may consider query instance xq when deciding how to
generalize beyond the training data D
Eager methods cannot, since they have already chosen a global
approximation before seeing the query
Efficiency: Lazy - less time training but more time predicting
Accuracy
Lazy method effectively uses a richer hypothesis space since it uses
many local linear functions to form its implicit global approximation
to the target function
Eager: must commit to a single hypothesis that covers the entire
instance space
Genetic Algorithms
GA: based on an analogy to biological evolution
Each rule is represented by a string of bits
An initial population is created consisting of randomly
generated rules
e.g., IF A1 AND NOT A2 THEN C2 can be encoded as "100"
Based on the notion of survival of the fittest, a new
population is formed to consist of the fittest rules and
their offspring
The fitness of a rule is represented by its classification
accuracy on a set of training examples
Offspring are generated by crossover and mutation
Rough Set Approach
Rough sets are used to approximately or “roughly”
define equivalent classes
A rough set for a given class C is approximated by two
sets: a lower approximation (certain to be in C) and an
upper approximation (cannot be described as not
belonging to C)
Finding the minimal subsets (reducts) of attributes (for
feature reduction) is NP-hard but a discernibility matrix
is used to reduce the computation intensity
Fuzzy Set
Approaches
Fuzzy logic uses truth values between 0.0 and 1.0 to
represent the degree of membership (such as using fuzzy
membership graph)
Attribute values are converted to fuzzy values
e.g., income is mapped into the discrete categories
{low, medium, high} with fuzzy values calculated
For a given new sample, more than one fuzzy value may
apply
Each applicable rule contributes a vote for membership in
the categories
Typically, the truth values for each predicted category are
summed
Chapter 7. Classification and Prediction
What is classification? What is prediction?
Issues regarding classification and prediction
Classification by decision tree induction
Bayesian Classification
Classification by Neural Networks
Classification by Support Vector Machines (SVM)
Classification based on concepts from association rule
mining
Other Classification Methods
Prediction
Classification accuracy
Summary
What Is Prediction?
Prediction is similar to classification
First, construct a model
Second, use model to predict unknown value
Major method for prediction is regression
Linear and multiple regression
Non-linear regression
Prediction is different from classification
Classification predicts a categorical class label
Prediction models continuous-valued functions
Predictive Modeling in Databases
Predictive modeling: Predict data values or construct
generalized linear models based on the database data.
One can only predict value ranges or category distributions
Method outline:
Minimal generalization
Attribute relevance analysis
Generalized linear model construction
Prediction
Determine the major factors which influence the prediction
Data relevance analysis: uncertainty measurement,
entropy analysis, expert judgement, etc.
Multi-level prediction: drill-down and roll-up analysis
Regression Analysis and Log-Linear Models in Prediction

Linear regression: $Y = \alpha + \beta X$
The two parameters $\alpha$ and $\beta$ specify the line and are to be estimated by using the data at hand
Apply the least squares criterion to the known values of $Y_1, Y_2, \dots$ and $X_1, X_2, \dots$
Multiple regression: $Y = b_0 + b_1 X_1 + b_2 X_2$
Many nonlinear functions can be transformed into the above
Log-linear models:
The multi-way table of joint probabilities is approximated by a product of lower-order tables
Probability: $p(a, b, c, d) = \alpha_{ab} \, \beta_{ac} \, \chi_{ad} \, \delta_{bcd}$
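A short sketch of the least squares fit with NumPy (the data are illustrative; np.polyfit fits a degree-1 polynomial, returning slope then intercept):

```python
import numpy as np

# Least-squares estimates of alpha and beta for Y = alpha + beta * X
X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
Y = np.array([2.1, 3.9, 6.2, 8.0, 9.8])
beta, alpha = np.polyfit(X, Y, 1)      # [slope, intercept]
print(alpha, beta)                     # alpha ~ 0.15, beta ~ 1.95
```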
Locally Weighted Regression

Construct an explicit approximation to f over a local region surrounding the query instance $x_q$
Locally weighted linear regression:
The target function f is approximated near $x_q$ using the linear function:

$\hat{f}(x) = w_0 + w_1 a_1(x) + \dots + w_n a_n(x)$

Minimize the squared error over the k nearest neighbors of $x_q$, with a distance-decreasing weight K:

$E(x_q) = \frac{1}{2} \sum_{x \in kNN(x_q)} \left( f(x) - \hat{f}(x) \right)^2 K(d(x_q, x))$

The gradient descent training rule:

$\Delta w_j = \eta \sum_{x \in kNN(x_q)} K(d(x_q, x)) \left( f(x) - \hat{f}(x) \right) a_j(x)$

In most cases, the target function is approximated by a constant, linear, or quadratic function
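A sketch of locally weighted linear regression at a query point, solved in closed form rather than by gradient descent; NumPy, the kernel K(d) = exp(-d²), and k = 5 are assumptions:

```python
import numpy as np

def lwr_predict(xq, X, y, k=5):
    d = np.abs(X - xq)
    idx = np.argsort(d)[:k]                       # k nearest neighbors of xq
    K = np.exp(-d[idx] ** 2)                      # distance-decreasing weights
    A = np.stack([np.ones(k), X[idx]], axis=1)    # design matrix [1, a1(x)]
    W = np.diag(K)
    # weighted least squares: minimize sum K * (f(x) - f_hat(x))^2
    w = np.linalg.solve(A.T @ W @ A, A.T @ W @ y[idx])
    return w[0] + w[1] * xq

X = np.linspace(0, 6, 30)
y = np.sin(X)
print(round(lwr_predict(3.0, X, y), 3))           # close to sin(3.0) ~ 0.141
```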
Prediction: Numerical Data
Prediction: Categorical Data
Chapter 7. Classification and Prediction
What is classification? What is prediction?
Issues regarding classification and prediction
Classification by decision tree induction
Bayesian Classification
Classification by Neural Networks
Classification by Support Vector Machines (SVM)
Classification based on concepts from association rule
mining
Other Classification Methods
Prediction
Classification accuracy
Summary
Classification Accuracy: Estimating Error
Rates
Partition: Training-and-testing
use two independent data sets, e.g., training set
(2/3), test set(1/3)
used for data set with large number of samples
Cross-validation
divide the data set into k subsamples
use k-1 subsamples as training data and one sub-
sample as test data—k-fold cross-validation
for data set with moderate size
Bootstrapping (leave-one-out)
for small size data
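A sketch of k-fold cross-validation; `train_and_test` is a hypothetical callable standing in for any classifier's fit-and-score step:

```python
def k_fold_accuracy(samples, train_and_test, k=10):
    """train_and_test(train, test) -> number of test samples classified correctly."""
    folds = [samples[i::k] for i in range(k)]       # k disjoint subsamples
    correct = 0
    for i in range(k):
        test = folds[i]                             # one subsample as test data
        train = [s for j, f in enumerate(folds) if j != i for s in f]
        correct += train_and_test(train, test)      # k-1 subsamples for training
    return correct / len(samples)                   # estimated accuracy rate
```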
Bagging and Boosting

General idea:
Training data -> classification method (CM) -> classifier C
Altered training data -> CM -> classifier C1
Altered training data -> CM -> classifier C2
……
Aggregation of C, C1, C2, … -> combined classifier C*
Bagging
Given a set S of s samples
Generate a bootstrap sample T from S. Cases in S may not
appear in T or may appear more than once.
Repeat this sampling procedure, getting a sequence of k
independent training sets
A corresponding sequence of classifiers C1,C2,…,Ck is
constructed for each of these training sets, by using the
same classification algorithm
To classify an unknown sample X, let each classifier predict
or vote
The Bagged Classifier C* counts the votes and assigns X to
the class with the “most” votes
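A sketch of this procedure; `fit` is a hypothetical base learner that returns a classifier as a callable:

```python
import random
from collections import Counter

def bagging(S, fit, k=10):
    """S: list of labeled samples; returns the bagged classifier C*."""
    classifiers = []
    for _ in range(k):
        T = [random.choice(S) for _ in S]   # bootstrap: |S| cases, with replacement
        classifiers.append(fit(T))          # same algorithm on each training set
    def C_star(x):
        votes = Counter(c(x) for c in classifiers)
        return votes.most_common(1)[0][0]   # class with the most votes
    return C_star
```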
Boosting Technique — Algorithm
Assign every example an equal weight 1/N
For t = 1, 2, …, T Do
Obtain a hypothesis (classifier) h(t) under the weighting w(t)
Calculate the error of h(t) and re-weight the examples based on the error: each classifier depends on the previous ones, and samples that are incorrectly predicted are weighted more heavily
Normalize w(t+1) to sum to 1
Output a weighted vote of all the hypotheses, with each hypothesis weighted according to its accuracy on the training set
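A sketch of this loop in the AdaBoost style; the error term eps and the classifier weight alpha follow the usual AdaBoost formulas, an assumption since the slide leaves them unspecified, and `fit_weighted` is a hypothetical weighted base learner:

```python
import math

def boost(samples, labels, fit_weighted, T=10):
    n = len(samples)
    w = [1.0 / n] * n                              # equal initial weight 1/N
    ensemble = []
    for _ in range(T):
        h = fit_weighted(samples, labels, w)       # hypothesis h(t) under w(t)
        wrong = {i for i in range(n) if h(samples[i]) != labels[i]}
        eps = sum(w[i] for i in wrong)             # weighted error of h(t)
        if eps == 0 or eps >= 0.5:
            break
        alpha = 0.5 * math.log((1 - eps) / eps)    # weight h(t) by its accuracy
        for i in range(n):                         # misclassified samples gain weight
            w[i] *= math.exp(alpha if i in wrong else -alpha)
        z = sum(w)
        w = [wi / z for wi in w]                   # normalize w(t+1) to sum to 1
        ensemble.append((alpha, h))
    # weighted vote of all the hypotheses
    return lambda x: max(set(labels),
                         key=lambda c: sum(a for a, h in ensemble if h(x) == c))
```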
Bagging and Boosting
Experiments with a New Boosting Algorithm, Freund & Schapire (AdaBoost)
Bagging Predictors, Breiman
Boosting Naïve Bayesian Learning on a large subset of MEDLINE, W. Wilbur
Chapter 7. Classification and Prediction
What is classification? What is prediction?
Issues regarding classification and prediction
Classification by decision tree induction
Bayesian Classification
Classification by Neural Networks
Classification by Support Vector Machines (SVM)
Classification based on concepts from association rule
mining
Other Classification Methods
Prediction
Classification accuracy
Summary
Summary
Classification is an extensively studied problem (mainly in
statistics, machine learning & neural networks)
Classification is probably one of the most widely used
data mining techniques with a lot of extensions
Scalability is still an important issue for database
applications: thus combining classification with database
techniques should be a promising topic
Research directions: classification of non-relational data,
e.g., text, spatial, multimedia, etc.
References (1)
C. Apte and S. Weiss. Data mining with decision trees and decision rules. Future Generation
Computer Systems, 13, 1997.
L. Breiman, J. Friedman, R. Olshen, and C. Stone. Classification and Regression Trees.
Wadsworth International Group, 1984.
C. J. C. Burges. A Tutorial on Support Vector Machines for Pattern Recognition. Data Mining
and Knowledge Discovery, 2(2): 121-168, 1998.
P. K. Chan and S. J. Stolfo. Learning arbiter and combiner trees from partitioned data for
scaling machine learning. In Proc. 1st Int. Conf. Knowledge Discovery and Data Mining
(KDD'95), pages 39-44, Montreal, Canada, August 1995.
U. M. Fayyad. Branching on attribute values in decision tree generation. In Proc. 1994 AAAI
Conf., pages 601-606, AAAI Press, 1994.
J. Gehrke, R. Ramakrishnan, and V. Ganti. Rainforest: A framework for fast decision tree
construction of large datasets. In Proc. 1998 Int. Conf. Very Large Data Bases, pages 416-
427, New York, NY, August 1998.
J. Gehrke, V. Ganti, R. Ramakrishnan, and W.-Y. Loh. BOAT -- Optimistic Decision Tree
Construction. In SIGMOD'99, Philadelphia, Pennsylvania, 1999.
References (2)
M. Kamber, L. Winstone, W. Gong, S. Cheng, and J. Han. Generalization and decision tree
induction: Efficient classification in data mining. In Proc. 1997 Int. Workshop Research
Issues on Data Engineering (RIDE'97), Birmingham, England, April 1997.
B. Liu, W. Hsu, and Y. Ma. Integrating Classification and Association Rule Mining. Proc.
1998 Int. Conf. Knowledge Discovery and Data Mining (KDD'98) New York, NY, Aug. 1998.
W. Li, J. Han, and J. Pei. CMAR: Accurate and Efficient Classification Based on Multiple
Class-Association Rules. Proc. 2001 Int. Conf. on Data Mining (ICDM'01), San Jose, CA,
Nov. 2001.
J. Magidson. The Chaid approach to segmentation modeling: Chi-squared automatic
interaction detection. In R. P. Bagozzi, editor, Advanced Methods of Marketing Research,
pages 118-159. Blackwell Business, Cambridge, Massachusetts, 1994.
M. Mehta, R. Agrawal, and J. Rissanen. SLIQ : A fast scalable classifier for data mining.
(EDBT'96), Avignon, France, March 1996.
References (3)
T. M. Mitchell. Machine Learning. McGraw Hill, 1997.
S. K. Murthy. Automatic Construction of Decision Trees from Data: A Multi-Disciplinary
Survey. Data Mining and Knowledge Discovery, 2(4): 345-389, 1998.
J. R. Quinlan. Induction of decision trees. Machine Learning, 1:81-106, 1986.
J. R. Quinlan. Bagging, boosting, and c4.5. In Proc. 13th Natl. Conf. on Artificial
Intelligence (AAAI'96), 725-730, Portland, OR, Aug. 1996.
R. Rastogi and K. Shim. PUBLIC: A decision tree classifier that integrates building and
pruning. In Proc. 1998 Int. Conf. Very Large Data Bases, 404-415, New York, NY, August
1998.
J. Shafer, R. Agrawal, and M. Mehta. SPRINT : A scalable parallel classifier for data mining.
In Proc. 1996 Int. Conf. Very Large Data Bases, 544-555, Bombay, India, Sept. 1996.
S. M. Weiss and C. A. Kulikowski. Computer Systems that Learn: Classification and
Prediction Methods from Statistics, Neural Nets, Machine Learning, and Expert Systems.
Morgan Kaufman, 1991.
S. M. Weiss and N. Indurkhya. Predictive Data Mining. Morgan Kaufmann, 1997.
www.cs.uiuc.edu/~hanj
Thank you !!!