Decision Tree
Dr. Naveen Saini
Assistant Professor
Department of Information Technology
Indian Institute of Information Technology Allahabad
Uttar Pradesh
[email protected] https://sites.google.com/view/nsaini1
Supervised Learning
▪ Supervised learning is when an algorithm learns from example data and the
associated target responses, which may be numeric values or string labels
such as classes or tags, in order to later predict the correct response
when presented with new examples.
▪ This approach is indeed similar to human learning under the
supervision of a teacher.
▪ The teacher provides good examples for the student to
memorize, and the student then derives general rules from
these specific examples.
Supervised Learning: An Example
Supervised Learning
▪ A majority of practical machine learning uses supervised learning.
▪ In supervised learning, the system tries to learn from the previous
examples that are given.
▪ (On the other hand, in unsupervised learning, the system attempts to find the patterns
directly from the example given.)
▪ Speaking mathematically, supervised learning is where you have both
input variables (X) and output variables (Y) and use an algorithm to
learn the mapping function from the input to the output.
▪ The mapping function is expressed as Y = f(X), as sketched below.
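A minimal sketch of this idea in Python (assuming scikit-learn is available; the feature rows and labels below are made-up illustrative data, not taken from the slides):

# A supervised learner approximates the mapping f in Y = f(X) from
# labelled examples, then predicts the response for new, unseen inputs.
from sklearn.tree import DecisionTreeClassifier

X = [[125, 1], [100, 0], [70, 0], [120, 1], [95, 0], [60, 0]]  # input variables
y = ["No", "No", "No", "No", "Yes", "No"]                      # target responses

model = DecisionTreeClassifier(random_state=0)
model.fit(X, y)                    # learn an approximation of f from (X, y)
print(model.predict([[90, 0]]))    # predict the label of a new example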
Classification: Definition
Given a collection of records (training set)
– Each record contains a set of attributes; one of the attributes is the class
Find a model for the class attribute as a function of the values of the other attributes
Goal: previously unseen records should be assigned a class as accurately as possible
– A test set is used to determine the accuracy of the model. Usually, the
given data set is divided into training and test sets, with the training
set used to build the model and the test set used to validate it, as
sketched below.
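A brief sketch of this training/test protocol (assuming scikit-learn; the records are synthetic and purely illustrative):

# Divide a labelled data set into training and test sets, build the model on
# the training set, and measure its accuracy on the held-out test set.
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X = [[125, 1], [100, 0], [70, 0], [120, 1], [95, 0],
     [60, 0], [220, 1], [85, 0], [75, 0], [90, 0]]
y = ["No", "No", "No", "No", "Yes", "No", "No", "Yes", "No", "Yes"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)
model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print(accuracy_score(y_test, model.predict(X_test)))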
Illustrating Classification Task
[Figure: a learning algorithm performs Induction on the Training Set to learn a Model; the Model is then applied (Deduction) to the Test Set.]

Training Set:
Tid | Attrib1 | Attrib2 | Attrib3 | Class
1   | Yes     | Large   | 125K    | No
2   | No      | Medium  | 100K    | No
3   | No      | Small   | 70K     | No
4   | Yes     | Medium  | 120K    | No
5   | No      | Large   | 95K     | Yes
6   | No      | Medium  | 60K     | No
7   | Yes     | Large   | 220K    | No
8   | No      | Small   | 85K     | Yes
9   | No      | Medium  | 75K     | No
10  | No      | Small   | 90K     | Yes

Test Set:
Tid | Attrib1 | Attrib2 | Attrib3 | Class
11  | No      | Small   | 55K     | ?
12  | Yes     | Medium  | 80K     | ?
13  | Yes     | Large   | 110K    | ?
14  | No      | Small   | 95K     | ?
15  | No      | Large   | 67K     | ?
Examples of Classification Task
– Predicting tumor cells as benign or malignant
– Classifying credit card transactions as legitimate or fraudulent
– Categorizing news stories as finance, weather, entertainment, sports, etc.
Classification Techniques
Support Vector Machines
Decision Tree based Methods
Rule-based Methods
Neural Networks
Naïve Bayes and Bayesian Belief Networks
Example of a Decision Tree
Training Data (Tid, Refund, Marital Status, Taxable Income, Cheat):
1  | Yes | Single   | 125K | No
2  | No  | Married  | 100K | No
3  | No  | Single   | 70K  | No
4  | Yes | Married  | 120K | No
5  | No  | Divorced | 95K  | Yes
6  | No  | Married  | 60K  | No
7  | Yes | Divorced | 220K | No
8  | No  | Single   | 85K  | Yes
9  | No  | Married  | 75K  | No
10 | No  | Single   | 90K  | Yes

Model: Decision Tree (splitting attributes: Refund, MarSt, TaxInc)
Refund?
├─ Yes → NO
└─ No → MarSt?
   ├─ Married → NO
   └─ Single, Divorced → TaxInc?
      ├─ < 80K → NO
      └─ >= 80K → YES
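A small sketch of fitting a decision tree to this training data (assuming pandas and scikit-learn; the categorical attributes are one-hot encoded, and the tree scikit-learn learns need not match the one drawn above exactly):

# Fit a decision tree to the ten-record training set above and print it.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

data = pd.DataFrame({
    "Refund": ["Yes", "No", "No", "Yes", "No", "No", "Yes", "No", "No", "No"],
    "MarSt":  ["Single", "Married", "Single", "Married", "Divorced",
               "Married", "Divorced", "Single", "Married", "Single"],
    "TaxInc": [125, 100, 70, 120, 95, 60, 220, 85, 75, 90],
    "Cheat":  ["No", "No", "No", "No", "Yes", "No", "No", "Yes", "No", "Yes"],
})

X = pd.get_dummies(data[["Refund", "MarSt", "TaxInc"]])   # encode categoricals
y = data["Cheat"]

tree = DecisionTreeClassifier(criterion="gini", random_state=0).fit(X, y)
print(export_text(tree, feature_names=list(X.columns)))   # text view of the tree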
Another Example of Decision Tree
Training data: the same ten records as before.

Model: an alternative decision tree
MarSt?
├─ Married → NO
└─ Single, Divorced → Refund?
   ├─ Yes → NO
   └─ No → TaxInc?
      ├─ < 80K → NO
      └─ >= 80K → YES

There could be more than one tree that fits the same data!
Decision Tree Classification Task
[Figure: the same induction/deduction pipeline as before, now specialised to trees. A Tree Induction algorithm learns a Decision Tree model from the Training Set (Tid 1–10); the model is then applied to the Test Set (Tid 11–15) to deduce their class labels.]
Apply Model to Test Data
Test Data: Refund = No, Marital Status = Married, Taxable Income = 80K, Cheat = ?

Start from the root of the tree and, at each node, follow the branch that matches the record:
Refund?
├─ Yes → NO
└─ No → MarSt?
   ├─ Married → NO
   └─ Single, Divorced → TaxInc?
      ├─ < 80K → NO
      └─ >= 80K → YES

Refund = No, so take the No branch to MarSt; Marital Status = Married, so the record reaches the leaf NO.
Assign Cheat to “No”
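The traversal just illustrated can also be written out directly as nested tests (a hand-coded sketch of the tree from the slide):

# Hand-coded version of the decision tree above, applied to the test record
# (Refund = No, Marital Status = Married, Taxable Income = 80K).
def classify(refund, marital_status, taxable_income):
    # Root node: test Refund
    if refund == "Yes":
        return "No"                      # leaf: Cheat = No
    # Refund = No: test Marital Status
    if marital_status == "Married":
        return "No"                      # leaf: Cheat = No
    # Single or Divorced: test Taxable Income
    return "No" if taxable_income < 80 else "Yes"

print(classify("No", "Married", 80))     # -> "No", as on the slide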
Decision Tree Classification Task
[Figure repeated: once the decision tree has been applied to the test set, the Deduction step assigns a class label to each test record.]
Decision Tree Induction
Many Algorithms:
– Hunt's Algorithm (one of the earliest)
– CART (Classification and Regression Trees)
– ID3, C4.5
– SLIQ (fast, scalable algorithm for large applications)
◆ Can handle both numeric and categorical attributes
– SPRINT (scalable parallel classifier for data mining)
General Structure of Hunt’s Algorithm
Let Dt be the set of training records that reach a node t.

General Procedure:
– If Dt contains records that all belong to the same class yt, then t is a leaf node labeled as yt
– If Dt is an empty set, then t is a leaf node labeled by the default class, yd
– If Dt contains records that belong to more than one class, use an attribute test to split the data into smaller subsets, and recursively apply the procedure to each subset

(Illustrated on the same ten-record Refund / Marital Status / Taxable Income / Cheat training table as before.)
Hunt’s Algorithm
(Applied to the same ten-record training data; the default class is Don't Cheat.)

Step 1: a single leaf node predicting the default class, Don't Cheat.

Step 2: split on Refund:
Refund?
├─ Yes → Don't Cheat
└─ No → Don't Cheat

Step 3: refine the Refund = No subset by splitting on Marital Status:
Refund?
├─ Yes → Don't Cheat
└─ No → Marital Status?
   ├─ Married → Don't Cheat
   └─ Single, Divorced → Cheat

Step 4: refine the Single/Divorced subset by splitting on Taxable Income:
Refund?
├─ Yes → Don't Cheat
└─ No → Marital Status?
   ├─ Married → Don't Cheat
   └─ Single, Divorced → Taxable Income?
      ├─ < 80K → Don't Cheat
      └─ >= 80K → Cheat
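A compact sketch of Hunt's recursive procedure in Python (the helper names are my own, and the attribute to split on is chosen naively here; a real learner would pick the test that optimizes an impurity criterion, as discussed next):

# A minimal sketch of Hunt's recursive procedure.
# 'records' is a list of (attribute_dict, label); 'attributes' lists the
# attribute names still available for splitting.
from collections import Counter

def hunts(records, attributes, default="No"):
    if not records:                                   # empty D_t: default-class leaf
        return default
    labels = [label for _, label in records]
    if len(set(labels)) == 1:                         # all one class: leaf
        return labels[0]
    if not attributes:                                # no tests left: majority leaf
        return Counter(labels).most_common(1)[0][0]
    attr = attributes[0]                              # placeholder choice; a real
    rest = attributes[1:]                             # learner picks the best test
    majority = Counter(labels).most_common(1)[0][0]
    tree = {}
    for value in {rec[attr] for rec, _ in records}:   # one branch per value
        subset = [(rec, lab) for rec, lab in records if rec[attr] == value]
        tree[value] = hunts(subset, rest, default=majority)
    return (attr, tree)

records = [({"Refund": "Yes", "MarSt": "Single"}, "No"),
           ({"Refund": "No", "MarSt": "Married"}, "No"),
           ({"Refund": "No", "MarSt": "Single"}, "Yes")]
print(hunts(records, ["Refund", "MarSt"]))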
Tree Induction
Greedy strategy
– Split the records based on an attribute test that optimizes a certain criterion
Issues
– Determine how to split the records
◆How to specify the attribute test condition?
◆How to determine the best split?
– Determine when to stop splitting
How to Specify Test Condition?
Depends on attribute types
– Nominal
– Ordinal
– Continuous
Depends on number of ways to split
– 2-way split
– Multi-way split
Splitting Based on Nominal Attributes
The values of a nominal attribute are names of things (symbols). Nominal attributes are also referred to as categorical attributes, and there is no order (rank, position) among their values.

Multi-way split: use as many partitions as there are distinct values.
CarType? → Family | Sports | Luxury

Binary split: divides the values into two subsets; need to find the optimal partitioning, as sketched below.
CarType? → {Sports, Luxury} | {Family}    OR    CarType? → {Family, Luxury} | {Sports}
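Finding the optimal binary partitioning implies examining the candidate two-subset groupings; a small sketch of enumerating them for a nominal attribute such as CarType (pure illustration):

# Enumerate the candidate binary partitions of a nominal attribute.
# For k distinct values there are 2**(k-1) - 1 non-trivial two-subset splits.
from itertools import combinations

values = ["Family", "Sports", "Luxury"]          # distinct CarType values

def binary_partitions(vals):
    for r in range(1, len(vals)):                # subset sizes 1 .. k-1
        for left in combinations(vals, r):
            right = tuple(v for v in vals if v not in left)
            if left < right:                     # avoid listing each split twice
                yield left, right

for left, right in binary_partitions(values):
    print(set(left), "vs", set(right))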
Splitting Based on Ordinal Attributes
Ordinal attributes contain values that have a meaningful sequence or ranking (order) between them.

Multi-way split: use as many partitions as there are distinct values.
Size? → Small | Medium | Large

Binary split: divides the values into two subsets; need to find the optimal partitioning.
Size? → {Small, Medium} | {Large}    OR    Size? → {Small} | {Medium, Large}

What about this split?  Size? → {Small, Large} | {Medium}
(It groups values that are not adjacent in the ordering, so it does not respect the ordinal nature of the attribute.)
Splitting Based on Continuous Attributes
(i) Binary split:    Taxable Income > 80K?  → Yes / No
(ii) Multi-way split: Taxable Income → < 10K | [10K, 25K) | [25K, 50K) | [50K, 80K) | > 80K
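One common way to generate candidate thresholds for the binary case is to take midpoints between consecutive distinct values of the attribute (a sketch; the incomes are the made-up values from the earlier training table):

# Candidate thresholds for a binary split on a continuous attribute:
# midpoints between consecutive distinct sorted values is one common choice.
incomes = [125, 100, 70, 120, 95, 60, 220, 85, 75, 90]   # Taxable Income (K)

distinct = sorted(set(incomes))
thresholds = [(a + b) / 2 for a, b in zip(distinct, distinct[1:])]
print(thresholds)   # each threshold t defines the test: Income <= t ?
# A multi-way split instead assigns each value to a fixed bin
# (e.g. < 10K, [10K, 25K), [25K, 50K), [50K, 80K), > 80K).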
Tree Induction
Greedy strategy.
– Split the records based on an attribute test that optimizes a certain criterion
Issues
– Determine how to split the records
◆How to specify the attribute test condition?
◆How to determine the best split?
– Determine when to stop splitting
How to determine the Best Split?
Before splitting: 10 records of class C0, 10 records of class C1.

Candidate test conditions:
– Own Car? (Yes / No):  Yes → C0: 6, C1: 4;  No → C0: 4, C1: 6
– Car Type? (Family / Sports / Luxury):  Family → C0: 1, C1: 3;  Sports → C0: 8, C1: 0;  Luxury → C0: 1, C1: 7
– Student ID? (c1 … c20):  each value contains a single record (C0: 1, C1: 0 or C0: 0, C1: 1)

Which test condition is the best?
How to determine the Best Split
Greedy approach:
– Nodes with a homogeneous class distribution are preferred
Need a measure of node impurity:
– C0: 5, C1: 5 → non-homogeneous, high degree of impurity
– C0: 9, C1: 1 → homogeneous, low degree of impurity
Measures of Node Impurity
Gini Index
Entropy
Misclassification error
How to Find the Best Split
Before splitting, the parent node has class counts C0: N00, C1: N01 and impurity M0.

Candidate split A? (Yes / No) produces nodes N1 (C0: N10, C1: N11) and N2 (C0: N20, C1: N21), with impurities M1 and M2; their weighted combination is M12.
Candidate split B? (Yes / No) produces nodes N3 (C0: N30, C1: N31) and N4 (C0: N40, C1: N41), with impurities M3 and M4; their weighted combination is M34.

Compare Gain = M0 – M12 vs. Gain = M0 – M34 and choose the split with the larger gain.
Measure of Impurity: GINI
Gini Index for a given node t:

    GINI(t) = 1 − Σ_j [ p(j | t) ]²

(NOTE: p(j | t) is the relative frequency of class j at node t.)
– Maximum (1 − 1/nc, where nc is the number of classes) when records are equally distributed among all classes, implying the least interesting information
– Minimum (0.0) when all records belong to one class, implying the most interesting information

C1: 0, C2: 6 → Gini = 0.000
C1: 1, C2: 5 → Gini = 0.278
C1: 2, C2: 4 → Gini = 0.444
C1: 3, C2: 3 → Gini = 0.500
Examples for computing GINI
    GINI(t) = 1 − Σ_j [ p(j | t) ]²

C1: 0, C2: 6:  P(C1) = 0/6 = 0, P(C2) = 6/6 = 1;  Gini = 1 − P(C1)² − P(C2)² = 1 − 0 − 1 = 0
C1: 1, C2: 5:  P(C1) = 1/6, P(C2) = 5/6;  Gini = 1 − (1/6)² − (5/6)² = 0.278
C1: 2, C2: 4:  P(C1) = 2/6, P(C2) = 4/6;  Gini = 1 − (2/6)² − (4/6)² = 0.444
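The same computations, as a small Python function (a sketch; the argument is the list of class counts at the node):

# Gini index of a node from its class counts: GINI(t) = 1 - sum_j p(j|t)^2
def gini(counts):
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts)

print(gini([0, 6]))   # 0.000  (pure node)
print(gini([1, 5]))   # 0.278
print(gini([2, 4]))   # 0.444
print(gini([3, 3]))   # 0.500  (maximum for two classes)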
Splitting Based on GINI
Used in CART, SLIQ, SPRINT.
When a node p is split into k partitions (children), the quality of the split is computed as

    GINI_split = Σ_{i=1..k} (n_i / n) · GINI(i)

where n_i = number of records at child i, and n = number of records at node p.
Binary Attributes: Computing GINI Index
A binary attribute splits the records into two partitions; the weighted GINI_split measure above favours larger and purer partitions.

Parent node: C1 = 6, C2 = 6, Gini = 0.500

Split on B? (Yes / No):
Node N1 (Yes): C1 = 5, C2 = 2
Node N2 (No):  C1 = 1, C2 = 4

Gini(N1) = 1 − (5/7)² − (2/7)² = 0.408
Gini(N2) = 1 − (1/5)² − (4/5)² = 0.320
Gini(children) = 7/12 × 0.408 + 5/12 × 0.320 = 0.371

The split with the lowest Gini index is preferred as the best split.
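The weighted split quality, as a short sketch that reproduces the numbers above:

# GINI_split = sum_i (n_i / n) * GINI(i), over the k children of a split.
def gini(counts):
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts)

def gini_split(children):
    n = sum(sum(child) for child in children)
    return sum(sum(child) / n * gini(child) for child in children)

# Split B? from the slide: N1 has (C1=5, C2=2), N2 has (C1=1, C2=4).
print(round(gini([5, 2]), 3))                    # 0.408
print(round(gini([1, 4]), 3))                    # 0.32
print(round(gini_split([[5, 2], [1, 4]]), 3))    # 0.371, below the parent's 0.500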
Tree Induction
Greedy strategy
– Split the records based on an attribute test that optimizes a certain criterion
Issues
– Determine how to split the records
◆How to specify the attribute test condition?
◆How to determine the best split?
– Determine when to stop splitting
Stopping Criteria for Tree Induction
Stop expanding a node when all the records
belong to the same class
Stop expanding a node when all the records have
similar attribute values
Early termination (to be discussed later)
Decision Tree Based Classification
Advantages:
– Inexpensive to construct
– Extremely fast at classifying unknown records
– Easy to interpret for small-sized trees
– Accuracy is comparable to other classification
techniques for many simple data sets
Metrics for Performance Evaluation
Focus on the predictive capability of a model
– rather than on how fast it classifies or builds models, scalability, etc.

Confusion Matrix:
                              PREDICTED CLASS
                              Class=Yes    Class=No
ACTUAL CLASS   Class=Yes      a (TP)       b (FN)
               Class=No       c (FP)       d (TN)

a: TP (true positive), b: FN (false negative), c: FP (false positive), d: TN (true negative)
Metrics for Performance Evaluation…
                              PREDICTED CLASS
                              Class=Yes    Class=No
ACTUAL CLASS   Class=Yes      a (TP)       b (FN)
               Class=No       c (FP)       d (TN)

Most widely-used metric:

    Accuracy = (a + d) / (a + b + c + d) = (TP + TN) / (TP + TN + FP + FN)
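As a tiny sketch, with made-up counts for the four confusion-matrix cells:

# Accuracy = (TP + TN) / (TP + TN + FP + FN), from the confusion matrix cells.
def accuracy(tp, fn, fp, tn):
    return (tp + tn) / (tp + tn + fp + fn)

# Illustrative counts only: a = TP, b = FN, c = FP, d = TN.
print(accuracy(tp=40, fn=10, fp=5, tn=45))   # 0.85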
Thank you!!
Any Queries??
[email protected]