
Classification: Decision Trees
Outline

• Basics of creating trees
  • Top-down decision tree construction
  • Choosing the splitting attribute
  • Information gain and gain ratio
Decision Tree Structure

• An internal node is a test on an attribute, e.g. color = red.
• A branch represents an outcome of the test, e.g. true.
• A leaf node represents a class label or class label distribution, e.g. dog (0.9), cat (0.1).
• At each node, one attribute is chosen to split the training examples into classes as distinctly as possible.
• A new case is classified by following the path of matching branches to a leaf node.
Weather Data: Play or not Play?
Outlook Temperature Humidity Windy Play?
sunny hot high false No
sunny hot high true No
overcast hot high false Yes
rain mild high false Yes
rain cool normal false Yes
rain cool normal true No
overcast cool normal true Yes
sunny mild high false No
sunny cool normal false Yes
rain mild normal false Yes
sunny mild normal true Yes
overcast mild high true Yes
overcast hot normal false Yes
rain mild high true No

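For readers who want to run the code sketches that appear later in this document, here is the same table transcribed as a Python list of dicts. This transcription is an added convenience, not part of the original slides; the variable name weather is arbitrary.

# The weather data, one dict per row of the table above.
weather = [
    {"Outlook": "sunny",    "Temperature": "hot",  "Humidity": "high",   "Windy": "false", "Play?": "No"},
    {"Outlook": "sunny",    "Temperature": "hot",  "Humidity": "high",   "Windy": "true",  "Play?": "No"},
    {"Outlook": "overcast", "Temperature": "hot",  "Humidity": "high",   "Windy": "false", "Play?": "Yes"},
    {"Outlook": "rain",     "Temperature": "mild", "Humidity": "high",   "Windy": "false", "Play?": "Yes"},
    {"Outlook": "rain",     "Temperature": "cool", "Humidity": "normal", "Windy": "false", "Play?": "Yes"},
    {"Outlook": "rain",     "Temperature": "cool", "Humidity": "normal", "Windy": "true",  "Play?": "No"},
    {"Outlook": "overcast", "Temperature": "cool", "Humidity": "normal", "Windy": "true",  "Play?": "Yes"},
    {"Outlook": "sunny",    "Temperature": "mild", "Humidity": "high",   "Windy": "false", "Play?": "No"},
    {"Outlook": "sunny",    "Temperature": "cool", "Humidity": "normal", "Windy": "false", "Play?": "Yes"},
    {"Outlook": "rain",     "Temperature": "mild", "Humidity": "normal", "Windy": "false", "Play?": "Yes"},
    {"Outlook": "sunny",    "Temperature": "mild", "Humidity": "normal", "Windy": "true",  "Play?": "Yes"},
    {"Outlook": "overcast", "Temperature": "mild", "Humidity": "high",   "Windy": "true",  "Play?": "Yes"},
    {"Outlook": "overcast", "Temperature": "hot",  "Humidity": "normal", "Windy": "false", "Play?": "Yes"},
    {"Outlook": "rain",     "Temperature": "mild", "Humidity": "high",   "Windy": "true",  "Play?": "No"},
]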
Example Tree for “Play?”

Outlook?
  sunny → Humidity?
    high → No
    normal → Yes
  overcast → Yes
  rain → Windy?
    true → No
    false → Yes
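To make the tree concrete, here is a minimal added sketch (not from the original slides) that encodes the “Play?” tree above as nested Python structures and classifies a new case by following matching branches. The attribute and value names come from the weather data; the representation and the function name classify are my own.

# The "Play?" tree: internal nodes are (attribute, {value: subtree}) tuples,
# leaves are plain class-label strings.
tree = ("Outlook", {
    "sunny":    ("Humidity", {"high": "No", "normal": "Yes"}),
    "overcast": "Yes",
    "rain":     ("Windy", {"true": "No", "false": "Yes"}),
})

def classify(node, case):
    # Follow matching branches until a leaf (a plain string) is reached.
    while isinstance(node, tuple):
        attribute, branches = node
        node = branches[case[attribute]]
    return node

case = {"Outlook": "sunny", "Temperature": "cool",
        "Humidity": "high", "Windy": "true"}
print(classify(tree, case))  # -> No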
Building Decision Trees [Q93]

• Top-down tree construction (see the sketch below)
  • At the start, all training examples are at the root.
  • Partition the examples recursively by choosing one attribute at a time.
• Bottom-up tree pruning
  • Remove subtrees or branches, in a bottom-up manner, to improve the estimated accuracy on new cases by reducing overfitting. This is another example of regularisation.
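The following is a minimal added sketch of the top-down procedure in Python, under the assumption that examples are dicts mapping attribute names to values (as in the weather list above). The function names entropy, build_tree, and info_after_split are mine; the splitting criterion used is the information gain defined on the following slides.

from collections import Counter
from math import log2

def entropy(labels):
    # Entropy (in bits) of a list of class labels.
    n = len(labels)
    return sum(-c / n * log2(c / n) for c in Counter(labels).values())

def build_tree(examples, attributes, target):
    # ID3-style top-down construction: choose the attribute with the
    # highest information gain, split, and recurse on each subset.
    labels = [ex[target] for ex in examples]
    if len(set(labels)) == 1 or not attributes:
        return Counter(labels).most_common(1)[0][0]   # leaf: majority class

    def info_after_split(attr):
        # Weighted entropy of the label subsets produced by splitting on attr.
        subsets = {}
        for ex in examples:
            subsets.setdefault(ex[attr], []).append(ex[target])
        return sum(len(s) / len(examples) * entropy(s) for s in subsets.values())

    # The information before the split is fixed, so minimising the information
    # after the split is the same as maximising the information gain.
    best = min(attributes, key=info_after_split)
    branches = {}
    for value in set(ex[best] for ex in examples):
        subset = [ex for ex in examples if ex[best] == value]
        branches[value] = build_tree(subset, [a for a in attributes if a != best], target)
    return (best, branches)

Applied to the weather list from the earlier sketch, build_tree(weather, ["Outlook", "Temperature", "Humidity", "Windy"], "Play?") reproduces the tree shown above, and its output can be fed straight to the classify function from the previous sketch.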
Choosing the Splitting Attribute

• At each node, the available attributes are evaluated on the basis of how well they separate the classes of the training examples. An evaluation (“goodness”) function is used for this purpose.
• Commonly used evaluation functions:
  • information gain (ID3/C4.5)
  • information gain ratio
  • Gini index (CART)
Which Attribute Should We Select?

[The original slide shows the candidate splits of the weather data, one per attribute; the image is not reproduced here.]
A Criterion for Attribute Selection

• Which is the best attribute?
  • The one that results in the smallest tree overall.
• Heuristic: choose the attribute that produces the “purest” nodes (i.e. nodes with class distributions skewed as much as possible).
• A popular impurity criterion: information gain.
  • Information gain increases with the average purity of the subsets that an attribute produces.
• Strategy: choose the attribute that results in the greatest information gain.
Computing Information

• Information is measured in bits.
  • Given a probability distribution, the information required to predict an event is the distribution’s entropy.
  • Entropy gives the information required in bits (this can involve fractions of bits!).
• Formula for computing the entropy:

  $\mathrm{entropy}(p_1, p_2, \ldots, p_n) = -p_1 \log p_1 - p_2 \log p_2 - \cdots - p_n \log p_n$

• All logarithms here are base 2. Compute $\log_2 x$ as $\log_{10} x / \log_{10} 2$.
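As an added illustration (not part of the slides), here is a small Python sketch of this formula. The helper names log2 and entropy are my own; the change-of-base trick is the one given above, and the p > 0 filter implements the 0 · log(0) = 0 convention used in the worked examples later.

from math import log10

def log2(x):
    # Change of base: log2(x) = log10(x) / log10(2).
    return log10(x) / log10(2)

def entropy(*probs):
    # Entropy in bits; terms with p = 0 contribute nothing (0 · log 0 = 0).
    return sum(-p * log2(p) for p in probs if p > 0)

print(entropy(2/5, 3/5))  # ≈ 0.971 bits
print(entropy(1, 0))      # 0.0 bits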
Logarithms

• The logarithm of a number $y$ to a base $b > 0$, written $\log_b y$, is the number of copies of $b$ that must be multiplied together to give $y$. So $\log_b(b^n) = n$. For example, $\log_2(8) = 3$.
• Exercise: evaluate $\log_2(32)$ and $\log_2(1024)$.
• $\log_b(mn) = \log_b(m) + \log_b(n)$
• $\log_b(m/n) = \log_b(m) - \log_b(n)$
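Worked answers to the exercise above (added for completeness): $\log_2(32) = 5$, since $2^5 = 32$, and $\log_2(1024) = 10$, since $2^{10} = 1024$.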
Claude Shannon, “Father of information theory” (born 30 April 1916, died 23 February 2001)

Claude Shannon, who has died aged 84, perhaps more than anyone laid the groundwork for today’s digital revolution. His exposition of information theory, stating that all information could be represented mathematically as a succession of noughts and ones, facilitated the digital manipulation of data without which today’s information society would be unthinkable.

Shannon’s master’s thesis, obtained in 1940 at MIT, demonstrated that problem solving could be achieved by manipulating the symbols 0 and 1 in a process that could be carried out automatically with electrical circuitry. That dissertation has been hailed as one of the most significant master’s theses of the 20th century. Eight years later, Shannon published another landmark paper, A Mathematical Theory of Communication, generally taken as his most important scientific contribution.

Shannon applied the same radical approach to cryptography research, in which he later became a consultant to the US government. Many of Shannon’s pioneering insights were developed before they could be applied in practical form. He was truly a remarkable man, yet unknown to most of the world.
Example: Attribute “Outlook”

• “Outlook” = “Sunny”:

  info([2,3]) = entropy(2/5, 3/5) = −(2/5) log(2/5) − (3/5) log(3/5) = 0.971 bits

• “Outlook” = “Overcast”:

  info([4,0]) = entropy(1, 0) = −1 log(1) − 0 log(0) = 0 bits

  (Note: log(0) is not defined, but we evaluate 0 · log(0) as zero.)

• “Outlook” = “Rainy”:

  info([3,2]) = entropy(3/5, 2/5) = −(3/5) log(3/5) − (2/5) log(2/5) = 0.971 bits

• Expected information for the attribute:

  info([2,3], [4,0], [3,2]) = (5/14) · 0.971 + (4/14) · 0 + (5/14) · 0.971 = 0.693 bits
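A small added sketch that reproduces these numbers from the class counts in the weather data; the function name info and the branch layout are mine, not from the slides.

from math import log2

def info(counts):
    # Entropy in bits of a class-count list such as [2, 3]; zero counts are skipped.
    n = sum(counts)
    return sum(-c / n * log2(c / n) for c in counts if c)

branches = [[2, 3], [4, 0], [3, 2]]    # sunny, overcast, rainy: [yes, no]
total = sum(sum(b) for b in branches)  # 14 examples in all
for b in branches:
    print(round(info(b), 3))           # 0.971, 0.0, 0.971
expected = sum(sum(b) / total * info(b) for b in branches)
print(round(expected, 4))              # 0.6935, which the slide rounds to 0.693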
Computing the Information Gain

• Information gain = (information before split) − (information after split)

  gain(“Outlook”) = info([9,5]) − info([2,3], [4,0], [3,2]) = 0.940 − 0.693 = 0.247 bits

• Information gain for the weather data:

  gain(“Outlook”) = 0.247 bits
  gain(“Temperature”) = 0.029 bits
  gain(“Humidity”) = 0.152 bits
  gain(“Windy”) = 0.048 bits
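The gains above can be checked with another short added sketch. The [yes, no] counts per branch are read off the weather data table; the helper names info and gain are again my own.

from math import log2

def info(counts):
    n = sum(counts)
    return sum(-c / n * log2(c / n) for c in counts if c)

def gain(splits, before=(9, 5)):
    # Gain = info before the split minus the weighted info after it.
    n = sum(before)
    return info(before) - sum(sum(s) / n * info(s) for s in splits)

splits = {
    "Outlook":     [[2, 3], [4, 0], [3, 2]],  # sunny / overcast / rainy
    "Temperature": [[2, 2], [4, 2], [3, 1]],  # hot / mild / cool
    "Humidity":    [[3, 4], [6, 1]],          # high / normal
    "Windy":       [[3, 3], [6, 2]],          # true / false
}
for attr, s in splits.items():
    print(f"gain({attr}) = {gain(s):.3f} bits")
# gain(Outlook) = 0.247, gain(Temperature) = 0.029,
# gain(Humidity) = 0.152, gain(Windy) = 0.048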
Continuing to Split

After splitting on Outlook, the same procedure is applied within each branch. For the five “sunny” examples:

  gain(“Humidity”) = 0.971 bits
  gain(“Temperature”) = 0.571 bits
  gain(“Windy”) = 0.020 bits

Humidity has the highest gain, so it is chosen as the next splitting attribute.
Final Decision Tree

• The final tree for the weather data is the one shown earlier (“Example Tree for Play?”).
• Note: not all leaves need to be pure; sometimes identical instances have different classes.
• Splitting stops when the data can’t be split any further (or perhaps sooner, to avoid overfitting).
Highly-branching Attributes

• Problematic: attributes with a large number of values (extreme case: an ID code).
• Subsets are more likely to be pure if there is a large number of values.
  • Information gain is therefore biased towards choosing attributes with a large number of values.
  • This may result in overfitting (selection of an attribute that is sub-optimal for prediction).
Weather Data with ID code
ID Outlook Temperature Humidity Windy Play?
A sunny hot high false No
B sunny hot high true No
C overcast hot high false Yes
D rain mild high false Yes
E rain cool normal false Yes
F rain cool normal true No
G overcast cool normal true Yes
H sunny mild high false No
I sunny cool normal false Yes
J rain mild normal false Yes
K sunny mild normal true Yes
L overcast mild high true Yes
M overcast hot normal false Yes
N rain mild high true No
Split for ID Code Attribute

Splitting on the ID code produces one branch per instance. The entropy of this split is 0, since each leaf node is “pure”, having only one case. The information gain is therefore maximal for the ID code (0.940 bits, the full information of the unsplit data).
Gain Ratio

• Gain ratio: a modification of the information gain that reduces its bias towards highly-branching attributes.
• The normalizing factor (the intrinsic information of the split, defined below) is
  • large when the data are spread evenly across many branches,
  • small when all the data belong to one branch.
• Gain ratio takes the number and size of branches into account when choosing an attribute.
  • It corrects the information gain by taking the intrinsic information of a split (ignoring the class) into account, i.e. how much information we need to tell which branch an instance belongs to.
Gain Ratio and Intrinsic Info.

• Intrinsic information: the entropy of the distribution of (unlabelled) instances into branches, where $S_i$ denotes the set of instances in branch $i$:

  $\mathrm{IntrinsicInfo}(S, A) = -\sum_i \frac{|S_i|}{|S|} \log_2 \frac{|S_i|}{|S|}$

• Gain ratio (Quinlan ’86) normalizes the information gain:

  $\mathrm{GainRatio}(S, A) = \frac{\mathrm{Gain}(S, A)}{\mathrm{IntrinsicInfo}(S, A)}$
Computing the Gain Ratio

• Example: intrinsic information for the ID code:

  info([1,1,…,1]) = 14 × (−(1/14) log(1/14)) = 3.807 bits

• The importance of an attribute decreases as its intrinsic information gets larger.
• Example gain ratio:

  gain_ratio(“Attribute”) = gain(“Attribute”) / intrinsic_info(“Attribute”)

  gain_ratio(“ID_code”) = 0.940 bits / 3.807 bits = 0.246
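One more added sketch, with my own helper names, implements the two formulas from the previous slide and reproduces the ID-code figures (the exact ratio is 0.2469, which the slide rounds to 0.246).

from math import log2

def intrinsic_info(sizes):
    # Entropy of how the instances distribute over branches (class labels ignored).
    n = sum(sizes)
    return sum(-s / n * log2(s / n) for s in sizes if s)

def gain_ratio(gain, sizes):
    return gain / intrinsic_info(sizes)

print(round(intrinsic_info([1] * 14), 3))     # 3.807 (ID code: 14 singleton branches)
print(round(intrinsic_info([5, 4, 5]), 3))    # 1.577 (Outlook, used on the next slide)
print(round(gain_ratio(0.940, [1] * 14), 4))  # 0.2469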
Gain Ratios for Weather Data

Attribute     Info   Gain                    Split info              Gain ratio
Outlook       0.693  0.940 − 0.693 = 0.247   info([5,4,5]) = 1.577   0.247/1.577 = 0.156
Temperature   0.911  0.940 − 0.911 = 0.029   info([4,6,4]) = 1.557   0.029/1.557 = 0.019
Humidity      0.788  0.940 − 0.788 = 0.152   info([7,7]) = 1.000     0.152/1.000 = 0.152
Windy         0.892  0.940 − 0.892 = 0.048   info([8,6]) = 0.985     0.048/0.985 = 0.049
More on the Gain Ratio

• “Outlook” still comes out top of the ‘real’ attributes.
• However, “ID code” still has a greater gain ratio.
  • Standard fix: an ad hoc test to prevent splitting on that type of attribute.
• Problem with the gain ratio: it may overcompensate.
  • It may choose an attribute just because its intrinsic information is very low.
  • Standard fix:
    • First, only consider attributes with greater than average information gain.
    • Then, compare them on gain ratio.
• Such highly arbitrary aspects of the learning algorithm are a drawback of decision trees.
Discussion

• The algorithm for top-down induction of decision trees (“ID3”) was developed by Ross Quinlan.
  • The gain ratio is just one modification of this basic algorithm.
  • It led to the development of C4.5, which can deal with numeric attributes, missing values, and noisy data.
• A similar approach: CART (Classification and Regression Trees).
• There are many other attribute selection criteria! (But they make almost no difference to the accuracy of the results.)
Summary

• Top-down decision tree construction
• Choosing the splitting attribute
• Entropy as a measure of information
• Information gain is biased towards attributes with a large number of values
• Gain ratio takes the number and size of branches into account when choosing an attribute
