Debre Tabor University
Gafat Institute of Technology
Department of Computer Science
Introduction to Data Mining & Warehousing
For 4th-year IT and Computer Science students
Instructor: Habtu Hailu (PhD)
November 2024
Chapter III
DATA PRE-PROCESSING
Data Preprocessing
Why Data preprocessing?
Data cleaning
Data integration and transformation
Data reduction
Discretization and concept hierarchy generation
Summary
Why Data Preprocessing?
Data in the real world is dirty
incomplete: lacking attribute values, lacking
certain attributes of interest, or containing only
aggregate data
e.g., occupation=“ ”
noisy: containing errors or outliers
e.g., Salary=“-10”
inconsistent: containing discrepancies in codes
or names
e.g., Age=“42” Birthday=“03/07/1997”
e.g., Was rating “1,2,3”, now rating “A, B, C”
e.g., discrepancy between duplicate records
Why Is Data Dirty?
Incomplete data may come from
“Not applicable” data value when collected
Different considerations between the time when the data
was collected and when it is analyzed.
Human/hardware/software problems
Noisy data (incorrect values) may come from
Faulty data collection instruments
Human or computer error at data entry
Errors in data transmission
Inconsistent data may come from
Different data sources
Functional dependency violation (e.g., modify some linked
data)
Duplicate records also need data cleaning
Why Is Data Preprocessing Important?
No quality data, no quality mining results!
Quality decisions must be based on quality data
e.g., duplicate or missing data may cause incorrect or
even misleading statistics.
Data warehouse needs consistent integration of
quality data
Data extraction, cleaning, and transformation comprise the majority of the work of building a data warehouse
Multi-Dimensional Measure of Data Quality
A well-accepted multidimensional view:
Accuracy
Completeness
Consistency
Timeliness
Believability
Value added
Interpretability
Accessibility
Broad categories:
Intrinsic, contextual, representational, and
accessibility
Major Tasks in Data Preprocessing
Data cleaning
Fill in missing values, smooth noisy data, identify or
remove outliers, and resolve inconsistencies
Data integration
Integration of multiple databases, data cubes, or files
Data transformation
Normalization and aggregation
Data reduction
Obtains reduced representation in volume but produces
the same or similar analytical results
Data discretization
Part of data reduction but with particular importance,
especially for numerical data
Concept hierarchy generation
Forms of Data Preprocessing
[Figure: the forms of data preprocessing: data cleaning, data integration, data transformation, and data reduction.]
Data Preprocessing
Why Data preprocessing?
Data cleaning
Data integration and transformation
Data reduction
Discretization and concept hierarchy generation
Summary
Data Cleaning
Data cleaning tasks
Fill in missing values
Identify outliers and smooth out noisy data
Correct inconsistent data
Resolve redundancy caused by data
integration
Missing Data
Data is not always available
E.g., many tuples have no recorded value for several
attributes, such as customer income in sales data
Missing data may be due to
equipment malfunction
inconsistent with other recorded data and thus deleted
data not entered due to misunderstanding
certain data may not be considered important at the
time of entry
not register history or changes of the data
Missing data may need to be inferred.
How to Handle Missing Data?
Ignore the tuple: usually done when the class label is missing (assuming the task is classification); not effective when the percentage of missing values per attribute varies considerably
Fill in the missing value manually: tedious and possibly infeasible
Fill it in automatically with
a global constant : e.g., “unknown”, a new class?!
the attribute mean for all samples belonging to the same
class: smarter
the most probable value: inference-based such as
Bayesian formula or decision tree
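As a rough illustration (not part of the original slides), the automatic fill-in strategies above can be sketched with pandas; the column names and the -1 sentinel are hypothetical:

import pandas as pd

df = pd.DataFrame({
    "cls":    ["a", "a", "b", "b", "b"],       # hypothetical class label
    "income": [30.0, None, 50.0, None, 70.0],  # attribute with missing values
})

# Global constant: replace every missing value with one sentinel value
df["income_const"] = df["income"].fillna(-1)

# Attribute mean over all samples
df["income_mean"] = df["income"].fillna(df["income"].mean())

# Smarter: mean over only the samples belonging to the same class
df["income_cls_mean"] = df.groupby("cls")["income"].transform(
    lambda s: s.fillna(s.mean()))
print(df)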
Noisy Data
Noise: random error or variance in a measured
variable
Incorrect attribute values may be due to
faulty data collection instruments
data entry problems
data transmission problems
technology limitation
inconsistency in naming convention
Other data problems that require data cleaning
duplicate records
incomplete data
inconsistent data
How to Handle Noisy Data?
Binning
first sort the data and partition it into (equal-frequency) bins
then one can smooth by bin means, smooth by
bin median, smooth by bin boundaries, etc.
Regression
smooth by fitting the data into regression
functions
Clustering
detect and remove outliers
Combined computer and human inspection
detect suspicious values and check by human
(e.g., deal with possible outliers)
Simple Discretization Methods: Binning
Equal-width (distance) partitioning
Divides the range into N intervals of equal size: uniform
grid
if A and B are the lowest and highest values of the attribute, the width of the intervals will be W = (B − A)/N
The most straightforward, but outliers may dominate
presentation
Skewed data is not handled well
Equal-depth (frequency) partitioning
Divides the range into N intervals, each containing
approximately same number of samples
Good data scaling
Binning Methods for Data Smoothing
Sorted data for price (in dollars): 4, 8, 9, 15, 21, 21, 24,
25, 26, 28, 29, 34
Partition into equal-frequency (equi-depth) bins:
- Bin 1: 4, 8, 9, 15
- Bin 2: 21, 21, 24, 25
- Bin 3: 26, 28, 29, 34
Smoothing by bin means:
- Bin 1: 9, 9, 9, 9
- Bin 2: 23, 23, 23, 23
- Bin 3: 29, 29, 29, 29
Smoothing by bin boundaries:
- Bin 1: 4, 4, 4, 15
- Bin 2: 21, 21, 25, 25
- Bin 3: 26, 26, 26, 34
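The same example can be reproduced in a few lines of Python; this sketch (not part of the original slides) rounds the bin means to whole dollars, as the slide does:

prices = [4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34]  # already sorted

# Equal-frequency partitioning into 3 bins of 4 values each
bins = [prices[i:i + 4] for i in range(0, len(prices), 4)]

# Smoothing by bin means: every value becomes its bin's (rounded) mean
by_means = [[round(sum(b) / len(b))] * len(b) for b in bins]

# Smoothing by bin boundaries: every value snaps to the closer boundary
by_bounds = [[min(b) if v - min(b) <= max(b) - v else max(b) for v in b]
             for b in bins]

print(by_means)   # [[9, 9, 9, 9], [23, 23, 23, 23], [29, 29, 29, 29]]
print(by_bounds)  # [[4, 4, 4, 15], [21, 21, 25, 25], [26, 26, 26, 34]]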
Regression
Smooth by fitting the data into regression functions: in the two-dimensional case, find a fitting function so that the value of the second variable can be estimated from the value of the first.
[Figure: data points with a fitted regression line y = x + 1; the value Y1' is estimated from the value X1.]
It can be extended to the N-dimensional case, in which regression finds a multidimensional surface so that the value of one dimension can be predicted from the remaining N − 1 values.
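A minimal sketch of regression-based smoothing (an illustration only; the data points are hypothetical):

import numpy as np

# Hypothetical noisy 2-D data roughly following y = x + 1
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.2, 1.9, 3.1, 3.8, 5.2, 5.9])

# Fit a degree-1 (linear) regression function y = a*x + b
a, b = np.polyfit(x, y, deg=1)

# Smooth: replace each observed y by the fitted value
y_smoothed = a * x + b
print(a, b)        # close to 1 and 1 for this data
print(y_smoothed)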
Cluster Analysis
Each cluster centroid is marked with a “+”, representing the average point
in space for that cluster. Outliers may be detected as values that fall
outside of the sets of clusters
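One way to realize this, sketched here with scikit-learn's KMeans (an assumption; the slides do not prescribe a particular algorithm), is to flag points that lie unusually far from their cluster centroid:

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two tight clusters plus one far-away outlier
X = np.vstack([rng.normal(0, 0.3, (20, 2)),
               rng.normal(5, 0.3, (20, 2)),
               [[10.0, 10.0]]])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
# Distance of each point to its own cluster centroid
dist = np.linalg.norm(X - km.cluster_centers_[km.labels_], axis=1)

# Flag points much farther from their centroid than is typical
outliers = X[dist > dist.mean() + 3 * dist.std()]
print(outliers)  # contains the point near (10, 10)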
Data Preprocessing
Why preprocess the data?
Data cleaning
Data integration and transformation
Data reduction
Discretization and concept hierarchy generation
Summary
Data Integration
Data integration:
Combines data from multiple sources into a coherent store
Schema integration: e.g., A.cust-id ≡ B.cust-#
Integrate metadata from different sources
Entity identification problem:
Identify real world entities from multiple data sources, e.g.,
Bill Clinton = William Clinton
Detecting and resolving data value conflicts
For the same real world entity, attribute values from
different sources are different
Possible reasons: different representations, different
scales, e.g., metric vs. British units
Handling Redundancy in Data Integration
Redundant data often occur when integrating multiple databases
Object identification: The same attribute or
object may have different names in different
databases
Derivable data: One attribute may be a
“derived” attribute in another table, e.g.,
annual revenue
Redundant attributes may be detected by correlation analysis
Careful integration of data from multiple sources may help reduce/avoid redundancies and inconsistencies and improve mining speed and quality
Correlation Analysis (Numerical Data)
Correlation coefficient (also called Pearson’s
product moment coefficient)
r_{A,B} = \frac{\sum (A - \bar{A})(B - \bar{B})}{(n-1)\,\sigma_A \sigma_B} = \frac{\sum(AB) - n\bar{A}\bar{B}}{(n-1)\,\sigma_A \sigma_B}

where n is the number of tuples, \bar{A} and \bar{B} are the respective means of A and B, \sigma_A and \sigma_B are the respective standard deviations of A and B, and \sum(AB) is the sum of the AB cross-product.
If r_{A,B} > 0, A and B are positively correlated (A's values increase as B's do); the higher the value, the stronger the correlation.
r_{A,B} = 0: A and B are independent; r_{A,B} < 0: A and B are negatively correlated
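As a quick check of the formula in Python (a sketch; the two attribute vectors are hypothetical):

import numpy as np

# Hypothetical paired attributes A and B
A = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
B = np.array([1.0, 3.0, 5.0, 9.0, 11.0])

n = len(A)
# Slide formula: sum of cross-deviations over (n-1) * sigma_A * sigma_B
r = ((A - A.mean()) * (B - B.mean())).sum() / (
    (n - 1) * A.std(ddof=1) * B.std(ddof=1))
print(r)                        # close to 1: strong positive correlation
print(np.corrcoef(A, B)[0, 1])  # NumPy's built-in Pearson r, same value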
Correlation Analysis (Categorical Data)
Χ² (chi-square) test: a statistical procedure used to determine if there is a correlation between two categorical variables:

\chi^2 = \sum \frac{(\text{Observed} - \text{Expected})^2}{\text{Expected}}
The larger the Χ² value, the more likely the variables are related
The cells that contribute the most to the Χ² value are those whose actual count is very different from the expected count
Correlation does not imply causality
# of hospitals and # of car-theft in a city are correlated
Both are causally linked to a third variable: population
Chi-Square Calculation: An Example
                          Play chess   Not play chess   Sum (row)
Like science fiction      250 (90)     200 (360)        450
Not like science fiction  50 (210)     1000 (840)       1050
Sum (col.)                300          1200             1500
Χ² (chi-square) calculation (numbers in parentheses are expected counts calculated based on the data distribution in the two categories):
\chi^2 = \frac{(250-90)^2}{90} + \frac{(50-210)^2}{210} + \frac{(200-360)^2}{360} + \frac{(1000-840)^2}{840} = 507.93
It shows that like_science_fiction and play_chess
are correlated in the group
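The expected counts and the Χ² value can be verified in a few lines (an illustration, not from the slides):

import numpy as np

# Observed counts: rows = likes SF / not, columns = plays chess / not
observed = np.array([[250, 200],
                     [50, 1000]])

row = observed.sum(axis=1, keepdims=True)  # 450, 1050
col = observed.sum(axis=0, keepdims=True)  # 300, 1200
expected = row * col / observed.sum()      # [[90, 360], [210, 840]]

chi2 = ((observed - expected) ** 2 / expected).sum()
print(chi2)  # about 507.93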
Data Transformation
This involves:
Smoothing: remove noise from data
Aggregation: summarization, data cube construction
Generalization: concept hierarchy climbing
Normalization: scaled to fall within a small, specified
range
min-max normalization
z-score normalization
normalization by decimal scaling
Attribute/feature construction
New attributes constructed from the given ones
Data Transformation: Normalization
Min-max normalization: to [new_min_A, new_max_A]

v' = \frac{v - min_A}{max_A - min_A}\,(new\_max_A - new\_min_A) + new\_min_A

Ex. Let income range from $12,000 to $98,000 be normalized to [0.0, 1.0]. Then $73,600 is mapped to \frac{73,600 - 12,000}{98,000 - 12,000}(1.0 - 0) + 0 = 0.716

Z-score normalization (μ_A: mean, σ_A: standard deviation of attribute A):

v' = \frac{v - \mu_A}{\sigma_A}

Ex. Let μ_A = 54,000 and σ_A = 16,000. Then 73,600 is mapped to \frac{73,600 - 54,000}{16,000} = 1.225

Normalization by decimal scaling:

v' = \frac{v}{10^j}, where j is the smallest integer such that max(|v'|) < 1
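The three schemes can be written directly from the formulas; this sketch (not from the slides) reproduces the two worked examples, and the data for decimal scaling is hypothetical:

def min_max(v, lo, hi, new_lo=0.0, new_hi=1.0):
    # Min-max normalization to [new_lo, new_hi]
    return (v - lo) / (hi - lo) * (new_hi - new_lo) + new_lo

def z_score(v, mu, sigma):
    # Z-score normalization
    return (v - mu) / sigma

def decimal_scaling(values):
    # Divide by 10^j for the smallest j with max(|v'|) < 1
    j = 0
    while max(abs(v) for v in values) / 10 ** j >= 1:
        j += 1
    return [v / 10 ** j for v in values]

print(min_max(73_600, 12_000, 98_000))  # 0.716...
print(z_score(73_600, 54_000, 16_000))  # 1.225
print(decimal_scaling([-986, 917]))     # [-0.986, 0.917]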
Data Preprocessing
Why preprocess the data?
Data cleaning
Data integration and transformation
Data reduction
Discretization and concept hierarchy generation
Summary
Data Reduction Strategies
Why data reduction?
A database/data warehouse may store terabytes of data
Complex data analysis/mining may take a very long time
to run on the complete data set
Data reduction
Obtain a reduced representation of the data set that is much smaller in volume yet produces the same (or almost the same) analytical results
Data Reduction strategies
Data cube aggregation
Attribute Subset Selection
Dimensionality reduction
Numerosity reduction
Discretization and concept hierarchy generation
Data Cube Aggregation
The lowest level of a data cube (base cuboid)
The aggregated data for an individual entity of
interest
E.g., a customer in a phone calling data
warehouse
Multiple levels of aggregation in data cubes
Further reduce the size of data to deal with
Reference appropriate levels
Use the smallest representation which is enough
to solve the task
Queries regarding aggregated information should be answered using the data cube, when possible
Data Cube Aggregation: Example
Figure: Sales data for a given branch of AllElectronics for the years 2002 to 2004. On the left, the sales are shown per quarter. On the right, the data are aggregated to provide the annual sales.
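A minimal pandas sketch of the same aggregation (the amounts are hypothetical, not the figures from the book):

import pandas as pd

# Hypothetical quarterly sales (in $1000s) for one branch, 2002-2004
sales = pd.DataFrame({
    "year":    [2002] * 4 + [2003] * 4 + [2004] * 4,
    "quarter": ["Q1", "Q2", "Q3", "Q4"] * 3,
    "amount":  [224, 408, 350, 586, 301, 414, 385, 662, 372, 438, 412, 738],
})

# Aggregate away the quarter dimension to obtain annual sales
annual = sales.groupby("year", as_index=False)["amount"].sum()
print(annual)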
Attribute Subset Selection
Feature selection (i.e., attribute subset selection):
Select a minimum set of features such that the
probability distribution of different classes given
the values for those features is as close as
possible to the original distribution given the
values of all features
this reduces the number of patterns in the mining result, making them easier to understand
Heuristic Attribute Selection Methods (1)
There are 2^d possible sub-features of d features
Several heuristic feature selection methods:
Best single features under the feature
independence assumption: choose by
significance tests
Stepwise forward selection:
Starts with an empty set of attributes as the reduced
set.
The best of the original attributes is determined and
added to the reduced set.
At each subsequent iteration or step, the best of the
remaining original attributes is added to the set … etc.
Stepwise backward elimination:
Starts with the full set of attributes.
At each step, it removes the worst attribute remaining
in the set.
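A generic sketch of stepwise forward selection (the scoring function is an assumption; in practice it would be, e.g., a significance test or cross-validated accuracy):

def forward_select(features, score, k):
    # Greedily grow the reduced set by the best remaining attribute
    selected = []
    while len(selected) < k:
        best = max((f for f in features if f not in selected),
                   key=lambda f: score(selected + [f]))
        selected.append(best)
    return selected

# Toy usage with a hypothetical additive merit per attribute
merit = {"age": 0.9, "income": 0.7, "zip": 0.1}
print(forward_select(list(merit), lambda s: sum(merit[f] for f in s), k=2))
# ['age', 'income']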
Heuristic Attribute Selection Methods (2)
Combination of forward selection and backward
elimination:
The stepwise forward selection and backward
elimination methods can be combined.
At each step, the procedure selects the best attribute
and removes the worst from among the remaining
attributes.
Decision tree induction:
Decision tree algorithms, such as ID3, C4.5, and CART,
were originally intended for classification.
These algorithms construct a flowchart-like structure:
Each internal (nonleaf) node denotes a test on an attribute,
Each branch corresponds to an outcome of the test
Each external (leaf) node denotes a class prediction
At each node, the algorithm chooses the “best” attribute to
partition the data into individual classes.
Heuristic Attribute Selection Methods: Example
[Figure: greedy attribute subset selection: stepwise forward selection, stepwise backward elimination, and decision tree induction applied to an initial attribute set.]
Dimensionality Reduction
Data encoding or transformations are applied so as to obtain
a reduced or “compressed” representation of the original
data.
If the original data can be reconstructed from the compressed
data without any loss of information, the data reduction is
called lossless.
If, instead, we can reconstruct only an approximation of the
original data, then the data reduction is called lossy.
There are several well-tuned algorithms for string
compression. Although they are typically lossless, they allow
only limited manipulation of the data.
Two popular and effective methods of lossy dimensionality
reduction:
Wavelet transforms
Principal components analysis
Data Compression
[Figure: with lossless compression the original data can be reconstructed exactly from the compressed data; with lossy compression only an approximation of the original data can be reconstructed.]
Data Reduction Method: Clustering
Partition data set into clusters based on similarity, and store
cluster representation (e.g., centroid and diameter) only
Can be very effective if data is clustered but not if data is
“smeared”
Can have hierarchical clustering and be stored in multi-
dimensional index tree structures
There are many choices of clustering definitions and
clustering algorithms.
Data Reduction Method: Sampling
Sampling: obtaining a small sample s to represent the whole
data set N
Allow a mining algorithm to run in complexity that is
potentially sub-linear to the size of the data
Choose a representative subset of the data
Simple random sampling may have very poor
performance in the presence of skew
Develop adaptive sampling methods
Stratified sampling:
Approximate the percentage of each class (or
subpopulation of interest) in the overall database
Used in conjunction with skewed data
Note: Sampling may not reduce database I/Os (page at a
time)
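These variants map directly onto pandas (a sketch; the toy data and the 40% fraction are assumptions):

import pandas as pd

df = pd.DataFrame({"cls": ["young"] * 8 + ["senior"] * 2,
                   "value": range(10)})

# Simple random sampling without (SRSWOR) and with (SRSWR) replacement
srswor = df.sample(n=4, replace=False, random_state=0)
srswr = df.sample(n=4, replace=True, random_state=0)

# Stratified sampling: draw the same fraction from each class,
# so the skewed class proportions are preserved in the sample
strat = df.groupby("cls", group_keys=False).sample(frac=0.4, random_state=0)
print(strat["cls"].value_counts())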
Sampling: With or Without Replacement
[Figure: raw data sampled in two ways: SRSWOR (simple random sample without replacement) and SRSWR (simple random sample with replacement).]
Sampling: Cluster or Stratified Sampling
[Figure: raw data on the left; the corresponding cluster/stratified sample on the right.]
Data Preprocessing
Why preprocess the data?
Data cleaning
Data integration and transformation
Data reduction
Discretization and concept hierarchy generation
Summary
Discretization
Three types of attributes:
Nominal — values from an unordered set, e.g., color,
profession
Ordinal — values from an ordered set, e.g., military or
academic rank
Continuous — numeric values, e.g., integers or real numbers
Discretization:
Divide the range of a continuous attribute into intervals
Some classification algorithms only accept categorical
attributes.
Reduce data size by discretization
Discretization and Concept Hierarchy
Discretization
Reduce the number of values for a given continuous
attribute by dividing the range of the attribute into
intervals
Interval labels can then be used to replace actual data
values
Supervised vs. unsupervised
Split (top-down) vs. merge (bottom-up)
Discretization can be performed recursively on an attribute
Concept hierarchy formation
Recursively reduce the data by collecting and replacing low-level concepts (such as numeric values for age) by higher-level concepts (such as young, middle-aged, or senior)
Discretization and Concept Hierarchy
Generation for Numeric Data
Typical methods: All the methods can be applied recursively
Binning (covered above)
Top-down split, unsupervised
Histogram analysis
Top-down split, unsupervised
Clustering analysis (covered above)
Either top-down split or bottom-up merge, unsupervised
Entropy-based discretization: supervised, top-down split
Segmentation by natural partitioning: top-down split,
unsupervised
Entropy-Based Discretization
Given a set of samples S, if S is partitioned into two intervals S1 and S2 using boundary T, the expected information requirement after partitioning is

I(S, T) = \frac{|S_1|}{|S|}\,Entropy(S_1) + \frac{|S_2|}{|S|}\,Entropy(S_2)

Entropy is calculated based on the class distribution of the samples in the set. Given m classes, the entropy of S1 is

Entropy(S_1) = -\sum_{i=1}^{m} p_i \log_2(p_i)

where p_i is the probability of class i in S1.
The boundary that minimizes the entropy function over all possible boundaries is selected as the binary discretization point.
The process is recursively applied to the partitions obtained, until some stopping criterion is met.
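A compact sketch of the procedure for one split (the toy ages and class labels are hypothetical):

import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def best_split(values, labels):
    # Try each candidate boundary T; keep the one minimizing I(S, T)
    pairs = sorted(zip(values, labels))
    n = len(pairs)
    best = None
    for i in range(1, n):
        s1 = [lab for _, lab in pairs[:i]]
        s2 = [lab for _, lab in pairs[i:]]
        info = len(s1) / n * entropy(s1) + len(s2) / n * entropy(s2)
        boundary = (pairs[i - 1][0] + pairs[i][0]) / 2
        if best is None or info < best[0]:
            best = (info, boundary)
    return best  # (I(S, T), boundary T)

ages = [21, 23, 25, 41, 43, 45]
buys = ["no", "no", "no", "yes", "yes", "yes"]
print(best_split(ages, buys))  # (0.0, 33.0): a perfect class split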
Concept Hierarchy Generation for Categorical Data
Specification of a partial/total ordering of attributes
explicitly at the schema level by users or experts
street < city < state < country
Specification of a hierarchy for a set of values by
explicit data grouping
{Urbana, Champaign, Chicago} < Illinois
Specification of only a partial set of attributes
E.g., only street < city, not others
Automatic generation of hierarchies (or attribute
levels) by the analysis of the number of distinct
values
E.g., for a set of attributes: {street, city, state, country}
Automatic Concept Hierarchy Generation
Some hierarchies can be automatically
generated based on the analysis of the number
of distinct values per attribute in the data set
The attribute with the most distinct values is placed at the lowest level of the hierarchy
Exceptions exist, e.g., weekday, month, quarter, year
country: 15 distinct values
province_or_state: 365 distinct values
city: 3,567 distinct values
street: 674,339 distinct values
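The heuristic amounts to counting distinct values and sorting; a small sketch on hypothetical data:

import pandas as pd

loc = pd.DataFrame({
    "country": ["US", "US", "US", "US", "CA"],
    "state":   ["IL", "IL", "IL", "CA", "ON"],
    "city":    ["Chicago", "Chicago", "Springfield", "LA", "Toronto"],
    "street":  ["Main St", "Oak St", "Elm St", "1st Ave", "King St"],
})

# Fewest distinct values -> top of the hierarchy; most -> bottom
order = loc.nunique().sort_values()
print(order)  # country < state < city < street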
Data Preprocessing
Why preprocess the data?
Data cleaning
Data integration and transformation
Data reduction
Discretization and concept hierarchy generation
Summary
Summary
Data preparation or preprocessing is a big issue
for both data warehousing and data mining
Descriptive data summarization is needed for quality data preprocessing
Data preparation includes
Data cleaning and data integration
Data reduction and feature selection
Discretization
Many methods have been developed, but data preprocessing is still an active area of research