Association Rules
Items = products; Baskets = sets of products
someone bought in one trip to the store
Real market baskets: Chain stores keep TBs of
data about what customers buy together
▪ Tells how typical customers navigate stores, lets
them position tempting items
▪ Suggests tie-in “tricks”, e.g., run sale on diapers
and raise the price of beer
▪ Need the rule to occur frequently, or no $$’s
Amazon’s people who bought X also bought Y
Baskets = sentences; Items = documents
containing those sentences
▪ Items that appear together too often could represent
plagiarism
▪ Notice items do not have to be “in” baskets
▪ You only need to know which basket(s) each item belongs to
For example:
▪ Finding communities in graphs (e.g., Twitter)
Finding communities in graphs (e.g., Twitter)
Baskets = nodes; Items = outgoing neighbors
▪ Searching for complete bipartite subgraphs Ks,t of a
big graph
How?
▪ View each node i as a basket Bi of the nodes i points to
[Figure: a complete bipartite subgraph Ks,t – s nodes on one side, each pointing to the same t nodes on the other]
First: Define
Frequent itemsets
Association rules:
Confidence, Support, Interestingness
Then: Algorithms for finding frequent itemsets
Finding frequent pairs
A-Priori algorithm
PCY algorithm + 2 refinements
Simplest question: Find sets of items that
appear together “frequently” in baskets
Support for itemset I: Number of baskets containing all items in I
[Table: baskets listed as TID → Items; e.g., support of {Beer, Bread} = 2]
Given a support threshold s, sets of items that appear
in at least s baskets are called
frequent itemsets
Items = {milk, coke, pepsi, beer, juice}
Support threshold = 3 baskets
B1 = {m, c, b} B2 = {m, p, j}
B3 = {m, b} B4 = {c, j}
B5 = {m, p, b} B6 = {m, c, b, j}
B7 = {c, b, j} B8 = {b, c}
Frequent itemsets: {m}, {c}, {b}, {j},
{m,b} , {b,c} , {c,j}.
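To make the definition concrete, here is a minimal brute-force sketch of this example in Python (enumerating all small itemsets like this is only feasible for tiny item sets; the variable names are mine):

```python
from itertools import combinations

# The eight baskets from the example above
baskets = [
    {"m", "c", "b"}, {"m", "p", "j"}, {"m", "b"}, {"c", "j"},
    {"m", "p", "b"}, {"m", "c", "b", "j"}, {"c", "b", "j"}, {"b", "c"},
]
s = 3  # support threshold

def support(itemset):
    """Number of baskets containing all items of the itemset."""
    return sum(1 for basket in baskets if itemset <= basket)

items = sorted(set().union(*baskets))
for k in (1, 2):  # sizes 1 and 2 suffice for this example
    frequent = [c for c in combinations(items, k) if support(set(c)) >= s]
    print(k, frequent)
# Prints the frequent items {m}, {c}, {b}, {j} and pairs {b,c}, {b,m}, {c,j}
```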
Association Rules:
If-then rules about the contents of baskets
{i1, i2,…,ik} → j means: “if a basket contains
all of i1,…,ik then it is likely to contain j”
In practice there are many rules, want to find
significant/interesting ones!
Confidence of this association rule is the
probability of j given I = {i1,…,ik}
conf(I → j) = support(I ∪ j) / support(I)
Not all high-confidence rules are interesting
▪ The rule X → milk may have high confidence for many
itemsets X, because milk is just purchased very often
(independent of X) and the confidence will be high
Interest of an association rule I → j:
difference between its confidence and the
fraction of baskets that contain j
Interest(I → j ) = conf( I → j ) − Pr[ j ]
▪ Interesting rules are those with high positive or
negative interest values (usually above 0.5)
▪ Positive interest: high confidence but low Pr[j] – j is rare overall yet frequently co-occurs with I (complementary items)
▪ Negative interest: low confidence but high Pr[j] – j is frequent overall yet seldom co-occurs with I (substitutive items)
B1 = {m, c, b} B2 = {m, p, j}
B3 = {m, b} B4= {c, j}
B5 = {m, p, b} B6 = {m, c, b, j}
B7 = {c, b, j} B8 = {b, c}
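As an illustration on these baskets (the rule {m, b} → c is my choice of example, not fixed by the slide), a short sketch computing confidence and interest:

```python
baskets = [
    {"m", "c", "b"}, {"m", "p", "j"}, {"m", "b"}, {"c", "j"},
    {"m", "p", "b"}, {"m", "c", "b", "j"}, {"c", "b", "j"}, {"b", "c"},
]

def support(itemset):
    return sum(1 for b in baskets if itemset <= b)

I, j = {"m", "b"}, "c"
conf = support(I | {j}) / support(I)            # 2/4 = 0.5
interest = conf - support({j}) / len(baskets)   # 0.5 - 5/8 = -0.125
print(conf, interest)  # c is bought often anyway, so the rule is not interesting
```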
Problem: Find all association rules with
support ≥s and confidence ≥c
▪ Note: Support of an association rule is the support
of the set of items on the left side
Hard part: Finding the frequent itemsets!
▪ If {i1, i2,…, ik} → j has high support and
confidence, then both {i1, i2,…, ik} and
{i1, i2,…,ik, j} will be “frequent”
conf(I → j) = support(I ∪ j) / support(I)
Step 1: Find all frequent itemsets I
▪ (we will explain this next)
Step 2: Rule generation
▪ For every subset A of I, generate a rule A → I \ A
▪ Since I is frequent, A is also frequent
▪ Variant 1: Single pass to compute the rule confidence
▪ confidence(A,B→C,D) = support(A,B,C,D) / support(A,B)
▪ Variant 2:
▪ Observation: If A,B,C→D is below confidence, so is A,B→C,D
▪ Can only generate “bigger” rules from smaller ones!
▪ Output the rules above the confidence threshold
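A sketch of Step 2 in the spirit of Variant 1, assuming `freq` maps every frequent itemset (as a frozenset) to its support count – the names here are hypothetical:

```python
from itertools import combinations

def generate_rules(freq, min_conf):
    """Yield rules (A, I \\ A, conf) above the confidence threshold.
    freq: dict frozenset -> support count, covering all frequent itemsets."""
    for I, supp_I in freq.items():
        for r in range(1, len(I)):
            for A in map(frozenset, combinations(I, r)):
                conf = supp_I / freq[A]  # A is frequent because I is (monotonicity)
                if conf >= min_conf:
                    yield A, I - A, conf
```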
B1 = {m, c, b} B2 = {m, p, j}
B3 = {m, c, b, n} B4= {c, j}
B5 = {m, p, b} B6 = {m, c, b, j}
B7 = {c, b, j} B8 = {b, c}
Support threshold s = 3, confidence c = 0.75
1) Frequent itemsets:
▪ {b,m} {b,c} {c,m} {c,j} {m,c,b}
2) Generate rules:
▪ b→m: c=4/6 b→c: c=5/6 b,c→m: c=3/5
▪ m→b: c=4/5 … b,m→c: c=3/4
▪ b→c,m: c=3/6
…
To reduce the number of rules we can
post-process them and only output:
▪ Maximal frequent itemsets: No superset is frequent
▪ i.e., all supersets are infrequent
▪ e.g., “{A, B, C} is frequent” is more informative than “{A, B} is frequent”
▪ Gives more pruning
or
▪ Closed itemsets:
No superset has the same count
▪ i.e., All supersets are less frequent
▪ But supersets can still be frequent
▪ Stores not only frequency information, but exact counts
Support threshold = 3

Itemset  Support  Maximal (s=3)  Closed
A        4        No             No
B        5        No             Yes
C        3        No             No      ← frequent, but superset BC is also frequent; BC has the same count
AB       4        Yes            Yes
BC       3        Yes            Yes     ← frequent, and its only superset, ABC, is not frequent
ABC      2        No             Yes
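A small sketch that reproduces the table’s Maximal/Closed columns from the counts (the dict of counts stands in for the output of a frequent-itemset algorithm):

```python
def classify(counts, s):
    """counts: dict frozenset -> support count. Returns (maximal, closed) sets."""
    maximal, closed = set(), set()
    for I, c in counts.items():
        supersets = [J for J in counts if I < J]  # proper supersets we know about
        if c >= s and not any(counts[J] >= s for J in supersets):
            maximal.add(I)  # frequent, and no superset is frequent
        if not any(counts[J] == c for J in supersets):
            closed.add(I)   # no superset has the same count
    return maximal, closed

counts = {frozenset("A"): 4, frozenset("B"): 5, frozenset("C"): 3,
          frozenset("AB"): 4, frozenset("BC"): 3, frozenset("ABC"): 2}
print(classify(counts, s=3))  # AB, BC are maximal; B, AB, BC, ABC are closed
```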
Back to finding frequent itemsets
Naïve approach to finding frequent pairs
Read file once, counting in main memory
the occurrences of each pair:
▪ From each basket of n items, generate its
n(n-1)/2 pairs by two nested loops
Fails if O(#items²) exceeds main memory
▪ Remember: #items can be 100K (Wal-Mart) or 10B (Web pages)
▪ Suppose 10^5 items, counts are 4-byte integers
▪ Number of pairs of items: 10^5(10^5 − 1)/2 ≈ 5·10^9
▪ Therefore, 2·10^10 bytes (20 gigabytes) of memory needed
For many frequent-itemset algorithms,
main-memory is the critical resource
▪ As we read baskets, we need to count
something, e.g., occurrences of pairs of items
▪ The number of different things we can count
is limited by main memory
▪ Swapping counts in/out is a disaster
▪ Randomly reading/writing disk is very time consuming
Two approaches:
Approach 1: Count all pairs using a matrix
Approach 2: Keep a table of triples [i, j, c] =
“the count of the pair of items {i, j} is c.”
▪ If integers and item ids are 4 bytes, we need
approximately 12 bytes for pairs with count > 0
▪ Plus some additional overhead for the hashtable
Note:
Approach 1 only requires 4 bytes per pair
Approach 2 uses 12 bytes per pair
(but only for pairs with count > 0)
[Figure: count layouts – triangular matrix, 4 bytes per pair; table of triples, 12 bytes per occurring pair]
Approach 1: Triangular Matrix
▪ n = total number items
▪ Count pair of items {i, j} only if i<j
▪ Keep pair counts in lexicographic order:
▪ {1,2}, {1,3},…, {1,n}, {2,3}, {2,4},…,{2,n}, {3,4},…
▪ Pair {i, j} is at position (i − 1)(n − i/2) + j − i
▪ Total number of pairs n(n –1)/2; total bytes= 2n(n-1)
▪ Triangular Matrix requires 4 bytes per pair
Approach 2 uses 12 bytes per occurring pair
(but only for pairs with count > 0)
▪ Beats Approach 1 if less than 1/3 of
possible pairs actually occur
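A one-function sketch of the triangular-matrix position formula (1-based, rewritten in integer arithmetic):

```python
def pair_index(i, j, n):
    """1-based position of pair {i, j}, 1 <= i < j <= n, in lexicographic
    order; algebraically equal to (i - 1)(n - i/2) + j - i."""
    assert 1 <= i < j <= n
    return (i - 1) * (2 * n - i) // 2 + j - i

# e.g., with n = 4 the pairs {1,2},{1,3},{1,4},{2,3},{2,4},{3,4} map to 1..6
```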
Problem: if we have too many items, the pairs do not fit into memory. Can we do better?
In practice, association-rule algorithms read the data in passes – all baskets are read in turn
Algorithms may read the data in multiple passes
The true cost of mining massive disk-resident data is usually the number of disk I/Os
We measure the cost by the number of passes an algorithm makes over the data
A two-pass approach called
A-Priori limits the need for
main memory
Key idea: monotonicity
▪ If a set of items I appears at
least s times, so does every subset J of I
Contrapositive for pairs:
If item i does not appear in s baskets, then no
pair including i can appear in s baskets
So, how does A-Priori find frequent pairs?
Pass 1: Read baskets and count in main memory
the occurrences of each individual item
▪ Requires only memory proportional to #items
▪ Items that appear at least s times are the frequent items
Pass 2: Read baskets again and count in main memory
only those pairs where both items are frequent (from Pass 1)
[Figure: memory layout – Pass 1 holds item counts; Pass 2 holds counts of pairs of frequent items (candidate pairs)]
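A compact sketch of the two passes, assuming the baskets are available as a re-iterable collection of sets (standing in for a file read twice):

```python
from collections import Counter
from itertools import combinations

def apriori_pairs(baskets, s):
    # Pass 1: count occurrences of each individual item
    item_counts = Counter()
    for basket in baskets:
        item_counts.update(basket)
    frequent_items = {i for i, c in item_counts.items() if c >= s}

    # Pass 2: count only pairs of frequent items (the candidate pairs)
    pair_counts = Counter()
    for basket in baskets:
        for pair in combinations(sorted(basket & frequent_items), 2):
            pair_counts[pair] += 1
    return {pair: c for pair, c in pair_counts.items() if c >= s}
```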
For each k, we construct two sets of
k-tuples (sets of size k):
▪ Ck = candidate k-tuples = those that might be
frequent k-tuple sets (support > s)
▪ based on information from the pass for k–1
▪ Lk = the set of truly frequent k-tuples
[Figure: C1 = all items → count the items → L1 → C2 = all pairs of items from L1 → count the pairs → L2 → C3 = to be explained]
** Note: here we generate new candidates by building Ck from Lk−1 and L1.
But one can be more careful with candidate generation. For example, in C3
we know {b,m,j} cannot be frequent since {m,j} is not frequent
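A sketch of that more careful construction: join Lk−1 with itself, then prune any candidate that contains an infrequent (k−1)-subset:

```python
from itertools import combinations

def gen_candidates(L_prev, k):
    """Build Ck from Lk-1: join (k-1)-itemsets, then prune any candidate
    with a (k-1)-subset outside Lk-1 (so {b,m,j} is dropped when {m,j}
    is not frequent)."""
    L_prev = set(map(frozenset, L_prev))
    joined = {a | b for a in L_prev for b in L_prev if len(a | b) == k}
    return {c for c in joined
            if all(frozenset(sub) in L_prev for sub in combinations(c, k - 1))}
```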
Observation:
In pass 1 of A-Priori, most memory is idle
▪ We store only individual item counts
▪ Can we use the idle memory to reduce
memory required in pass 2?
Pass 1 of PCY: In addition to item counts,
maintain a hash table with as many
buckets as fit in memory
▪ Hash function should be deterministic
▪ Keep a count for each bucket into which
pairs of items are hashed
▪ For each bucket just keep the count, not the actual
pairs that hash to the bucket!
Pass 1 of PCY:
FOR (each basket):
    FOR (each item in the basket):
        add 1 to item's count;
    FOR (each pair of items in the basket):   // new in PCY
        hash the pair to a bucket;
        add 1 to the count for that bucket;

Pass 2:
Only count pairs that hash to frequent buckets
How do we reserve memory for pass 2?
Replace the buckets by a bit-vector:
▪ 1 means the bucket count exceeded the support s
(call it a frequent bucket); 0 means it did not
Also, decide which items are frequent
and list them for the second pass
[Figure: PCY memory layout – Pass 1: item counts + hash table for pairs; Pass 2: frequent items, bitmap of frequent buckets, counts of candidate pairs]
Count all pairs {i, j} that meet the
conditions for being a candidate pair:
1. Both i and j are frequent items
2. The pair {i, j} hashes to a bucket whose bit in
the bit vector is 1 (i.e., a frequent bucket)
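Putting both passes together, a sketch of PCY (the bucket count and Python's built-in hash stand in for "as many buckets as fit in memory" and a fixed hash function; a set of frequent-bucket ids stands in for the bit-vector):

```python
from collections import Counter
from itertools import combinations

def pcy_pairs(baskets, s, n_buckets):
    bucket = lambda pair: hash(pair) % n_buckets  # fixed within one run

    # Pass 1: item counts plus counts of the buckets that pairs hash into
    item_counts, bucket_counts = Counter(), Counter()
    for basket in baskets:
        item_counts.update(basket)
        for pair in combinations(sorted(basket), 2):
            bucket_counts[bucket(pair)] += 1
    frequent_items = {i for i, c in item_counts.items() if c >= s}
    bitmap = {b for b, c in bucket_counts.items() if c >= s}  # frequent buckets

    # Pass 2: count a pair only if both items are frequent AND its bucket is
    pair_counts = Counter()
    for basket in baskets:
        for pair in combinations(sorted(basket & frequent_items), 2):
            if bucket(pair) in bitmap:
                pair_counts[pair] += 1
    return {pair: c for pair, c in pair_counts.items() if c >= s}
```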
Key idea: After Pass 1 of PCY, rehash only
those pairs that qualify for Pass 2 of PCY
▪ i and j are frequent, and
▪ {i, j} hashes to a frequent bucket from Pass 1
[Figure: Multistage memory layout – Pass 1: item counts + first hash table; Pass 2: frequent items, Bitmap 1, second hash table; Pass 3: frequent items, Bitmap 1, Bitmap 2, counts of candidate pairs]
1. The two hash functions have to be
independent
2. We need to check both hashes on the
third pass
▪ If not, we may count a pair of frequent items
i and j that hashed to a frequent bucket in the
first hash but happened to hash to an
infrequent bucket in the second hash
Key idea: Use several independent hash
tables on the first pass
[Figure: Multihash memory layout – Pass 1: item counts, first hash table, second hash table; Pass 2: frequent items, Bitmap 1, Bitmap 2, counts of candidate pairs]
Either multistage or multihash can use more
than two hash functions
In multistage, there is a point of diminishing
returns, since the bit-vectors eventually
consume all of main memory
A-Priori, PCY, etc., take k passes to find
frequent itemsets of size k
Can we use fewer passes?
Use 2 or fewer passes for all sizes,
but may miss some frequent itemsets
▪ Random sampling
▪ SON (Savasere, Omiecinski, and Navathe)
▪ Toivonen (see textbook)
Take a random sample of the market baskets
Run A-Priori or one of its improvements in main memory
▪ So we don't pay for disk I/O each time we increase the size of itemsets
▪ Reduce support threshold proportionally to match the sample size
▪ e.g., sample 10% of the baskets, then use s/10
[Figure: main memory holds a copy of the sample baskets plus space for counts]
Optionally, verify that the candidate pairs are
truly frequent in the entire data set by a
second pass (avoid false positives)
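A sketch of the sampling idea, reusing the `apriori_pairs` sketch from earlier (the 10% fraction matches the slide's example; the list of baskets is an assumption):

```python
import random

def sampled_frequent_pairs(baskets, s, fraction=0.1):
    # Keep each basket with probability `fraction`, scale the threshold to match
    sample = [b for b in baskets if random.random() < fraction]
    candidates = apriori_pairs(sample, s * fraction)  # e.g., s/10 for a 10% sample
    # An optional second pass over all baskets would verify candidates here
    return candidates
```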
Repeatedly read small subsets of the baskets
into main memory and run an in-memory
algorithm to find all frequent itemsets
▪ Note: we are not sampling, but processing the
entire file in memory-sized chunks
On a second pass, count all the candidate
itemsets and determine which are frequent in
the entire set
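A sketch of the SON two-pass scheme for pairs, again reusing the `apriori_pairs` sketch (chunking a list by slicing stands in for reading memory-sized pieces of a file):

```python
from collections import Counter

def son_pairs(baskets, s, n_chunks):
    # Pass 1: find locally frequent pairs in each chunk with a scaled threshold;
    # a pair frequent in the whole file must be frequent in at least one chunk
    candidates = set()
    for c in range(n_chunks):
        chunk = baskets[c::n_chunks]
        candidates |= set(apriori_pairs(chunk, s / n_chunks))

    # Pass 2: count every candidate over the entire data set
    counts = Counter()
    for basket in baskets:
        for pair in candidates:
            if set(pair) <= basket:
                counts[pair] += 1
    return {pair: cnt for pair, cnt in counts.items() if cnt >= s}
```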