UNIT III
TREES
Tree ADT – Tree Traversals - Binary Tree ADT – Expression trees – Binary Search Tree ADT – AVL Trees –
Priority Queue (Heaps) – Binary Heap.
Each node contains info, left, right and father fields. The left, right and father fields of a node point to
the node’s left son, right son and father respectively.
Example: a skewed binary tree stored sequentially (in an array), with unused positions marked '-':
A  B  -  C  -  -  -  D  -
The above representation appears to be good for complete binary trees and wasteful for many other binary trees. In addition, the insertion or deletion of nodes from the middle of a tree requires the movement of many nodes to reflect the change in level number of these nodes.
Figure 2.5 and Figure 2.6 show the same complete binary tree, with root A, children B and C, and grandchildren D, E, F, G, stored sequentially in an array.

Figure 2.5 (indices start at 0):
Index:    0  1  2  3  4  5  6
Contents: A  B  C  D  E  F  G
For a node at index i: leftchild = 2i + 1, rightchild = 2i + 2.

Figure 2.6 (indices start at 1):
Index:    1  2  3  4  5  6  7
Contents: A  B  C  D  E  F  G
For a node at index i: leftchild = 2i, rightchild = 2i + 1.
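As a quick illustration of the 1-based indexing scheme of Figure 2.6, the following small C sketch computes the children of a node from its index (the array name and size are chosen for the example and are not from the original notes):

#include <stdio.h>

/* Sequential (array) representation of the complete tree A..G,
   stored from index 1 as in Figure 2.6; index 0 is unused. */
char tree[8] = { ' ', 'A', 'B', 'C', 'D', 'E', 'F', 'G' };

int main(void)
{
    int i = 2;                      /* node B                          */
    int left  = 2 * i;              /* index of its left child  (D)    */
    int right = 2 * i + 1;          /* index of its right child (E)    */
    printf("node %c: left child %c, right child %c\n",
           tree[i], tree[left], tree[right]);
    return 0;
}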
The problems of sequential representation can be easily overcome through the use of a linked
representation.
Each node will have three fields LCHILD, DATA and RCHILD as represented below
In most applications this is adequate, but this structure makes it difficult to determine the parent of a node, since the links allow only forward (downward) movement.
struct treenode
{
    int data;
    struct treenode *leftchild;
    struct treenode *rightchild;
} *T;
(Figure: linked representation of a binary tree in memory. T holds the address of the root node, and each node stores its data together with the addresses of its left and right children.)
CONVERSION OF A GENERAL TREE TO BINARY TREE
General Tree:
A General Tree is a tree in which each node can have an unlimited out degree. Each node may have as
many children as is necessary to satisfy its requirements. Example: Directory Structure
(Figure: an example general tree with children B, F and G under the root and further children C, H, I and J at the next level.)
It is easier to represent binary trees in programs than general trees, so general trees can be represented in binary tree format.
The binary tree format can be adopted by changing the meaning of the left and right pointers. There are two relationships in a binary tree:
Parent to child
Sibling to sibling
Using these relationships, the general tree can be implemented as a binary tree.
Algorithm
1. Identify the branch from the parent to its first (leftmost) child; these branches become left pointers in the binary tree.
2. Connect siblings, starting with the leftmost child, using a branch from each sibling to its right sibling.
3. Remove all unconnected branches from the parent to its children.
(A node declaration sketch is given after the figure below.)
(Figure: (a) a general tree with root A, children B, E, F, and grandchildren C, D, G, H, I; (b) the resulting binary tree after conversion, in which each node's left pointer leads to its first child and its right pointer leads to its next sibling.)
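The conversion can be expressed directly in code: the general-tree node's first-child and next-sibling links become the binary tree's left and right pointers. The following declaration is a sketch (the field names firstchild and nextsibling are illustrative and not from the notes):

/* A general tree node stored in binary-tree form:
   left  pointer = first (leftmost) child
   right pointer = next sibling to the right      */
struct gtnode
{
    char data;
    struct gtnode *firstchild;   /* plays the role of the left pointer  */
    struct gtnode *nextsibling;  /* plays the role of the right pointer */
};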
TREE TRAVERSALS
Compared to linear data structures like linked lists and one-dimensional arrays, which have only one logical means of traversal, tree structures can be traversed in many different ways. Starting at the root of a binary tree, there are three main steps that can be performed, and the order in which they are performed defines the traversal type. These steps (in no particular order) are: performing an action on the current node (referred to as "visiting" the node), traversing to the left child node, and traversing to the right child node. Thus the process is most easily described through recursion.
A binary tree traversal requires that each node of the tree be processed once and only once in a
predetermined sequence.
The two general approaches to the traversal sequence are:
Depth-first traversal
Breadth-first traversal
Breadth-First Traversal
In a breadth-first traversal, the processing proceeds horizontally from the root to all its children, then to its children's children, and so forth until all nodes have been processed. In other words, in breadth-first traversal, each level is completely processed before the next level is started.
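A breadth-first (level-order) traversal is usually written with a queue rather than with recursion. The sketch below uses the treenode declaration given earlier and a simple array as the queue (the fixed bound of 100 nodes is an assumption made for illustration):

#include <stdio.h>

struct treenode
{
    int data;
    struct treenode *leftchild;
    struct treenode *rightchild;
};

/* Visit the nodes level by level, from left to right. */
void levelorder_traversal(struct treenode *root)
{
    struct treenode *queue[100];          /* assumed upper bound on node count */
    int front = 0, rear = 0;

    if (root == NULL)
        return;
    queue[rear++] = root;                 /* enqueue the root */
    while (front < rear)
    {
        struct treenode *t = queue[front++];   /* dequeue the next node */
        printf("%d \t", t->data);              /* process the node      */
        if (t->leftchild != NULL)
            queue[rear++] = t->leftchild;
        if (t->rightchild != NULL)
            queue[rear++] = t->rightchild;
    }
}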
Depth-First Traversal
In a depth-first traversal, the processing proceeds along a path from the root through one child to the most distant descendant of that first child before processing a second child. In other words, in depth-first traversal, all the descendants of a child are processed before going to the next child.
Inorder Traversal
Steps:
Traverse left subtree in inorder
Process root node
Traverse right subtree in inorder
(Example tree: root A with left child B and right child E; B has children C and D; E has a right child F.)
The output is: C B D A E F
Algorithm
Algorithm inorder_traversal (BinTree T)
Begin
    If (not empty(T)) then
    Begin
        Inorder_traversal (left subtree(T))
        Print (info(T))    /* process node */
        Inorder_traversal (right subtree(T))
    End
End
Routines
void inorder_traversal(NODE *T)
{
    if (T != NULL)
    {
        inorder_traversal(T->lchild);
        printf("%d \t", T->info);
        inorder_traversal(T->rchild);
    }
}
Preorder Traversal
Steps:
Process root node
Traverse left subtree in preorder
Traverse right subtree in preorder
Algorithm
Algorithm preorder_traversal (BinTree T)
Begin
    If (not empty(T)) then
    Begin
        Print (info(T))    /* process node */
        Preorder_traversal (left subtree(T))
        Preorder_traversal (right subtree(T))
    End
End
Routines
void preorder_traversal(NODE *T)
{
    if (T != NULL)
    {
        printf("%d \t", T->info);
        preorder_traversal(T->lchild);
        preorder_traversal(T->rchild);
    }
}
(Example: the same tree as above, rooted at A.)
The output is: A B C D E F
Postorder Traversal
Steps:
Traverse left subtree in postorder
Traverse right subtree in postorder
Process root node
Algorithm
Algorithm postorder_traversal (BinTree T)
Begin
    If (not empty(T)) then
    Begin
        Postorder_traversal (left subtree(T))
        Postorder_traversal (right subtree(T))
        Print (info(T))    /* process node */
    End
End
Routines
void postorder_traversal(NODE *T)
{
    if (T != NULL)
    {
        postorder_traversal(T->lchild);
        postorder_traversal(T->rchild);
        printf("%d \t", T->info);
    }
}
(Example: the same tree as above.)
The output is: C D B F E A
Examples:
(Example tree: root A with left child B and right child C; B has children D and E; C has children F and G.)
ANSWER:
POSTORDER: D E B F G C A
INORDER: D B E A F C G
PREORDER: A B D E C F G
3. A BINARY TREE HAS 8 NODES. THE INORDER AND POSTORDER TRAVERSALS OF THE TREE ARE GIVEN BELOW. DRAW THE TREE AND FIND THE PREORDER.
POSTORDER: F E C H G D B A
INORDER: F C E A B H D G
Answer:
(Tree: root A; its left child is C, with children F and E; its right child is B, whose right child D has children H and G.)
PREORDER: A C F E B D H G
Example 4
Preorder traversal sequence: F, B, A, D, C, E, G, I, H (root, left, right)
Inorder traversal sequence: A, B, C, D, E, F, G, H, I (left, root, right)
Postorder traversal sequence: A, C, E, D, B, H, I, G, F (left, right, root)
APPLICATIONS
1. Some applications of preorder traversal are the evaluation of expressions in prefix notation and the
processing of abstract syntax trees by compilers.
2. Binary search trees (a special type of binary tree) use inorder traversal to print all of their data in alphanumeric order.
3. A popular application of postorder traversal is the evaluation of expressions in postfix notation.
EXPRESSION TREES
a/b + (c-d)*e
(Figure: the expression tree representing a/b + (c-d)*e, in which operators are interior nodes and operands are leaves.)
Algorithm (conversion of infix to postfix)
1) Examine the next element in the input.
2) If it is an operand, output it.
3) If it is opening parenthesis, push it on stack.
4) If it is an operator, then
i) If stack is empty, push operator on stack.
ii) If the top of the stack is opening parenthesis, push operator on stack.
iii) If it has higher priority than the top of stack, push operator on stack.
iv) Else pop the operator from the stack and output it, repeat step 4.
5) If it is a closing parenthesis, pop operators from the stack and output them until an opening parenthesis is encountered. Pop and discard the opening parenthesis.
6) If there is more input go to step 1
7) If there is no more input, unstack the remaining operators to output.
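The steps above can be coded directly with a character stack. The sketch below handles single-character operands and the operators +, -, *, / and parentheses; the function and array names are illustrative, not from the notes:

#include <stdio.h>
#include <ctype.h>

static int priority(char op)
{
    if (op == '*' || op == '/') return 2;
    if (op == '+' || op == '-') return 1;
    return 0;                             /* '(' gets the lowest priority */
}

/* Convert an infix expression with single-character operands to postfix. */
void infix_to_postfix(const char *infix, char *postfix)
{
    char stack[100];                      /* operator stack */
    int top = -1, j = 0;

    for (int i = 0; infix[i] != '\0'; i++)
    {
        char c = infix[i];
        if (isalnum((unsigned char)c))
            postfix[j++] = c;                        /* operand: output it   */
        else if (c == '(')
            stack[++top] = c;                        /* push '('             */
        else if (c == ')')
        {
            while (top >= 0 && stack[top] != '(')    /* pop until '('        */
                postfix[j++] = stack[top--];
            top--;                                   /* discard '('          */
        }
        else                                         /* operator             */
        {
            while (top >= 0 && priority(stack[top]) >= priority(c))
                postfix[j++] = stack[top--];
            stack[++top] = c;
        }
    }
    while (top >= 0)                                 /* unstack the rest     */
        postfix[j++] = stack[top--];
    postfix[j] = '\0';
}

int main(void)
{
    char out[100];
    infix_to_postfix("2*3/(2-1)+5*(4-1)", out);
    printf("%s\n", out);                  /* prints 23*21-/541-*+ */
    return 0;
}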
Example
Suppose we want to convert 2*3/(2-1)+5*(4-1) into postfix form. The table below shows the symbol read, the operator stack and the output (the steps for the first few symbols 2, *, 3, / are not shown):

Symbol   Stack    Postfix expression
(        /(       23*
2        /(       23*2
-        /(-      23*2
1        /(-      23*21
)        /        23*21-
+        +        23*21-/
5        +        23*21-/5
*        +*       23*21-/5
(        +*(      23*21-/5
4        +*(      23*21-/54
-        +*(-     23*21-/54
1        +*(-     23*21-/541
)        +*       23*21-/541-
(end)    Empty    23*21-/541-*+

So the final postfix expression is 23*21-/541-*+.
Converting infix to prefix is a slightly trickier algorithm. We first reverse the input expression, so that a+b*c becomes c*b+a, then perform the conversion, and then reverse the output string again. Doing this has the advantage that, except for some minor modifications, the algorithm for infix-to-prefix remains almost the same as the one for infix-to-postfix.
Algorithm (conversion of infix to prefix)
1) Reverse the input string.
2) Examine the next element in the input.
3) If it is operand, add it to output string.
4) If it is Closing parenthesis, push it on stack.
5) If it is an operator, then
i) If stack is empty, push operator on stack.
ii) If the top of stack is closing parenthesis, push operator on stack.
iii) If it has same or higher priority than the top of stack, push operator on stack.
iv) Else pop the operator from the stack and add it to output string, repeat step 5.
6) If it is an opening parenthesis, pop operators from the stack and add them to the output string until a closing parenthesis is encountered. Pop and discard the closing parenthesis.
7) If there is more input go to step 2
8) If there is no more input, unstack the remaining operators and add them to output string.
9) Reverse the output string.
Example
Suppose we want to convert 2*3/(2-1)+5*(4-1) into prefix form.
Reversed expression: )1-4(*5+)1-2(/3*2

Symbol   Stack    Output
)        )
1        )        1
-        )-       1
4        )-       14
(        Empty    14-
*        *        14-
5        *        14-5
+        +        14-5*
)        +)       14-5*
1        +)       14-5*1
-        +)-      14-5*1
2        +)-      14-5*12
(        +        14-5*12-
/        +/       14-5*12-
3        +/       14-5*12-3
*        +/*      14-5*12-3
2        +/*      14-5*12-32
(end)    Empty    14-5*12-32*/+

Reverse the output string: +/*23-21*5-41
So, the final prefix expression is +/*23-21*5-41.
EVALUATION OF EXPRESSIONS
CONSTRUCTING AN EXPRESSION TREE
Let us consider the postfix expression given as the input, for constructing an expression tree by performing the
following steps :
1. Read one symbol at a time from the postfix expression.
2. Check whether the symbol is an operand or an operator.
i. If the symbol is an operand, create a one-node tree and push a pointer to it onto the stack.
ii. If the symbol is an operator, pop two pointers from the stack, namely T1 and T2, and form a new tree with the operator as root, T2 as the left child and T1 as the right child.
iii. A pointer to this new tree is then pushed onto the stack.
We now give an algorithm to convert a postfix expression into an expression tree. Since we already have
an algorithm to convert infix to postfix, we can generate expression trees from the two common types of
input. The method we describe strongly resembles the postfix evaluation algorithm of Section
3.2.3. We read our expression one symbol at a time. If the symbol is an operand, we create a one-node
tree and push a pointer to it onto a stack. If the symbol is an operator, we pop pointers to two trees T1
and T2 from the stack (T1 is popped first) and form a new tree whose root is the operator and whose left
and right children point to T2 and T1 respectively. A pointer to this new tree is then pushed onto the
stack.
As an example, consider the postfix input a b + c d e + * *.
The first two symbols are operands, so we create one-node trees and push pointers to them onto a stack. (For convenience, we will have the stack grow from left to right in the diagrams.)
Next, a '+' is read, so two pointers to trees are popped, a new tree is formed, and a pointer to it is pushed onto the stack.
Next, c, d, and e are read, and for each a one-node tree is created and a pointer to the corresponding tree is
pushed onto the stack.
Finally, the last symbol is read, two trees are merged, and a pointer to the final tree is left on the stack.
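A sketch of this construction in C, assuming single-character postfix input; the type and helper names (exprnode, newnode, build_expression_tree) are illustrative:

#include <stdlib.h>
#include <ctype.h>

struct exprnode
{
    char data;
    struct exprnode *left, *right;
};

static struct exprnode *newnode(char c)
{
    struct exprnode *n = malloc(sizeof(struct exprnode));
    n->data = c;
    n->left = n->right = NULL;
    return n;
}

/* Build an expression tree from a postfix string such as "ab+cde+**". */
struct exprnode *build_expression_tree(const char *postfix)
{
    struct exprnode *stack[100];          /* stack of subtree pointers */
    int top = -1;

    for (int i = 0; postfix[i] != '\0'; i++)
    {
        char c = postfix[i];
        if (isalnum((unsigned char)c))
            stack[++top] = newnode(c);    /* operand: a one-node tree  */
        else
        {
            struct exprnode *t1 = stack[top--];   /* popped first      */
            struct exprnode *t2 = stack[top--];   /* popped second     */
            struct exprnode *op = newnode(c);
            op->left = t2;                /* T2 becomes the left child  */
            op->right = t1;               /* T1 becomes the right child */
            stack[++top] = op;
        }
    }
    return stack[top];                    /* pointer to the final tree  */
}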
BINARY SEARCH TREE
A binary search tree (BST) is a node-based binary tree data structure which has the following properties:
The left sub-tree of a node contains only nodes with keys less than the node's key.
The right sub-tree of a node contains only nodes with keys greater than the node's key.
We assume that every node of a binary search tree is capable of holding an integer data item and that the
links can be made to point to the root of the left subtree and the right subtree, respectively. Therefore,
the structure of the node can be defined using the following declaration:
struct tnode
{
int data;
struct tnode *lchild,*rchild;
};
(Figure: building a binary search tree by inserting the keys 4, 5, 7, 2 and 1 one at a time; since 1 < 2, the key 1 becomes the left child of 2 in the final BST.)
OPERATIONS
Operations on a binary tree require comparisons between nodes. These comparisons are made
with calls to a comparator, which is a subroutine that computes the total order (linear order) on
any two values. This comparator can be explicitly or implicitly defined, depending on the
language in which the BST is implemented.
The following are the operations that are performed on a binary search tree:
Searching
Sorting
Deletion
Insertion
Binary search tree declaration routine
struct treenode;
typedef struct treenode *position;
typedef struct treenode *searchtree;
typedef int elementtype;

struct treenode
{
    elementtype element;
    searchtree left;
    searchtree right;
};

Equivalently, with an int key:
struct treenode
{
    int element;
    struct treenode *left;
    struct treenode *right;
};
Make_null
This operation is mainly for initialization. Some programmers prefer to initialize the first element as a one-
node tree, but our implementation follows the recursive definition of trees more closely. It is also a simple
routine.
searchtree make_null(void)
{
    return NULL;
}
Find
This operation generally requires returning a pointer to the node in tree T that has key x, or NULL if there is no such node. The structure of the tree makes this simple. If T is NULL, then we can just return NULL. Otherwise, if the key stored at T is x, we can return T. Otherwise, we make a recursive call on a subtree of T, either left or right, depending on the relationship of x to the key stored in T. The sketch below is an implementation of this strategy.
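A minimal sketch of such a find routine, using the int-keyed treenode declaration given above:

/* Return a pointer to the node holding x, or NULL if x is not in T. */
struct treenode *find(int x, struct treenode *T)
{
    if (T == NULL)
        return NULL;                 /* empty tree: x is not present     */
    if (x < T->element)
        return find(x, T->left);     /* x can only be in the left subtree  */
    else if (x > T->element)
        return find(x, T->right);    /* x can only be in the right subtree */
    else
        return T;                    /* the key at this node is x          */
}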
Insert
The insertion routine is conceptually simple. To insert x into tree T, proceed down the tree as you would with a find. If x is found, do nothing (or "update" something). Otherwise, insert x at the last spot on the path traversed. The figure below shows what happens. To insert 5, we traverse the tree as though a find were occurring. At the node with key 4, we need to go right, but there is no subtree, so 5 is not in the tree, and this is the correct spot.
Duplicates can be handled by keeping an extra field in the node record indicating the frequency of
occurrence. This adds some extra space to the entire tree, but is better than putting duplicates in the tree
(which tends to make the tree very deep). Of course this strategy does not work if the key is only part of a
larger record. If that is the case, then we can keep all of the records that have the same key in an auxiliary
data structure, such as a list or another search tree.
The insertion routine is sketched below. Since T points to the root of the tree, and the root changes on the first insertion, insert is written as a function that returns a pointer to the root of the new tree. The recursive calls insert and attach x into the appropriate subtree.
Thus 5 is inserted.
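A minimal sketch of the insertion routine described above, again using the int-keyed treenode declaration:

#include <stdlib.h>

/* Insert x into the tree rooted at T and return the (possibly new) root. */
struct treenode *insert(int x, struct treenode *T)
{
    if (T == NULL)
    {
        /* Empty spot found: create a one-node tree. */
        T = malloc(sizeof(struct treenode));
        if (T != NULL)
        {
            T->element = x;
            T->left = T->right = NULL;
        }
    }
    else if (x < T->element)
        T->left = insert(x, T->left);    /* insert into the left subtree  */
    else if (x > T->element)
        T->right = insert(x, T->right);  /* insert into the right subtree */
    /* if x is already present, do nothing */
    return T;
}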
Delete
As is common with many data structures, the hardest operation is deletion. Once we have found the node
to be deleted, we need to consider several possibilities.
If the node is a leaf, it can be deleted immediately. If the node has one child, the node can be deleted after its parent adjusts a pointer to bypass the node (we will draw the pointer directions explicitly for clarity). Notice that the deleted node is now unreferenced and can be disposed of only if a pointer to it has been saved. The complicated case deals with a node that has two children.
The general strategy is to replace the key of this node with the smallest key of the right subtree (which is
easily found) and recursively delete that node (which is now empty). Because the smallest node in the
right subtree cannot have a left child, the second delete is an easy one.
EXAMPLE:
Case 1: the node to be deleted is a leaf; it is removed directly.
Case 2: the node to be deleted has one child; the child is linked to the deleted node's parent (for example, deleting 4, whose only child is 3).
Case 3: the node to be deleted has two children; its key is replaced by the smallest key of its right subtree, and that node is then deleted (for example, 2 is replaced by 3).
The code sketched below performs deletion. It is inefficient, because it makes two passes down the tree to find and delete the smallest node in the right subtree when this is appropriate. It is easy to remove this inefficiency by writing a special delete_min function; we have left it in only for simplicity.
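A sketch of the deletion routine along these lines; findmin is a helper that locates the smallest node in a subtree, and the two-pass behaviour mentioned above is kept for simplicity:

#include <stdlib.h>

/* Return the node holding the smallest key in T. */
static struct treenode *findmin(struct treenode *T)
{
    if (T != NULL)
        while (T->left != NULL)
            T = T->left;
    return T;
}

/* Delete x from the tree rooted at T and return the new root. */
struct treenode *delete(int x, struct treenode *T)
{
    struct treenode *tmp;

    if (T == NULL)
        return NULL;                      /* x not found: nothing to do */
    if (x < T->element)
        T->left = delete(x, T->left);
    else if (x > T->element)
        T->right = delete(x, T->right);
    else if (T->left != NULL && T->right != NULL)
    {
        /* Two children: copy in the smallest key of the right subtree,
           then delete that key from the right subtree. */
        tmp = findmin(T->right);
        T->element = tmp->element;
        T->right = delete(T->element, T->right);
    }
    else
    {
        /* Zero or one child: bypass the node and free it. */
        tmp = T;
        T = (T->left != NULL) ? T->left : T->right;
        free(tmp);
    }
    return T;
}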
If the number of deletions is expected to be small, then a popular strategy to use is lazy deletion: When
an element is to be deleted, it is left in the tree and merely marked as being deleted. This is especially
popular if duplicate keys are present, because then the field that keeps count of the frequency of
appearance can be decremented. If the number of real nodes in the tree is the same as the number of
"deleted" nodes, then the depth of the tree is only expected to go up by a small constant (why?), so there
is a very small time penalty associated with lazy deletion. Also, if a deleted key is reinserted, the
overhead of allocating a new cell is avoided.
COUNTING THE NUMBER OF NODES IN A BINARY TREE
Introduction
To count the number of nodes in a given binary tree, the tree is required to be traversed recursively until
a leaf node is encountered. When a leaf node is encountered, a count of 1 is returned to its previous
activation (which is an activation for its parent), which takes the count returned from both the children's
activation, adds 1 to it, and returns this value to the activation of its parent. This way, when the
activation for the root of the tree returns, it returns the count of the total number of the nodes in the tree.
Program
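The full program listing is not reproduced here; the following is a minimal sketch of the counting routine based on the description above. Tree creation and the inorder printing are assumed to be provided elsewhere:

struct tnode
{
    int data;
    struct tnode *lchild, *rchild;
};

/* Return the number of nodes in the tree rooted at p. */
int count(struct tnode *p)
{
    if (p == NULL)
        return 0;                                      /* empty subtree           */
    return 1 + count(p->lchild) + count(p->rchild);    /* this node + both subtrees */
}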
Explanation
Input:
1. The number of nodes that the tree to be created should have
2. The data values of each node in the tree to be created
Output:
1. The data values of the nodes of the tree in inorder
2. The count of the number of nodes in the tree
Example
Input:
1. The number of nodes the created tree should have = 5
2. The data values of the nodes in the tree to be created are: 10, 20, 5, 9, 8
Output:
1. 5 8 9 10 20
2. The number of nodes in the tree is 5
SWAPPING THE LEFT AND RIGHT SUBTREES OF A BINARY TREE
Introduction
An elegant method of swapping the left and right subtrees of a given binary tree makes use of a recursive
algorithm, which recursively swaps the left and right subtrees, starting from the root.
Program
#include <stdio.h>
#include <stdlib.h>

struct tnode
{
    int data;
    struct tnode *lchild, *rchild;
};
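A sketch of the recursive swapping routine described above; the creation and inorder-printing routines of the full program are omitted:

/* Recursively interchange the left and right subtrees of every node. */
void swap_subtrees(struct tnode *p)
{
    struct tnode *temp;

    if (p != NULL)
    {
        temp = p->lchild;            /* exchange the two child pointers   */
        p->lchild = p->rchild;
        p->rchild = temp;
        swap_subtrees(p->lchild);    /* repeat in the (new) left subtree  */
        swap_subtrees(p->rchild);    /* repeat in the (new) right subtree */
    }
}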
Input:
1. The number of nodes that the tree to be created should have
2. The data values of each node in the tree to be created
Output:
1. The data values of the nodes of the tree in inorder before interchanging the left and right subtrees
2. The data values of the nodes of the tree in inorder after interchanging the left and right subtrees
Example
Input:
1. The number of nodes that the created tree should have = 5
2. The data values of the nodes in the tree to be created are: 10, 20, 5, 9, 8
Output:
1. 5 8 9 10 20
2. 20 10 9 8 5
One of the applications of a binary search tree is the implementation of a dynamic dictionary. This application is appropriate because a dictionary is an ordered list that is required to be searched frequently, and is also required to be updated (by insertions and deletions) frequently. So it can be implemented by making the entries in a dictionary into the nodes of a binary search tree. A more
efficient implementation of a dynamic dictionary involves considering a key to be a sequence of
characters, and instead of searching by comparison of entire keys, we use these characters to determine a
multi-way branch at each step. This will allow us to make a 26-way branch according to the first letter,
followed by another branch according to the second letter and so on.
Applications of Trees
1. Compiler Design.
2. Unix / Linux.
3. Database Management.
4. Trees are very important data structures in computing.
5. They are suitable for:
a. Hierarchical structure representation, e.g.,
i. File directory.
ii. Organizational structure of an institution.
iii. Class inheritance tree.
b. Problem representation, e.g.,
i. Expression tree.
ii. Decision tree.
c. Efficient algorithmic solutions, e.g.,
i. Search trees.
ii. Efficient priority queues via heaps.
AVL TREE
The AVL tree is named after its two inventors, G.M. Adelson-Velsky and E.M. Landis, who published it
in their 1962 paper "An algorithm for the organization of information."
An AVL tree is a self-balancing binary search tree. In an AVL tree, the heights of the two child subtrees of any node differ by at most one; therefore, it is also said to be height-balanced.
The balance factor of a node is the height of its right subtree minus the height of its left subtree, and a node with balance factor 1, 0, or -1 is considered balanced. A node with any other balance factor is considered unbalanced and requires rebalancing the tree. This can be done by AVL tree rotations.
Need for AVL tree
The disadvantage of a binary search tree is that its height can be as large as N-1.
This means that the time needed to perform insertion, deletion and many other operations can be O(N) in the worst case.
We want a tree with small height.
A binary tree with N nodes has height at least Θ(log N).
Thus, our goal is to keep the height of a binary search tree O(log N). Such trees are called balanced binary search trees. Examples are the AVL tree and the red-black tree.
An AVL tree is a special type of binary tree that is always "partially" balanced. The criterion used to determine the "level" of "balanced-ness" is the difference between the heights of the subtrees of a root in the tree. The "height" of a tree is the "number of levels" in the tree, i.e. the length of the longest path from the root to a leaf.
AVL trees are identical to standard binary search trees except that for every node in an AVL tree, the height of the left and right subtrees can differ by at most 1. AVL trees are HB-k trees (height-balanced trees of order k) of order HB-1. The following is the height differential formula:
|Height(Tl) - Height(Tr)| <= k
When storing an AVL tree, a field must be added to each node with one of three values: 1, 0, or -1. A value of 1 in this field means that the left subtree has a height one more than the right subtree. A value of -1 denotes the opposite. A value of 0 indicates that the heights of both subtrees are the same.
EXAMPLE FOR HEIGHT OF AVL TREE
Only if BF is in {-1, 0, 1} for every node is the tree balanced; an AVL tree is a height-balanced tree.
If the calculated value of BF goes out of this range, then balancing has to be done.
Rotation:
Modification to the tree, i.e., if the AVL tree is imbalanced, proper rotations have to be done.
A rotation is a process of switching children and parents among two or three adjacent nodes to restore balance to a tree.
Balance Factor:
(Figure: a tree with root 7, children 5 and 12, and further nodes 2, 10, 14, with the balance factor of each node shown.)
1. LL Rotation :
2. RR Rotation :
EXAMPLE:
LET US CONSIDER THE INSERTION OF THE NODES 20, 10, 40, 50, 90, 30, 60, 70 INTO AN AVL TREE.
struct avlnode;
typedef struct avlnode *position;
typedef struct avlnode *avltree;
typedef int elementtype;

struct avlnode
{
    elementtype element;
    avltree left;
    avltree right;
    int height;
};

static int height(position P)
{
    if (P == NULL)
        return -1;
    else
        return P->height;
}
avltree insert(elementtype X, avltree T)
{
    if (T == NULL)
    {
        /* Create and return a one-node tree */
        T = malloc(sizeof(struct avlnode));
        if (T == NULL)
            Fatalerror("Out of Space");
        else
        {
            T->element = X;
            T->height = 0;
            T->left = T->right = NULL;
        }
    }
    else if (X < T->element)
    {
        T->left = insert(X, T->left);
        if (height(T->left) - height(T->right) == 2)
        {
            if (X < T->left->element)
                T = singlerotatewithleft(T);
            else
                T = doublerotatewithleft(T);
        }
    }
    else if (X > T->element)
    {
        T->right = insert(X, T->right);
        if (height(T->right) - height(T->left) == 2)
        {
            if (X > T->right->element)
                T = singlerotatewithright(T);
            else
                T = doublerotatewithright(T);
        }
    }
    T->height = max(height(T->left), height(T->right)) + 1;
    return T;
}
static position singlerotatewithleft(position k2)
{
    position k1;
    k1 = k2->left;
    k2->left = k1->right;
    k1->right = k2;
    k2->height = max(height(k2->left), height(k2->right)) + 1;
    k1->height = max(height(k1->left), height(k1->right)) + 1;
    return k1;    /* New root */
}
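The insert routine above also calls singlerotatewithright and the double-rotation routines, which are not listed in these notes. The following is a sketch of how they can be written in the same style; max is assumed to be available, e.g. as the macro shown:

#define max(a, b) ((a) > (b) ? (a) : (b))   /* assumed helper */

/* Mirror image of singlerotatewithleft: rotate k1 with its right child. */
static position singlerotatewithright(position k1)
{
    position k2;
    k2 = k1->right;
    k1->right = k2->left;
    k2->left = k1;
    k1->height = max(height(k1->left), height(k1->right)) + 1;
    k2->height = max(height(k2->left), height(k2->right)) + 1;
    return k2;    /* new root */
}

/* Left-right case: first rotate the left child with its right child,
   then rotate the node with its new left child. */
static position doublerotatewithleft(position k3)
{
    k3->left = singlerotatewithright(k3->left);
    return singlerotatewithleft(k3);
}

/* Right-left case, the mirror image of the above. */
static position doublerotatewithright(position k1)
{
    k1->right = singlerotatewithleft(k1->right);
    return singlerotatewithright(k1);
}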
PROBLEMS
APPLICATIONS
AVL trees play an important role in most computer related applications. The need for and use of AVL trees are increasing day by day; their efficiency and low complexity add to their value. Some of the applications are:
AVL trees guarantee that the difference in height of any two subtrees rooted at the same node will be at most one. This guarantees an asymptotic running time of O(log n), as opposed to O(n) in the case of a standard BST.
The height of an AVL tree with n nodes is always very close to the theoretical minimum.
Since the AVL tree is height balanced, operations like insertion and deletion have low time complexity.
Since the tree is always height balanced, recursive implementation is possible.
The heights of the left and right sub-trees differ by at most 1, and rotations are possible.
BINARY HEAPS
A heap is a specialized complete tree structure that satisfies the heap property:
it is empty or
the key in the root is larger than that in either child and both subtrees have the heap property.
In general, a heap is a group of things placed or thrown one on top of the other.
In data structures, a heap is a binary tree storing keys at its nodes. Heaps are based on the concept of a complete tree.
Structure Property :
COMPLETE TREE
A binary tree is completely full if it is of height h and has 2^(h+1) - 1 nodes.
A binary tree of height h is complete if:
it is empty, or
its left subtree is complete of height h-1 and its right subtree is completely full of height h-2, or
its left subtree is completely full of height h-1 and its right subtree is complete of height h-1.
PROCEDURE
INSERTION:
The new key is placed in the first free position at the bottom level (so that the tree remains complete) and is then percolated up until the heap property is restored.
DELETION:
The deletion takes place by removing the root node.
The root node is then replaced by the last leaf node in the tree to obtain the complete binary tree.
It is verified against its children and adjacent nodes for the heap property. The verification process is carried downwards until the heap property is satisfied.
If at any step the property is not satisfied, swapping takes place. Then finally we have the heap.
PRIORITY QUEUE
(Figure: a priority queue viewed as an abstract structure supporting the operations Insertion(H) and Deletion(H).)
The efficient way of implementing priority queue is Binary Heap (or) Heap.
1. Structure Property :
The heap should be a complete binary tree, which is a completely filled binary tree with the possible exception of the bottom level, which is filled from left to right.
A complete binary tree of height h has between 2^h and 2^(h+1) - 1 nodes.
Sentinel Value :
The zeroth element is called the sentinel value. It is not a node of the tree. This value is required because, while adding a new node, certain operations are performed in a loop, and the sentinel value is used to terminate that loop.
Index 0 holds the sentinel value. It stores a value that is not part of the actual data; it is used to terminate loops cleanly in the more involved routines.
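A sketch of the corresponding array-based declaration; the field and type names follow the deletemin routine given later, and MINDATA (the sentinel stored at index 0) and initialize are assumed names for illustration:

#include <limits.h>
#include <stdlib.h>

typedef int elementtype;
#define MINDATA INT_MIN              /* sentinel stored at index 0 */

struct heapstruct
{
    int capacity;                    /* maximum number of elements       */
    int size;                        /* current number of elements       */
    elementtype *elements;           /* elements[1..size] hold the heap  */
};
typedef struct heapstruct *priorityqueue;

/* Create an empty priority queue able to hold maxelements keys. */
priorityqueue initialize(int maxelements)
{
    priorityqueue H = malloc(sizeof(struct heapstruct));
    H->elements = malloc((maxelements + 1) * sizeof(elementtype));
    H->capacity = maxelements;
    H->size = 0;
    H->elements[0] = MINDATA;        /* sentinel: smaller than any real key */
    return H;
}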
Structure Property: the elements always start at index 1.
2. Heap Order Property:
The property that allows operations to be performed quickly is the heap order property.
Mintree:
Parent should have lesser value than children.
Maxtree:
Parent should have greater value than children.
Min-heap:
The smallest element is always in the root node. Each node must have a key that is less than or equal to the key of each of its children.
Examples
Max-Heap:
The largest element is always in the root node. Each node must have a key that is greater than or equal to the key of each of its children.
Examples
HEAP OPERATIONS:
Insert:
Adding a new key to the heap: the new key is placed in the next free position and then percolated up until the heap order property is restored.
Example problems:
1. DeleteMin
2. DeleteMin -- 13
Insert Routine
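A sketch of the insert routine, following the same conventions as the deletemin routine below: the new key is placed at the end and percolated up, and the sentinel at index 0 guarantees that the loop terminates. The capacity field and the Error call are assumed to be declared as in the structure sketch above:

void insert(elementtype X, priorityqueue H)
{
    int i;

    if (H->size == H->capacity)
    {
        Error("Priority queue is full");
        return;
    }
    /* Percolate up: start a hole at the new last position and move it
       upward while the parent key is larger than X.  The sentinel at
       index 0 is smaller than every key, so the loop always stops. */
    for (i = ++H->size; H->elements[i / 2] > X; i /= 2)
        H->elements[i] = H->elements[i / 2];
    H->elements[i] = X;
}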
Delete Routine
elementtype deletemin(priorityqueue H)
{
    int i, child;
    elementtype minelement, lastelement;

    if (isempty(H))
    {
        Error("Priority queue is empty");
        return H->elements[0];
    }
    minelement = H->elements[1];
    lastelement = H->elements[H->size--];
    for (i = 1; i * 2 <= H->size; i = child)
    {
        /* Find the smaller child */
        child = i * 2;
        if (child != H->size && H->elements[child + 1] < H->elements[child])
            child++;
        /* Percolate one level */
        if (lastelement > H->elements[child])
            H->elements[i] = H->elements[child];
        else
            break;
    }
    H->elements[i] = lastelement;
    return minelement;
}
The other operations supported by a binary heap are:
1. Decrease Key.
2. Increase Key.
3. Delete.
4. Build Heap.
1. Decrease Key:
The decreasekey(P, ∆, H) operation decreases the value of the key at position P by a positive amount ∆. This may violate the heap order property, which can be fixed by percolating up.
Example: decreasekey(2, 7, H)
(Figure: the heap 10(15, 12; 20, 30) after decreasekey(2, 7, H): the key 15 at position 2 becomes 8, which is percolated up, giving first 10(8, 12; 20, 30) and finally 8(10, 12; 20, 30).)
2. Increase Key:
The increasekey(P, ∆, H) operation increases the value of the key at position P by a positive amount ∆. This may violate the heap order property, which can be fixed by percolating down.
Example: increasekey(2, 7, H)
(Figure: the heap 10(15, 12; 20, 30) after increasekey(2, 7, H): the key 15 at position 2 becomes 22, which is percolated down, giving 10(20, 12; 22, 30).)
3. Delete:
The delete(P, H) operation removes the node at position P from the heap H. This can be done by:
Step 1: Perform the decreasekey(P, ∞, H) operation (the key at P becomes -∞ and is percolated to the root).
Step 2: Perform the deletemin(H) operation.
(Figure: the two steps of delete(P, H) on a small heap containing the keys 10, 12, 20, 22, 30: the key at position P is first decreased to -∞ and percolated to the root, and deletemin(H) then removes it, leaving the remaining keys as the new heap.)
APPLICATIONS
Heap sort :
One of the best sorting methods being in-place and with no quadratic worst-case scenarios.
Selection algorithms:
Finding the min, max, both the min and max, median, or even the k-th largest element can be
done in linear time using heaps.
Graph algorithms:
By using heaps as internal data structures, the run time of many graph algorithms can be reduced by a polynomial order. Examples of such problems are Prim's minimal spanning tree algorithm and Dijkstra's shortest path algorithm.
ADVANTAGE
The biggest advantage of heaps over trees in some applications is that construction of heaps can be done in
linear time.
It is used in
o Heap sort
o Selection algorithms
o Graph algorithms
DISADVANTAGE
Performance :
Allocating heap memory usually involves a long negotiation with the OS.
Maintenance:
Dynamic allocation may fail; extra code to handle such exceptions is required.
Safety:
Objects may be deleted more than once or not deleted at all.
B-TREES
Multi-way Tree
Each node has at most m subtrees, where the subtrees may be empty.
Each node consists of at least 1 and at most m-1 distinct keys.
The keys in each node are sorted.
OVERFLOW CONDITION:
A root-node or a non-root node of a B-tree of order m overflows if, after a key insertion, it contains
m keys.
Insertion algorithm:
If a node overflows, split it into two, and propagate the "middle" key to the parent of the node. If the parent overflows, the process propagates upward. If the node has no parent, create a new root node.
• Note: Insertion of a key always starts at a leaf node.
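To make the terms concrete, the following is a sketch of a B-tree node declaration of order m; the order M and the field names are assumptions for illustration, not from the notes:

#define M 5                          /* order of the B-tree (assumed) */

struct btnode
{
    int nkeys;                       /* number of keys currently stored (< M) */
    int keys[M - 1];                 /* at most M-1 keys, kept sorted          */
    struct btnode *child[M];         /* at most M subtree pointers             */
};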