7.
Explain skew heaps and write the implementation
of merge operation.
Skew Heaps:
Skew heaps were introduced by Sleator and Tarjan
(1986) as an analog of leftist heaps, but without
balancing information. The interesting property here is
that, as in splay trees, one can do without this
information if one accepts amortized bounds instead of
worst-case bounds. By omitting the balancing
information, the structure in principle becomes simpler;
we just always perform the same sequence of operations.
The memory advantage of doing without balancing
information is insignificant; memory is never a problem,
and in the bottom-up variant of skew heaps we actually
need several additional pointers per node. Without
balancing information, one cannot decide whether the
rank on the left or on the right is larger, and hence whether
to exchange the left and right subtrees to restore the leftist
heap property. In skew heaps, the strategy is simply to
exchange always. This leads to simpler code. We do not
need a stack, because no information is propagated back
to the root.
Implementation of merge Operation:
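A minimal recursive sketch of the merge in C follows. The node structure is an assumption here, mirroring the heap-ordered tree typedef used elsewhere in these notes; the function name skew_merge is invented for this sketch. The two phases of the analysis below are visible in the code: merging along the right paths, then the unconditional exchange of subtrees.

```c
#include <stddef.h>

typedef int key_t;                  /* assumed key type */

typedef struct skew_node_t {
    key_t key;
    struct skew_node_t *left;
    struct skew_node_t *right;
} skew_node_t;

skew_node_t *skew_merge(skew_node_t *h1, skew_node_t *h2)
{
    skew_node_t *tmp;
    if (h1 == NULL)
        return h2;
    if (h2 == NULL)
        return h1;
    if (h2->key < h1->key) {        /* keep the smaller root on top */
        tmp = h1; h1 = h2; h2 = tmp;
    }
    /* first phase: merge along the right path */
    h1->right = skew_merge(h1->right, h2);
    /* second phase: exchange left and right unconditionally;
       no rank or balance field is consulted */
    tmp = h1->left;
    h1->left = h1->right;
    h1->right = tmp;
    return h1;
}
```

Note that, unlike in leftist heaps, no comparison of ranks precedes the exchange; every node on the merge path is swapped.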
For the analysis, we use as the potential the number of
right-heavy nodes, where a node is right-heavy if its right
subtree contains more nodes than its left subtree. We
decompose both insert and merge into two phases: first the
change of the right path, performing the insertion of the
new element or the merging of the right paths, and then
the exchange operation in all nodes of the right path that
we visited. In the first phase of either insert or merge, all
nodes on the right path that were right-heavy stay
right-heavy, because some nodes might be added in their
right subtree whereas nothing changes in their left subtree.
It is possible that left-heavy nodes on the right path
become right-heavy, but there are only O(log n) such
nodes, so this increases the potential by at most O(log n).
The nodes that are not on the right path do not change
their status. In the second phase of either insert or merge,
we exchange left and right in each node we visited, so
these nodes exchange left-heavy and right-heavy status.
Each left-heavy node that becomes right-heavy increases
the potential by 1, but there are only O(log n) left-heavy
nodes among the nodes we visited. Each right-heavy node
becoming left-heavy decreases the potential by 1. Thus,
the second phase of either insert or merge also increases
the potential by at most O(log n).
8. Explain all the structures required for reaching all
the nodes in a constant time in a heap ordered tree.
Also provide the steps for combining the two
subtrees.
The structure of a node of a (binary) heap-ordered tree is
as follows (the typedef is shown with the merge code
below). We name the two pointers again left and right but,
unlike in the search tree, there is no order relation between
them. Again, we define a heap-ordered tree recursively:
the heap-ordered tree is either empty or contains in the
root node a key, an object, and two pointers, each of
which might be either NULL or point to another
heap-ordered tree in which all keys are larger than the key
in the root node. Any structure with these properties is a
heap-ordered tree for its objects and key values.
We have to establish some convention to mark the empty
heap; this is different from the situation in the search trees,
where we could use NULL fields in left and right pointers;
but in a heap-ordered tree, both pointers might
legitimately be NULL pointers. We could use the object
field, but there might be legitimate uses with some NULL
objects. Thus, we will decide on the empty heap
convention only later in the specific structures, but it
should always be something that can be tested just from
the root node in time O(1).
With these conventions we can now write down the
functions create_heap, heap_empty, and find_min, all of
which are very simple constant-time operations. The
find_min function is split into two operations,
find_min_key and find_min_object, which is more
convenient than returning a structure.
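These constant-time operations can be sketched as follows. The text above deliberately leaves the empty-heap convention open; purely for illustration, this sketch represents the heap as a pointer to its root node and uses NULL for the empty heap, which is testable from the root in O(1) as required.

```c
#include <stddef.h>

typedef int key_t;                     /* assumed key type */

typedef struct heap_node_t {
    key_t key;
    void *object;
    struct heap_node_t *left;
    struct heap_node_t *right;
} heap_node_t;

/* Illustrative convention only: the heap is handed around as a
   pointer to its root node, and NULL represents the empty heap. */

heap_node_t *create_heap(void)
{
    return NULL;                       /* the empty heap */
}

int heap_empty(heap_node_t *hp)
{
    return hp == NULL;                 /* O(1) test from the root */
}

key_t find_min_key(heap_node_t *hp)
{
    return hp->key;                    /* the root holds the minimum */
}

void *find_min_object(heap_node_t *hp)
{
    return hp->object;
}
```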
Steps for Combining Two Subtrees:
Combining two subtrees in a heap-ordered tree typically
involves merging two heaps. The steps for combining
two subtrees are as follows:
Compare the Roots: Compare the keys of the roots of the
two subtrees.
Merge: The subtree with the smaller root key becomes
the new root, and the other subtree becomes one of its
children.
Recursive Merge: If the new root already has a child,
recursively merge the other subtree with the existing
child subtree.
typedef struct heap_node_t {
    key_t key;
    struct heap_node_t *left;
    struct heap_node_t *right;
    /* possibly other information */
} heap_node_t;

heap_node_t *merge(heap_node_t *heap1, heap_node_t *heap2)
{
    if (heap1 == NULL)
        return heap2;
    if (heap2 == NULL)
        return heap1;
    if (heap1->key < heap2->key) {
        heap1->right = merge(heap1->right, heap2);
        return heap1;
    } else {
        heap2->right = merge(heap2->right, heap1);
        return heap2;
    }
}
Root Pointer: The function merge takes two heap nodes,
heap1 and heap2, and returns the root of the merged
heap.
Left and Right Child Pointers: The recursion descends
only along the right pointers; the heap with the larger
root key is merged into the right subtree of the other,
while the left subtrees are carried along unchanged.
Recursive Merge: The function recursively merges the
right subtree of the heap with the smaller root key with
the other heap.
9. Explain the construction and working principles of double-ended
heaps. How do min-heaps and max-heaps interact within this
structure? Describe the process of inserting, finding, deleting
minimum and maximum elements, and merging heaps.
Additionally, discuss the advantages of using this approach over
Brodal’s original element duplication method.
A double-ended heap is a specialized data structure that allows
efficient access to both the minimum and maximum elements in a
dynamic set of keys. Unlike traditional heaps (such as min-heaps
or max-heaps), which only provide fast access to one end of the
set (either the minimum or the maximum), double-ended heaps
generalize this functionality to support operations like insert, find
min, find max, delete min, delete max, and optionally merge or
change key. This makes them highly useful in applications where
both the smallest and largest elements need to be accessed or
removed frequently.
The double-ended heap is constructed using two primary
components: a min-heap and a max-heap. These heaps work
together to maintain the global minimum and maximum elements
efficiently. The structure also includes a pairing mechanism that
links elements between the min-heap and max-heap, ensuring that
the smaller element of each pair resides in the min-heap and the
larger element in the max-heap. Additionally, there is at most one
unmatched element that is not part of any pair.
Min-Heap:
A min-heap is a binary tree where the smallest element is at the
root, and each parent node is smaller than its children.
It is used to store the smaller elements of the pairs.
Max-Heap:
A max-heap is a binary tree where the largest element is at the
root, and each parent node is larger than its children.
It is used to store the larger elements of the pairs.
Pairing Mechanism:
Elements are grouped into pairs, with the smaller element of each
pair stored in the min-heap and the larger element in the max-
heap.
This ensures that the min-heap always contains the smallest
elements, and the max-heap contains the largest elements.
At any point, there can be at most one element that is not part of
any pair. This element is considered "unmatched" and is treated
separately during operations.
Working Principles of Double-Ended Heaps
The double-ended heap works by maintaining the min-heap and
max-heap in such a way that the global minimum and maximum
can be accessed and updated efficiently. Below is a detailed
explanation of the key operations:
1. Insert Operation:
When a new element is inserted, the structure checks if there is an
unmatched element.
If an unmatched element exists, the new element is paired with it.
The smaller element of the pair is inserted into the min-heap, and
the larger element is inserted into the max-heap.
If no unmatched element exists, the new element becomes the
unmatched element.
This ensures that the pairing mechanism is maintained, and the
heaps are balanced.
2. Find Min and Find Max:
Find Min:
The minimum element is either the root of the min-heap or the
unmatched element (if it exists and is smaller than the min-heap’s
root).
The operation compares the two and returns the smaller value.
Find Max:
The maximum element is either the root of the max-heap or the
unmatched element (if it exists and is larger than the max-heap’s
root).
The operation compares the two and returns the larger value.
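The insert and find operations just described can be sketched as follows. This is a simplified illustration, not the full structure: the two heaps are plain array-based binary heaps with fixed capacity, all names (de_heap, de_insert, and so on) are invented for this sketch, and delete min/delete max are omitted because they additionally require links between the two partners of each pair.

```c
#define CAP 128

/* Plain array-based binary heap; only insertion (sift-up) and the
   root are needed for this sketch. */
typedef struct {
    int a[CAP];
    int n;
} bin_heap;

static void heap_push(bin_heap *h, int key, int is_min)
{
    int i = h->n++;
    h->a[i] = key;
    while (i > 0) {
        int p = (i - 1) / 2;
        int out_of_order = is_min ? (h->a[i] < h->a[p])
                                  : (h->a[i] > h->a[p]);
        if (!out_of_order)
            break;
        int t = h->a[i]; h->a[i] = h->a[p]; h->a[p] = t;
        i = p;
    }
}

typedef struct {
    bin_heap mins;       /* smaller element of each pair */
    bin_heap maxs;       /* larger element of each pair  */
    int unmatched;       /* at most one element outside any pair */
    int has_unmatched;
} de_heap;

void de_insert(de_heap *d, int key)
{
    if (!d->has_unmatched) {          /* new element stays unmatched */
        d->unmatched = key;
        d->has_unmatched = 1;
    } else {                          /* pair it with the unmatched one */
        int lo = key < d->unmatched ? key : d->unmatched;
        int hi = key < d->unmatched ? d->unmatched : key;
        heap_push(&d->mins, lo, 1);
        heap_push(&d->maxs, hi, 0);
        d->has_unmatched = 0;
    }
}

int de_find_min(const de_heap *d)
{   /* minimum is the min-heap root or the unmatched element */
    if (d->mins.n == 0)
        return d->unmatched;
    if (d->has_unmatched && d->unmatched < d->mins.a[0])
        return d->unmatched;
    return d->mins.a[0];
}

int de_find_max(const de_heap *d)
{   /* symmetric for the maximum */
    if (d->maxs.n == 0)
        return d->unmatched;
    if (d->has_unmatched && d->unmatched > d->maxs.a[0])
        return d->unmatched;
    return d->maxs.a[0];
}
```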
3. Delete Min and Delete Max:
Delete Min:
The operation first identifies the smallest element by comparing
the root of the min-heap with the unmatched element (if any).
If the unmatched element is smaller, it is deleted and returned.
Otherwise, the root of the min-heap is deleted. Since this breaks a
pair, the corresponding larger element in the max-heap must also
be deleted; it then becomes the unmatched element (or, if an
unmatched element already exists, the two are paired again).
Delete Max:
The operation first identifies the largest element by comparing the
root of the max-heap with the unmatched element (if any).
If the unmatched element is larger, it is deleted and returned.
Otherwise, the root of the max-heap is deleted. Since this breaks a
pair, the corresponding smaller element in the min-heap must also
be deleted; it then becomes the unmatched element (or is paired
with an existing unmatched element).
4. Merge Operation:
When two double-ended heaps are merged, their min-heaps and
max-heaps are combined separately.
If both heaps have unmatched elements, these elements are paired,
and the smaller one is inserted into the min-heap while the larger
one is inserted into the max-heap.
This ensures that the merged heap maintains the correct pairing
structure.
Advantages Over Brodal’s Element Duplication Method
Brodal’s method for implementing double-ended heaps involves
element duplication, where each element is stored in both the min-
heap and max-heap. While this approach achieves the same time
complexity for operations, it has significant drawbacks:
Space Efficiency:
Brodal’s method doubles the space requirement because each
element is stored twice (once in the min-heap and once in the
max-heap).
The pairing strategy avoids this by storing each element only
once, either in the min-heap or max-heap, depending on its value
relative to its pair.
Time Complexity:
Both methods achieve the same time complexity: insert, find min,
find max, and merge in O(1), and delete min and delete max in
O(log n).
However, the pairing strategy reduces the space overhead without
sacrificing performance.
Flexibility:
The pairing strategy can be applied to various underlying heap
structures, such as array-based heaps, binomial heaps, or leftist
heaps.
It is a general construction principle that works with any heap
supporting merge and arbitrary deletions.
Practical Implementation:
The pairing strategy is often easier to implement, as it avoids the
need to manage duplicated elements and their pointers.
It also simplifies operations like merge, as only the heaps and
unmatched elements need to be combined.
Applications of Double-Ended Heaps
Double-ended heaps are particularly useful in scenarios where
both the smallest and largest elements of a dynamic set need to be
accessed or removed frequently. Some common applications
include:
Priority Queues: Where tasks with both the highest and lowest
priorities need to be processed.
Sliding Window Algorithms: Where the minimum and maximum
values in a sliding window need to be tracked.
Resource Allocation: Where resources need to be allocated based
on both the smallest and largest available quantities.
Double-ended heaps provide a powerful generalization of
traditional heaps by enabling fast access to both the minimum and
maximum elements. The pairing strategy, which avoids element
duplication, is a space-efficient and flexible approach that
achieves the same performance as Brodal’s method. By combining
a min-heap and max-heap with a pairing mechanism, double-
ended heaps support efficient insertions, deletions, and merges,
making them suitable for applications requiring access to both
ends of a dynamic set of keys. This structure strikes a balance
between performance, space efficiency, and ease of
implementation, making it a practical choice for many real-world
problems.
10. a) Theorem. The doubled stack structure supports push, pop,
and find min in O(1) worst-case time.
b) Theorem. The doubled queue is a minqueue that supports
enqueue, dequeue, and find min in O(1) amortized time.
a) Theorem. The doubled stack structure supports push, pop, and
find min in O(1) worst-case time.
The same problem for a queue instead of a stack is more difficult,
but also more important. A minqueue is a structure that supports the
operations enqueue, dequeue, and find min. It models a sliding
window over a sequence of items, where we want to keep track of
the smallest key value in that window. One application of a
minqueue is to partition a sequence of objects into groups of
consecutive objects such that each group has a certain size and the
breakpoints have small values. There, each potential breakpoint
defines an interval of potential next breakpoints, which is a queue,
and we need the minimum value of the next breakpoint as a function
of the previous breakpoint. This type of problem was first discussed
by McCreight (1977) in the context of choosing page breaks in an
external-memory index structure; there, normal heaps were used
(Diehr and Faaland 1984). The same problem occurs in many other
contexts, for example, in text formatting, breaking text into lines.
A simple version of a minqueue with amortized O(1) time works as
follows: we have a queue for the objects and additionally a
double-ended queue for the minimum key values (it really needs
only one-and-a-half ends). The operations are as follows:
enqueue: Enqueue the object in the rear of the object queue;
remove from the rear of the minimum key queue all keys that are
larger than the key of the new object, and then add the new key in
the rear of the minimum key queue.
dequeue: Dequeue and return the object from the front of the
object queue; if its key is the same as the key in front of the
minimum key queue, dequeue that key.
find min: Return the key value in front of the minimum key queue.
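The simple amortized-O(1) minqueue just described can be sketched as follows. For brevity, the objects are represented by their keys alone, the capacities are fixed with no wraparound or overflow checks, and all names (minqueue, mq_enqueue, and so on) are invented for this sketch.

```c
#define QCAP 256

/* Object queue plus a deque of candidate minima.  Index pairs
   qh..qt and mh..mt are half-open ranges into the arrays. */
typedef struct {
    int q[QCAP]; int qh, qt;   /* keys of the enqueued objects */
    int m[QCAP]; int mh, mt;   /* nondecreasing candidate minima */
} minqueue;

void mq_enqueue(minqueue *mq, int key)
{
    mq->q[mq->qt++] = key;
    /* remove from the rear all keys larger than the new key */
    while (mq->mt > mq->mh && mq->m[mq->mt - 1] > key)
        mq->mt--;
    mq->m[mq->mt++] = key;
}

int mq_dequeue(minqueue *mq)
{
    int key = mq->q[mq->qh++];
    /* if the departing key is the current minimum, drop it too */
    if (key == mq->m[mq->mh])
        mq->mh++;
    return key;
}

int mq_find_min(const minqueue *mq)
{
    return mq->m[mq->mh];
}
```

Each key is added to and removed from the minimum key deque at most once, which is where the amortized O(1) bound comes from.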
b) Theorem. The doubled queue is a minqueue that supports
enqueue, dequeue, and find min in O(1) amortized time.
A structure that supports all double-ended queue operations and
find min in O(1) worst-case time is described in Gajewska and
Tarjan (1986), and a further extension to allow concatenation, but
only in amortized O(1) time, occurs in Buchsbaum, Sundar, and
Tarjan (1992). A different O(1) worst-case generalization is a min-
heap that discards on each insert all those elements that have a
larger key than the new element (Sundar 1989). That is exactly what
the minimum key queue did in the previously described version of
a minqueue; replacing it by the structure (Sundar 1989) gives
another O(1) worst-case minqueue. A minqueue that additionally
supports key change operations, also in O(1) amortized time, was
given, together with some applications in Suzuki, Ishiguro, and
Nishizeki (1992).
Some heap structures have been proposed that support the general
heap operations, but take advantage of some special update pattern
if it is present. The queaps of Iacono and Langerman (2002) give
O(1) time insert and amortized O(log k) time delete min, where k is
the number of items in the heap that are in it longer than the current
minimum item. Thus, the queap is fast if the minimum item is
always one of the oldest, so the items are inserted approximately in
increasing order. This is achieved by having separate structures for
“old” and “new” elements, converting all “new” to “old” whenever
the current minimum lies in the “new” part. This way, a delete min
operation needs to look up the minimum in both parts, but in most
cases it has to perform the deletion only on the small “old” part.
The fishspear structure by Fischer and Paterson (1994) performs
better in the opposite case, when the current minimum is usually in
the heap only for a short time. This will happen if the inserted elements
are chosen from a fixed distribution. The fishspear takes an
amortized O(log m) time for an insert,
where m is the maximum number of elements smaller than the
inserted element that exist at any moment before it is deleted again,
and amortized O(1) time for a delete min. A similar property was
proved by Iacono (2000) for pairing heaps: the amortized
complexity of delete min in a pairing heap is O(log min(n, m)),
where n is the size of the heap at the time of the deletion, and m is
the number of operations between the insertion and the deletion of
the element. As with finger trees and splay trees, this advantage for
special update patterns given by a queap or a fishspear is too small
to perform better than a good ordinary heap unless the update
pattern is extremely strong.
15. Explain the Ford-Fulkerson method along with the algorithm
and example problems (residual graphs).
16. With an example, explain the maximum bipartite matching
problem.
5. Explain the challenges of changing keys in heaps and how different heap structures handle
the decrease key operation. Discuss its importance, element identification methods, and time
complexity, especially in binomial heaps.
Challenges of changing keys in heaps:
If we use balanced search trees as heaps, we can just delete the element with the old key
and insert it with the new key, which gives an O(log n) change-key operation. This reduction of
change-key to delete followed by insert works in any heap that allows the deletion of
arbitrary elements.
Any heap-ordered tree would support key changes if we introduced backward pointers in
the nodes. Then we could move elements up or down, as required by the heap-order
condition. The complexity of this is the length of the path along which we have to
move the element, so at worst the height of the tree. Neither leftist heaps nor skew heaps
allow a sublinear height bound, so they cannot be used to get efficient key change
operations.
Different heap structures handle the decrease key operation:
With balanced search trees used as heaps, we can just delete the element with the old key
and insert it with the new key, which gives an O(log n) change-key operation. This
reduction of change-key to delete followed by insert works in any heap that allows the
deletion of arbitrary elements. Indeed, the inverse reduction also exists: if the heap supports
a decrease key operation, we can also delete arbitrary elements: we decrease the key to the
minimum possible key value and then perform a delete min.
The binomial heap maintains the optimal height log(n + 1). We again need back pointers to
allow an element to move in the direction of the root. Because the order condition of
binomial heaps is not quite the heap order, there is a difference between increase and
decrease of keys. If the key of a node is decreased, we follow the path back to the root, but
we need to check the order condition and possibly exchange the nodes only for those nodes
for which the next edge is a right edge; no restrictions apply along the left edges. Thus, a
decrease key operation takes O(log n) time.
Binomial Heaps:
Importance:
A binomial heap allows constant-time inserts into and deletes from the root list, constant
time to merge two root lists together, and more. The structure of a binomial heap mirrors
the binary number system.
Element identification of Binomial Heaps:
A single node is a binomial tree, denoted B0.
The binomial tree Bk, for k >= 1, consists of two binomial trees Bk-1.
Since we work with min binomial trees, when two Bk-1's are combined to get one Bk, the
Bk-1 having the minimum value at the root becomes the root of Bk, and the other Bk-1
becomes its child node.
Time complexity of Binomial Heaps:
Insert and extract min can be done in O(log n) time.
Merging of two heaps can be done in O(log n) in worst case, whereas classical heap incurs
O(n).
Decrease key and delete can be performed in O(log n) time.
6. Explain the binomial heap structure. Prove that combining two blocks of the same size 2^h
into one block of size 2^(h+1) is O(1). Write the implementation for the find_min_key function
in a binomial heap.
A (binary) heap is a tree-based structure, a complete binary tree; a binomial heap generalizes
this. A binomial heap is a specific implementation of a heap: it is a collection of small
heap-ordered binomial trees (of orders 0, 1, 2, ...) that are linked to each other. There should
be at least one binomial tree in a nonempty binomial heap. It is mainly used to implement
priority queues.
Combining two blocks of the same size 2^h into one block of size 2^(h+1) is O(1):
The central property of these blocks is that one can combine in time O(1) two blocks of the same
size 2^h into one block of size 2^(h+1): if n and m are the top nodes of two blocks, for which both
n->right and m->right are complete binary trees of height h and n->key < m->key, then we can
make n the new top node, whose right field points to m, and m becomes the root of a complete
binary tree of height h + 1, with the tree previously below n->right now below m->left. This is the
point where the weaker order condition 1 is needed; if we required heap order, we could not just
join these trees together, because the heap-order relation between m and the new m->left could be
violated, but condition 1 does not require any order along the left paths.
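The combine step described above is literally a constant number of pointer assignments. A sketch in C, with the node structure and field names assumed as elsewhere in these notes:

```c
#include <stddef.h>

typedef int key_t;                  /* assumed key type */

typedef struct bin_node_t {
    key_t key;
    struct bin_node_t *left;
    struct bin_node_t *right;
} bin_node_t;

/* Combine two blocks of size 2^h into one block of size 2^(h+1).
   Precondition: n->key < m->key, and both n->right and m->right
   are complete binary trees of height h.
   Two pointer assignments: clearly O(1). */
bin_node_t *combine(bin_node_t *n, bin_node_t *m)
{
    m->left = n->right;   /* tree previously below n->right goes below m->left */
    n->right = m;         /* m roots the new complete tree of height h + 1 */
    return n;             /* n is the top node of the combined block */
}
```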
Implementation for the find_min_key function in binomial heap:
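A sketch of find_min_key follows, under the assumption that the block roots are chained along the left pointers starting from the top node of the heap; the exact representation may differ. Since the minimum key of the heap must be at one of the O(log n) block roots, one pass down this left path suffices, giving O(log n) time.

```c
#include <stddef.h>

typedef int key_t;                  /* assumed key type */

typedef struct bin_node_t {
    key_t key;
    struct bin_node_t *left;   /* next block root on the leftmost path */
    struct bin_node_t *right;  /* complete binary tree of this block */
} bin_node_t;

/* Walk the chain of block roots and keep the smallest key seen. */
key_t find_min_key(bin_node_t *hp)
{
    key_t min_key = hp->key;
    bin_node_t *tmp = hp->left;
    while (tmp != NULL) {
        if (tmp->key < min_key)
            min_key = tmp->key;
        tmp = tmp->left;
    }
    return min_key;
}
```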