Copyright @Snoide
Selection Sort
> repeat: find the max of the unsorted portion and swap it to the back
> Best: O(n^2)
> Av.: O(n^2)
> Worst: O(n^2)
> non-stable

Bubble Sort
> repeat: compare adjacent pairs a, b; if out of order, swap
> Best: O(n^2)
> Av.: O(n^2)
> Worst: O(n^2)
> If improved by using an isSorted flag:
>> Best: O(n)
>> Av.: O(n^2)
>> Worst: O(n^2)

Insertion Sort
> repeat: take the next unsorted item and insert it into its correct position in the sorted portion
> Best: O(n)
> Av.: O(n^2)
> Worst: O(n^2)

Merge Sort
> divide into halves, sort each, then merge (repeatedly take the smaller front element)
> Best: O(nlogn)
> Av.: O(nlogn)
> Worst: O(nlogn)
> additional O(n) space (not in-place)

Quick Sort
> pick a pivot, partition into smaller and larger parts, recurse
> Best: O(nlogn)
> Av.: O(nlogn)
> Worst: O(n^2)
> non-stable

Radix Sort
> non-comparison
> For n objects of d digits each:
1. Group by the d-th (least significant) digit.
2. Ungroup.
3. Group by the (d-1)-th digit.
4. Ungroup.
…
2d-1. Group by the first digit.
2d. Ungroup.
> Best: O(nd)
> Av.: O(nd)
> Worst: O(nd)
> not in-place
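The divide-and-merge procedure above can be sketched in Python (an illustrative sketch, not part of the original sheet):

```python
def merge_sort(xs):
    """Merge sort: divide, sort halves, merge. O(n log n), O(n) extra space."""
    if len(xs) <= 1:
        return xs
    mid = len(xs) // 2
    left = merge_sort(xs[:mid])
    right = merge_sort(xs[mid:])
    # merge: repeatedly take the smaller front element
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:          # <= keeps equal items in order (stable)
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 5, 6]))    # [1, 2, 5, 5, 6, 9]
```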
List
> Operations: access item, insert item, remove item

ArrayList
> backed by an array
> access [best O(1); worst O(1); av. O(1)]
> insert [best O(1); worst O(n); av. O(n)]
> remove [best O(1); worst O(n); av. O(n)]
> If the list is already filled, need to resize first (doubling).
> Space complexity: O(n)

LinkedList
> chain of ListNodes (item, next) reached from a head pointer
> access [best O(1); worst O(n); av. O(n)]
> insert [best O(1); worst O(n); av. O(n)]
> remove [best O(1); worst O(n); av. O(n)]
> Space complexity: O(n)

Variants
> TailedLinkedList: also keeps a tail pointer
> CircularLinkedList: the last node points back to the head
> DoublyLinkedList: each node also keeps a prev pointer
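The LinkedList operations above can be sketched as a minimal Python class (illustrative; method names are my own, not from the sheet):

```python
class ListNode:
    """A node holding an item and a reference to the next node."""
    def __init__(self, item, next=None):
        self.item = item
        self.next = next

class LinkedList:
    """Singly linked list with a head pointer."""
    def __init__(self):
        self.head = None

    def add_front(self, item):       # insert at head: O(1)
        self.head = ListNode(item, self.head)

    def get(self, i):                # access: walk i nodes from head, O(n) worst case
        node = self.head
        for _ in range(i):
            node = node.next
        return node.item

    def remove_front(self):          # remove at head: O(1)
        item = self.head.item
        self.head = self.head.next
        return item
```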
Stack
> LIFO
> push item, peek item, pop item
> implement with a LinkedList (top = head) or an Array (top = last filled slot)
> Push: addFront(new item)
> Pop: return and removeFront()

Queue
> FIFO
> enqueue/offer item, peek item, dequeue/poll item
> implement with a TailedLinkedList (front = head, back = tail) or a CircularArray (front and back indices wrap around)
> Enqueue: addBack(new item)
> Dequeue: return and removeFront()

All stack and queue operations have O(1) time complexity.
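Both structures can be sketched in Python on top of `collections.deque`, which gives the required O(1) operations at either end (a sketch for illustration, not from the sheet):

```python
from collections import deque

class Stack:
    """LIFO: push/peek/pop at the front, all O(1)."""
    def __init__(self):
        self._items = deque()
    def push(self, x):               # addFront(new item)
        self._items.appendleft(x)
    def peek(self):
        return self._items[0]
    def pop(self):                   # return and removeFront()
        return self._items.popleft()

class Queue:
    """FIFO: enqueue at the back, dequeue at the front, all O(1)."""
    def __init__(self):
        self._items = deque()
    def enqueue(self, x):            # addBack(new item)
        self._items.append(x)
    def peek(self):
        return self._items[0]
    def dequeue(self):               # return and removeFront()
        return self._items.popleft()
```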
Hash Function
> fast
> scatter keys evenly
> fewer collisions (1-1 if perfect)
> fewer slots required

Division Method
> hash(k) = k % m

Multiplication Method
> hash(k) = floor(m * (kA - floor(kA))), where 0 < A < 1

Uniform Hash Functions
> distribute keys evenly
> minimise clustering

HashTable
> retrieve value [av. O(1)]
>> return a[h(key)]
> insert value [av. O(1)]
>> a[h(key)] = value
> delete value [av. O(1)]
>> a[h(key)] = null

HashMap (Java)
> construct: HashMap(*capacity, *loadFactor)
> put(K key, V value)
> get(K key)
> containsKey()
> containsValue()
> clear()

Collision Resolution
Goals:
> always find an empty slot
> no secondary clustering (give different probe sequences when two initial probes are the same)
> fast

Separate Chaining
> use a linked list to store collided keys
> insert: O(1)
> retrieve: O(n)
> delete: O(n)
> n is the length of the linked list

Probing (open addressing)
> look for the next empty slot
> delete: mark the slot as deleted

linear probing: (hash(k) + 1) % m, (hash(k) + 2) % m, (hash(k) + 3) % m, …
> primary clustering

modified linear probing: (hash(k) + 1d) % m, (hash(k) + 2d) % m, (hash(k) + 3d) % m, …

quadratic probing: (hash(k) + 1) % m, (hash(k) + 4) % m, (hash(k) + 9) % m, …
> secondary clustering (two keys with the same hash value follow the same probe sequence)

double hashing: (hash(k) + 1 * hash2(k)) % m, (hash(k) + 2 * hash2(k)) % m, …
> secondary hash function hash2 (must be != 0)
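A minimal sketch of open addressing with linear probing and tombstone deletion, as described above (illustrative Python; the class and sentinel names are my own):

```python
class LinearProbingTable:
    """Open addressing: try slot (hash(k) + i) % m for i = 0, 1, 2, ...
    Deleted slots get a tombstone so later probes keep walking past them."""
    _EMPTY, _DELETED = object(), object()

    def __init__(self, m=11):
        self.m = m
        self.slots = [self._EMPTY] * m

    def _probe(self, key):
        for i in range(self.m):
            yield (hash(key) + i) % self.m   # linear probe sequence

    def insert(self, key, value):
        for idx in self._probe(key):
            slot = self.slots[idx]
            if slot is self._EMPTY or slot is self._DELETED or slot[0] == key:
                self.slots[idx] = (key, value)
                return
        raise RuntimeError("table full")

    def get(self, key):
        for idx in self._probe(key):
            slot = self.slots[idx]
            if slot is self._EMPTY:          # a truly empty slot ends the search
                return None
            if slot is not self._DELETED and slot[0] == key:
                return slot[1]
        return None

    def delete(self, key):
        for idx in self._probe(key):
            slot = self.slots[idx]
            if slot is self._EMPTY:
                return
            if slot is not self._DELETED and slot[0] == key:
                self.slots[idx] = self._DELETED   # mark as deleted
                return
```

With m = 11, keys 1 and 12 collide (both hash to slot 1), so 12 is placed one slot over; deleting 1 leaves a tombstone so 12 remains reachable.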
Heap (binary tree stored in an array)
> Each node has two attributes: value and index
> Indices are 1-based (in binary: 1; 10, 11; 100, 101, 110, 111; …)
> parent(i) = i / 2
> left(i) = 2 * i
> right(i) = 2 * i + 1

Insert
> insert at the back [O(1)]
> shift up if required [O(log n)]

ExtractMax
> assign the back to the root [O(1)]
> shift down if required [O(log n)]

Create
> Method 1: insert each value into an empty heap [O(nlog n)]
> Method 2: copy the content and shift down from the parents of the lowest nodes [O(n)]

HeapSort
> call ExtractMax n times
> O(nlog n)
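The 1-based index arithmetic and the shift-up/shift-down steps above can be sketched as a small Python max-heap (illustrative, not from the sheet):

```python
class MaxHeap:
    """1-based array binary max-heap: parent(i)=i//2, left(i)=2i, right(i)=2i+1."""
    def __init__(self):
        self.a = [None]                  # index 0 unused so the index math holds

    def insert(self, v):                 # insert at the back, then shift up
        self.a.append(v)
        i = len(self.a) - 1
        while i > 1 and self.a[i // 2] < self.a[i]:
            self.a[i // 2], self.a[i] = self.a[i], self.a[i // 2]
            i //= 2

    def extract_max(self):               # move the back to the root, then shift down
        top = self.a[1]
        self.a[1] = self.a[-1]
        self.a.pop()
        i, n = 1, len(self.a) - 1
        while True:
            largest = i
            for c in (2 * i, 2 * i + 1):           # left and right children
                if c <= n and self.a[c] > self.a[largest]:
                    largest = c
            if largest == i:
                return top
            self.a[i], self.a[largest] = self.a[largest], self.a[i]
            i = largest
```

Calling `extract_max` n times yields the values in descending order, which is exactly HeapSort.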
UFDS
Constructor
> rep[i] = i
> rank[i] = 0
FindSet
> follow rep until rep[i] == i
> path compression: rep[i] = findSet(rep[i])
IsSameSet
> check whether findSet(a) == findSet(b)
UnionSet (by rank, on the two representatives a and b)
> if rank[a] > rank[b], then rep[b] = a
> if rank[a] < rank[b], then rep[a] = b
> if rank[a] == rank[b], then rep[b] = a and rank[a]++
Binary Search Tree
> Each node stores: item, parent, left, right

Search
> if v < node, search(left, v)
> if v > node, search(right, v)
> O(h)

FindMax/FindMin
> O(h)

Insert
> similar to search
> O(h)

Delete
> similar to search
> O(h)

Predecessor/Successor
> successor: leftmost child of the right subtree (if it exists)
> O(h)

Traversals
> In-order, Pre-order, Post-order
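Search, insert, and the in-order traversal can be sketched in Python (illustrative; an unbalanced BST, so h can be up to n):

```python
class BSTNode:
    def __init__(self, item):
        self.item, self.left, self.right = item, None, None

def insert(node, v):                     # same structure as search: O(h)
    if node is None:
        return BSTNode(v)
    if v < node.item:
        node.left = insert(node.left, v)
    elif v > node.item:
        node.right = insert(node.right, v)
    return node

def search(node, v):                     # go left if smaller, right if larger: O(h)
    if node is None or node.item == v:
        return node
    return search(node.left, v) if v < node.item else search(node.right, v)

def in_order(node, out):                 # visits the items in sorted order
    if node:
        in_order(node.left, out)
        out.append(node.item)
        in_order(node.right, out)
```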
AVL Tree
> Each node stores: item, parent, left, right, height, size

Rebalance (bf = balance factor = height(left) - height(right))
> if bf == 2 and bf(left) == 0/1, then right rotate
> if bf == 2 and bf(left) == -1, then left rotate the left child, then right rotate
> if bf == -2 and bf(right) == 0/-1, then left rotate
> if bf == -2 and bf(right) == 1, then right rotate the right child, then left rotate
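The two rotations the rebalance rules rely on can be sketched in Python (a minimal sketch without parent/size bookkeeping; names are my own):

```python
class AVLNode:
    def __init__(self, item, left=None, right=None):
        self.item, self.left, self.right = item, left, right
        self.height = 1 + max(height(left), height(right))

def height(node):
    return node.height if node else 0

def bf(node):                        # balance factor = height(left) - height(right)
    return height(node.left) - height(node.right)

def right_rotate(q):                 # left child p becomes the new subtree root
    p = q.left
    q.left = p.right
    p.right = q
    q.height = 1 + max(height(q.left), height(q.right))
    p.height = 1 + max(height(p.left), height(p.right))
    return p

def left_rotate(p):                  # mirror image: right child becomes the root
    q = p.right
    p.right = q.left
    q.left = p
    p.height = 1 + max(height(p.left), height(p.right))
    q.height = 1 + max(height(q.left), height(q.right))
    return q
```

For example, the left-heavy chain 3 → 2 → 1 has bf == 2 with bf(left) == 1, so a single right rotation restores balance.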
Graph

Adjacency Matrix
> 2D array
> Space complexity: O(V^2)
> Pros: check edge existence in O(1); good for dense graphs / Floyd-Warshall's
> Cons: O(V) to enumerate neighbours; O(V^2) space

Adjacency List
> Array of lists, each storing a vertex's neighbours
> Space complexity: O(V + E)
> Pros: O(k) to enumerate k neighbours; O(V + E) space; good for sparse graphs / Dijkstra's / DFS / BFS
> Cons: O(k) to check edge existence

Edge List
> Array of all edges
> Space complexity: O(E)
> Pros: good for Kruskal's / Bellman-Ford
> Cons: hard to find the edges of a given vertex
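The three representations can be shown side by side on one small directed graph (an illustrative Python snippet):

```python
# Directed graph over vertices 0..3.
V = 4
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]       # edge list: O(E) space

adj_matrix = [[0] * V for _ in range(V)]        # O(V^2) space
for u, v in edges:
    adj_matrix[u][v] = 1                        # edge check is O(1): adj_matrix[u][v]

adj_list = [[] for _ in range(V)]               # O(V + E) space
for u, v in edges:
    adj_list[u].append(v)                       # enumerating neighbours of u is O(k)
```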
Graph Traversal

BFS
> Queue
> for SSSP (unweighted)

for all v in V:
    visited[v] = 0      // avoid cycles
    p[v] = -1           // parent
Q = {s}                 // start
visited[s] = 1
while Q is not empty:
    u = Q.dequeue()
    for all v adjacent to u:
        if v is unvisited:
            visited[v] = 1
            p[v] = u
            Q.enqueue(v)

> initialisation is O(V); the traversal is O(V + E) overall

DFS
> Stack (or recursion)
> for SCC

for all v in V:
    visited[v] = 0      // avoid cycles
    p[v] = -1           // parent
DFS(s)

def DFS(u):
    visited[u] = 1
    for all v adjacent to u:
        if v is unvisited:
            p[v] = u
            DFS(v)

> initialisation is O(V); the edge loop is O(E) overall

Applications
> Reachability test
> Shortest path in an unweighted graph
> Counting components
> Topological sort (Kahn's/DFS)
> Counting SCCs (Kosaraju's)
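The two traversals above, written as runnable Python over an adjacency list (illustrative sketch):

```python
from collections import deque

def bfs(adj_list, s):
    """BFS with a queue; p[] gives shortest-path parents in an unweighted graph."""
    n = len(adj_list)
    visited = [False] * n
    p = [-1] * n
    visited[s] = True
    Q = deque([s])
    while Q:
        u = Q.popleft()
        for v in adj_list[u]:
            if not visited[v]:           # avoid revisiting (cycles)
                visited[v] = True
                p[v] = u
                Q.append(v)
    return p

def dfs(adj_list, u, visited=None, p=None):
    """Recursive DFS (the call stack plays the role of the explicit stack)."""
    if visited is None:
        visited, p = [False] * len(adj_list), [-1] * len(adj_list)
    visited[u] = True
    for v in adj_list[u]:
        if not visited[v]:
            p[v] = u
            dfs(adj_list, v, visited, p)
    return p
```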
Minimum Spanning Tree

Prim's
> Start from any vertex
> PriorityQueue
> O(ElogV)

1. T = {s}   // start
2. enqueue the edges connected to s into PQ
3. while PQ is not empty:
       e = PQ.dequeue()
       if e.v not in T:
           add e.v to T
           enqueue the edges connected to e.v into PQ
4. T is an MST

Kruskal's
> Start from the shortest edge
> Sorted EdgeList and UFDS
> O(ElogV)

1. Sort E
2. T = {}
3. while there are unprocessed edges in E:
       pick the smallest edge e
       if adding e to T does not form a cycle (UFDS check):
           add e to T
4. T is an MST
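Kruskal's steps above, as a short Python sketch with an inlined union-find for the cycle check (illustrative; edge format is my own assumption):

```python
def kruskal(n, edges):
    """edges: list of (w, u, v). Sort by weight, then greedily keep any edge
    that joins two different components (cycle check via union-find)."""
    rep = list(range(n))

    def find(i):                         # find with path halving
        while rep[i] != i:
            rep[i] = rep[rep[i]]
            i = rep[i]
        return i

    T = []
    for w, u, v in sorted(edges):        # 1. sort E
        ru, rv = find(u), find(v)
        if ru != rv:                     # adding (u, v) does not form a cycle
            rep[ru] = rv
            T.append((w, u, v))
    return T                             # an MST: n - 1 edges if connected
```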
Single-Source Shortest Paths

Unweighted
> Modified BFS

Tree
> Every path is a shortest path.
> DFS/BFS O(V)

Weighted
> Bellman-Ford O(VE)

initSSSP(s)
for i = 1 to |V| - 1:
    for each edge (u, v) in E:
        relax(u, v, w(u, v))

No Negative Weight Edge
> Dijkstra's O((V+E) log V)

initSSSP(s)
PQ.enqueue((0, s))
for all other vertices v:
    PQ.enqueue((INF, v))
while PQ is not empty:
    u = PQ.dequeue()
    Solved.append(u)
    relax all outgoing edges of u

def relax(E(u, v, w)):
    find v in PQ and update d[v]   // decrease-key

No Negative Weight Cycle
> Modified Dijkstra's O((V+E) log V)

initSSSP(s)
PQ.enqueue((0, s))
while PQ is not empty:
    (d, u) = PQ.dequeue()
    if d == D[u]:                  // skip outdated entries
        for each v adjacent to u:
            if relax(u, v) improves D[v]:
                PQ.enqueue((D[v], v))

Directed & Acyclic
> one-pass Bellman-Ford in topological order, O(E)
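The Modified (lazy) Dijkstra's above maps naturally onto Python's `heapq` (an illustrative sketch; `adj[u]` is assumed to hold `(v, w)` pairs):

```python
import heapq

def dijkstra(adj, s):
    """Lazy Dijkstra's: stale queue entries are skipped with the d == D[u]
    check instead of a decrease-key operation."""
    INF = float('inf')
    D = [INF] * len(adj)
    D[s] = 0
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d == D[u]:                    # ignore outdated entries
            for v, w in adj[u]:
                if D[u] + w < D[v]:      # relax(u, v)
                    D[v] = D[u] + w
                    heapq.heappush(pq, (D[v], v))
    return D
```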
All-Pairs Shortest Paths

Floyd-Warshall's O(V^3)

for (int k = 0; k < V; k++) {
    for (int i = 0; i < V; i++) {
        for (int j = 0; j < V; j++) {
            D[i][j] = min(D[i][j], D[i][k] + D[k][j]);
        }
    }
}
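The same triple loop as a runnable Python function over a V x V distance matrix (illustrative; `float('inf')` marks a missing edge):

```python
def floyd_warshall(D):
    """In-place all-pairs shortest paths; D[i][j] is the edge weight or inf."""
    V = len(D)
    for k in range(V):                   # intermediate vertices allowed so far
        for i in range(V):
            for j in range(V):
                D[i][j] = min(D[i][j], D[i][k] + D[k][j])
    return D
```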