DSA
DSA (Data Structures and Algorithms) is the study of efficient ways to store, organize, and process data to solve computational problems.
Data Structure: A way of storing and organizing data to perform operations efficiently.
Algorithm: A step-by-step procedure to solve a problem or perform a computation.
Crux: DSA = Choosing the right data structure + efficient algorithm to optimize time and space.
Data Structures
│
├── 1. Primitive / Basic DS
│ ├─ Integer, Float, Char, Boolean (basic building blocks, fixed memory)
│
├── 2. Non-Primitive / Abstract DS
│ ├── A. Linear DS (Elements in sequence)
│ │ ├─ Array (fixed size, contiguous memory, O(1) access, inefficient insert/delete)
│ │ ├─ Linked List
│ │ │ ├─ Singly (single pointer, forward traversal)
│ │ │ ├─ Doubly (forward + backward, extra memory)
│ │ │ └─ Circular (last node points to first, efficient rotation)
│ │ ├─ Stack (LIFO, Push/Pop O(1), useful in recursion & undo operations)
│ │ └─ Queue (FIFO)
│ │ ├─ Simple (linear, may waste memory)
│ │ ├─ Circular (reuses space efficiently)
│ │ ├─ Priority (elements served by priority)
│ │ └─ Deque (double-ended, insert/delete from both ends)
│ │
│ └── B. Non-Linear DS (Elements not sequential)
│ ├─ Tree (Hierarchical DS)
│ │ ├─ General Tree (any number of children, used in org charts, XML/JSON)
│ │ ├─ Binary Tree (≤2 children per node)
│ │ │ ├─ Simple Binary Tree (no constraints)
│ │ │ ├─ Full Binary Tree (0 or 2 children per node)
│ │ │ ├─ Complete Binary Tree (all levels filled except last, left to right)
│ │ │ ├─ Perfect Binary Tree (all internal nodes full, all leaves same level)
│ │ │ ├─ Degenerate Tree (1 child per parent, behaves like linked list)
│ │ │ └─ Binary Search Tree (BST) (left < parent < right, average O(log n) search)
│ │ │ ├─ AVL Tree (self-balancing, height difference ≤1, guaranteed O(log n))
│ │ │ ├─ Red-Black Tree (color rules, used in STL map/set)
│ │ │ ├─ Splay Tree (recently accessed node moved to root, good for locality)
│ │ │ └─ Threaded Binary Tree (null pointers store successor/predecessor, efficient in-order traversal)
│ │ ├─ Multi-way / Balanced Trees
│ │ │ ├─ B-Tree (multi-way, used in DB & FS, keeps data sorted)
│ │ │ ├─ B+ Tree (all values in leaves, internal nodes only keys, used in databases)
│ │ │ ├─ B* Tree (more dense than B+, fewer splits)
│ │ │ └─ 2-3 Tree (nodes have 2 or 3 children, always balanced)
│ │ ├─ Heap
│ │ │ ├─ Min-Heap (parent ≤ children, used in priority queues)
│ │ │ └─ Max-Heap (parent ≥ children, used in heap sort)
│ │ ├─ Trie (prefix tree, efficient string search, autocomplete, dictionary)
│ │ ├─ Segment Tree (range queries like sum, min, max, O(log n) query/update)
│ │ └─ Fenwick Tree / BIT (efficient prefix sum & updates in O(log n))
│ │
│ └─ Graph (Networked DS)
│ ├─ Directed / Undirected (edges with/without direction)
│ ├─ Weighted / Unweighted (edges with/without cost)
│ ├─ Simple / Multigraph (no loops/multiples vs multiple edges allowed)
│ └─ Traversals: DFS (stack/recursive), BFS (queue), shortest path, cycle detection
│
├── 3. Hash-Based DS
│ └─ Hash Table / Hash Map (key-value storage, average O(1) insert/search, collisions handled via chaining/open addressing)
1) Primitive Data Structure
long double: extended-precision decimal (compiler-dependent). Size: 12/16 bytes (C/C++ 32-bit), 16 bytes (C/C++ 64-bit), N/A in Java. Default: 0.0 for globals/statics, garbage for locals (C/C++); N/A in Java.
char: single character / small integer. Size: 1 byte in C/C++ (both 32- and 64-bit), 2 bytes (UTF-16) in Java. Default: '\0' for globals/statics, garbage for locals (C/C++); '\u0000' in Java.
bool: Boolean value (true/false). Size: 1 byte in C/C++, 1 byte in Java (JVM-dependent). Default: false (0) for globals/statics, garbage for locals (C/C++); false in Java.
Note: in C/C++, uninitialized globals/statics are zero-initialized while uninitialized locals hold garbage; in Java, fields get default values but local variables must be initialized before use.
1. In C, what is sizeof('A')? sizeof(int) (usually 4 bytes), not sizeof(char). 'A' is a character constant of type int in C.
2. In C/C++, is char always signed? No, it is implementation-defined. Many assume char means signed char, but it can be unsigned by default.
3. In Java, is boolean size always 1 byte? No, the JVM does not define an exact size; it depends on the JVM's memory layout.
4. In C, does sizeof(void) return 1? No, void has no size. void* is allowed, but sizeof(void) is invalid.
5. Can sizeof(short) equal sizeof(int)? Yes. The C standard only sets minimum sizes; the two can be equal on some systems.
6. If int is 4 bytes, is int* also 4 bytes? On 32-bit systems yes; on 64-bit systems no (typically 8 bytes). Pointer size depends on the architecture, not on the pointed-to type.
7. In Java, is char signed or unsigned? Unsigned (a UTF-16 code unit). Many assume Java chars behave like C chars.
8. Default value of an uninitialized local static variable in C? 0. Static variables (even local ones) are zero-initialized.
9. Default value of an uninitialized local int in C? Garbage. Only global/static variables are initialized to 0 automatically.
10. Minimum range of int in C (per the standard)? -32767 to 32767. The standard defines a minimum range, not an exact one; the actual range depends on the implementation.
11. Is float in Java IEEE 754 compliant? Yes. Java explicitly defines float as IEEE 754 single precision.
12. Can bool in C++ occupy more than 1 byte? sizeof(bool) is 1 byte, but containers and structs may pad around it for alignment.
13. Can a pointer to char be used to access any type in C? Yes. char* is allowed to access the raw bytes of any object per the C standard.
14. Is sizeof('A') the same in C and C++? No. In C++ 'A' has type char; in C it has type int.
15. Does Java have unsigned primitive integers? No (except char). Java's integer types are signed; char is the only unsigned one.
16. Can sizeof(long) equal sizeof(int) in C? Yes. On ILP32 systems both can be 4 bytes.
18. Is an uninitialized array of static storage duration zeroed? Yes. Static/global arrays are zero-initialized in C/C++.
19. Can sizeof(float) be greater than sizeof(double)? Unlikely but possible; the standard does not forbid it, though it is rare in practice.
20. In Java, is byte unsigned? No, it is signed (-128 to 127). Many confuse Java's byte with C's unsigned char.
2) ADT (Abstract Data Type)
Meaning: an ADT (Abstract Data Type) is the logical description of data and its allowed operations; a data structure is the concrete implementation of that data and those operations in memory.
Examples: ADTs are Stack, Queue, List, Map, Tree (as concepts); data structures are Array, Linked List, Binary Tree, Hash Table.
**Types of ADT**
Linear ADTs:
- List: operations Create, Insert, Delete, Traverse, Search, Update. Implementations: array, singly linked list, doubly linked list, circular linked list. Facts: lists can be static (array) or dynamic (linked list); arrays give O(1) access, linked lists give O(1) insertion at the head.
- Stack: operations push(), pop(), peek()/top(), isEmpty(), isFull(). Implementations: array, linked list, two queues (stack using queues). Facts: LIFO principle; recursion uses a stack internally; a stack built from queues makes either push or pop O(n).
- Queue: operations enqueue(), dequeue(), peek()/front(), isEmpty(), isFull(). Implementations: array, linked list, circular array, circular linked list, two stacks (queue using stacks). Facts: FIFO principle; a circular array avoids wasted space; a queue built from stacks has a costly enqueue or a costly dequeue.
- Deque: operations insertFront, insertRear, deleteFront, deleteRear. Implementations: circular array, doubly linked list. Facts: special types are the input-restricted and output-restricted deques.
- Priority Queue: operations insert(item, priority), deleteHighestPriority() or deleteLowestPriority(). Implementations: array (sorted/unsorted), linked list (sorted/unsorted), binary heap, two stacks with sorting. Facts: a binary heap gives O(log n) insertion/deletion; a sorted list gives fast deletion but slow insertion.

Non-Linear ADTs:
- Tree: operations Create, Insert, Delete, Traverse (Pre/In/Post/Level), Search. Implementations: linked structure, array (for complete trees). Facts: non-linear hierarchical structure; used in parsing and searching.
- Binary Tree / BST: operations Insert, Delete, Search, Traversals. Implementations: linked structure, array (complete tree). Facts: BST property left < root < right; average O(log n) search.
- AVL Tree: operations Insert, Delete, Search, Rotations. Implementation: linked structure. Facts: self-balancing BST; balance factor ∈ {-1, 0, 1}.
- Red-Black Tree: operations Insert, Delete, Search, Rotations, Recoloring. Implementation: linked structure. Facts: guarantees O(log n) height; used in Java's TreeMap.
- B-Tree / B+ Tree: operations Search, Insert, Delete. Implementations: linked node blocks, disk-based. Facts: used in databases; optimized for disk access.
- Graph: operations Add/Remove Vertex, Add/Remove Edge, BFS, DFS, Shortest Path. Implementations: adjacency matrix, adjacency list, edge list. Facts: BFS uses a queue; DFS uses a stack or recursion.

Special ADTs:
- Map / Dictionary: operations put(key, value), get(key), remove(key). Implementations: hash table, tree map (BST, red-black tree), skip list. Facts: key-value pairs; HashMap gives average O(1) lookup.
- Set: operations add(), remove(), contains(). Implementations: hash table, balanced BST, bit vector. Facts: does not allow duplicates; a bit vector is memory-efficient for fixed ranges.
1) ARRAY (Linear, Static)
1D Array (One-Dimensional) - static DS
Declaration & Initialization:
a) Declaration with size: int arr[5];
b) Initialization at declaration: int arr[5] = {1, 2, 3, 4, 5};
c) Implicit size: int arr[] = {1, 2, 3}; → 3 elements: 1 2 3
e) Partial initialization: int arr[5] = {1, 2}; → the remaining elements are zero-initialized regardless of storage duration (C guarantees that a partially initialized array has its remainder set to 0); only a completely uninitialized local array holds indeterminate (garbage) values.
Advantages: simple and easy to use; random access with O(1) indexing.
Disadvantages: fixed size (static); insertion/deletion costly if not at the end.
Crux: basic array; stores elements linearly and supports fast indexing.

2D Array (Matrix)
Declaration & Initialization:
a) Declaration with rows and columns: int arr[3][4];
b) Full initialization at declaration: int arr[2][3] = {{1, 2, 3}, {4, 5, 6}};
c) Implicit rows: int arr[][3] = {...};
e) Partial initialization (remaining elements are zero-initialized):
   Grouped: int arr[2][3] = {{1}, {4, 5}}; → 2 rows, 3 columns:
   1 0 0
   4 5 0
   Flat: int b[2][3] = {1, 4}; → 2 rows, 3 columns:
   1 4 0
   0 0 0
Advantages: useful for matrix or tabular data; supports random access.
Disadvantages: fixed size; complex to resize; higher memory usage.
Crux: represents a grid or matrix; used for tabular data and image processing.

Multidimensional Arrays (3D, etc.)
Declaration & Initialization:
- Declaration with multiple dimensions: int arr[2][3][4];
- Initialization with nested braces: int arr[2][2][2] = {{{1, 2}, {3, 4}}, {{5, 6}, {7, 8}}};
- Partial initialization: e.g., int arr[2][2][2] = {{{1}, {}}, {{}, {7, 8}}}; → remaining elements are zero-initialized (note: empty braces {} are a C23/GNU feature).
Advantages: useful for complex data such as 3D models and simulations.
Disadvantages: fixed size; high memory usage; complex indexing.
Crux: represents higher-dimensional data like 3D space.
Address Calculation:
- 1D array, arr[i]: address = B + (i × w), where B = base address (&arr[0]), i = index, w = element size in bytes. Example: B = 1000, sizeof(int) = 4 → arr[3] = 1000 + (3 × 4) = 1012. Crux: multiply the index by the element size and add it to the base.
- 2D array, row-major, arr[i][j]: address = B + ((i × n) + j) × w, where i = row index, j = column index, n = number of columns. Example: B = 2000, sizeof(int) = 4, n = 4 → arr[2][1] = 2000 + ((2 × 4) + 1) × 4 = 2036. C and C++ use row-major order.
- 2D array, column-major, arr[i][j]: address = B + ((j × m) + i) × w, where m = number of rows. Example: B = 2000, sizeof(int) = 4, m = 3 → arr[2][1] = 2000 + ((1 × 3) + 2) × 4 = 2020. Used in Fortran and MATLAB, not in C.
**Operations - Complexity (Array)**
(B = Best, A = Average, W = Worst; 1D array with n elements, 2D array with m × n elements; every operation below uses O(1) extra space.)

Access (B/A/W): O(1) for both 1D and 2D. Direct access via index; constant time for all arrays.
Search, Linear:
- B: O(1). Element found at the first position.
- A: O(n) / O(m × n). Scans about half the elements.
- W: O(n) / O(m × n). Element absent or at the last position.
Search, Binary (requires a sorted array; for 2D, binary search over the flattened matrix or over sorted rows/columns):
- B: O(1). Element found at the middle.
- A: O(log n) / O(log (m × n)). Divides the search space in half each step; much faster than linear search.
- W: O(log n) / O(log (m × n)). Still logarithmic.
Insertion:
- B: O(1). Insert at the end if space is available; static arrays have fixed size.
- A: O(n) / O(m × n). Inserting anywhere else requires shifting elements.
- W: O(n) / O(m × n). Insert at the start shifts all elements.
Deletion:
- B: O(k), k ≤ n (or k ≤ m × n). Deleting near the end needs minimal shifting.
- A: O(n) / O(m × n). Deleting in the middle shifts many elements.
- W: O(n) / O(m × n). Deleting at the start shifts the entire array.
** STRING **
Aspect: C-style String (C & C++) vs std::string (C++ only)
- Representation: 1D array of char ending with '\0' | class object storing the characters internally plus size/capacity.
- Null terminator: ✅ required to mark the end of the string | ❌ not required (length stored internally).
- Memory allocation: fixed size (static array or manual malloc) | dynamic (auto-resizes as needed).
- Size flexibility: ❌ no (array size fixed after creation) | ✅ yes (resizes automatically).
- Header file: <string.h> in C, <cstring> in C++ | <string>.
- Memory layout for "Hello": [H][e][l][l][o][\0], contiguous in memory | characters plus metadata (length, capacity); implementation-dependent.
** POINTERS **
Definition: a pointer is a variable that stores the memory address of another variable. Example: int *p; // pointer to int
Types of Pointers:
- Null pointer → int *p = NULL; (points to nothing)
- Wild pointer → declared but uninitialized; holds a garbage address
- Dangling pointer → points to memory that was freed or went out of scope
Note: dereferencing wild and dangling pointers causes undefined behavior.
Array with Pointers (C): dynamic array creation using malloc and free.
#include <stdio.h>
#include <stdlib.h>

int main() {
    int n = 5;
    int *arr = (int *)malloc(n * sizeof(int));  /* allocate n ints on the heap */
    /* ... use arr[0] .. arr[n-1] ... */
    free(arr);                                  /* release the memory */
    return 0;
}
Array with Pointers (C++): dynamic array creation using new and delete[].
#include <iostream>
using namespace std;

int main() {
    int n = 5;
    int *arr = new int[n];   // allocate n ints on the heap
    delete[] arr;            // release with delete[], not delete
    return 0;
}
Pointer Type Mismatch: occurs when a pointer of one type is made to point at a variable of another type. Direct assignment without a cast causes a compile-time warning/error; a cast silences the diagnostic, but dereferencing the result can invoke undefined behavior when sizes or alignment differ.
Example:
#include <stdio.h>

int main() {
    int *p;
    char c = 'A';        // ASCII 65
    //p = &c;            // compiler warning: incompatible pointer types
    p = (int *)&c;       // forced cast compiles, but reading *p is undefined behavior
    return 0;
}
C vs C++ Allocation: C → malloc / calloc / free; C++ → new / delete[] or std::vector. In C++ prefer std::vector for safety.
2) LINKED LIST
Singly Linked List
● [Data|Next] -> [Data|Next] -> [Data|Next] -> NULL
Advantages: simple structure, dynamic size, efficient insertion/deletion at the head.
Disadvantages: forward traversal only, no direct access to the previous node, O(n) search.
Use Case: basic dynamic data storage; implementing stacks and queues.

Doubly Linked List
● NULL <- [Prev|Data|Next] <-> [Prev|Data|Next] <-> [Prev|Data|Next] -> NULL
Advantages: traversal in both directions; easier deletion/insertion when the node address is known.
Disadvantages: extra memory for the previous pointer; more complex implementation.
Use Case: deque implementation; navigation in browsers (back/forward).

Circular Singly Linked List
● [Data|Next] -> [Data|Next] -> [Data|Next] -> (back to the first node)
● struct Node {
      int data;
      struct Node* next;
  };
  struct Node* head = NULL; // last node's next points to head
Advantages: traversal can start from any node; efficient for circular traversal/rotation.
Disadvantages: forward traversal only; more complex insertion/deletion logic.
Use Case: round-robin scheduling, playlist looping.

Circular Doubly Linked List
● [Prev|Data|Next] <-> [Prev|Data|Next] <-> [Prev|Data|Next] (head's prev = tail, tail's next = head)
● struct Node {
      int data;
      struct Node* prev;
      struct Node* next;
  };
  struct Node* head = NULL; // prev of head = tail, next of tail = head
Advantages: traverse from any node in both directions; no NULL pointers.
Disadvantages: highest memory overhead; most complex to implement.
Use Case: advanced scheduling; multi-directional navigation in apps.
**Operations - Complexity (Linked List)**
(Columns: Singly LL | Doubly LL | Circular Singly LL | Circular Doubly LL)

Insertion at Head: O(1) | O(1) | O(1) | O(1). Always O(1) since the head pointer is known.
Insertion at Tail: O(n), O(1) with a tail pointer | O(n), O(1) with a tail pointer | O(1) with a tail pointer | O(1) with a tail pointer. A tail pointer drastically speeds up tail insertion.
Insertion at Middle (known pointer): O(1) in all four. Direct pointer access allows constant-time insertion.
Deletion at Head: O(1) in all four. Always O(1) since the head pointer is known.
Deletion at Tail: O(n) | O(1) with a tail pointer (via prev) | O(n) | O(1) with a tail pointer (via prev). Doubly-linked variants can delete the tail in O(1) using prev.
CRUX (overall): all variants need O(n) for traversal/search. A singly list needs O(n) to delete the tail even with a tail pointer (it has no prev link to the predecessor); a doubly list deletes it in O(1) via prev. In circular lists a tail pointer gives O(1) tail insertion (tail->next = head), and the circular doubly list gets both tail insertion and deletion in O(1).
Memory overhead: Array is minimal (only the data is stored); Linked List needs extra memory for one or two pointers in each node.
3) STACK
A linear data structure that follows the LIFO (Last In, First Out) principle; the last element inserted is the first to be removed.
(Columns: Array-based Stack | Linked-List Stack | Stack using Two Queues)

Description: implemented using a fixed-size or dynamic array, with a top index tracking the top element | implemented using nodes with data and a next pointer, with top pointing to the head node | simulated using two queues; either push or pop is costly depending on the method.
Advantages: simple, fast access, O(1) push/pop, contiguous memory | dynamic size, no memory wastage, efficient insertion/deletion | shows the flexibility of data structures; useful in theoretical/interview questions.
Disadvantages: fixed size unless a dynamic array is used, resizing is costly, requires contiguous memory | extra memory for pointers, slightly slower, more complex to implement | slower (O(n) for either push or pop), complex to implement, rarely used in production.
Operations & Complexity:
- Array-based and linked-list stack: Push O(1), Pop O(1), Peek/Top O(1), Search O(n).
- Two queues, push-costly method: Push O(n), Pop O(1).
- Two queues, pop-costly method: Push O(1), Pop O(n).
Applications:
- Array-based: expression evaluation (infix/postfix/prefix) for fixed-size or known-length expressions; call stack for recursion; undo/redo in editors; backtracking problems (maze solving); syntax parsing.
- Linked-list: expression evaluation for dynamic/large expressions; recursion handling; function call management; undo/redo; backtracking; browser history navigation.
- Two queues: interview/theoretical problems; understanding LIFO via FIFO; algorithm exercises; advanced DS concepts.

Example (array-based stack):
#define MAX 100
int stack[MAX];
int top = -1;

void push(int x){
    if(top < MAX-1)
        stack[++top] = x;
}

int pop(){
    if(top >= 0)
        return stack[top--];
    return -1; // stack empty
}

int peek(){
    if(top >= 0)
        return stack[top];
    return -1;
}

Example (linked-list stack):
struct Node {
    int data;
    struct Node* next;
};
struct Node* top = NULL;

void push(int x){
    struct Node* newNode = malloc(sizeof(struct Node));
    newNode->data = x;
    newNode->next = top;
    top = newNode;
}

int pop(){
    if(top == NULL)
        return -1; // stack empty
    int val = top->data;
    struct Node* temp = top;
    top = top->next;
    free(temp);
    return val;
}

int peek(){
    if(top == NULL) return -1;
    return top->data;
}

Stack using two queues: demonstrates LIFO using FIFO. Push-costly method: push is O(n), pop is O(1). Pop-costly method: push is O(1), pop is O(n). Mainly used for interviews and theoretical understanding; shows the flexibility of implementing one DS using another, but is not efficient for practical applications compared to the array or linked-list stack.

Use Cases: array-based when the size is known/bounded and fast access is required | linked-list when the size is unknown and frequent insert/delete is needed | two queues mainly for theoretical/academic/interview purposes.
Crux: array-based is best for fixed size and fast access, very simple | linked-list is best for dynamic size and frequent insert/delete, flexible | two queues mainly demonstrate LIFO using FIFO.
** EVALUATION OF EXPRESSION - PREFIX, INFIX and POSTFIX **
(Columns: Infix | Postfix | Prefix)

Definition: operator between operands (example: A + B) | operator after operands (example: A B +) | operator before operands (example: + A B).
Parentheses required: yes, to indicate precedence | no, precedence is implicit | no, precedence is implicit.
Evaluation: harder for computers, requires parsing | easy using a stack | easy using a stack.
Conversion tricks/rules: convert to postfix/prefix using a stack and precedence rules | evaluate left to right using a stack; convert back to infix/prefix using a stack | evaluate right to left using a stack; convert back to infix/postfix using a stack.
Conversion steps (from infix): to postfix: use a stack for operators; operand → add to output; ( → push; ) → pop until (; operator → first pop operators of higher or equal precedence. To prefix: reverse the infix expression and swap ( ↔ ), convert to postfix, then reverse the result.
Example: (A + B) * C → postfix: A B + C *, prefix: * + A B C.
Evaluation method: requires parsing rules or conversion to postfix/prefix | scan left to right, push operands, apply operators | scan right to left, push operands, apply operators.
Use cases: human-readable algebraic expressions, programming code | compilers, calculators, stack-based evaluation, avoiding parentheses | certain programming languages, prefix calculators, theoretical algorithms.
Important exam tips: use parentheses to handle precedence; remember associativity rules | stack-based evaluation O(n); no parentheses required; left-to-right scanning | stack-based evaluation O(n); right-to-left scanning; conversion: reverse → postfix → reverse.
** CONVERSIONS **
Expression: (A + B) * (C - D)

1️⃣ Infix → Postfix (operator stack; first steps shown)
Symbol → Stack → Output:
( → ( → (empty)
A → ( → A
+ → (+ → A
B → (+ → A B
... remaining symbols are processed the same way.
✅ Postfix: A B + C D - *

2️⃣ Infix → Prefix (reverse the infix expression and swap ( ↔ ), giving (D - C) * (B + A); convert to postfix; reverse the result)
Symbol → Stack → Output (on the reversed expression):
( → ( → (empty)
D → ( → D
- → (- → D
C → (- → D C
... remaining symbols are processed the same way, then the postfix result is reversed.
✅ Prefix: * + A B - C D

3️⃣ Postfix → Infix
Postfix: A B + C D - *
Algorithm: scan left to right; operand → push; operator → pop two operands, combine as (operand1 operator operand2), push the result.
Step-wise (symbol → stack):
A → A
B → A, B
+ → (A + B)
C → (A + B), C
D → (A + B), C, D
- → (A + B), (C - D)
* → ((A + B) * (C - D))
✅ Infix: (A + B) * (C - D)

4️⃣ Prefix → Infix
Prefix: * + A B - C D
Algorithm: scan right to left; operand → push; operator → pop two operands, combine as (operand1 operator operand2), push the result.
Step-wise:
D → D
C → D, C
- → (C - D)
B → (C - D), B
A → (C - D), B, A
+ → (C - D), (A + B)
* → ((A + B) * (C - D))
✅ Infix: (A + B) * (C - D)

5️⃣ Postfix → Prefix
Postfix: A B + C D - *
Algorithm: scan left to right; operand → push; operator → pop two operands, combine as operator operand1 operand2, push the result.
Step-wise:
A → A
B → A, B
+ → + A B
C → + A B, C
D → + A B, C, D
- → + A B, - C D
* → * + A B - C D
✅ Prefix: * + A B - C D

6️⃣ Prefix → Postfix
Prefix: * + A B - C D
Algorithm: scan right to left; operand → push; operator → pop two operands, combine as operand1 operand2 operator, push the result.
Step-wise:
D → D
C → D, C
- → C D -
B → C D -, B
A → C D -, B, A
+ → C D -, A B +
* → A B + C D - *
✅ Postfix: A B + C D - *
4) QUEUE
Linear Data Structure: elements arranged sequentially; follows a specific order (FIFO).
FIFO (First In First Out): the first element inserted is the first removed.
Key Properties: insertion (enqueue) happens at the rear, deletion (dequeue) at the front; front and rear pointers track both ends; enqueue/dequeue/peek are O(1).
Applications of Queue: CPU and job scheduling, BFS and level-order traversal, buffering (I/O, printers), request handling.

(Columns: Simple Queue | Circular Queue)
Definition: linear DS following FIFO; insert at the rear, remove from the front | array-based queue where the rear wraps around to the front, forming a circle.
Visualization: Front -> A -> B -> C -> Rear | circular array: Front -> A -> B -> C <- Rear (wraps).
Time Complexity: O(1) for enqueue/dequeue/peek in both.
Examples / Initialization: int queue[MAX]; int front = -1, rear = -1; (for the circular queue, use modulo arithmetic for wrap-around).
Important Facts: the simple queue may waste space in an array implementation once the front advances | the circular queue reuses that space by wrapping the rear to the front with modulo.
Crux / Tips: simple queue for basic FIFO tasks | circular queue preferred for fixed-size arrays because of its memory efficiency; modulo arithmetic ensures wrap-around.
(Columns: Deque (Double-Ended Queue) | Priority Queue)
Definition: queue with insertion/deletion at both front and rear | queue where elements carry a priority; the highest priority is dequeued first.
Visualization: Front <-> A <-> B <-> C <-> Rear | highest-priority element removed first (not strictly FIFO).
Time Complexity: O(1) insert/delete at either end | O(n) with an array or linked list, O(log n) with a heap.
Disadvantages: more complex than a linear queue | slower insertion/deletion if not heap-based.
Examples / Initialization: struct Node {int data; Node* next; Node* prev;}; | struct Node {int data; int priority; Node* next;};
Conditions / Exam Tips: deque empty when front == NULL; handle both ends carefully on insert/delete | queue empty when front == NULL; maintain priority order during insertion; the heap implementation is efficient.
Crux / Tips: flexible insertion/deletion at both ends | use a heap for efficient insertion/deletion; otherwise slower.
5) TREES
A tree is a hierarchical (Non-Linear) data structure consisting of nodes.
Nodes contain data and links (edges) to child nodes.
A tree with n nodes has exactly n – 1 edges.
There is exactly one path between any two nodes.
** IMPORTANT TERMS **
Root: topmost node of a tree; has no parent.
Parent Node: node that has one or more children.
Child Node: node that has a parent node.
Leaf / External Node: node with no children (0 children).
Internal Node: node with at least one child (≥ 1 child).
Siblings: nodes that share the same parent.
Edge: connection between two nodes.
Path: sequence of nodes connected by edges.
Path Length: number of edges in a path.
Degree of Node: number of children a node has.
Degree of Tree: maximum degree among all nodes in the tree.
Height of Node: number of edges on the longest path from the node down to a leaf. Note: height of a leaf node is 0.
Height of Tree: height of the root node; longest path from the root to any leaf.
Depth of Node: number of edges from the root to that node. Note: depth of the root node is 0.
Depth of Tree: maximum depth among all nodes (same as the height of the tree).
Level of Node: level of root = 1; level = depth + 1.
Subtree: tree formed by a node and all its descendants.
Forest: collection of disjoint trees.
Binary Tree: each node has at most 2 children (left and right).
Full Binary Tree: every node has 0 or 2 children.
Complete Binary Tree: all levels completely filled except possibly the last, which is filled left to right.
Perfect Binary Tree: complete, with all leaves at the same level.
Balanced Tree: difference between the heights of the left and right subtrees ≤ 1 (e.g., AVL tree).
Degenerate / Pathological Tree: each parent has only one child; essentially a linked list.
Binary Search Tree (BST): binary tree where left child < parent < right child.
AVL Tree / Balanced BST: BST with the height-balance property.
Preorder Traversal (DLR): visit root → left subtree → right subtree.
Inorder Traversal (LDR): visit left subtree → root → right subtree.
Postorder Traversal (LRD): visit left subtree → right subtree → root.
Level-order Traversal / BFS: visit nodes level by level using a queue.
Internal Path Length: sum of the depths of all internal nodes.
External Path Length: sum of the depths of all leaf nodes.
1️⃣ General / N-ary Tree
- Representation: linked (first-child / next-sibling), array (for fixed small n)
- Complexity: Preorder, Postorder, Level-order traversal O(n); insertion/deletion O(1) at a known node; search O(n); space O(h) recursion stack
- Properties: a node can have up to n children
- Formulas: max nodes at level l: n^l; min nodes at level l: 1; max nodes for height h: (n^(h+1) - 1)/(n - 1); min nodes for height h: h + 1
- Notes: used for hierarchical structures (file systems, org charts)

2️⃣ Binary Trees
- Representation: linked (Node* left, Node* right), array (for complete trees)
- Complexity: traversals O(n); insertion O(1) at head / O(n) in general; deletion O(n); search O(n); space O(h) recursion stack (O(log n) for a balanced tree)
- Properties: at most 2 children per node
- Formulas: max nodes at level l: 2^l; min nodes at level l: 1; max nodes for height h: 2^(h+1) - 1; min nodes for height h: h + 1; max height with n nodes: n - 1 (skewed); min height with n nodes: ⌈log2(n+1)⌉ - 1
- Notes: parent-child relationships are essential; sparse trees waste array space

• Full / Proper / Strict Binary Tree: each node has 0 or 2 children; linked list or array; traversal O(n). When perfectly balanced, leaves at height h: 2^h and total nodes: 2^(h+1) - 1.
• Complete Binary Tree: all levels filled except the last, which is filled left to right; nodes at the last level ≤ 2^h; easy array representation (Left = 2i+1, Right = 2i+2), so arrays are preferred; traversal O(n).
• Perfect Binary Tree: all levels completely filled; total nodes: 2^(h+1) - 1; height: log2(n+1) - 1; the node count strictly follows the formula.
• Skewed Binary Tree (Left/Right): every node has only one child; one node per level; height = n - 1; degenerate tree that behaves like a linked list.
• Binary Search Tree (BST): left < parent < right, typically with unique keys; insertion/search/deletion average O(log n), worst O(n); traversals O(n); max height (unbalanced): n - 1; min height (balanced): ⌈log2(n+1)⌉ - 1; inorder traversal yields the keys in sorted order.
• AVL Tree (self-balancing BST): balance factor ∈ {-1, 0, +1} at every node; insertion/search/deletion O(log n); rotations O(1) each; height h ≤ 1.44 log2(n+2) - 0.328; height-balanced, so operations stay efficient.
• Red-Black Tree (self-balancing BST): root is black, a red node cannot have a red child, and all root-to-leaf paths have equal black height; insertion/search/deletion O(log n); rotations O(1) each; height h ≤ 2 log2(n+1); widely used in OS kernels, memory management, and DB indexing.
• Splay Tree: rotates the most recently accessed node to the root (splaying); search/insertion/deletion O(log n) amortized, O(n) worst case; frequently accessed nodes become quicker to reach; useful in caches, memory management, and access sequences with locality.
• Expression Tree: leaves are operands, internal nodes are operators; for a full binary expression tree, internal nodes = number of operands - 1; Preorder/Inorder/Postorder traversals give the prefix/infix/postfix forms; used for arithmetic evaluation and conversions.
• Heap (Max/Min): complete binary tree, usually array-based; Max-Heap: parent ≥ children, Min-Heap: parent ≤ children; insert O(log n), delete root O(log n), find max/min O(1); height h = ⌊log2 n⌋; used in priority queues, heapsort, scheduling.

3️⃣ Multi-way / Specialized Trees
• B-Tree: multi-way search tree; nodes contain multiple keys; all leaves at the same depth; for order m, max keys per node: m - 1, min keys per node: ⌈m/2⌉ - 1; insert/search/delete O(log n); disk-optimized, used in databases and file systems.
• B+ Tree: all data records stored at the leaf nodes; internal nodes store keys only; leaf nodes are linked for fast sequential access; insert/search/delete O(log n).
** TRAVERSAL - Pre-Order, In-Order, Post-Order, Level-Order **

DFS (Depth-First Search)
- Preorder (Root → Left → Right): node visited before its children
- Implementation: recursive, or iterative using a stack
- Time: O(n); Space: O(h) recursion stack (skewed tree O(n), balanced tree O(log n))
- Used to create a copy of the tree and for prefix expression evaluation
- Frequently asked in expression-tree problems

BFS (Breadth-First Search)
- Level-order: level by level, top → bottom, left → right
- Implementation: iterative using a queue
- Time: O(n); Space: O(max width of tree) — queue space equals the max width of the tree
- Visits nodes level by level; used in heap operations, shortest-path algorithms, hierarchical data processing
- Variants such as reverse level-order are sometimes asked
- CRUX: BFS is iterative-friendly; DFS is recursive-friendly
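The four traversal orders can be sketched on a minimal binary tree; the `Node` class and the three-node example tree are assumptions for illustration:

```python
from collections import deque

class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def preorder(n):   # Root -> Left -> Right
    return [n.val] + preorder(n.left) + preorder(n.right) if n else []

def inorder(n):    # Left -> Root -> Right
    return inorder(n.left) + [n.val] + inorder(n.right) if n else []

def postorder(n):  # Left -> Right -> Root
    return postorder(n.left) + postorder(n.right) + [n.val] if n else []

def level_order(n):  # BFS with a queue: O(n) time, O(max width) space
    out, q = [], deque([n] if n else [])
    while q:
        cur = q.popleft()
        out.append(cur.val)
        if cur.left:
            q.append(cur.left)
        if cur.right:
            q.append(cur.right)
    return out

#        1
#       / \
#      2   3
root = Node(1, Node(2), Node(3))
print(preorder(root))     # [1, 2, 3]
print(inorder(root))      # [2, 1, 3]
print(postorder(root))    # [2, 3, 1]
print(level_order(root))  # [1, 2, 3]
```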
QUESTIONS BASED ON -> PRE-ORDER, IN-ORDER, POST-ORDER, LEVEL-ORDER
6) GRAPH
● A graph G is a mathematical structure used to model pairwise relationships between objects.
● Formally, it is defined as a pair G = (V, E) where:
○ V is a set of vertices (nodes) representing the objects.
○ E is a set of edges (connections) representing the relationships between vertices.
● Each edge connects two vertices; in a directed graph, edges have a direction (from one vertex to another), while in an undirected graph they have none.
● Graphs may be weighted (edges carry a value) or unweighted (edges only indicate a connection).
● Graphs can be simple (no loops or multiple edges) or multigraphs (may have loops or multiple edges).
Graph (G) – The set of vertices (V) and edges (E), written G = (V, E). Example: G = ({A, B, C}, {AB, BC, CA}). CRUX: foundation of all graph concepts; defined by vertices + edges only; models pairwise relationships between objects.
Vertex (Node) – Fundamental unit of a graph; represents an object or entity. Example: in a social network, each person is a vertex. CRUX: building blocks of a graph; represent entities in real-world problems.
Edge – Connection between two vertices; can be directed (ordered pair) or undirected (unordered pair). Example: a road between two cities. CRUX: represents relationships or connections.
In-degree – For directed graphs: the number of incoming edges at a vertex. Example: B has edges from A and C → in-degree = 2. CRUX: incoming-traffic measure.
Out-degree – For directed graphs: the number of outgoing edges from a vertex. Example: A has edges to B and C → out-degree = 2. CRUX: outgoing-traffic measure.
Isolated Vertex – Vertex with degree 0; no connections to other vertices. Example: a disconnected computer in a network. CRUX: completely isolated in the graph.
Pendant Vertex – Vertex with degree 1; connected to exactly one other vertex. Example: a leaf node in a tree. CRUX: represents end-points in a structure.
Source Vertex – Directed-graph vertex with in-degree = 0 and out-degree > 0. Example: A with edges to B and C but none coming in. CRUX: starting point of directed flows.
Sink Vertex – Directed-graph vertex with in-degree > 0 and out-degree = 0. Example: B receiving edges from A and C but with no outgoing edges. CRUX: end point of directed flows.
Neighbor – Vertices directly connected via an edge. Example: A and B connected by edge AB are neighbors. CRUX: directly connected vertices only.
Incident Edge – An edge connected to a vertex. Example: edge AB is incident to A and B. CRUX: edge touching a vertex.
Non-incident Edge – An edge not connected to the vertex. Example: edge CD with respect to vertex A. CRUX: no endpoint matches the vertex.
Adjacent Vertices – Vertices connected by a common edge. Example: A and B in AB. CRUX: neighbor vertices = adjacent vertices.
Reachability – A vertex u can reach vertex v if there exists a path from u to v. Example: A → B → C → D means A can reach D. CRUX: determines whether traversal is possible.
Walk – Sequence of vertices and edges in which repetition of vertices/edges is allowed. Example: A–B–A–C. CRUX: most general form of movement.
Trail – Walk with no repeated edges (vertices may repeat). Example: A–B–C–A. CRUX: no edge repetition allowed.
Path – Walk with no repeated vertices (and thus no repeated edges). Example: A–B–C–D. CRUX: simple movement with no revisits.
Cycle – Path whose first and last vertices are the same; no repeated vertices except start/end. Example: A–B–C–A. CRUX: closed path with unique vertices.
Connected Graph – Undirected graph with a path between every pair of vertices. Example: a road map with no isolated parts. CRUX: all vertices reachable from each other.
Strongly Connected Graph – Directed graph in which every vertex can reach every other vertex. Example: flight routes where you can go both ways between cities. CRUX: mutual reachability in directed graphs.
Weakly Connected Graph – Directed graph that becomes connected when its edges are treated as undirected. Example: one-way roads forming a connected map when direction is ignored. CRUX: connected if you drop direction information.
Isomorphic Graphs – Graphs with the same connectivity but possibly different labels or drawings. Example: two triangle graphs labeled differently. CRUX: same structure, different names.
Subgraph – Graph formed from a subset of the vertices and edges of another graph. Example: a smaller network extracted from a larger one. CRUX: part of a bigger graph.
Induced Subgraph – Subgraph formed by a set of vertices together with all edges between them in the original graph. Example: choose vertices {A, B, C} and keep all connecting edges. CRUX: keeps all edges among the chosen vertices.
Spanning Subgraph – Subgraph containing all vertices of the original graph but possibly fewer edges. Example: an MST is a spanning subgraph. CRUX: all vertices present, edges may be missing.
Complete Graph (Kₙ) – Every vertex connected to every other vertex; edges = n(n−1)/2 for undirected graphs. Example: K₄ has 4 vertices and 6 edges. CRUX: maximum possible connections for the given vertices.
Null Graph – Graph with vertices but no edges. Example: 4 vertices, no connections. CRUX: completely disconnected structure.
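Degree-based terms such as source and sink can be checked directly on a small directed graph; the adjacency-list dictionary below is an illustrative assumption:

```python
# Directed graph: A -> B, A -> C, C -> B
graph = {"A": ["B", "C"], "B": [], "C": ["B"]}

# Out-degree: number of outgoing edges from each vertex
out_degree = {v: len(nbrs) for v, nbrs in graph.items()}

# In-degree: count how many edge lists each vertex appears in
in_degree = {v: 0 for v in graph}
for nbrs in graph.values():
    for v in nbrs:
        in_degree[v] += 1

print(out_degree)  # {'A': 2, 'B': 0, 'C': 1}
print(in_degree)   # {'A': 0, 'B': 2, 'C': 1}
# A is a source (in-degree 0, out-degree > 0); B is a sink (in-degree > 0, out-degree 0).
```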
TYPES OF GRAPHS

Direction of Edges
• Undirected Graph – Edges have no direction; unordered pairs (u, v)
- Simple Graph: no loops or multiple edges. Example: triangle K₃. CRUX: the basic undirected graph.
- Pseudograph: loops and multiple edges allowed. Example: vertex A with a loop plus multiple edges to B. CRUX: the most general undirected graph.
• Directed Graph (Digraph) – Edges have direction; ordered pairs (u, v)
- Simple Digraph: no loops, no multiple directed edges. Example: A→B→C. CRUX: the basic directed graph.
- Multidigraph: multiple directed edges allowed. Example: two edges from A → B. CRUX: flow networks & routing.
- Weighted Digraph: directed edges carry weights. Example: flight routes with cost. CRUX: shortest-path problems.
• Mixed Graph – Both directed and undirected edges. Example: one-way & two-way roads. CRUX: real-world traffic modeling.

Edge Weights
• Weighted Graph – Edges have weights.
- Positive weighted: all weights > 0. Example: MST input. CRUX: Dijkstra / Prim / Kruskal applicable.
- Negative weighted: some edges negative. Example: Bellman-Ford graph. CRUX: handles negative edges.
• Unweighted Graph – All edges equal; default weight = 1. Example: simple road network. CRUX: weight ignored.
• Graph with Loops – At least one loop; a loop counts twice in its vertex's degree. Example: vertex A with an A–A edge.

Connectivity
• Connected Graph – All vertices reachable. Strongly connected (digraph): a path exists both ways between every pair. Example: flight network. CRUX: mutual reachability.
• Disconnected Graph – Some vertices unreachable; multiple components may exist. Example: two separate triangles. CRUX: BFS/DFS needed to find disconnected components.

Regularity
• Regular Graph – All vertices have the same degree k; symmetric connectivity. Formula: sum of degrees = 2E. Example: square cycle, k = 2. CRUX: useful for network design.

Cycles
• Cyclic Graph – Contains at least one cycle. Example: triangle A–B–C–A. CRUX: Euler/Hamilton problems.
• Acyclic Graph – No cycles. DAG (Directed Acyclic Graph): directed with no cycles; topological sort possible. Example: task scheduling. CRUX: dependency / scheduling problems.

Special / Named Graphs
• Null Graph – Vertices only, no edges; minimum structure. Example: 4 vertices, 0 edges. CRUX: the empty graph.
• Hamiltonian Graph – Contains a Hamiltonian cycle (visits all vertices exactly once). Dirac's theorem: deg(v) ≥ n/2 for every vertex is sufficient. Example: pentagonal cycle. CRUX: Hamiltonian path/cycle problems.
• Eulerian Graph – Contains an Eulerian cycle: in an undirected graph, all vertices have even degree; in a digraph, in-degree = out-degree for all vertices. Example: graph with all even degrees. CRUX: Euler path/cycle problems.
GRAPH - TRAVERSAL: DFS vs BFS

• Definition / Idea – DFS explores as far as possible along each branch before backtracking; BFS explores all neighbors of a vertex before moving to the next level.
• Traversal Type – DFS is depth-wise (goes deep first); BFS is level-wise (goes broad first).
• Time Complexity – Both: O(V + E) with an adjacency list, O(V²) with an adjacency matrix.
• Space Complexity – DFS: O(V) for the recursion or explicit stack; BFS: O(V) for the queue.
• Cycle Detection – DFS can detect cycles in both directed and undirected graphs; BFS can detect them in undirected graphs (with parent tracking) but is less intuitive for directed graphs.
• Connectivity – Both can be used to check connectivity or find components.
• Tree / Forest – DFS produces a DFS tree/forest; BFS produces a BFS tree/forest.
• Vertex Visiting Order – DFS: deep before wide, following a path to its end before backtracking; BFS: wide before deep, visiting all vertices at the current distance before moving deeper.
• Characteristics – DFS uses less memory on sparse graphs but can get trapped in deep paths if not careful; BFS guarantees the shortest path in unweighted graphs but uses more memory on wide graphs.
• CRUX – DFS: stack-based, deep-first, shortest path not guaranteed. BFS: queue-based, level-first, shortest path guaranteed in unweighted graphs.
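The comparison above can be sketched on a small undirected graph; the 0–1–2–3–4 adjacency-list encoding below is an illustrative assumption:

```python
from collections import deque

# Undirected graph as an adjacency list: 0-1, 0-2, 1-3, 2-4
graph = {0: [1, 2], 1: [0, 3], 2: [0, 4], 3: [1], 4: [2]}

def bfs(g, src):
    """Queue-based, level-first: O(V+E) time, O(V) space."""
    visited, order, q = {src}, [], deque([src])
    while q:
        u = q.popleft()
        order.append(u)
        for v in g[u]:
            if v not in visited:
                visited.add(v)
                q.append(v)
    return order

def dfs(g, u, visited=None, order=None):
    """Recursion-based, deep-first: O(V+E) time, O(V) stack space."""
    if visited is None:
        visited, order = set(), []
    visited.add(u)
    order.append(u)
    for v in g[u]:
        if v not in visited:
            dfs(g, v, visited, order)
    return order

print(bfs(graph, 0))  # [0, 1, 2, 3, 4] — wide before deep
print(dfs(graph, 0))  # [0, 1, 3, 2, 4] — deep before wide
```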
GRAPH REPRESENTATION - METHODS

• Adjacency Matrix
- Structure: V×V matrix; [i][j] = 1 if an edge exists (weighted graphs store the weight instead of 1)
- Traversing a vertex's neighbors: O(V); edge check: O(1)
- Key facts: best for dense graphs; wastes space on sparse graphs; supports weighted edges directly
• Adjacency List
- Structure: array/list of lists; each vertex stores its neighbors; weighted edges stored as (neighbor, weight) pairs
- Traversing a vertex's neighbors: O(degree(v))
- Key facts: best for sparse graphs; traversal-efficient; used in BFS/DFS/Dijkstra/Prim
• Incidence Matrix
- Structure: V×E matrix; [i][j] = 1 if vertex i is incident to edge j; directed graphs use +1 (source) and −1 (destination)
- Traversal: slow; rarely used in coding
- Key facts: supports loops & multiple edges; best for edge-vertex incidence problems
• Edge List
- Structure: list of edges as pairs (u, v); weighted edges stored as (u, v, w)
- Traversal: slow; edge lookup is O(E)
- Key facts: simple to iterate over edges; ideal for Kruskal's MST

CRUX (exam focus): dense graphs → adjacency matrix (O(1) edge check, weighted edges supported); sparse graphs → adjacency list (efficient for BFS/DFS/Dijkstra/Prim, weights as pairs); incidence matrix → rarely used, allows loops & multiple edges, directed edges as +1/−1; edge list → simple iteration, Kruskal's MST, slow O(E) edge lookup.
GRAPH TRAVERSAL & ALGORITHMS

Graph Traversal
• Breadth-First Search (BFS)
- Steps: level-wise traversal using a queue; start from a source vertex; visit all neighbors of the current vertex before moving deeper; mark visited vertices to avoid repetition
- Complexity: Time O(V + E); Space O(V)
- Example: graph 0–1–2–3–4, BFS from 0 → visit order 0, 1, 2, 3, 4
- CRUX: finds shortest paths in unweighted graphs; useful for connectivity & components; queue-based traversal
• Depth-First Search (DFS)
- Steps: deep traversal using recursion or a stack; start from a vertex; explore along a branch before backtracking; mark visited vertices
- Complexity: Time O(V + E); Space O(V)
- Example: graph 0–1–2–3–4, DFS from 0 → visit order 0, 1, 2, 3, 4 (order may vary)
- CRUX: detects cycles, components, topological ordering; stack- or recursion-based; basis for many graph algorithms

Minimum Spanning Tree (MST)
• Kruskal's Algorithm
- Steps: sort all edges by weight; pick edges in increasing order; add an edge only if it does not form a cycle (use union-find); repeat until the MST is formed
- Complexity: Time O(E log E); Space O(V + E)
- Example: edges (A–B, 2), (B–C, 3), (A–C, 1) → MST edges chosen: A–C (1), A–B (2)
- CRUX: edge-based MST algorithm; works on weighted undirected graphs; union-find prevents cycles
• Prim's Algorithm
- Steps: start with any vertex; select the minimum-weight edge connecting the MST to the remaining vertices; repeat until all vertices are included
- Complexity: Time O(V²) with a matrix, or O(E log V) with a min-heap; Space O(V)
- Example: vertices A, B, C with edges (A–B, 2), (B–C, 3), (A–C, 1) → MST edges A–C (1), A–B (2)
- CRUX: vertex-based MST; good for dense graphs; a priority queue (min-heap) speeds it up

Shortest Path
• Dijkstra's Algorithm
- Steps: initialize distances from the source to ∞ (source = 0); pick the vertex with minimum distance; relax all adjacent edges; repeat until all vertices are processed
- Complexity: Time O(V²), or O(E log V) with a min-heap; Space O(V)
- Example: weighted edges A–B = 1, A–C = 4, B–C = 2, source A → shortest paths A→B = 1, A→C = 3
- CRUX: no negative edges; greedy approach; basis for routing and pathfinding
• Bellman-Ford Algorithm
- Steps: initialize distances from the source to ∞ (source = 0); relax all edges V−1 times; one extra iteration detects negative cycles
- Complexity: Time O(VE); Space O(V)
- Example: edges A→B = 4, A→C = 5, B→C = −2, source A → shortest paths A→B = 4, A→C = 2
- CRUX: handles negative edges; detects negative-weight cycles
• Floyd-Warshall Algorithm
- Steps: dynamic programming for all-pairs shortest paths; dist[i][j] = min(dist[i][j], dist[i][k] + dist[k][j]) for all k
- Complexity: Time O(V³); Space O(V²)
- Example: vertices A, B, C with a weighted adjacency matrix → shortest paths updated to A→B = 2, A→C = 3, B→C = 1
- CRUX: shortest paths between all vertex pairs; works for negative edges (no negative cycles)

Special Traversals / Concepts
• Topological Sort
- Steps: linear ordering of vertices in a DAG; DFS-based: push a vertex onto a stack after visiting all its neighbors; popping the stack gives the ordering
- Complexity: Time O(V + E); Space O(V)
- Example: DAG edges 1→2, 1→4, 2→3, 4→5 → topological order 1, 2, 4, 3, 5
- CRUX: only for Directed Acyclic Graphs (DAGs); used in scheduling and precedence problems
• Eulerian Path & Circuit
- Path: visits every edge exactly once; Circuit: a closed Eulerian path
- Conditions: undirected Eulerian circuit → all vertices have even degree; undirected Eulerian path → exactly 0 or 2 vertices of odd degree; directed Eulerian circuit → in-degree = out-degree for every vertex; directed Eulerian path → all vertices balanced except start/end
- Example: A–B–C–A has an Eulerian circuit (all vertices have even degree)
- CRUX: degree conditions are key for exams; the path-vs-circuit distinction matters
• Hamiltonian Path & Circuit
- Path: visits every vertex exactly once; Circuit: a closed Hamiltonian path
- NP-complete problem; no simple formula
- Example: A–B–C–D–A has a Hamiltonian circuit (visit all vertices once and return to start)
- CRUX: used in TSP problems; often theoretical questions
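The DFS-based topological sort described above can be sketched on the example DAG (1→2, 1→4, 2→3, 4→5). Note that a DAG generally has several valid orderings, so this sketch may produce a different one than the order listed above:

```python
# DAG as an adjacency list: 1->2, 1->4, 2->3, 4->5
dag = {1: [2, 4], 2: [3], 3: [], 4: [5], 5: []}

def topo_sort(g):
    visited, stack = set(), []

    def visit(u):
        visited.add(u)
        for v in g[u]:
            if v not in visited:
                visit(v)
        stack.append(u)          # push only after all neighbors are done

    for u in g:
        if u not in visited:
            visit(u)
    return stack[::-1]           # popping the stack yields the ordering

order = topo_sort(dag)
print(order)                     # one valid order, e.g. [1, 4, 5, 2, 3]
```

Any output is correct as long as every edge u→v has u before v in the ordering.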
– Algorithms –

1. Foundations
Purpose – Understand what algorithms are, why they matter, and how they are evaluated.
1.1 Definition of Algorithm – A step-by-step procedure to solve a problem; it has input, output, finiteness, definiteness, and effectiveness.
1.2 Characteristics – Deterministic, unambiguous, language-independent, generally applicable.
1.3 Algorithm vs Program – An algorithm is the logic/design; a program is its implementation.

2. Algorithm Analysis
Purpose – Measure efficiency for performance comparison.
2.1 Complexity – Time complexity & space complexity.
2.2 Time Complexity Types – Best case, worst case, average case.
2.3 Asymptotic Notations – Big-O (O) → upper bound; Ω (Omega) → lower bound; Θ (Theta) → tight bound.

Pseudocode – A high-level, structured description of an algorithm using plain language and programming-like statements. Text-based and structured like code; easy to read; language-independent; focuses on logic rather than syntax; easily converted to code. Used for writing algorithms and interpreting logic.

Flowchart – A graphical representation of an algorithm showing the sequence of steps. Symbols: oval → start/end; rectangle → process; diamond → decision; parallelogram → input/output; arrows → flow. Visualizes step-by-step flow; easy to understand; highlights decisions & loops. Used for drawing flowcharts and explaining logic visually.

Decision Table – A tabular method representing the conditions and corresponding actions of an algorithm: a table of conditions, actions, and rules mapping conditions → actions. Handles complex decisions; ensures all cases are covered; reduces ambiguity. Used for converting rules to tables and checking all possible scenarios.
3. Algorithm Design Paradigms
Purpose – Master standard techniques to solve problems efficiently.
3.1 Divide and Conquer – Break → Solve → Combine. Examples: Merge Sort, Quick Sort, Binary Search, Strassen's Matrix Multiplication.
3.2 Greedy Method – Make the locally optimal choice hoping for a global optimum. Examples: Kruskal's MST, Prim's MST, Dijkstra's Shortest Path, Huffman Coding.
3.3 Dynamic Programming (DP) – Store subproblem results → avoid recomputation. Examples: Fibonacci (DP), Matrix Chain Multiplication, Floyd-Warshall, Knapsack.

4. Core Categories of Algorithms (High-yield for DSSSB exams)
• Searching Algorithms – Linear Search, Binary Search. Fact: Binary Search → O(log n), requires a sorted array.
• Sorting Algorithms – Bubble, Insertion, Selection, Merge, Quick, Heap, Counting/Radix/Bucket. Facts: sorting stability, in-place vs not, comparison vs non-comparison-based.
• Graph Algorithms – BFS, DFS, Dijkstra, Bellman-Ford, Floyd-Warshall, Kruskal, Prim. Fact: BFS → shortest path in an unweighted graph.
• String Matching – Naive, KMP, Rabin-Karp. Fact: KMP uses the LPS table.

Searching (B = best, A = average, W = worst)
• Linear Search (Brute Force)
- Working: scan elements sequentially; compare each with the target; stop when found
- Complexity: B O(1) → first element; A O(n) → middle; W O(n) → last/not found
- Key facts: works on unsorted arrays; can terminate early if the array is sorted; in-place; stable
• Binary Search (Divide & Conquer)
- Working: compare the target with the mid element; search the left or right half; repeat until found
- Complexity: B O(1) → target = mid; A O(log n); W O(log n) → target at the ends
- Key facts: requires a sorted array; works only on random-access structures; in-place
• Hash Table / Direct Addressing (Hashing / Direct Access)
- Working: compute an index via a hash function; insert/search/delete the key at that index; handle collisions
- Complexity: B O(1) → no collision; A O(1); W O(n) → all keys collide
- Key facts: collisions handled via chaining / open addressing; widely used for symbol tables, caches, dictionaries; space-time tradeoff
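The binary-search steps above can be sketched iteratively; the sample array is illustrative:

```python
def binary_search(arr, target):
    """Requires a sorted array; O(log n) worst case, O(1) best case."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid           # found at index mid
        if arr[mid] < target:
            lo = mid + 1         # search the right half
        else:
            hi = mid - 1         # search the left half
    return -1                    # not found

data = [2, 5, 8, 12, 16, 23]
print(binary_search(data, 12))   # 3
print(binary_search(data, 7))    # -1
```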
Greedy Algorithms
• Dijkstra (Greedy)
- Working: pick the node with minimum distance; update its neighbors; repeat until all nodes are processed
- Complexity (B/A/W): O((V+E) log V) → heap operations
- Key facts: works for non-negative weights; fails on negative edges; a priority queue improves efficiency
• Kruskal (Greedy)
- Working: sort edges; pick the smallest edge that doesn't form a cycle; repeat until the MST is complete
- Complexity (B/A/W): O(E log E)
- Key facts: good for sparse graphs; cycle detection via union-find
• Huffman Coding (Greedy / Compression)
- Working: build a frequency table; merge nodes using a min-heap; assign prefix-free codes
- Complexity (B/A/W): O(n log n)
- Key facts: the greedy choice guarantees minimal total cost; used in file compression

Dynamic Programming
• Bellman-Ford (DP)
- Working: initialize distances; relax all edges V−1 times; detect negative cycles
- Complexity (B/A/W): O(V·E)
- Key facts: works with negative weights; detects negative cycles
• Floyd-Warshall (DP)
- Working: initialize the distance matrix; update using all vertices as intermediates
- Complexity (B/A/W): O(V³)
- Key facts: all-pairs shortest paths; works with negative weights but no negative cycles
• Fibonacci (DP / Recurrence)
- Working: store previous results; build up to n iteratively
- Complexity (B/A/W): O(n)
- Key facts: avoids exponential recursion; the iterative version needs O(1) space; matrix exponentiation gives O(log n)
• Matrix Chain Multiplication (DP / Optimization)
- Working: try all parenthesizations; store the minimum multiplications; fill the DP table bottom-up
- Complexity (B/A/W): O(n³)
- Key facts: bottom-up table filling; classic DP optimization example

Graph Traversal
• BFS (Graph Traversal)
- Working: use a queue; visit neighbors level by level
- Complexity (B/A/W): O(V+E)
- Key facts: finds shortest paths in unweighted graphs; can check bipartiteness
• DFS (Graph Traversal)
- Working: use a stack/recursion; explore as deep as possible before backtracking
- Complexity (B/A/W): O(V+E)
- Key facts: useful for cycle detection, topological sort, connected components

Recursion / Puzzle
• Tower of Hanoi (Recursion / Puzzle)
- Working: move n−1 disks to the auxiliary peg; move the largest disk to the target; move the n−1 disks from auxiliary to target
- Complexity (B/A/W): O(2ⁿ)
- Key facts: minimum moves = 2ⁿ − 1; classic recursion example

Number Theory
• Euclidean GCD (Number Theory)
- Working: recursively compute gcd(b, a mod b)
- Complexity (B/A/W): O(log min(a, b))
- Key facts: in-place; oldest known algorithm; an iterative subtraction variant exists
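Kruskal's greedy steps can be sketched with a minimal union-find (plain `find` without path compression, for brevity), using the example edges (A–B, 2), (B–C, 3), (A–C, 1):

```python
def kruskal(vertices, edges):
    """Edges are (u, v, weight) tuples; returns the chosen MST edges."""
    parent = {v: v for v in vertices}

    def find(v):                 # follow parent links to the set representative
        while parent[v] != v:
            v = parent[v]
        return v

    mst = []
    for u, v, w in sorted(edges, key=lambda e: e[2]):  # greedy: cheapest first
        ru, rv = find(u), find(v)
        if ru != rv:             # different components -> adding it makes no cycle
            parent[ru] = rv      # union the two components
            mst.append((u, v, w))
    return mst

edges = [("A", "B", 2), ("B", "C", 3), ("A", "C", 1)]
print(kruskal(["A", "B", "C"], edges))  # [('A', 'C', 1), ('A', 'B', 2)]
```

Union by rank and path compression would bring the union-find operations to near-constant amortized time; they are omitted here to keep the sketch short.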
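The Floyd-Warshall update rule can be sketched on a 3-vertex matrix; the weights (A→B = 2, B→C = 1, A→C = 4) are illustrative assumptions chosen to show a path improvement:

```python
INF = float("inf")
# Vertices A, B, C encoded as 0, 1, 2; dist[i][j] = direct edge weight
dist = [[0,   2,   4],
        [INF, 0,   1],
        [INF, INF, 0]]
n = len(dist)

for k in range(n):               # try every vertex as an intermediate
    for i in range(n):
        for j in range(n):
            if dist[i][k] + dist[k][j] < dist[i][j]:
                dist[i][j] = dist[i][k] + dist[k][j]

print(dist[0])  # shortest paths from A: [0, 2, 3] — A->C improved via B (2 + 1 < 4)
```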
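The three-step recursion can be sketched as follows; the peg names are illustrative:

```python
def hanoi(n, src, aux, dst, moves):
    """Append (from_peg, to_peg) moves that transfer n disks from src to dst."""
    if n == 0:
        return
    hanoi(n - 1, src, dst, aux, moves)   # 1. move n-1 disks to the auxiliary peg
    moves.append((src, dst))             # 2. move the largest disk to the target
    hanoi(n - 1, aux, src, dst, moves)   # 3. move the n-1 disks onto it

moves = []
hanoi(3, "A", "B", "C", moves)
print(len(moves))  # 2**3 - 1 = 7 moves
```

The recurrence T(n) = 2T(n−1) + 1 solves to 2ⁿ − 1, matching the minimum-moves fact above.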
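The recurrence gcd(a, b) = gcd(b, a mod b) can be sketched iteratively:

```python
def gcd(a, b):
    """Euclidean algorithm: O(log min(a, b)) iterations, O(1) space."""
    while b:
        a, b = b, a % b   # gcd(a, b) = gcd(b, a mod b)
    return a

print(gcd(48, 18))  # 6
```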
PARADIGM-WISE SUMMARY (Working / Complexity / Properties / Extra Exam Facts & Use Cases)

Brute Force / Simple
• Linear Search (Searching)
- Working: scan elements sequentially; compare each with the target; stop when found or at the end
- Complexity: Best O(1) → first element; Avg O(n) → middle; Worst O(n) → last/not found
- Properties: in-place, stable
- Exam facts: works on unsorted data; no preprocessing needed; can terminate early; easy to implement
• Naive Pattern Search (String Matching)
- Working: slide the pattern over the text; compare characters one by one; shift by 1 on mismatch
- Complexity: Best O(n); Avg/Worst O(n·m)
- Properties: in-place; simple brute force
- Exam facts: worst case arises with repeated characters; basis for KMP & Rabin-Karp
• Bubble Sort (Sorting)
- Working: repeatedly compare adjacent elements; swap if in the wrong order; repeat until sorted
- Complexity: Best O(n) → already sorted (optimized version); Avg O(n²); Worst O(n²) → reverse sorted
- Properties: in-place, stable, adaptive if optimized
- Exam facts: easy to implement; not efficient for large arrays; internal sorting algorithm; adaptive if the array is nearly sorted
• Insertion Sort
- Working: pick an element; compare with previous elements; shift larger elements; insert
- Complexity: Best O(n); Avg O(n²); Worst O(n²)
- Properties: in-place, stable, adaptive
- Exam facts: efficient for small arrays; online sorting possible; internal sorting algorithm
• Selection Sort
- Working: find the minimum element; swap with the first unsorted position; repeat
- Complexity: Best/Avg/Worst O(n²)
- Properties: in-place, not stable, not adaptive
- Exam facts: simple but inefficient for large datasets; independent of data distribution ✅; internal sorting algorithm

Divide & Conquer
• Binary Search (Searching)
- Working: compare the target with the mid element; search the left/right half recursively or iteratively
- Complexity: Best O(1); Avg O(log n); Worst O(log n)
- Properties: in-place
- Exam facts: requires a sorted array; iterative & recursive forms; works only on random-access structures
• Merge Sort (Sorting)
- Working: divide the array into halves; sort each recursively; merge the sorted halves
- Complexity: Best/Avg/Worst O(n log n)
- Properties: not in-place (O(n) extra), stable, not adaptive
- Exam facts: excellent for linked lists; predictable performance; external sorting algorithm; independent of input distribution ✅
• Quick Sort
- Working: choose a pivot; partition the array around it; recursively sort the partitions
- Complexity: Best O(n log n) → balanced pivot; Avg O(n log n); Worst O(n²) → sorted/reverse-sorted input
- Properties: in-place, not stable, not adaptive
- Exam facts: tail recursion optimization; randomized pivot reduces the worst case; internal sorting algorithm
• Heap Sort
- Working: build a max-heap; swap the root with the last element; heapify the remaining heap; repeat
- Complexity: Best/Avg/Worst O(n log n)
- Properties: in-place, not stable
- Exam facts: poor cache performance; based on the binary heap; independent of input distribution ✅; internal sorting algorithm

Counting / Distribution
• Counting Sort
- Working: count occurrences; compute prefix sums; place elements
- Complexity: Best/Avg/Worst O(n+k)
- Properties: not in-place, stable
- Exam facts: works for integers only; efficient if k << n; internal sorting algorithm
• Radix Sort
- Working: sort digits from LSD → MSD using a stable sort
- Complexity: Best/Avg/Worst O(d·(n+k))
- Properties: stable, not in-place
- Exam facts: works for integers & strings; often uses counting sort internally; internal sorting algorithm
• Bucket Sort
- Working: divide elements into buckets; sort each bucket; concatenate
- Complexity: Best O(n+k); Avg O(n+k); Worst O(n²)
- Properties: can be stable
- Exam facts: good for uniform distributions; poor performance with skewed data; internal sorting algorithm

Greedy
• Dijkstra's Algorithm (Graph)
- Working: pick the node with minimum distance; update neighbors; repeat until all nodes are visited
- Complexity: Best/Avg/Worst O((V+E) log V) with a heap
- Properties: works for non-negative weights
- Exam facts: fails on negative edges; a priority queue improves efficiency
• Prim's Algorithm
- Working: start from a vertex; add the smallest edge to the MST; repeat until the MST is complete
- Complexity: Best/Avg/Worst O((V+E) log V) with a heap
- Properties: builds an MST
- Exam facts: dense graphs → O(V²) with an adjacency matrix; similar to Dijkstra but for MST
• Kruskal's Algorithm
- Working: sort edges; pick the smallest edge not forming a cycle; repeat
- Complexity: Best/Avg/Worst O(E log E)
- Properties: uses Disjoint Set (Union-Find)
- Exam facts: good for sparse graphs; cycle detection via union-find
• Huffman Coding (Compression)
- Working: build a frequency table; merge nodes via a min-heap; generate prefix-free codes
- Complexity: Best/Avg/Worst O(n log n)
- Properties: optimal prefix-free encoding
- Exam facts: the greedy approach guarantees minimal total cost; used in file compression

Dynamic Programming
• Bellman-Ford (Graph)
- Working: initialize distances; relax all edges V−1 times; detect negative cycles
- Complexity: Best/Avg/Worst O(V·E)
- Properties: works with negative weights
- Exam facts: slower than Dijkstra; detects negative cycles
• Floyd-Warshall
- Working: initialize the distance matrix; update using all vertices as intermediates
- Complexity: Best/Avg/Worst O(V³)
- Properties: works with negative weights but no negative cycles
- Exam facts: all-pairs shortest paths; triple nested loop
• Fibonacci (DP / Recurrence)
- Working: store previous results; build up to n
- Complexity: Best/Avg/Worst O(n)
- Properties: iterative version needs O(1) space
- Exam facts: avoids exponential recursion; matrix exponentiation achieves O(log n)
• Matrix Chain Multiplication (Optimization)
- Working: try all parenthesizations; store the minimum multiplications
- Complexity: Best/Avg/Worst O(n³)
- Properties: DP table
- Exam facts: bottom-up table filling; classic DP example

Graph Traversal
• BFS
- Working: use a queue; visit level by level
- Complexity: Best/Avg/Worst O(V+E); Space O(V)
- Exam facts: finds shortest paths in unweighted graphs; can check bipartiteness
• DFS
- Working: use a stack/recursion; explore depth-first
- Complexity: Best/Avg/Worst O(V+E); Space O(V) recursion stack
- Exam facts: useful for cycle detection, topological sort, connected components

Recursive / Mathematical
• Tower of Hanoi (Puzzle)
- Working: move n−1 disks to the auxiliary peg; move the largest disk to the target; move the n−1 disks from auxiliary to target
- Complexity: Best/Avg/Worst O(2ⁿ)
- Properties: recursive
- Exam facts: minimum moves = 2ⁿ − 1; classic recursion example

Number Theory
• Euclidean GCD
- Working: recursively compute gcd(b, a mod b)
- Complexity: Best/Avg/Worst O(log min(a, b))
- Properties: in-place
- Exam facts: oldest known algorithm; an iterative subtraction variant exists

Hashing / Direct Access
• Hash Table / Direct Addressing (Searching / Direct Mapping)
- Working: use a hash function to map key → index; insert/search/delete at the computed index; handle collisions via chaining or open addressing
- Complexity: Best O(1); Avg O(1); Worst O(n)
- Properties: usually in-place, not stable, extra memory required
- Exam facts: widely used for symbol tables, caches, dictionaries; performance depends on hash-function quality
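The separate-chaining strategy from the hash-table rows above can be sketched with a minimal class; the bucket count and method names are illustrative assumptions, not a standard API:

```python
class ChainedHashTable:
    """Minimal separate-chaining hash table (illustrative sketch)."""

    def __init__(self, buckets=8):
        self.buckets = [[] for _ in range(buckets)]

    def _index(self, key):
        return hash(key) % len(self.buckets)   # hash function maps key -> bucket

    def put(self, key, value):
        bucket = self.buckets[self._index(key)]
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)       # update an existing key
                return
        bucket.append((key, value))            # average O(1); chain absorbs collisions

    def get(self, key):
        for k, v in self.buckets[self._index(key)]:  # O(chain length); O(n) worst case
            if k == key:
                return v
        return None                            # key absent

t = ChainedHashTable()
t.put("apple", 3)
t.put("pear", 5)
print(t.get("apple"))  # 3
print(t.get("kiwi"))   # None
```

A production table would also resize once the load factor grows, which keeps the average chain length, and hence the average lookup cost, constant.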