DSA

DSA (Data Structures and Algorithms) is the study of efficient ways to store, organize, and process data to solve computational problems.
Data Structure: A way of storing and organizing data to perform operations efficiently.
Algorithm: A step-by-step procedure to solve a problem or perform a computation.
Crux: DSA = Choosing the right data structure + efficient algorithm to optimize time and space.
Data Structures

├── 1. Primitive / Basic DS
│ ├─ Integer, Float, Char, Boolean (basic building blocks, fixed memory)

├── 2. Non-Primitive / Abstract DS
│ ├── A. Linear DS (Elements in sequence)
│ │ ├─ Array (fixed size, contiguous memory, O(1) access, inefficient insert/delete)
│ │ ├─ Linked List
│ │ │ ├─ Singly (single pointer, forward traversal)
│ │ │ ├─ Doubly (forward + backward, extra memory)
│ │ │ └─ Circular (last node points to first, efficient rotation)
│ │ ├─ Stack (LIFO, Push/Pop O(1), useful in recursion & undo operations)
│ │ └─ Queue (FIFO)
│ │ ├─ Simple (linear, may waste memory)
│ │ ├─ Circular (reuses space efficiently)
│ │ ├─ Priority (elements served by priority)
│ │ └─ Deque (double-ended, insert/delete from both ends)
│ │
│ └── B. Non-Linear DS (Elements not sequential)
│ ├─ Tree (Hierarchical DS)
│ │ ├─ General Tree (any number of children, used in org charts, XML/JSON)
│ │ ├─ Binary Tree (≤2 children per node)
│ │ │ ├─ Simple Binary Tree (no constraints)
│ │ │ ├─ Full Binary Tree (0 or 2 children per node)
│ │ │ ├─ Complete Binary Tree (all levels filled except last, left to right)
│ │ │ ├─ Perfect Binary Tree (all internal nodes full, all leaves same level)
│ │ │ ├─ Degenerate Tree (1 child per parent, behaves like linked list)
│ │ │ └─ Binary Search Tree (BST) (left < parent < right, average O(log n) search)
│ │ │ ├─ AVL Tree (self-balancing, height difference ≤1, guaranteed O(log n))
│ │ │ ├─ Red-Black Tree (color rules, used in STL map/set)
│ │ │ ├─ Splay Tree (recently accessed node moved to root, good for locality)
│ │ │ └─ Threaded Binary Tree (null pointers store successor/predecessor, efficient in-order traversal)
│ │ ├─ Multi-way / Balanced Trees
│ │ │ ├─ B-Tree (multi-way, used in DB & FS, keeps data sorted)
│ │ │ ├─ B+ Tree (all values in leaves, internal nodes only keys, used in databases)
│ │ │ ├─ B* Tree (more dense than B+, fewer splits)
│ │ │ └─ 2-3 Tree (nodes have 2 or 3 children, always balanced)
│ │ ├─ Heap
│ │ │ ├─ Min-Heap (parent ≤ children, used in priority queues)
│ │ │ └─ Max-Heap (parent ≥ children, used in heap sort)
│ │ ├─ Trie (prefix tree, efficient string search, autocomplete, dictionary)
│ │ ├─ Segment Tree (range queries like sum, min, max, O(log n) query/update)
│ │ └─ Fenwick Tree / BIT (efficient prefix sum & updates in O(log n))
│ │
│ └─ Graph (Networked DS)
│ ├─ Directed / Undirected (edges with/without direction)
│ ├─ Weighted / Unweighted (edges with/without cost)
│ ├─ Simple / Multigraph (no loops/multiples vs multiple edges allowed)
│ └─ Traversals: DFS (stack/recursive), BFS (queue), shortest path, cycle detection

├── 3. Hash-Based DS
│ └─ Hash Table / Hash Map (key-value storage, average O(1) insert/search, collisions handled via chaining/open addressing)
1)​ Primitive Data Structure

Note on defaults: in C/C++, uninitialized local variables hold garbage values, while globals and statics are zero-initialized; in Java, fields receive the defaults below, and local variables must be explicitly initialized before use.

| Data Type | Description | Size (C/C++ 32-bit) | Size (C/C++ 64-bit) | Size (Java) | Default in C/C++ (global/static) | Default in Java |
|---|---|---|---|---|---|---|
| int | Stores integers (signed/unsigned) | 4 bytes | 4 bytes | 4 bytes | 0 | 0 |
| short | Smaller-range integer | 2 bytes | 2 bytes | 2 bytes | 0 | 0 |
| long | Larger-range integer (platform-dependent) | 4 bytes | 8 bytes | 8 bytes | 0 | 0 |
| long long | Very large-range integer | 8 bytes | 8 bytes | N/A | 0 | N/A |
| float | Single-precision decimal | 4 bytes | 4 bytes | 4 bytes | 0.0 | 0.0f |
| double | Double-precision decimal | 8 bytes | 8 bytes | 8 bytes | 0.0 | 0.0d |
| long double | Extended-precision decimal (compiler-dependent) | 12/16 bytes | 16 bytes | N/A | 0.0 | N/A |
| char | Single character / small integer | 1 byte | 1 byte | 2 bytes (UTF-16) | '\0' | '\u0000' |
| bool / boolean | Boolean value (true/false) | 1 byte | 1 byte | JVM-dependent | false (0) | false |
| unsigned int | Positive-only integer | 4 bytes | 4 bytes | N/A | 0 | N/A |
| unsigned short | Positive-only smaller integer | 2 bytes | 2 bytes | N/A | 0 | N/A |
| unsigned long | Positive-only larger integer | 4 bytes | 8 bytes | N/A | 0 | N/A |
| unsigned long long | Positive-only very large integer | 8 bytes | 8 bytes | N/A | 0 | N/A |
Primitive Data Structure – DSSSB TGT CS Mixed Q&A
1. Q: Are primitive data structures language-independent? - A: No, their size and behavior can vary by language, compiler, and architecture.
2. Q: Why is char in C/C++ considered an integer type? - A: Because it stores integer ASCII/Unicode code values, not textual data internally.
3. Q: What is the smallest addressable unit of memory in C/C++? - A: 1 byte (the size of char).
4. Q: Why does the default value of a local variable in C/C++ often appear as a garbage value? - A: Because memory for locals is allocated on the stack and not auto-initialized.
5. Q: In Java, can primitive types be null? - A: No, only their wrapper classes (e.g., Integer, Float) can be null.
6. Q: Why are float and double not suitable for exact monetary calculations? - A: Due to binary floating-point precision errors.
7. Q: What does the term "value range" of a data type mean? - A: The minimum to maximum values it can store, based on size and signedness.
8. Q: Why is boolean size in Java not strictly defined? - A: The JVM specification leaves the physical size implementation-dependent.
9. Q: What is the difference between unsigned int and signed int in C/C++? - A: Signed can store both positive and negative numbers; unsigned only positive, doubling the positive range.
10. Q: In Java, why is char 2 bytes while in C/C++ it's 1 byte? - A: Java char stores UTF-16 code units to support Unicode; C/C++ char stores a single byte.
11. Q: Can a primitive type be directly stored in a Java ArrayList? - A: No, it must be wrapped in its corresponding wrapper class because generics work only with objects.
12. Q: In C, why can sizeof(short) be equal to sizeof(int) on some systems? - A: Because the C standard defines only minimum sizes; actual sizes depend on the architecture.
13. Q: What happens if you overflow a signed integer in C/C++? - A: Behavior is undefined (it may wrap around, crash, or produce unexpected results).
14. Q: Why does Java have fixed primitive sizes across platforms? - A: To ensure portability and platform independence.
15. Q: What is the significance of long double in C/C++? - A: It provides extended-precision floating point, typically more than double.
16. Q: In C, what does sizeof('A') return and why? - A: The size of int (not char), because character constants in C have type int.
17. Q: Why is pointer size not fixed in C/C++? - A: It depends on the architecture (e.g., 4 bytes on 32-bit, 8 bytes on 64-bit).
18. Q: What is the default value of a static variable inside a function in C? - A: 0 (initialized only once).
19. Q: In Java, what's the difference between the literals 3.14 and 3.14f? - A: 3.14 is a double by default; 3.14f explicitly makes it a float.
20. Q: Can sizeof(void) be used in C? - A: No, because void is an incomplete type and has no size.
| No. | Question | Correct Answer | Common Wrong Answer | Why Candidates Get Confused |
|---|---|---|---|---|
| 1 | In C, what is sizeof('A')? | sizeof(int) (usually 4 bytes) | sizeof(char) (1 byte) | 'A' is a character constant of type int in C. |
| 2 | In C/C++, is char always signed? | No, it's implementation-dependent. | Yes | Many think char means signed char, but it can be unsigned by default. |
| 3 | In Java, is boolean size always 1 byte? | No, the JVM doesn't define an exact size. | Yes, 1 byte | Size is abstract in Java; it depends on the JVM memory layout. |
| 4 | In C, does sizeof(void) return 1? | No, void has no size. | Yes | void* is allowed, but sizeof(void) is invalid. |
| 5 | Can sizeof(short) equal sizeof(int)? | Yes | No | The C standard only sets minimum sizes; they can be equal on some systems. |
| 6 | If int is 4 bytes, is int* also 4 bytes? | On 32-bit → yes; on 64-bit → no (8 bytes) | Yes, always | Pointer size depends on architecture, not the type it points to. |
| 7 | In Java, is char signed or unsigned? | Unsigned (UTF-16) | Signed | Many assume Java chars behave like C chars. |
| 8 | Default value of an uninitialized local static variable in C? | 0 | Garbage | Static variables (even local ones) are zero-initialized. |
| 9 | Default value of an uninitialized local int in C? | Garbage | 0 | Only global/static variables are zero-initialized automatically. |
| 10 | Minimum range of int in C (per standard)? | -32767 to 32767 | -2,147,483,648 to 2,147,483,647 | The standard defines a minimum range, not an exact one; the actual range depends on the implementation. |
| 11 | Is float in Java IEEE 754 compliant? | Yes | No | Java explicitly defines floating point as IEEE 754 single precision. |
| 12 | Can bool in C++ occupy more than 1 byte in memory? | Yes (due to padding/alignment) | No | sizeof(bool) is typically 1 byte, but structures/containers may pad it. |
| 13 | Can a pointer to char be used to access any type in C? | Yes | No | char* may access the bytes of any object per the C standard. |
| 14 | Is sizeof('A') the same in C and C++? | No | Yes | In C++ 'A' is char; in C it's int. |
| 15 | Does Java have unsigned primitive integers? | No (except char) | Yes | Java only has signed integer types, except char. |
| 16 | Can sizeof(long) equal sizeof(int) in C? | Yes | No | On ILP32 systems (and 64-bit Windows, LLP64) both are 4 bytes. |
| 17 | Is sizeof(double) always 8 bytes? | No | Yes | On some platforms (DSPs, embedded) it can be 4 bytes. |
| 18 | Is an uninitialized array of static storage duration zeroed? | Yes | No | Static/global arrays are zero-initialized in C/C++. |
| 19 | Can sizeof(float) be greater than sizeof(double)? | Unlikely but possible | No | The standard doesn't forbid it, but it is rare in practice. |
| 20 | In Java, is byte unsigned? | No, it's signed (-128 to 127) | Yes | Many confuse Java byte with C unsigned char. |
2)​ ADT (Abstract Data Type)
| Aspect | ADT (Abstract Data Type) | Data Structure |
|---|---|---|
| Meaning | Logical description of data and allowed operations | Concrete implementation of data and operations in memory |
| Focus | What operations are to be performed | How operations are performed |
| Implementation | Hidden (encapsulation) | Visible and specific |
| Level | Conceptual / abstract level | Physical / implementation level |
| Examples | Stack, Queue, List, Map, Tree (as concepts) | Array, Linked List, Binary Tree, Hash Table |
| Dependency | Independent of programming language | Dependent on programming language and system |
| Purpose | Defines behavior | Provides the actual working mechanism |
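The distinction in this table can be made concrete in C: the ADT is a set of operation prototypes behind an opaque type, while the data structure is one interchangeable implementation behind them. A minimal sketch (names like stack_create are illustrative, not from the notes):

```c
/* The ADT part: behaviour only, representation hidden.
   In a real project this would live in a header file. */
typedef struct Stack Stack;          /* opaque type */
Stack *stack_create(void);
void   stack_push(Stack *s, int x);
int    stack_pop(Stack *s);          /* returns -1 when empty */

/* The data-structure part: one possible implementation (array-based).
   Callers never see this and could be switched to a linked list. */
#include <stdlib.h>
struct Stack { int data[100]; int top; };

Stack *stack_create(void) {
    Stack *s = malloc(sizeof *s);
    if (s) s->top = -1;
    return s;
}
void stack_push(Stack *s, int x) {
    if (s->top < 99) s->data[++s->top] = x;
}
int stack_pop(Stack *s) {
    return (s->top >= 0) ? s->data[s->top--] : -1;
}
```

Because callers only see the prototypes, the same ADT could be backed by a linked list without changing any calling code; that is exactly the ADT/data-structure split.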

**Types of ADT**

| ADT Type | ADT Name | Common Operations | Possible Implementations (Data Structures) | Important Facts / Exam Points |
|---|---|---|---|---|
| Linear ADT | List | Create, Insert, Delete, Traverse, Search, Update | Array, Singly Linked List, Doubly Linked List, Circular Linked List | Lists can be static (array) or dynamic (linked list). Arrays → O(1) access; linked lists → O(1) insertion at head. |
| Linear ADT | Stack | push(), pop(), peek()/top(), isEmpty(), isFull() | Array, Linked List, Two Queues (stack using queues) | LIFO principle; recursion uses a stack internally; stack via queues: push O(n) or pop O(n) method possible. |
| Linear ADT | Queue | enqueue(), dequeue(), peek()/front(), isEmpty(), isFull() | Array, Linked List, Circular Array, Circular Linked List, Two Stacks (queue using stacks) | FIFO principle; a circular array avoids wasted space; queue via stacks has a costly enqueue or a costly dequeue. |
| Linear ADT | Deque | insertFront, insertRear, deleteFront, deleteRear | Circular Array, Doubly Linked List | Special types: Input-Restricted and Output-Restricted Deque. |
| Linear ADT | Priority Queue | insert(item, priority), deleteHighestPriority() or deleteLowestPriority() | Array (sorted/unsorted), Linked List (sorted/unsorted), Binary Heap | Binary heap → O(log n) insertion/deletion; sorted list → fast deletion, slow insertion. |
| Non-Linear ADT | Tree | Create, Insert, Delete, Traverse (Pre/In/Post/Level), Search | Linked structure, Array (for complete trees) | Non-linear hierarchical structure; used in parsing and searching. |
| Non-Linear ADT | Binary Tree / BST | Insert, Delete, Search, Traversals | Linked structure, Array (complete tree) | BST property: left < root < right; average O(log n) search. |
| Non-Linear ADT | AVL Tree | Insert, Delete, Search, Rotations | Linked structure | Self-balancing BST; balance factor ∈ {−1, 0, 1}. |
| Non-Linear ADT | Red-Black Tree | Insert, Delete, Search, Rotations, Recoloring | Linked structure | Guarantees O(log n) height; used in Java's TreeMap. |
| Non-Linear ADT | Heap | insert(), extractMax()/extractMin(), heapify() | Array, Linked structure | Complete binary tree; Min-Heap and Max-Heap types. |
| Non-Linear ADT | B-Tree / B+ Tree | Search, Insert, Delete | Linked structure (node blocks), disk-based | Used in databases; optimized for disk access. |
| Non-Linear ADT | Graph | Add/Remove Vertex, Add/Remove Edge, BFS, DFS, Shortest Path | Adjacency Matrix, Adjacency List, Edge List | BFS uses a queue; DFS uses a stack/recursion. |
| Special ADT | Map / Dictionary | put(key, value), get(key), remove(key) | Hash Table, Tree Map (BST, Red-Black Tree), Skip List | Key-value pairs; HashMap has average O(1) lookup. |
| Special ADT | Set | add(), remove(), contains() | Hash Table, Balanced BST, Bit Vector | Does not allow duplicates; a bit vector is memory-efficient for fixed ranges. |
1)​ ARRAY (Linear, Static)
1D Array (One-Dimensional), a static DS

Declaration & Initialization:
a) Declaration with size:
   int arr[5];
b) Initialization at declaration:
   int arr[5] = {1, 2, 3, 4, 5};
c) Implicit size:
   int arr[] = {1, 2, 3};   // 3 elements: 1 2 3
   int arr[] = {0};         // 1 element: 0
d) Initialize all zeros explicitly:
   int arr[5] = {0};
e) Partial initialization:
   int arr[5] = {1, 2};     // 1 2 0 0 0
   // Elements without initializers are zero-filled, for local as well as
   // static/global arrays. Only a fully uninitialized local array holds
   // indeterminate (garbage) values.

Advantages: simple and easy to use; random access with O(1) indexing.
Disadvantages: fixed size (static); insertion/deletion costly if not at the end.
CRUX: Basic array; stores elements linearly and supports fast indexing.

2D Array (Matrix)

Declaration & Initialization:
a) Declaration with rows and columns:
   int arr[3][4];
b) Full initialization at declaration:
   int arr[2][3] = {{1, 2, 3}, {4, 5, 6}};
c) Implicit number of rows (columns must be given):
   int arr[][3] = {...};
d) Initialize all zeros explicitly:
   int arr[][3] = {0};      // 1 row, 3 columns: 0 0 0
e) Partial initialization:
   // grouped initialization
   int arr[2][3] = {{1}, {4, 5}};
   // → 1 0 0
   //   4 5 0
   // flat initialization
   int b[2][3] = {1, 4};
   // → 1 4 0
   //   0 0 0
   // As with 1D arrays, unlisted elements are zero-filled; only a fully
   // uninitialized local array is indeterminate.

Advantages: useful for matrix or tabular data; supports random access.
Disadvantages: fixed size; complex to resize; higher memory usage.
CRUX: Represents a grid or matrix; used in tabular data and image processing.

Multidimensional Arrays (3D, etc.)

Declaration & Initialization:
a) Declaration with multiple dimensions:
   int arr[2][3][4];
b) Initialization with nested braces:
   int arr[2][2][2] = {{{1, 2}, {3, 4}}, {{5, 6}, {7, 8}}};
c) Partial initialization:
   int arr[2][2][2] = {{{1}, {0}}, {{0}, {7, 8}}};
   // unlisted elements are zero-filled
   // (empty inner braces {} require C23 or C++; in older C write {0})

Advantages: useful for complex data such as 3D models and simulations.
Disadvantages: fixed size; high memory usage; complex indexing.
CRUX: Useful for representing higher-dimensional data like 3D space.

**Calculate Address Of Elements**

| Array Type | Notation | Variables Used | Formula | Example (C uses row-major) | Crux |
|---|---|---|---|---|---|
| 1D Array | arr[i] | B = base address (&arr[0]), i = index, w = size of each element (bytes) | B + (i × w) | Base = 1000, sizeof(int) = 4 → arr[3] = 1000 + (3 × 4) = 1012 | Very direct: multiply index by element size and add to base |
| 2D Array (Row-major) | arr[i][j] | B = base, i = row index, j = column index, n = no. of columns, w = element size | B + [(i × n) + j] × w | Base = 2000, sizeof(int) = 4, n = 4 → arr[2][1] = 2000 + [(2×4)+1]×4 = 2036 | C and C++ use row-major order |
| 2D Array (Column-major) | arr[i][j] | B = base, i = row index, j = column index, m = no. of rows, w = element size | B + [(j × m) + i] × w | Base = 2000, sizeof(int) = 4, m = 3 → arr[2][1] = 2000 + [(1×3)+2]×4 = 2020 | Used in Fortran and MATLAB, not C |
**Operations - Complexity (Array)**
(B = Best case, A = Average case, W = Worst case.)

| Operation | Case | Description | 1D Time (n elements) | 2D Time (m × n elements) | Extra Space | CRUX |
|---|---|---|---|---|---|---|
| Access | B, A, W | Direct indexing | O(1) | O(1) | O(1) | Direct access via index; constant time for all arrays |
| Search (Linear) | B | Element found at first position | O(1) | O(1) | O(1) | Linear search checks each element; best case is the first element |
| Search (Linear) | A | Element found near middle | O(n) | O(m × n) | O(1) | Average case scans about half the elements |
| Search (Linear) | W | Element not found or at last position | O(n) | O(m × n) | O(1) | Worst case requires checking all elements |
| Search (Binary) | B | Element found at middle (sorted array; for 2D, sorted flattened matrix or sorted rows/columns) | O(1) | O(1) | O(1) | Requires a sorted array; binary search halves the search space each step |
| Search (Binary) | A | Element found after a few divisions | O(log n) | O(log(m × n)) | O(1) | Much faster than linear search for sorted arrays |
| Search (Binary) | W | Element not found | O(log n) | O(log(m × n)) | O(1) | Still logarithmic time in the worst case |
| Insertion | B | Insert at end if space available | O(1) | O(1) | O(1) | Fast if space is available at the end; static arrays have fixed size |
| Insertion | A | Insert anywhere (some shifting needed) | O(n) | O(m × n) | O(1) | Inserting anywhere but the end requires shifting elements |
| Insertion | W | Insert at start (all elements shifted) | O(n) | O(m × n) | O(1) | Worst case shifts all elements |
| Deletion | B | Delete near end (few shifts) | O(k), k ≤ n | O(k), k ≤ m × n | O(1) | Deleting at the end requires minimal shifting |
| Deletion | A | Delete near middle (half shifted) | O(n) | O(m × n) | O(1) | Deleting in the middle shifts many elements |
| Deletion | W | Delete at start (all shifted) | O(n) | O(m × n) | O(1) | Worst case shifts the entire array |
** STRING **
| Aspect | C-style String (C & C++) | std::string (C++ only) |
|---|---|---|
| Representation | 1D array of char ending with '\0' | Class object storing characters internally plus size/capacity |
| Null Terminator | ✅ Required to mark end of string | ❌ Not required (length stored internally) |
| Memory Allocation | Fixed size (static array or manual malloc) | Dynamic (auto-resizes as needed) |
| Size Flexibility | ❌ No (array size fixed after creation) | ✅ Yes (resizes automatically) |
| Header File | <string.h> in C, <cstring> in C++ | <string> |
| Safety | Low: no bounds checking | Higher: the .at() method checks bounds |
| Operator Overloading | ❌ No | ✅ Yes (+, ==, <, etc.) |
| String Operations | Library functions: strlen(str), strcpy(dest, src), strcmp(a, b), strcat(a, b) | Member functions: str.length(), str.substr(pos, len), str.find("sub"), str.append("txt") |
| Memory Layout for "Hello" | [H][e][l][l][o][\0] (contiguous in memory) | Characters plus metadata (length, capacity); implementation-dependent |
** POINTERS **
Definition: A pointer is a variable that stores the memory address of another variable.
   int *p; // pointer to int

Syntax: data_type *pointer_name;
   int *ptr;
   char *chPtr;

Basic Example (declare, store address, dereference):
   int x = 10;
   int *p = &x;       // store the address of x
   printf("%d", *p);  // dereference: prints 10

Use Cases:
1. Access a variable indirectly
2. Call by reference
3. Dynamic memory allocation
4. Arrays & pointer arithmetic
5. Strings (char pointers)
6. Data structures
7. Function pointers

Call by Reference Example:
   void swap(int *a, int *b) {
       int t = *a;
       *a = *b;
       *b = t;
   }

Types of Pointers:
- Null Pointer → int *p = NULL;
- Void Pointer → void *vp;
- Wild Pointer → uninitialized pointer
- Dangling Pointer → points to already-freed memory
- Function Pointer → int (*fp)(int, int);
- Constant Pointer → int *const p = &x;
- Pointer to Constant → const int *p = &x;
- Double Pointer → int **pp;
- nullptr (C++11) → int *p = nullptr;
Note: Wild and dangling pointers cause undefined behavior.

Pointer Size: the same for all pointer types on the same system, i.e. sizeof(int*) == sizeof(char*) == sizeof(float*):
- 32-bit → 4 bytes
- 64-bit → 8 bytes

Array with Pointers (C): dynamic array creation using malloc and free:

#include <stdio.h>
#include <stdlib.h>

int main() {
    int n = 5;
    int *arr = (int *)malloc(n * sizeof(int));

    for (int i = 0; i < n; i++) {
        arr[i] = i + 1;
    }

    for (int i = 0; i < n; i++) {
        printf("%d ", *(arr + i));  // same as arr[i]
    }

    free(arr);
    return 0;
}

Array with Pointers (C++): dynamic array creation using new and delete[]:

#include <iostream>
using namespace std;

int main() {
    int n = 5;
    int *arr = new int[n];

    for (int i = 0; i < n; i++) {
        arr[i] = i + 1;
    }

    for (int i = 0; i < n; i++) {
        cout << arr[i] << " ";
    }

    delete[] arr;
    return 0;
}
Pointer Type Mismatch: occurs when a pointer of one type points to a variable of another type. Direct assignment without a cast causes a compile-time warning/error; casting can bypass it but may cause undefined behavior if the data sizes differ.

Example:

#include <stdio.h>

int main() {
    int *p;
    char c = 'A';   // ASCII 65
    // p = &c;      // compiler warning: incompatible pointer type
    p = (int *)&c;  // forced type cast

    printf("Value via int pointer: %d\n", *p); // undefined behavior

    return 0;
}

Explanation: here *p reads sizeof(int) bytes, but c occupies only 1 byte. The remaining bytes come from adjacent garbage memory, so the output is unpredictable.
C vs C++ Allocation C → malloc / calloc / free In C++ prefer std::vector for safety
C++ → new / delete or std::vector
2)​ LINKED LIST

1) Singly Linked List

[Data|Next] -> [Data|Next] -> [Data|Next] -> NULL

struct Node {
    int data;
    struct Node* next;
};
struct Node* head = NULL; // start of the list

Advantages: Simple structure, dynamic size, efficient insertion/deletion at head.
Disadvantages: Only forward traversal, cannot directly access the previous node, O(n) search.
Use Case: Basic dynamic data storage, implementing stacks & queues.

2) Doubly Linked List

NULL <- [Prev|Data|Next] <-> [Prev|Data|Next] <-> [Prev|Data|Next] -> NULL

struct Node {
    int data;
    struct Node* prev;
    struct Node* next;
};
struct Node* head = NULL;

Advantages: Traversal in both directions, easier deletion/insertion when the node address is known.
Disadvantages: Extra memory for the previous pointer, more complex implementation.
Use Case: Deque implementation, navigation in browsers (back/forward).

3) Circular Singly Linked List

[Data|Next] -> [Data|Next] -> [Data|Next] --+
      ^-------------------------------------+
(the last node's next points back to the head)

struct Node {
    int data;
    struct Node* next;
};
struct Node* head = NULL; // last node's next points to head

Advantages: Can start traversal from any node, efficient for circular traversal.
Disadvantages: Only forward traversal, complex insertion/deletion logic.
Use Case: Round-robin scheduling, playlist looping.

4) Circular Doubly Linked List

[Prev|Data|Next] <-> [Prev|Data|Next] <-> [Prev|Data|Next]
(prev of head = tail, next of tail = head)

struct Node {
    int data;
    struct Node* prev;
    struct Node* next;
};
struct Node* head = NULL; // prev of head = tail, next of tail = head

Advantages: Traverse from any node in both directions, no NULL pointers.
Disadvantages: Highest memory overhead, most complex to implement.
Use Case: Advanced scheduling, multi-directional navigation in apps.
**Operations - Complexity (Linked List)**

| Operation | SLL | DLL | CSLL | CDLL | Crux |
|---|---|---|---|---|---|
| Traversal | O(n) | O(n) | O(n) | O(n) | All types have O(n) traversal; no random-access support |
| Search | O(n) | O(n) | O(n) | O(n) | All types require linear search unless extra indexing is used |
| Insertion at Head | O(1) | O(1) | O(1) | O(1) | Always O(1) since the head pointer is known |
| Insertion at Tail | O(n) (O(1) if tail maintained) | O(n) (O(1) if tail maintained) | O(1) if tail maintained | O(1) if tail maintained | A tail pointer drastically speeds tail insertion |
| Insertion at Middle (known pointer) | O(1) | O(1) | O(1) | O(1) | Direct pointer access allows constant-time insertion |
| Deletion at Head | O(1) | O(1) | O(1) | O(1) | Always O(1) since the head pointer is known |
| Deletion at Tail | O(n) | O(1) if tail maintained | O(n) | O(1) if tail maintained | DLL/CDLL can delete the tail in O(1) using prev |
| Deletion at Middle (known pointer) | O(1) | O(1) | O(1) | O(1) | Constant time if the node pointer is given |
| CRUX (overall) | Needs O(n) to delete the tail unless a tail pointer is maintained | Can delete the tail in O(1) with a tail pointer (thanks to prev) | Tail insertion O(1) with a tail pointer (tail->next = head); deleting the tail is still O(n) without an extra pointer | Tail insertion and deletion both O(1) with a tail pointer and prev | All types have O(n) traversal/search; a tail pointer improves tail operations drastically |
| Parameter | Array | Linked List |
|---|---|---|
| Memory Allocation | Contiguous memory | Non-contiguous memory |
| Size | Fixed (static); must be known at compile time (except dynamic arrays) | Dynamic; can grow/shrink at runtime |
| Access Time | O(1) random access by index | O(n); sequential access only |
| Insertion/Deletion | Costly; may require shifting elements (O(n)) | Efficient if position/node is known (O(1)), else O(n) for searching |
| Memory Overhead | Minimal (only data stored) | Extra memory for pointers in each node |
| Cache Friendliness | High, due to contiguous memory | Poor; nodes scattered in memory |
| Implementation Complexity | Simple | More complex due to pointers |
| Traversal | Forward only (by index) | Forward (singly) or forward/backward (doubly) |
| Use Cases | When size is fixed and random access is needed | When frequent insertion/deletion is required or size is dynamic |
| Examples | int arr[10]; float marks[50]; | Singly, Doubly, Circular Linked Lists |
| Advantages | Fast access, simple structure, low memory overhead | Dynamic size, efficient insertion/deletion, flexible memory use |
| Disadvantages | Fixed size, costly insertion/deletion | Sequential access only, extra memory for pointers |
3)​ STACK (LIFO)
Stack of plates: can only add/remove the top plate.

A linear data structure that follows LIFO (Last In, First Out) principle; the last element inserted is the first to be removed.

| Aspect | Array-Based Stack | Linked List Stack | Queue-Based Stack |
|---|---|---|---|
| Description | Implemented using a fixed-size or dynamic array; a top index tracks the top element. | Implemented using nodes with data and a next pointer; top points to the head node. | Simulated using two queues; either push or pop is costly depending on the method. |
| Advantages | Simple, fast access, O(1) push/pop, contiguous memory. | Dynamic size, no memory wastage, efficient insertion/deletion. | Shows the flexibility of data structures; useful in theoretical/interview questions. |
| Disadvantages | Fixed size unless a dynamic array is used, resizing is costly, requires contiguous memory. | Extra memory for pointers, slightly slower, more complex to implement. | Slower (O(n) for either push or pop), complex to implement; rarely used in production. |
| Operations & Complexity | Push O(1), Pop O(1), Peek/Top O(1), Search O(n) | Push O(1), Pop O(1), Peek/Top O(1), Search O(n) | Push-costly method: Push O(n), Pop O(1). Pop-costly method: Push O(1), Pop O(n). |
| Applications | Expression evaluation (infix/postfix/prefix) for fixed-size or known-length expressions; call stack for recursion; Undo/Redo in editors; backtracking problems (maze solving); syntax parsing; browser history navigation | Expression evaluation for dynamic/large expressions; recursion handling; function call management; Undo/Redo; backtracking | Interview/theoretical problems; understanding LIFO via FIFO; algorithm exercises; advanced DS concepts |
| Use Cases | When size is known/bounded and fast access is required. | When size is unknown and frequent insert/delete is required. | Mainly theoretical/academic/interview purposes. |
| Crux | Best for fixed size and fast access; very simple. | Best for dynamic size and frequent insert/delete; flexible. | Demonstrates LIFO using FIFO; mainly for interviews. |

Example (Array-Based Stack):

#define MAX 100
int stack[MAX];
int top = -1;

void push(int x){
    if(top < MAX-1)
        stack[++top] = x;
}

int pop(){
    if(top >= 0)
        return stack[top--];
    return -1; // stack empty
}

int peek(){
    if(top >= 0)
        return stack[top];
    return -1;
}

Example (Linked List Stack):

struct Node {
    int data;
    struct Node* next;
};
struct Node* top = NULL;

void push(int x){
    struct Node* newNode = malloc(sizeof(struct Node));
    newNode->data = x;
    newNode->next = top;
    top = newNode;
}

int pop(){
    if(top == NULL)
        return -1; // stack empty
    int val = top->data;
    struct Node* temp = top;
    top = top->next;
    free(temp);
    return val;
}

int peek(){
    if(top == NULL) return -1;
    return top->data;
}

Queue-Based Stack (notes): a stack simulated with two queues demonstrates LIFO using FIFO. Push-costly method: push is O(n), pop is O(1). Pop-costly method: push is O(1), pop is O(n). Mainly used for interviews and theoretical understanding; not efficient for practical applications compared to array or linked list stacks.
** EVALUATION OF EXPRESSION - PREFIX, INFIX and POSTFIX **

| Aspect | Infix | Postfix (Reverse Polish Notation) | Prefix (Polish Notation) |
|---|---|---|---|
| Definition | Operator between operands. Example: A + B | Operator after operands. Example: A B + | Operator before operands. Example: + A B |
| Operator Position | Between operands | After operands | Before operands |
| Parentheses Required | Yes, to indicate precedence | No, precedence is implicit | No, precedence is implicit |
| Evaluation | Harder for computers; requires parsing | Easy using a stack | Easy using a stack |
| Conversion Tricks / Rules | Convert to postfix/prefix using a stack and precedence rules | Evaluate left to right using a stack; convert back to infix/prefix using a stack | Evaluate right to left using a stack; convert back to infix/postfix using a stack |
| Conversion Steps (from Infix) | Use a stack for operators: operand → output; ( → push; ) → pop until (; operator → first pop operators of higher/equal precedence | Operand → push; operator → pop 2 operands, apply, push result | Reverse the infix expression, swap ( ↔ ); convert to postfix; reverse the result |
| Example: (A + B) * C | (A + B) * C | A B + C * | * + A B C |
| Evaluation Method | Requires parsing rules or conversion to postfix/prefix | Scan left → push operands → apply operators | Scan right → push operands → apply operators |
| Use Cases | Human-readable algebraic expressions; programming code | Compilers; calculators; stack-based evaluation; avoids parentheses | Certain programming languages; prefix calculators; theoretical algorithms |
| Important Exam Tips | Use parentheses to handle precedence; remember associativity rules | Stack-based evaluation O(n); no parentheses required; left-to-right scanning | Stack-based evaluation O(n); right-to-left scanning; conversion: reverse → postfix → reverse |
** CONVERSIONS **
Expression: (A + B) * (C - D)

1️⃣ Infix → Postfix

Algorithm:
1. Scan left to right.
2. Operand → output.
3. ( → push to stack.
4. ) → pop until (.
5. Operator → pop operators with higher or equal precedence, then push current operator.

Steps:
Scan | Stack | Output
( | ( |
A | ( | A
+ | (+ | A
B | (+ | A B
) | (pop +) | A B +
* | * | A B +
( | * ( | A B +
C | * ( | A B + C
- | * (- | A B + C
D | * (- | A B + C D
) | (pop -) | A B + C D -
End | Pop remaining (*) | A B + C D - *

✅ Postfix: A B + C D - *

2️⃣ Infix → Prefix

Algorithm:
1. Reverse infix: (A + B) * (C - D) → (D - C) * (B + A) (swap parentheses)
2. Convert to postfix step-wise:
   - Operand → output
   - Operator → stack (based on precedence)
   - Pop stack to output when precedence rules demand
3. Reverse postfix → prefix.

Steps: Conversion of ( D - C ) * ( B + A ) to Postfix
Step | Symbol Read | Action | Output (Postfix) | Stack
1 | ( | Push to stack | – | (
2 | D | Add to output | D | (
3 | - | Push to stack | D | (-
4 | C | Add to output | DC | (-
5 | ) | Pop until ( | DC- | –
6 | * | Push to stack | DC- | *
7 | ( | Push to stack | DC- | *(
8 | B | Add to output | DC-B | *(
9 | + | Push to stack | DC-B | *(+
10 | A | Add to output | DC-BA | *(+
11 | ) | Pop until ( | DC-BA+ | *
12 | End | Pop remaining | DC-BA+* | –

Postfix after reversal: D C - B A + *
Reverse postfix → prefix: * + A B - C D

✅ Prefix: * + A B - C D
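The infix → postfix procedure traced above can be written as a small routine (a minimal Python sketch, assuming single-character operands and only the left-associative binary operators + - * /; the function name infix_to_postfix is illustrative):

```python
# Minimal infix -> postfix conversion (shunting-yard style), assuming
# single-character operands and left-associative binary operators.
PREC = {'+': 1, '-': 1, '*': 2, '/': 2}

def infix_to_postfix(tokens):
    out, stack = [], []
    for t in tokens:
        if t == '(':
            stack.append(t)
        elif t == ')':
            while stack[-1] != '(':      # pop until the matching '('
                out.append(stack.pop())
            stack.pop()                  # discard the '(' itself
        elif t in PREC:
            # pop operators of higher or equal precedence first
            while stack and stack[-1] != '(' and PREC[stack[-1]] >= PREC[t]:
                out.append(stack.pop())
            stack.append(t)
        else:                            # operand goes straight to output
            out.append(t)
    while stack:                         # flush remaining operators
        out.append(stack.pop())
    return ' '.join(out)

print(infix_to_postfix(list('(A+B)*(C-D)')))  # A B + C D - *
```

The same routine also demonstrates why precedence matters: infix_to_postfix(list('A+B*C')) yields A B C * +, with * applied before +.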
3️⃣ Postfix → Infix

Postfix: A B + C D - *

Algorithm:
1. Scan left → right.
2. Operand → push to stack.
3. Operator → pop two operands, combine as (operand1 operator operand2), push result.

Step-wise:
Symbol | Stack
A | A
B | A, B
+ | (A + B)
C | (A + B), C
D | (A + B), C, D
- | (A + B), (C - D)
* | ((A + B) * (C - D))

✅ Infix: (A + B) * (C - D)

4️⃣ Prefix → Infix

Prefix: * + A B - C D

Algorithm:
1. Scan right → left.
2. Operand → push.
3. Operator → pop two operands, combine as (operand1 operator operand2), push result.

Step-wise:
Symbol (scan) | Stack
D | D
C | D, C
- | (C - D)
B | (C - D), B
A | (C - D), B, A
+ | (C - D), (A + B)
* | ((A + B) * (C - D))

✅ Infix: (A + B) * (C - D)
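Both directions above rely on the same operand stack; the postfix → infix case can be sketched as follows (a minimal Python sketch, assuming space-separated tokens and binary operators only; the function name is illustrative):

```python
# Postfix -> fully parenthesised infix using an operand stack.
def postfix_to_infix(tokens):
    stack = []
    for t in tokens:                     # scan left -> right
        if t in '+-*/':
            right = stack.pop()          # second operand is popped first
            left = stack.pop()
            stack.append(f'({left} {t} {right})')
        else:
            stack.append(t)              # operand -> push
    return stack.pop()

print(postfix_to_infix('A B + C D - *'.split()))  # ((A + B) * (C - D))
```

The prefix → infix direction is the same loop run over the reversed token list, with the first popped operand taken as the left operand.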
5️⃣ Postfix → Prefix

Postfix: A B + C D - *

Algorithm (via stack):
1. Scan left → right.
2. Operand → push.
3. Operator → pop two operands, combine as operator operand1 operand2, push result.

Step-wise:
Symbol | Stack
A | A
B | A, B
+ | + A B
C | + A B, C
D | + A B, C, D
- | + A B, - C D
* | * + A B - C D

✅ Prefix: * + A B - C D

6️⃣ Prefix → Postfix

Prefix: * + A B - C D

Algorithm (via stack):
1. Scan right → left.
2. Operand → push.
3. Operator → pop two operands, combine as operand1 operand2 operator, push result.

Step-wise:
Symbol (scan) | Stack
D | D
C | D, C
- | C D -
B | C D -, B
A | C D -, B, A
+ | C D -, A B +
* | A B + C D - *

✅ Postfix: A B + C D - *
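The two step-wise tables above differ only in scan direction and in how the popped operands are recombined, which the following sketch makes explicit (a minimal Python sketch, assuming single-character operands and binary operators; function names are illustrative):

```python
# Postfix <-> prefix via an operand stack; only the scan direction
# and the recombination order differ between the two conversions.
OPS = set('+-*/')

def postfix_to_prefix(tokens):
    stack = []
    for t in tokens:                     # scan left -> right
        if t in OPS:
            op2, op1 = stack.pop(), stack.pop()
            stack.append(f'{t} {op1} {op2}')   # operator operand1 operand2
        else:
            stack.append(t)
    return stack.pop()

def prefix_to_postfix(tokens):
    stack = []
    for t in reversed(tokens):           # scan right -> left
        if t in OPS:
            op1, op2 = stack.pop(), stack.pop()
            stack.append(f'{op1} {op2} {t}')   # operand1 operand2 operator
        else:
            stack.append(t)
    return stack.pop()

print(postfix_to_prefix('A B + C D - *'.split()))  # * + A B - C D
print(prefix_to_postfix('* + A B - C D'.split()))  # A B + C D - *
```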

✅ Key Crux / Memory Tips:


●​ Postfix evaluation: scan left → right.
●​ Prefix evaluation: scan right → left.
●​ Infix → Prefix: reverse infix, convert to postfix, reverse result.
●​ Stack is central to all conversions.
●​ Always use parentheses in infix to avoid confusion.
Conversion Type Input Expression Step-wise Conversion / Stack Logic Output Expression

Infix → Postfix (A + B) * (C - D) Scan left → right: A B + C D - *


1. ( → push 7. ( → push
2. A → output 8. C → output
3. + → push 9. - → push
4. B → output 10. D → output
5. ) → pop + 11. ) → pop -
6. * → push 12. Pop remaining *

Infix → Prefix (A + B) * (C - D) 1. Reverse infix, swap ( ↔ ) → (D - C) * (B + A) * + A B - C D


2. Convert to postfix → D C - B A + *
3. Reverse result → * + A B - C D

Postfix → Infix A B + C D - * Scan left → right: (A + B) * (C - D)


1. A, B → push
2. + → pop A, B, combine (A + B) → push
3. C, D → push
4. - → pop C, D, combine (C - D) → push
5. * → pop (A + B), (C - D), combine → ((A + B) * (C - D))

Prefix → Infix * + A B - C D Scan right → left: (A + B) * (C - D)


1. D, C → push
2. - → pop C, D → (C - D) → push
3. B, A → push
4. + → pop A, B → (A + B) → push
5. * → pop (A + B), (C - D) → combine → ((A + B) * (C - D))

Postfix → Prefix A B + C D - * Scan left → right: * + A B - C D


1. A, B → push
2. + → pop A, B → + A B → push
3. C, D → push
4. - → pop C, D → - C D → push
5. * → pop + A B, - C D → * + A B - C D

Prefix → Postfix * + A B - C D Scan right → left: A B + C D - *


1. D, C → push
2. - → pop C, D → C D - → push
3. B, A → push
4. + → pop A, B → A B + → push
5. * → pop A B +, C D - → A B + C D - *
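Postfix evaluation itself (scan left → right, push operands, pop two per operator) follows the same stack pattern; here with numeric operands for a concrete result (a minimal Python sketch; the function name is illustrative):

```python
# Postfix evaluation: scan left -> right, push operands,
# pop two operands whenever an operator is read.
def eval_postfix(tokens):
    stack = []
    for t in tokens:
        if t in '+-*/':
            b, a = stack.pop(), stack.pop()   # b was pushed last
            stack.append({'+': a + b, '-': a - b,
                          '*': a * b, '/': a / b}[t])
        else:
            stack.append(float(t))
    return stack.pop()

# (2 + 3) * (10 - 4)  ->  postfix: 2 3 + 10 4 - *
print(eval_postfix('2 3 + 10 4 - *'.split()))  # 30.0
```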
4) QUEUE (FIFO / LILO)

Linear Data Structure: Elements arranged sequentially; follows a specific order (FIFO).
FIFO (First In First Out): First element inserted is the first removed.
Key Properties :

●​ Front: Points to element to be dequeued.


●​ Rear: Points to last element inserted.
●​ Insertion is called enqueue (adds element at the rear/end).
●​ Deletion is called dequeue (removes element from the front).
●​ Time Complexity:
○​ Enqueue: O(1)
○​ Dequeue: O(1)

Dynamic or Static Implementation :


●​ Array: Fixed-size (static) queue.
●​ Linked List: Dynamic queue; grows/shrinks as needed.

Applications of Queue :

●​ CPU Scheduling (process scheduling in operating systems)


●​ Printer Spooling
●​ Breadth-First Search (BFS) in graphs
●​ Handling requests in real-time systems
●​ Buffers in I/O operations
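The enqueue/dequeue behaviour described above can be sketched with a linked-list implementation, which keeps both operations O(1) (a minimal Python sketch; class names are illustrative):

```python
# Minimal linked-list queue: enqueue at rear, dequeue at front, both O(1).
class Node:
    def __init__(self, data):
        self.data, self.next = data, None

class Queue:
    def __init__(self):
        self.front = self.rear = None

    def enqueue(self, data):             # insert at rear
        node = Node(data)
        if self.rear is None:            # empty queue
            self.front = self.rear = node
        else:
            self.rear.next = node
            self.rear = node

    def dequeue(self):                   # remove from front (FIFO)
        if self.front is None:
            raise IndexError('queue is empty')
        data = self.front.data
        self.front = self.front.next
        if self.front is None:           # queue became empty
            self.rear = None
        return data

q = Queue()
for x in (1, 2, 3):
    q.enqueue(x)
print(q.dequeue(), q.dequeue())  # 1 2
```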
** TYPES OF QUEUE **
Aspect / Type Linear Queue Circular Queue (Circular Array)

Definition - Linear DS following FIFO - Array-based queue where rear wraps around to front
- Remove from front, insert at rear - Forms a circle

Visualization - Front -> A -> B -> C -> Rear - Circular array: Front -> A -> B -> C <- Rear (wraps)

Basic Operations - Enqueue - Enqueue


- Dequeue - Dequeue
- Peek / Front - Peek / Front
- IsEmpty - IsEmpty
- IsFull - IsFull

Time Complexity - O(1) for enqueue / dequeue / peek - O(1) for enqueue / dequeue / peek

Advantages - Simple to implement - Efficient memory usage


- Avoids wasted space

Disadvantages - Wastes space after multiple dequeues - Slightly complex implementation


- Needs modulo calculation

Implementation - Array - Circular array using (rear + 1) % MAX


- Linked list - Or linked list
- Two stacks

Examples / - int queue[MAX]; int front=-1, rear=-1; - int queue[MAX]; int front=-1, rear=-1; (use modulo for circular)
Initialization

Use Cases - CPU scheduling - CPU scheduling


- Printer queue - Buffering
- BFS - OS queues

Important Facts - Linear DS, FIFO - Circular array avoids wasted space
- Front & Rear pointers - Rear wraps to front using modulo
- May waste space in array implementation - O(1) enqueue/dequeue
- O(1) enqueue/dequeue

Conditions / Exam Tips - Queue empty: front == -1 - Queue empty: front == -1


- Queue full (array): rear == MAX-1 - Queue full: (rear + 1) % MAX == front
- Use modulo for wrap-around

Crux / Tips - Simple FIFO tasks - Preferred for fixed-size array with memory efficiency
- Modulo arithmetic ensures wrap-around
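The wrap-around conditions in the table (empty: front == -1; full: (rear + 1) % MAX == front) can be sketched over a fixed-size list (a minimal Python sketch; names are illustrative):

```python
# Circular queue on a fixed-size array; rear wraps around with modulo.
class CircularQueue:
    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.front = self.rear = -1      # -1 marks an empty queue

    def is_empty(self):
        return self.front == -1

    def is_full(self):
        return (self.rear + 1) % len(self.buf) == self.front

    def enqueue(self, x):
        if self.is_full():
            raise IndexError('queue is full')
        if self.is_empty():
            self.front = 0
        self.rear = (self.rear + 1) % len(self.buf)   # wrap-around
        self.buf[self.rear] = x

    def dequeue(self):
        if self.is_empty():
            raise IndexError('queue is empty')
        x = self.buf[self.front]
        if self.front == self.rear:      # queue became empty -> reset
            self.front = self.rear = -1
        else:
            self.front = (self.front + 1) % len(self.buf)
        return x

cq = CircularQueue(3)
for x in 'ABC':
    cq.enqueue(x)
print(cq.dequeue())   # A
cq.enqueue('D')       # rear wraps into the freed slot
print(cq.dequeue())   # B
```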
Aspect / Type Deque (Double-Ended Queue) Priority Queue

Definition - Queue with insertion/deletion at both front and rear - Queue where elements have priority
- Highest priority dequeued first

Visualization - Front <-> A <-> B <-> C <-> Rear - Highest priority element removed first (not strictly FIFO)

Basic Operations - InsertFront - Insert(element, priority)


- InsertRear - Dequeue(highest priority)
- DeleteFront - Peek
- DeleteRear
- Peek

Time Complexity - O(1) for insert / delete at either end - O(n) array/linked list
- O(log n) heap

Advantages - Flexible insertion/deletion at both ends - Handles prioritized tasks efficiently

Disadvantages - More complex than linear queue - Slower insertion/deletion if not heap-based

Implementation - Doubly linked list - Array


- Circular array - Linked list
- Heap (Most Efficient)

Examples / Initialization - struct Node {int data; Node* next; Node* prev;}; - struct Node {int data; int priority; Node* next;};

Use Cases - Sliding window problems - Job scheduling


- Deque algorithms - Dijkstra’s algorithm
- Task management

Important Facts - Double-ended insertion/deletion - Elements processed by priority


- Can be implemented with doubly linked list or circular array - Can be implemented with array, linked list, or heap
- Useful for real-time scheduling

Conditions / Exam Tips - Deque empty: front == NULL - Queue empty: front == NULL
- Handle both ends carefully for insert/delete - Maintain priority order during insertion
- Heap implementation is efficient

Crux / Tips - Flexible for insertion/deletion at both ends - Use heap for efficient insertion/deletion; otherwise slower
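For the heap-based priority queue the table recommends, Python's standard heapq module gives O(log n) insert/delete; a minimal sketch (the convention that a smaller number means higher priority is an assumption of this example):

```python
import heapq

# Min-heap priority queue: (priority, task) tuples; smaller number
# is served first under this example's convention.
pq = []
heapq.heappush(pq, (2, 'write report'))
heapq.heappush(pq, (1, 'fix outage'))
heapq.heappush(pq, (3, 'reply email'))

order = [heapq.heappop(pq)[1] for _ in range(len(pq))]
print(order)  # ['fix outage', 'write report', 'reply email']
```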
5) TREES
A tree is a hierarchical (Non-Linear) data structure consisting of nodes.
Nodes contain data and links (edges) to child nodes.
A tree with n nodes has exactly n – 1 edges.
There is exactly one path between any two nodes.
** IMPORTANT TERMS **

Term : Definition / Facts

Tree : Hierarchical data structure consisting of nodes with a single root and zero or more child nodes.
Node : An element of a tree containing data and pointers to child nodes.
Root : Topmost node of a tree; has no parent.
Parent Node : Node that has one or more children.
Child Node : Node that has a parent node.
Leaf / External Node : Node with no children (node with 0 children).
Internal Node : Node with at least one child (node with >= 1 child).
Siblings : Nodes that share the same parent.
Edge : Connection between two nodes.
Path : Sequence of nodes connected by edges.
Path Length : Number of edges in a path.
Degree of Node : Number of children a node has.
Degree of Tree : Maximum degree among all nodes in the tree.
Height of Node : Number of edges on the longest path from the node to a leaf. (Note : height of a leaf node = 0.)
Height of Tree : Height of the root node; longest path from root to any leaf.
Depth of Node : Number of edges from root to that node. (Note : depth of the root node = 0.)
Depth of Tree : Maximum depth among all nodes (same as height of tree).
Level of Node : Level of root = 1; level = depth + 1.
Level of Tree : Maximum level among all nodes (height + 1).
Ancestor : Any node in the path from root to a given node.
Descendant : Any node in the subtree rooted at a given node.
Subtree : Tree formed by a node and all its descendants.
Forest : Collection of disjoint trees.
Binary Tree : Each node has at most 2 children (left and right).
Full Binary Tree : Every node has 0 or 2 children.
Complete Binary Tree : All levels completely filled except possibly the last, which is filled left to right.
Perfect Binary Tree : Complete and all leaves at the same level.
Balanced Tree : Difference between heights of left and right subtrees ≤ 1 (e.g., AVL tree).
Degenerate / Pathological Tree : Each parent has only one child; essentially a linked list.
Binary Search Tree (BST) : Binary tree where left child < parent < right child.
AVL Tree / Balanced BST : BST with height-balance property.
Preorder Traversal (DLR) : Visit root → left subtree → right subtree.
Inorder Traversal (LDR) : Visit left subtree → root → right subtree.
Postorder Traversal (LRD) : Visit left subtree → right subtree → root.
Level-order Traversal / BFS : Visit nodes level by level using a queue.
Internal Path Length : Sum of depths of all internal nodes.
External Path Length : Sum of depths of all leaf nodes.


** TYPES OF TREES**
Tree Type (Hierarchy + Implementation) | Operations & Complexity | Important Conditions / Properties | Formulas & Key Points | Important Facts / Notes

1️⃣ General / N-ary Tree - Traversal: Preorder, - Node can have up to n children - Max nodes at level l: n^l - Used for hierarchical structures
- Linked list (First-child / Next-sibling) Postorder, Level-order: O(n) - Min nodes at level l: 1 (file systems, org charts)
- Array (fixed small n) - Insertion/Deletion: O(1) at - Max nodes at height h: (n^(h+1) -
node
1)/(n-1)
- Search: O(n)
- Min nodes at height h: h+1
- Space: O(h) for recursion
stack

2️⃣ Binary Trees - Traversal: Preorder, Inorder, - Max 2 children per node - Max nodes at level l: 2^l - Parent-child relationships
- Linked list (Node* left, Node* right) Postorder, Level-order: O(n) - Min nodes at level l: 1 essential
- Array (for complete trees) - Insertion: O(1) at head / O(n) - Max nodes at height h: 2^(h+1) - 1 - Sparse trees waste array space
general
- Min nodes at height h: h+1
- Deletion: O(n)
- Search: O(n) - Max height with n nodes: n-1
- Space: O(h) recursion stack (skewed)
(O(log n) for balanced tree) - Min height with n nodes:
log2(n+1) - 1
• Full/Proper/Strict Binary Tree - Traversal: O(n) - Each node has 0 or 2 children - Nodes at height h: 2^h (perfectly - Also called proper or strict
- Linked list / Array balanced) binary tree
- Total nodes: 2^(h+1)-1

• Complete Binary Tree - Traversal: O(n) - All levels filled except last - Nodes at last level ≤ 2^h - Easy array representation (Left
- Array preferred - Last level filled left to right = 2i+1, Right = 2i+2)
• Perfect Binary Tree - Traversal: O(n) - All levels completely filled - Total nodes: 2^(h+1)-1 - Number of nodes strictly follows
- Linked list / Array - Height: log2(n+1)-1 formula

• Skewed - Traversal: O(n) - All nodes have only one child - Height = n-1 - Degenerate tree; behaves like a
Binary Tree (Left/Right) - Nodes at each level = 1 linked list
- Linked list
• Binary Search Tree (BST) - Insertion/Search/Deletion: Avg - Left < Parent < Right - Max height (unbalanced): n-1 - Traversals give sorted order
- Linked list O(log n), Worst O(n) - Unique keys typical - Min height (balanced):
- Traversals: O(n) log2(n+1)-1
• AVL Tree (Self-Balancing BST) - Insertion/Search/Deletion: - Balance factor (-1,0,+1) at all - Height h ≤ 1.44 log2(n+2) - 0.328 - Height-balanced → efficient
- Linked list O(log n) nodes operations
- Rotations: O(1) per rotation
• Red-Black Tree (Self-Balancing - Insertion/Search/Deletion: - Root = black - Height h ≤ 2 log2(n+1) - Widely used in OS, memory
BST) O(log n) - Red node cannot have red child management, DB indexing
- Linked list - Rotations: O(1) per rotation - Equal black height paths

• Splay Tree - Search/Insertion/Deletion: - Performs rotations to move - Height: O(log n) amortized - Frequently accessed nodes
- Linked list O(log n) amortized, O(n) recently accessed nodes closer - Max nodes at height h: 2^(h+1)-1 become quicker to access
worst-case to root - Useful in caches, memory
- Splaying (rotate accessed management, and access
node to root) sequences with locality
• Expression Tree - Traversals: Preorder, Inorder, - Leaf = operand, internal = - Number of internal nodes = - Used for arithmetic evaluation,
- Linked list Postorder operator n_operands - 1 (for full binary prefix/infix/postfix conversions
expression tree)

• Heap (Max/Min) - Insert: O(log n) - Complete binary tree - Height h = ⌊log2 n⌋ - Used in priority queues,
- Array (complete binary tree) / - Delete (root): O(log n) - Max-Heap: Parent ≥ children - Max nodes at height h: 2^h heapsort, scheduling
Linked list - Find max/min: O(1) - Min-Heap: Parent ≤ children
3️⃣ Multi-way / Specialized Trees

• B-Tree - Insert/Search/Delete: O(log n) - Multi-way search tree - Max keys per node: m-1 (order m) - Used in databases, file systems;
- Linked list / Disk-based node array - All leaves at same depth - Min keys per node: ⌈m/2⌉-1 disk-optimized
- Node contains multiple keys

• B+ Tree - Insert/Search/Delete: O(log n) - All data stored at leaf nodes - Leaf nodes contain all actual - Leaf nodes linked → fast
- Linked list / Disk-based node array - Internal nodes store keys only records sequential access
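The BST ordering rule from the table (left < parent < right, so an inorder traversal yields sorted order) can be sketched without any balancing (a minimal Python sketch; class and function names are illustrative):

```python
# Minimal unbalanced BST: left < parent < right; inorder gives sorted order.
class BSTNode:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    if root is None:
        return BSTNode(key)
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:
        root.right = insert(root.right, key)
    return root                          # duplicates are ignored

def search(root, key):
    while root and root.key != key:      # O(h): go left or right each step
        root = root.left if key < root.key else root.right
    return root is not None

def inorder(root):
    return inorder(root.left) + [root.key] + inorder(root.right) if root else []

root = None
for k in (50, 30, 70, 20, 40):
    root = insert(root, k)
print(inorder(root))                       # [20, 30, 40, 50, 70]
print(search(root, 40), search(root, 99))  # True False
```

Insert/search cost O(h); keeping h near log2(n) is exactly what AVL and red-black rotations in the table above are for.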
** TRAVERSAL - Pre-Order, In-Order, Post-Order, Level-Order **

Category | Traversal Type & Definition | Order / Method / Implementation | Time & Space Complexity | Key Facts / Use Cases / Noteworthy Points

DFS (Depth-First Search) Preorder - Recursive: - Time: O(n) - Used to create a copy of the tree
(Root → Left → Right) (Root → Left → Right) - Space: O(h) recursion stack - Prefix expression evaluation
(skewed tree O(n), balanced O(log n)) - Node visited before children
- Iterative: Using stack - Frequently asked in expression tree
problems

Inorder - Recursive: - Time: O(n) - BST traversal gives sorted order


(Left → Root → Right) (Left → Root → Right) - Space: O(h) recursion stack - Used for infix expression evaluation
- Node visited between children
- Iterative: Using stack - Important for sorted data extraction
questions

Postorder - Recursive: - Time: O(n) - Used to delete a tree (children first)


(Left → Right → Root) (Left → Right → Root) - Space: O(h) recursion stack - Postfix expression evaluation
- Node visited after children
- Iterative: Two stacks / - Useful in bottom-up calculations, memory
modified one stack deallocation

BFS (Breadth-First Search) Level-order- Level by level, - Iterative using queue - Time: O(n) - Visits nodes level by level
top → bottom, left → right - Space: O(max width of tree) - Used in heap operations, shortest path
algorithms, hierarchical data processing
- Queue space = max width of tree
- Variants like reverse level-order sometimes
asked
- BFS = iterative-friendly,
-DFS = recursive-friendly
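The four traversal orders can be sketched on a small tree (a minimal Python sketch; the example tree is illustrative):

```python
from collections import deque

class TNode:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

#        1
#       / \
#      2   3
#     / \
#    4   5
root = TNode(1, TNode(2, TNode(4), TNode(5)), TNode(3))

def preorder(n):   # Root -> Left -> Right
    return [n.val] + preorder(n.left) + preorder(n.right) if n else []

def inorder(n):    # Left -> Root -> Right
    return inorder(n.left) + [n.val] + inorder(n.right) if n else []

def postorder(n):  # Left -> Right -> Root
    return postorder(n.left) + postorder(n.right) + [n.val] if n else []

def level_order(n):  # BFS: level by level using a queue
    out, q = [], deque([n])
    while q:
        node = q.popleft()
        out.append(node.val)
        for child in (node.left, node.right):
            if child:
                q.append(child)
    return out

print(preorder(root))    # [1, 2, 4, 5, 3]
print(inorder(root))     # [4, 2, 5, 1, 3]
print(postorder(root))   # [4, 5, 2, 3, 1]
print(level_order(root)) # [1, 2, 3, 4, 5]
```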
QUESTIONS BASED ON -> PRE-ORDER, IN-ORDER, POST-ORDER, LEVEL-ORDER
6) GRAPH

●​ A graph G is a mathematical structure used to model pairwise relationships between objects.
●​ Formally, it is defined as a pair G = (V, E) where:
○​ V is a set of vertices (nodes) representing the objects.
○​ E is a set of edges (connections) representing the relationships between vertices.​

●​ Each edge connects two vertices; in a directed graph, edges have a direction (from one vertex to another), and in an undirected graph, edges have no direction.
●​ Graphs may be weighted (edges carry a value) or unweighted (edges only indicate a connection).
●​ Graphs can be simple (no loops or multiple edges) or multigraphs (may have loops or multiple edges).​

Crux: Graph = vertices + edges + (optional direction/weight/multiplicity).


KEY TERMINOLOGIES
Terminology Description / Formula Example Crux

Graph (G) • Set of vertices (V) and edges (E) G = ({A, B, C}, {AB, BC, CA}) • Foundation of all graph concepts.
represented as G = (V, E). • Defined by vertices + edges only.
• Models pairwise relationships between
objects.

Vertex (Node) • Fundamental unit of a graph. In a social network, each person is a • Building blocks of a graph.
• Represents an object or entity. vertex. • Represent entities in real-world
problems.

Edge • Connection between two vertices. Road between two cities. • Represents relationships or connections.
• Can be directed (ordered pair) or
undirected (unordered pair).

Order of Graph • Number of vertices in a graph. • Formula: Order = |V|

Size of Graph • Number of edges in a graph. • Formula: Size = |E|

Degree of Vertex • Number of edges incident on a vertex. • Handshaking Lemma: Σdeg(v) = 2E
• In undirected graphs: degree = incident
edges count.

In-degree • For directed graphs: Number of incoming Vertex B has edges from A and C → • In-degree = incoming traffic measure.
edges to a vertex. in-degree = 2.

Out-degree • For directed graphs: Number of outgoing Vertex A has edges to B and C → • Out-degree = outgoing traffic measure.
edges from a vertex. out-degree = 2.

Isolated Vertex • Vertex with degree 0. In a network, a disconnected computer. • No edges connected.
• No connections to other vertices. • Completely isolated in graph.

Pendant Vertex • Vertex with degree 1. Leaf node in a tree. • Represents end-points in a structure.
• Connected to exactly one other vertex.

Source Vertex • Directed graph vertex with in-degree = 0, A with edges to B, C but none coming in. • Starting points in directed flows.
out-degree > 0.

Sink Vertex • Directed graph vertex with in-degree > 0, B receiving edges from A, C but no • End points in directed flows.
out-degree = 0. outgoing edges.

Neighbor • Vertices directly connected via an edge. A and B connected by edge AB → • Directly connected vertices only.
neighbors.
Incident Edge • Edge connected to a vertex. Edge AB is incident to A and B. • Edge touching a vertex.

Non-incident Edge • Edge not connected to a vertex. Edge CD w.r.t. vertex A. • No endpoint match with the vertex.

Adjacent Vertices • Vertices connected by a common edge. A and B in AB. • Neighbor vertices = adjacent vertices.

Reachability • A vertex u can reach vertex v if there A → B → C → D means A can reach D. • Determines possibility of traversal.
exists a path from u to v.

Walk • Sequence of vertices and edges where A–B–A–C. • Most general form of movement.
repetition of vertices/edges is allowed.

Trail • Walk with no repeated edges (vertices A–B–C–A. • No edge repetition allowed.
may repeat).

Path • Walk with no repeated vertices (and A–B–C–D. • Simple movement with no revisits.
thus no repeated edges).

Cycle • Path where first and last vertices are the A–B–C–A. • Closed path with unique vertices.
same.
• No repeated vertices except start/end.

Connected Graph • Undirected graph where there is a path Road map with no isolated parts. • All vertices reachable from each other.
between every pair of vertices.

Strongly Connected Graph • Directed graph where every vertex can Flight routes where you can go both ways • Mutual reachability in directed graphs.
reach every other vertex. between cities.

Weakly Connected Graph • Directed graph that becomes connected One-way roads forming a connected map • Connected if you drop direction info.
when edges are treated as undirected. when ignoring direction.

Isomorphic Graphs • Graphs with same connectivity but Two triangle graphs labeled differently. • Structure same, names differ.
possibly different labels or drawings.

Subgraph • Graph formed from a subset of vertices Smaller network extracted from larger one. • Part of a bigger graph.
and edges of another graph.

Induced Subgraph • Subgraph formed by a set of vertices and Choose vertices {A, B, C} and keep all • Keeps all edges among chosen vertices.
all edges between them in original graph. connecting edges.

Spanning Subgraph • Subgraph containing all vertices of MST is a spanning subgraph. • All vertices present, edges may be
original graph but possibly fewer edges. missing.

Complete Graph (Kₙ) • Every vertex connected to every other vertex. K₄ has 4 vertices and 6 edges. • Max possible connections for given vertices.
• Edges = n(n–1)/2 for undirected.

Null Graph • Graph with vertices but no edges. 4 vertices, no connections. • Completely disconnected structure.
TYPES OF GRAPHS
Basis of Division | Graph Category + Definition | Variants | Description / Facts | Formula | Example | CRUX

Direction of Edges Undirected Graph – Simple Graph - No loops or multiple – Triangle K₃ - Basic undirected graph
Edges have no direction; edges
unordered pairs (u,v)

Multigraph - Multiple edges allowed, – Two edges between - Generalization of simple


loops forbidden A–B graph

Pseudograph - Loops and multiple – Vertex A with loop + - Most general undirected
edges allowed multiple edges to B graph

Tree - Connected, acyclic, n E = n–1 5-vertex tree - Hierarchical structures,


vertices → n-1 edges spanning trees
- Handshaking Lemma
applies

Forest - Collection of trees, – 2 disjoint trees - Multiple acyclic


disconnected, acyclic components

Directed Graph Simple Digraph - No loops, no multiple – A→B→C - Basic directed graph
(Digraph) – Edges have directed edges
direction; ordered pairs
(u,v)

Multidigraph - Multiple directed edges – Two edges from A → B - Flow networks & routing
allowed

Weighted Digraph - Directed edges have – Flight routes with cost - Shortest path problems
weights

Mixed Graph - Both directed and – One-way & two-way - Real-world traffic
undirected edges roads modeling
Edge Weights Weighted Graph – Edges Positive Weighted - All weights > 0 – MST example - Dijkstra / Prim / Kruskal
have weights applicable

Negative Weighted - Some edges negative – Bellman-Ford graph - Handles negative edge
graphs

Unweighted Graph – All – - Default weight = 1 – Simple road network - Weight ignored
edges equal

Loops / Multiple Loopless Graph – No – - Simplifies degree – Triangle K₃ –


Edges vertex has loop calculation

Graph with Loops – At – - Loops count twice in – Vertex A has A–A edge –
least one loop degree

Connectivity Connected Graph – All Strongly Connected - Path exists both ways – Flight network - Mutual reachability
vertices reachable (Digraph)

Weakly Connected - Connected ignoring – One-way streets - Checks connectivity


(Digraph) direction ignoring direction

Disconnected Graph – – - Multiple components may – Two triangles - BFS/DFS needed for
Some vertices exist disconnected components
unreachable
Regularity Regular Graph – All – - Symmetric connectivity Sum of degrees = 2E Square cycle k=2 - Useful for network design
vertices have same
degree k

Complete Graph (Kₙ) – Every vertex connected to every other - Max edges possible E = n(n–1)/2 K₄ → 6 edges - Combinatorial problems

Complete Bipartite Graph (Kₘ,ₙ) – Vertices in 2 sets; all cross connections - Special bipartite, no intra-set edges E = m×n K₃,₂ → 6 edges - Matching / scheduling
Planarity Planar Graph – Can be Outerplanar Graph - All vertices on outer face Euler: V – E + R = 2 Triangle K₃ - Edge crossing avoided
drawn without crossing V - nodes
edges E - Edges
R - Regions

Non-Planar Graph – – - Examples: K₅, K₃,₃ – K₅ - Cannot embed in 2D


Cannot draw without
crossings

Cycles Cyclic Graph – Contains – - Cycle exists – Triangle A–B–C–A - Euler/Hamilton problems
at least one cycle

Acyclic Graph – No DAG (Directed - Directed, no cycles – Task scheduling - Dependency / scheduling
cycles Acyclic Graph) - Topological sort possible
Special / Named Null Graph – Vertices – - Minimum structure – 4 vertices, 0 edges - Empty graph
Graphs only, no edges

Complement Graph – – - Shows missing – Triangle complement - Inverse edges


Edges where original connections
graph has none

Hamiltonian Graph – – - Visits all vertices exactly – Pentagonal cycle - Hamiltonian path/cycle
Contains Hamiltonian once problems
cycle - Dirac’s Theorem: Deg(v)
≥ n/2 sufficient

Eulerian Graph – – - All vertices even degree – Graph with all even - Euler path/cycle
Contains Eulerian cycle (undirected) degrees problems
- In digraph: in-degree =
out-degree for all vertices
GRAPH - TRAVERSAL
Aspect DFS (Depth-First Search) BFS (Breadth-First Search)

Definition / Idea Explores as far as possible along each branch before Explores all neighbors of a vertex before moving to the next
backtracking. level.

Traversal Type Depth-wise (goes deep first). Level-wise (goes broad first).

Data Structure Used Stack (can use recursion). Queue.

Implementation Recursive or iterative (with stack). Iterative (with queue).

Time Complexity O(V + E) for adjacency list, O(V²) for adjacency matrix. O(V + E) for adjacency list, O(V²) for adjacency matrix.

Space Complexity O(V) for recursion stack or explicit stack. O(V) for queue.

Shortest Path Not guaranteed. Finds shortest path in unweighted graphs.

Cycle Detection Can detect cycles in both directed and undirected graphs. Can detect cycles in undirected graphs (with parent tracking);
less intuitive in directed graphs.

Connectivity Can be used to check connectivity or components. Can also check connectivity or components.

Tree / Forest Produces DFS tree / forest. Produces BFS tree / forest.

Vertex Visiting Order Deep before wide; follows a path to its end before backtracking. Wide before deep; visits all vertices at current distance before
moving deeper.

Applications • Topological sorting • Shortest path in unweighted graphs


• Cycle detection • Level-order traversal
• Pathfinding in mazes • Connectivity
• Solving puzzles (e.g., Sudoku) • Bipartite checking
• Connectivity & components • Social networking distances

Characteristics • Uses less memory on sparse graphs • Guaranteed shortest path in unweighted graph
• Can get trapped in deep paths if not careful • Uses more memory on wide graphs

CRUX / Quick Fact • Stack-based, deep-first, not guaranteed shortest path • Queue-based, level-first, guaranteed shortest path in
unweighted graphs
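The stack-vs-queue distinction above can be sketched on a small adjacency-list graph (a minimal Python sketch; the example graph is illustrative, and DFS is shown recursively):

```python
from collections import deque

# Undirected graph as an adjacency list.
graph = {0: [1, 2], 1: [0, 3], 2: [0, 4], 3: [1], 4: [2]}

def bfs(start):
    visited, order, q = {start}, [], deque([start])
    while q:                             # queue -> level-wise order
        v = q.popleft()
        order.append(v)
        for w in graph[v]:
            if w not in visited:
                visited.add(w)
                q.append(w)
    return order

def dfs(v, visited=None):
    if visited is None:
        visited = set()
    visited.add(v)
    order = [v]
    for w in graph[v]:                   # recursion -> depth-wise order
        if w not in visited:
            order += dfs(w, visited)
    return order

print(bfs(0))  # [0, 1, 2, 3, 4]  (level by level)
print(dfs(0))  # [0, 1, 3, 2, 4]  (deep along 0-1-3 before 2-4)
```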
GRAPH REPRESENTATION - METHODS

Attribute / Feature Adjacency Matrix Adjacency List Incidence Matrix Edge List

Definition / Structure - V×V matrix - Array/List of lists - V×E matrix - List of edges as pairs (u,v)
- [i][j] = 1 if edge exists - Each vertex stores its neighbors - [i][j] = 1 if vertex i incident - Weighted edges store
- Weighted graphs store weight instead of 1 - Weighted edges stored as (neighbor, to edge j (u,v,w)
weight) pairs - Directed: +1 (source), -1
(destination)

Space Complexity O(V²) O(V + E) O(V×E) O(E)

Edge Lookup O(1) O(degree(v)) O(E) O(E)

Traversal Efficiency Traversing neighbors: O(V) Traversing neighbors: O(degree(v)) Slow Slow

Add Edge O(1) O(1) O(V) O(1)

Delete Edge O(1) O(degree(v)) O(V) O(E)

Add Vertex O(V²) O(1) O(V×E) O(1)

Formulas / Size Size = V² Size = V + E Size = V×E Size = E

Key Facts / Crux - Dense graphs, fast edge check - Sparse graphs, traversal efficient - Rarely used in coding - Simple, iterate edges easily
- Wastes space for sparse graphs - Used in BFS/DFS/Dijkstra/Prim - Best for edge-vertex incidence - Ideal for Kruskal’s MST
- Supports weighted edges directly - Weighted edges stored as tuples problems - Edge lookup slow (O(E))
- Supports loops & multiple
edges
- Directed edges: +1/-1

Example (K₃):
- Adjacency Matrix:
0 1 1
1 0 1
1 1 0
- Adjacency List: 0 → 1,2; 1 → 0,2; 2 → 0,1
- Incidence Matrix (rows = vertices, columns = edges e₁=(0,1), e₂=(0,2), e₃=(1,2)):
1 1 0
1 0 1
0 1 1
- Edge List: (0,1), (0,2), (1,2)

CRUX (Exam Focus) - Dense graphs → adjacency matrix - Sparse graphs → adjacency list - Rarely used, edge-vertex - Simple edge iteration
- Edge check O(1) - Traversal efficient BFS/DFS/Dijkstra/Prim incidence problems - Kruskal’s MST
- Weighted edges supported - Weighted edges as pairs - Loops & multiple edges allowed - Edge lookup slow (O(E))
- Directed edges +1/-1
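Building the matrix and list representations from an edge list for K₃ can be sketched as follows (a minimal Python sketch for an undirected graph):

```python
# Build adjacency matrix and adjacency list for K3 from its edge list.
edges = [(0, 1), (0, 2), (1, 2)]
n = 3

matrix = [[0] * n for _ in range(n)]
adj = {v: [] for v in range(n)}
for u, v in edges:                       # undirected: record both directions
    matrix[u][v] = matrix[v][u] = 1
    adj[u].append(v)
    adj[v].append(u)

print(matrix)  # [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
print(adj)     # {0: [1, 2], 1: [0, 2], 2: [0, 1]}
```

The trade-off from the table is visible here: the matrix always uses V² cells, while the list stores only V + 2E entries.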
GRAPH TRAVERSAL & ALGORITHMS
Category | Algorithm / Concept | Definition / Steps (Bullet Points) | Complexity (Time & Space) | Example | CRUX

Graph Traversal Breadth-First - Level-wise traversal using a queue - Time: O(V + E) - Graph: 0–1–2–3–4 - Finds shortest path in
Search (BFS) - Start from a source vertex - Space: O(V) - BFS from 0 → Visit order: 0, unweighted graphs
- Visit all neighbors of current vertex before moving 1, 2, 3, 4 - Useful for connectivity &
deeper components
- Mark visited vertices to avoid repetition - Queue-based traversal

Depth-First - Deep traversal using recursion or stack - Time: O(V + E) - Graph: 0–1–2–3–4 - Detects cycles, components,
Search (DFS) - Start from a vertex - Space: O(V) - DFS from 0 → Visit order: 0, topological ordering
- Explore along a branch before backtracking 1, 2, 3, 4 (order may vary) - Stack or recursion-based
- Mark visited vertices - Basis for many graph
algorithms
Minimum Kruskal’s - Sort all edges by weight - Time: O(E log E) - Graph edges with weights: - Edge-based MST algorithm
Spanning Tree Algorithm - Pick edges in increasing order - Space: O(V + E) • (A–B, 2) - Works on weighted
(MST) - Add edge if it does not form a cycle (use • (B–C, 3) undirected graphs
union-find) • (A–C, 1) - Union-Find prevents cycles
- Repeat until MST formed - MST edges chosen: A–C (1),
A–B (2)

Prim’s Algorithm - Start with any vertex - Time: O(V²) (matrix) - Graph: vertices A, B, C - Vertex-based MST
- Select minimum weight edge connecting MST to Or - Edges with weights: (A–B,2), - Good for dense graphs
remaining vertices O(E log V) (B–C,3), (A–C,1) - Priority queue (min-heap)
- Repeat until all vertices included (min-heap) - MST edges: A–C (1), A–B speeds up
- Space: O(V) (2)
Shortest Path Dijkstra’s - Initialize distances from source to ∞, source = 0 - Time: O(V²) - Graph weighted edges: - No negative edges
Algorithm - Pick vertex with minimum distance or • A–B = 1 - Greedy approach
- Relax all adjacent edges O(E log V) with • A–C = 4 - Basis for routing, pathfinding
- Repeat until all vertices processed min-heap • B–C = 2
- Space: O(V) - Source: A
- Shortest paths: A→B = 1,
A→C = 3

Bellman-Ford - Initialize distances from source to ∞, source = 0 - Time: O(VE) - Graph edges: - Handles negative edges
Algorithm - Relax all edges V–1 times - Space: O(V) • A→B = 4 - Detects negative weight
- Detect negative cycles by one extra iteration • A→C = 5 cycles
• B→C = –2
- Source: A
- Shortest paths: A→B = 4,
A→C = 2

Floyd-Warshall - Dynamic programming for all pairs shortest path - Time: O(V³) - Vertices: A, B, C - Finds shortest path between
Algorithm - dist[i][j] = min(dist[i][j], dist[i][k] + dist[k][j]) for all k - Space: O(V²) - Weighted adjacency matrix all vertex pairs
- Shortest paths updated: - Works for negative edges (no
A→B=2, A→C=3, B→C=1 negative cycles)

Special Traversals / Concepts: Topological Sort
• Steps: linear ordering of vertices in a DAG; DFS-based: push a vertex onto a stack after visiting all its neighbors; popping the stack gives the ordering
• Complexity: Time O(V + E); Space O(V)
• Example: DAG edges 1→2, 1→4, 2→3, 4→5; topological order: 1, 2, 4, 3, 5
• Key facts: defined only for Directed Acyclic Graphs (DAGs); used in scheduling and precedence problems
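The DFS-and-stack procedure above can be sketched as follows (note a DAG may have several valid orderings, so the output need not match the table's order exactly):

```python
def topological_sort(graph):
    """DFS-based topo sort: push a vertex onto the stack after all neighbors are done."""
    visited, stack = set(), []

    def dfs(u):
        visited.add(u)
        for v in graph.get(u, []):
            if v not in visited:
                dfs(v)
        stack.append(u)                    # post-order push

    for u in graph:
        if u not in visited:
            dfs(u)
    return stack[::-1]                     # popping the stack gives the ordering

# DAG from the example: 1->2, 1->4, 2->3, 4->5
print(topological_sort({1: [2, 4], 2: [3], 4: [5]}))  # one valid topological order
```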

Eulerian Path & Circuit
• Path: visits every edge exactly once; Circuit: a closed Eulerian path
• Conditions:
  • Undirected Eulerian circuit: all vertices have even degree
  • Undirected Eulerian path: exactly 0 or 2 vertices of odd degree
  • Directed Eulerian circuit: in-degree = out-degree at every vertex
  • Directed Eulerian path: all vertices balanced except start/end
• Example: A–B–C–A has an Eulerian circuit (all vertices have even degree)
• Key facts: the degree conditions are key for exams; the path vs circuit distinction is important
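The undirected degree conditions can be checked with a short sketch (this assumes the graph is connected, which the full Eulerian theorem also requires):

```python
def eulerian_status(edges):
    """Undirected check: circuit if all degrees even; path if exactly 2 odd vertices."""
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    odd = sum(1 for d in degree.values() if d % 2 == 1)
    if odd == 0:
        return "circuit"
    if odd == 2:
        return "path"
    return "neither"

# Triangle A-B-C-A from the example: every vertex has degree 2 (even)
print(eulerian_status([('A', 'B'), ('B', 'C'), ('C', 'A')]))  # circuit
```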

Hamiltonian Path & Circuit
• Path: visits every vertex exactly once; Circuit: a closed Hamiltonian path
• NP-complete problem; no simple degree-based formula exists
• Example: A–B–C–D–A has a Hamiltonian circuit (visit all vertices once and return to the start)
• Key facts: used in TSP problems; often appears as theoretical questions
– Algorithms –
1. Foundations
Purpose – Understand what algorithms are, why they matter, and how they are evaluated.

1.1 Definition of Algorithm – A step-by-step procedure to solve a problem; has input, output, finiteness, definiteness, and effectiveness.
1.2 Characteristics – Deterministic, unambiguous, language-independent, generally applicable.
1.3 Algorithm vs Program – An algorithm is the logic/design; a program is its implementation.
1.4 Phases in Algorithm Development – Problem definition → Design → Analysis → Coding → Testing → Maintenance.
1.5 Representation Methods – Pseudocode, flowcharts, decision tables.

2. Algorithm Analysis
Purpose – Measure efficiency for performance comparison.

2.1 Complexity – Time complexity & space complexity.
2.2 Time Complexity Types – Best case, worst case, average case.
2.3 Asymptotic Notations – Big-O (O) → upper bound; Ω (Omega) → lower bound; Θ (Theta) → tight bound.
2.4 Order of Growth – Constant O(1), Logarithmic O(log n), Linear O(n), Linearithmic O(n log n), Quadratic O(n²), Cubic O(n³), Exponential O(2ⁿ), Factorial O(n!).
2.5 Space Complexity Components – Fixed part (constants, program size) + variable part (recursion stack, dynamic allocation).

Representation Methods in Detail

Pseudocode
• Definition: high-level, structured description of an algorithm using plain language and programming-like statements
• Syntax: text-based, structured like code
• Advantages: easy to read; language-independent; focuses on logic rather than syntax; easily converted to code
• Typical exam use: writing an algorithm; interpreting logic

Flowchart
• Definition: graphical representation of an algorithm showing the sequence of steps
• Symbols: oval → start/end; rectangle → process; diamond → decision; parallelogram → input/output; arrows → flow
• Advantages: visualizes step-by-step flow; easy to understand; highlights decisions & loops
• Typical exam use: drawing flowcharts; explaining logic visually

Decision Table
• Definition: tabular method representing the conditions and corresponding actions of an algorithm
• Syntax: table with conditions, actions, and rules mapping conditions → actions
• Advantages: handles complex decisions; ensures all cases are covered; reduces ambiguity
• Typical exam use: converting rules to a table; checking all possible scenarios
3. Algorithm Design Paradigms
Purpose – Master standard techniques to solve problems efficiently.

3.1 Divide and Conquer – Break → Solve → Combine. Examples: Merge Sort, Quick Sort, Binary Search, Strassen’s Matrix Multiplication.
3.2 Greedy Method – Make the locally optimal choice hoping for a global optimum. Examples: Kruskal’s MST, Prim’s MST, Dijkstra’s Shortest Path, Huffman Coding.
3.3 Dynamic Programming – Store subproblem results → avoid recomputation. Examples: Fibonacci (DP), Matrix Chain Multiplication, Floyd-Warshall, Knapsack.
3.4 Backtracking – Try → abandon if it fails → try the next option. Examples: N-Queens, Rat in a Maze, Hamiltonian Cycle.
3.5 Branch and Bound – Like backtracking but uses bounds to prune the search. Example: Travelling Salesman Problem.
3.6 Randomized Algorithms – Make random choices during execution. Examples: Randomized Quick Sort, Monte Carlo methods.
3.7 Brute Force – Try all possibilities. Examples: Linear Search, naive String Matching.

4. Core Categories of Algorithms (high-yield for DSSSB exams)
• Searching Algorithms – Linear Search, Binary Search. Binary Search → O(log n), requires a sorted array.
• Sorting Algorithms – Bubble, Insertion, Selection, Merge, Quick, Heap, Counting/Radix/Bucket. Know sorting stability, in-place vs not, comparison vs non-comparison-based.
• Graph Algorithms – BFS, DFS, Dijkstra, Bellman-Ford, Floyd-Warshall, Kruskal, Prim. BFS → shortest path in an unweighted graph.
• String Matching – Naive, KMP, Rabin-Karp. KMP uses the LPS table.
• Mathematical Algorithms – Euclid GCD, Sieve of Eratosthenes, modular exponentiation. Useful in number theory and cryptography.
• Recursion & Iteration – Factorial, Fibonacci. Know tail vs non-tail recursion.

5. Problem-Specific Algorithms
(Likely to be asked in exam applications)

●​ Shortest Path → Dijkstra, Bellman-Ford
●​ Minimum Spanning Tree (MST) → Kruskal, Prim
●​ Scheduling → Activity selection (Greedy)
●​ Knapsack Problem → 0/1 Knapsack (DP), Fractional Knapsack (Greedy)
●​ Matrix Operations → Strassen’s multiplication (Divide & Conquer)

6. Optimization & Advanced Topics
(For deeper understanding & tough questions)

●​ NP, P, NP-complete, NP-hard problems – basic theory & examples
●​ Approximation algorithms – for NP-hard problems
●​ Amortized analysis – e.g., dynamic arrays, splay trees
Sorting Algorithms

Bubble Sort (Brute Force)
• Working: compare adjacent elements; swap if out of order; repeat passes until sorted
• Complexity: B: O(n) (already sorted), A: O(n²), W: O(n²) (reverse sorted)
• In-place: ✅  Stable: ✅
• Key facts: adaptive if optimized; internal sorting; simple to implement

Insertion Sort (Brute Force)
• Working: pick an element from the unsorted part; insert it at the correct position in the sorted part; repeat
• Complexity: B: O(n) (already sorted), A: O(n²), W: O(n²) (reverse sorted)
• In-place: ✅  Stable: ✅
• Key facts: adaptive; efficient for small or almost-sorted arrays; online sorting possible; internal sorting

Selection Sort (Brute Force)
• Working: find the minimum element in the unsorted part; swap it with the first unsorted position; repeat
• Complexity: B/A/W: O(n²), independent of input distribution
• In-place: ✅  Stable: ❌
• Key facts: independent of input distribution; internal sorting; simple but inefficient

Merge Sort (Divide & Conquer)
• Working: divide the array into halves; sort each half recursively; merge the sorted halves
• Complexity: B/A/W: O(n log n) — it always divides
• In-place: ❌  Stable: ✅
• Key facts: excellent for linked lists; external sorting; independent of input distribution

Quick Sort (Divide & Conquer)
• Working: choose a pivot; partition the array around it; recursively sort the partitions
• Complexity: B: O(n log n) (balanced pivot), A: O(n log n), W: O(n²) (sorted/reverse-sorted input)
• In-place: ✅  Stable: ❌
• Key facts: tail recursion optimization; a randomized pivot reduces the worst case; internal sorting

Heap Sort (Heap / Divide & Conquer)
• Working: build a max-heap; swap the root with the last element; heapify the reduced heap; repeat
• Complexity: B/A/W: O(n log n) — it always builds a heap
• In-place: ✅  Stable: ❌
• Key facts: independent of input distribution; internal sorting; poor cache performance

Counting Sort (Counting / Distribution)
• Working: count occurrences of elements; compute prefix sums; place elements in sorted order
• Complexity: B/A/W: O(n+k)
• In-place: ❌  Stable: ✅
• Key facts: works for integers only; efficient if k << n; internal sorting

Radix Sort (Counting / Distribution)
• Working: sort digits from LSD to MSD using a stable sort
• Complexity: B/A/W: O(d·(n+k))
• In-place: ❌  Stable: ✅
• Key facts: works for integers & strings; often uses counting sort internally; internal sorting

Bucket Sort (Counting / Distribution)
• Working: divide elements into buckets; sort each bucket; concatenate
• Complexity: B: O(n+k) (uniform distribution), A: O(n+k), W: O(n²) (skewed data)
• In-place: ❌  Stable: can be
• Key facts: good for uniform distributions; internal sorting; poor performance with skewed data
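As a representative divide & conquer sort from the table, merge sort can be sketched as follows (the `<=` comparison is what makes it stable):

```python
def merge_sort(arr):
    """Divide & conquer: split, sort halves recursively, merge (stable)."""
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left, right = merge_sort(arr[:mid]), merge_sort(arr[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:            # <= keeps equal keys in order (stability)
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]   # append whichever half remains

print(merge_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```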
Searching Algorithms

Linear Search (Brute Force)
• Working: scan elements sequentially; compare each with the target; stop when found
• Complexity: B: O(1) (first element), A: O(n), W: O(n) (last / not found)
• Key facts: works on unsorted arrays; can terminate early if the array is sorted; in-place; stable

Binary Search (Divide & Conquer)
• Working: compare the target with the middle element; search the left or right half; repeat until found
• Complexity: B: O(1) (target = mid), A: O(log n), W: O(log n) (target at the ends)
• Key facts: requires a sorted array; works only on random-access structures; in-place

Hash Table / Direct Addressing (Hashing / Direct Access)
• Working: compute an index via a hash function; insert/search/delete the key at that index; handle collisions
• Complexity: B: O(1) (no collision), A: O(1), W: O(n) (all keys collide)
• Key facts: collisions handled via chaining or open addressing; widely used in symbol tables, caches, dictionaries; space-time tradeoff
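The binary search steps above, as a minimal iterative sketch on a sorted array:

```python
def binary_search(arr, target):
    """Iterative binary search on a sorted array; returns the index, or -1 if absent."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        if arr[mid] < target:
            lo = mid + 1                   # search the right half
        else:
            hi = mid - 1                   # search the left half
    return -1

print(binary_search([1, 3, 5, 7, 9], 7))   # 3
print(binary_search([1, 3, 5, 7, 9], 4))   # -1
```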
Greedy Algorithms

Dijkstra (Greedy)
• Working: pick the node with minimum distance; update its neighbors; repeat until all nodes are processed
• Complexity: B/A/W: O((V+E) log V) with heap operations
• Key facts: works for non-negative weights; fails on negative edges; a priority queue improves efficiency

Prim (Greedy)
• Working: start from a vertex; add the smallest edge to the MST; repeat until the MST is complete
• Complexity: B/A/W: O((V+E) log V)
• Key facts: builds an MST; dense graphs: O(V²) with an adjacency matrix; similar to Dijkstra

Kruskal (Greedy)
• Working: sort the edges; pick the smallest edge that doesn’t form a cycle; repeat until the MST is complete
• Complexity: B/A/W: O(E log E)
• Key facts: good for sparse graphs; cycle detection via union-find

Huffman Coding (Greedy / Compression)
• Working: build a frequency table; merge nodes using a min-heap; assign prefix-free codes
• Complexity: B/A/W: O(n log n)
• Key facts: the greedy strategy guarantees minimal total cost; used in file compression
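The Huffman min-heap merge can be sketched compactly by carrying partial codes in each heap entry (a simplification of the usual tree-building version; the frequency table is an assumed example):

```python
import heapq

def huffman_codes(freq):
    """Greedy: repeatedly merge the two least-frequent subtrees via a min-heap."""
    # heap entries: (frequency, tie-breaker, {symbol: code-so-far})
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)    # two smallest frequencies
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}   # prefix 0 on one branch
        merged.update({s: "1" + c for s, c in c2.items()})  # prefix 1 on the other
        heapq.heappush(heap, (f1 + f2, count, merged))
        count += 1
    return heap[0][2]

codes = huffman_codes({'a': 5, 'b': 2, 'c': 1})
print(codes)  # 'a' (most frequent) gets the shortest code
```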
Dynamic Programming

Bellman-Ford (DP)
• Working: initialize distances; relax all edges V−1 times; detect negative cycles
• Complexity: B/A/W: O(V·E)
• Key facts: works with negative weights; detects negative cycles

Floyd-Warshall (DP)
• Working: initialize the distance matrix; update using all vertices as intermediates
• Complexity: B/A/W: O(V³)
• Key facts: all-pairs shortest paths; works with negative weights but no negative cycles

Fibonacci (DP / Recurrence)
• Working: store previous results; build up to n iteratively
• Complexity: B/A/W: O(n)
• Key facts: avoids exponential recursion; iterative version uses O(1) space; matrix exponentiation gives O(log n)

Matrix Chain Multiplication (DP / Optimization)
• Working: try all parenthesizations; store the minimum multiplications; fill the DP table bottom-up
• Complexity: B/A/W: O(n³)
• Key facts: bottom-up table filling; classic DP optimization example
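The Fibonacci row above — storing previous results and building up to n — reduces to an O(1)-space loop:

```python
def fib(n):
    """Bottom-up Fibonacci: keep only the last two results (O(n) time, O(1) space)."""
    a, b = 0, 1                            # fib(0), fib(1)
    for _ in range(n):
        a, b = b, a + b                    # slide the window forward
    return a

print([fib(i) for i in range(8)])  # [0, 1, 1, 2, 3, 5, 8, 13]
```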
Graph Traversal

BFS (Graph Traversal)
• Working: use a queue; visit neighbors level by level
• Complexity: B/A/W: O(V+E)
• Key facts: finds shortest paths in unweighted graphs; can check bipartiteness

DFS (Graph Traversal)
• Working: use a stack or recursion; explore as deep as possible before backtracking
• Complexity: B/A/W: O(V+E)
• Key facts: useful for cycle detection, topological sort, connected components
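The level-by-level BFS can be sketched as follows, showing why it yields shortest distances in an unweighted graph (the graph literal is an assumed example):

```python
from collections import deque

def bfs_distances(graph, source):
    """BFS with a queue: the first visit to a vertex gives its shortest distance."""
    dist = {source: 0}
    q = deque([source])
    while q:
        u = q.popleft()
        for v in graph.get(u, []):
            if v not in dist:              # unvisited -> record level and enqueue
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

g = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
print(bfs_distances(g, 'A'))  # {'A': 0, 'B': 1, 'C': 1, 'D': 2}
```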
Recursion / Puzzle

Tower of Hanoi (Recursion / Puzzle)
• Working: move n−1 disks to the auxiliary peg; move the largest disk to the target; move the n−1 disks from auxiliary to target
• Complexity: B/A/W: O(2ⁿ)
• Key facts: minimum moves = 2ⁿ−1; classic recursion example
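The three recursive steps translate directly into code, and the recorded move count confirms the 2ⁿ−1 formula:

```python
def hanoi(n, source, aux, target, moves):
    """Tower of Hanoi: recursive solution; minimum moves = 2^n - 1."""
    if n == 0:
        return
    hanoi(n - 1, source, target, aux, moves)   # n-1 disks to the auxiliary peg
    moves.append((source, target))             # largest disk to the target
    hanoi(n - 1, aux, source, target, moves)   # n-1 disks from aux to target

moves = []
hanoi(3, 'A', 'B', 'C', moves)
print(len(moves))  # 7 = 2^3 - 1
```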

Number Theory

Euclidean GCD (Number Theory)
• Working: recursively compute gcd(b, a mod b)
• Complexity: B/A/W: O(log min(a, b))
• Key facts: in-place; oldest known algorithm; an iterative subtraction method is also possible
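The gcd(b, a mod b) recurrence, written iteratively:

```python
def gcd(a, b):
    """Euclidean algorithm: gcd(a, b) = gcd(b, a mod b); runs in O(log min(a, b))."""
    while b:
        a, b = b, a % b                    # replace (a, b) until the remainder is 0
    return a

print(gcd(48, 18))  # 6
```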
Master Summary (Paradigm → Algorithms)

Brute Force / Simple
• Linear Search (Searching): scan elements sequentially; compare each with the target; stop when found or at the end. B: O(1) (first element), A: O(n), W: O(n) (last / not found). In-place, stable. Works on unsorted data; no preprocessing needed; can terminate early; easy to implement.
• Naive Pattern Search (String Matching): slide the pattern over the text; compare characters one by one; shift by 1 on mismatch. B: O(n), A/W: O(n·m). In-place; simple brute force. Worst case occurs with repeated characters; basis for KMP & Rabin-Karp.
• Bubble Sort (Sorting): repeatedly compare adjacent elements; swap if in the wrong order; repeat until sorted. B: O(n) (optimized, already sorted), A: O(n²), W: O(n²) (reverse sorted). In-place, stable, adaptive if optimized. Easy to implement; not efficient for large arrays; internal sorting.
• Insertion Sort (Sorting): pick an element; compare with previous elements; shift larger elements; insert. B: O(n), A: O(n²), W: O(n²). In-place, stable, adaptive. Efficient for small arrays; online sorting possible; internal sorting.
• Selection Sort (Sorting): find the minimum element; swap with the first unsorted position; repeat. B/A/W: O(n²). In-place, not stable, not adaptive; independent of data distribution. Simple but inefficient for large datasets; internal sorting.

Divide & Conquer
• Binary Search (Searching): compare the target with the middle element; search the left/right half recursively or iteratively. B: O(1), A/W: O(log n). In-place. Requires a sorted array; iterative & recursive forms; works only on random-access structures.
• Merge Sort (Sorting): divide the array into halves; sort each recursively; merge the sorted halves. B/A/W: O(n log n). Not in-place (O(n) extra), stable, not adaptive; independent of input distribution. Excellent for linked lists; predictable performance; external sorting.
• Quick Sort (Sorting): choose a pivot; partition around it; recursively sort the partitions. B: O(n log n) (balanced pivot), A: O(n log n), W: O(n²) (sorted/reverse-sorted input). In-place, not stable, not adaptive. Tail recursion optimization; a randomized pivot reduces the worst case; internal sorting.
• Heap Sort (Sorting): build a max-heap; swap the root with the last element; heapify the remaining heap; repeat. B/A/W: O(n log n). In-place, not stable; independent of input distribution. Poor cache performance; based on a binary heap; internal sorting.

Counting / Distribution
• Counting Sort: count occurrences; compute prefix sums; place elements. B/A/W: O(n+k). Not in-place, stable. Works for integers only; efficient if k << n; internal sorting.
• Radix Sort: sort digits from LSD → MSD using a stable sort. B/A/W: O(d·(n+k)). Stable, not in-place. Works for integers & strings; often uses counting sort internally; internal sorting.
• Bucket Sort: divide elements into buckets; sort each bucket; concatenate. B/A: O(n+k), W: O(n²). Can be stable. Good for uniform distributions; poor performance with skewed data; internal sorting.

Greedy
• Dijkstra’s Algorithm (Graph): pick the node with minimum distance; update its neighbors; repeat until all nodes are visited. B/A/W: O((V+E) log V) with a heap. Works for non-negative weights; fails on negative edges; a priority queue improves efficiency.
• Prim’s Algorithm (Graph): start from a vertex; add the smallest edge to the MST; repeat until the MST is complete. B/A/W: O((V+E) log V) with a heap. Builds an MST; dense graphs: O(V²) with an adjacency matrix; similar to Dijkstra but for MSTs.
• Kruskal’s Algorithm (Graph): sort the edges; pick the smallest edge not forming a cycle; repeat. B/A/W: O(E log E). Uses a Disjoint Set (Union-Find); good for sparse graphs; cycle detection via union-find.
• Huffman Coding (Compression): build a frequency table; merge with a min-heap; generate prefix-free codes. B/A/W: O(n log n). Optimal prefix-free encoding; the greedy approach guarantees minimal total cost; used in file compression.

Dynamic Programming
• Bellman-Ford (Graph): initialize distances; relax all edges V−1 times; detect negative cycles. B/A/W: O(V·E). Works with negative weights; slower than Dijkstra; detects negative cycles.
• Floyd-Warshall (Graph): initialize the distance matrix; update using all vertices as intermediates. B/A/W: O(V³). Works with negative weights but no negative cycles; all-pairs shortest paths; triple nested loop.
• Fibonacci (Sequence / Recurrence): store previous results; build up to n. B/A/W: O(n). Iterative version uses O(1) space; avoids exponential recursion; matrix exponentiation gives O(log n).
• Matrix Chain Multiplication (Optimization): try all parenthesizations; store the minimum multiplications. B/A/W: O(n³). Bottom-up DP table filling; classic DP example.

Graph Traversal
• BFS: use a queue; visit level by level. B/A/W: O(V+E); space O(V). Finds shortest paths in unweighted graphs; can check bipartiteness.
• DFS: use a stack or recursion; explore depth-first. B/A/W: O(V+E); space O(V) for the recursion stack. Useful for cycle detection, topological sort, connected components.

Recursive / Mathematical
• Tower of Hanoi (Puzzle): move n−1 disks to the auxiliary peg; move the largest disk to the target; move the n−1 disks from auxiliary to target. B/A/W: O(2ⁿ). Minimum moves = 2ⁿ−1; classic recursion example.

Number Theory
• Euclidean GCD: recursively compute gcd(b, a mod b). B/A/W: O(log min(a, b)). In-place; oldest known algorithm; an iterative subtraction method is possible.

Hashing / Direct Access
• Hash Table / Direct Addressing (Searching / Direct Mapping): use a hash function to map key → index; insert/search/delete at the computed index; handle collisions via chaining or open addressing. B: O(1), A: O(1), W: O(n). Usually in-place, not stable; extra memory required. Widely used in symbol tables, caches, dictionaries; performance depends on hash function quality.

Exam Crux Summary


●​ Stable Sorting: Bubble, Insertion, Merge, Counting, Radix
●​ In-place Sorting: Bubble, Insertion, Selection, Quick, Heap, Linear Search, Binary Search
●​ Adaptive: Bubble (optimized), Insertion
●​ External Sorting: Merge Sort
●​ Independent of data distribution: Selection Sort, Heap Sort, Merge Sort
●​ Internal Sorting: Bubble, Insertion, Selection, Quick, Heap, Counting, Radix, Bucket​
