Summary of Lecture 1: Introduction to Operating Systems
1. What is an Operating System?
An Operating System (OS) is software that acts as an interface between the user and
computer hardware. It manages hardware resources and provides services for application
programs.
2. Basic Components of a Computer System
A computer system consists of four main components:
1. Hardware – CPU, Memory, I/O devices.
2. Operating System – Manages hardware and software.
3. Application Programs – Software used by users (e.g., Browsers, Compilers).
4. Users – Humans, machines, other computers.
3. Functions of an Operating System
● Process Management – Controls and schedules execution of processes.
● Memory Management – Allocates and deallocates memory.
● Storage Management – Handles file systems and disk storage.
● I/O System Management – Manages input/output operations.
● Security & Protection – Ensures authorized access.
● Resource Allocation – Efficiently assigns resources to processes.
4. Computer System Organization & Operations
● Booting Process – The system starts using a bootstrap program stored in ROM.
● Interrupts – Used for handling device communication (Hardware & Software Interrupts).
● Direct Memory Access (DMA) – Allows data transfer between memory and devices
without CPU intervention.
5. Types of Operating Systems
1. Batch OS – Processes tasks in batches with no user interaction.
2. Multitasking/Time-Sharing OS – Runs multiple programs concurrently by rapidly switching the CPU among them.
3. Real-Time OS (RTOS) – Processes data in real-time (e.g., Space Systems, Military
Software).
4. Distributed OS – Manages multiple connected computers.
5. Network OS – Supports network operations.
6. Mobile OS – Designed for smartphones and tablets (e.g., Android, iOS).
6. System Architecture
● Single Processor Systems – One CPU handles all tasks.
● Multiprocessor Systems – Multiple CPUs share work.
○ Symmetric Multiprocessing (SMP) – All CPUs are peers; each can perform any task.
○ Asymmetric Multiprocessing – A master CPU assigns work to the other CPUs, each dedicated to a specific task.
7. Key OS Concepts
● Multiprogramming – Increases CPU utilization by running multiple jobs.
● Time-sharing (Multitasking) – Rapid switching between processes for user interaction.
● Dual Mode Operation – OS operates in user mode (normal apps) and kernel mode
(critical operations).
● System Calls & Traps – Mechanisms for processes to request OS services.
8. Open-Source Operating Systems
● Linux (GNU/Linux)
● FreeBSD (BSD UNIX)
● Sun Solaris
✅ Operating System Lecture – Chapter 2: OS Structures
🔷 1. Operating System Services
📌 OS provides services to:
● Users (for ease of use)
● Programs (for execution support)
● The System itself (for resource efficiency)
🔹 Services for Users:
1. User Interface – CLI (Command Line) or GUI (Graphical)
2. Program Execution – Load and run programs
3. I/O Operations – Read/write with files or devices
4. File System Manipulation – Create, delete, read, write, search
5. Communication – Process-to-process (same or different systems)
6. Error Detection – Handle and recover from hardware/software errors
🔹 Services for System Efficiency:
1. Resource Allocation – CPU, memory, I/O devices
2. Accounting – Track resource usage
3. Protection & Security – Prevent unauthorized access
🔷 2. User Operating System Interface
🔹 Command Line Interface (CLI)
● Text-based, e.g., Linux shell, Windows Command Prompt
● Commands may be:
○ Built-in (internal)
○ Separate programs (external)
🔹 Graphical User Interface (GUI)
● Visual elements: icons, windows, menus
● Mouse & keyboard driven
● E.g., Windows, macOS, KDE on Linux
🔹 Touchscreen Interface
● Gestures instead of mouse
● Includes virtual keyboards, voice commands
🔷 3. System Calls
📌 What Are They?
● The interface between a user program and the OS
● Used to request services from the kernel (e.g., read a file)
🔹 Accessed through APIs:
1. Win32 API (Windows)
2. POSIX API (UNIX, Linux, macOS)
3. Java API (JVM)
🔹 System Call Examples:
● read(), write(), fork(), exit(), open(), close()
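🔸 Example (illustrative sketch): a tiny POSIX-style C program that uses only system calls to copy a file's contents to standard output; the filename notes.txt is just a placeholder.

#include <fcntl.h>    /* open() */
#include <unistd.h>   /* read(), write(), close() */

int main(void) {
    char buf[512];
    ssize_t n;
    int fd = open("notes.txt", O_RDONLY);        /* open() system call; placeholder filename */
    if (fd < 0)
        return 1;                                /* open failed */
    while ((n = read(fd, buf, sizeof buf)) > 0)  /* read() fills the buffer */
        write(1, buf, (size_t)n);                /* write() to stdout (fd 1) */
    close(fd);                                   /* close() releases the descriptor */
    return 0;
}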
🔹 Parameter Passing Methods:
1. Registers
2. Memory block/table
3. Stack
🔷 4. System Programs
These are utility programs that help manage the system and develop applications.
🔹 Types:
● File manipulation (copy, delete)
● Status info (disk usage, time)
● File modification (editors)
● Programming tools (compilers, debuggers)
● Program execution tools (loaders, linkers)
● Communication (FTP, SSH)
📌 Most users interact with system programs, not raw system calls.
🔷 5. Operating System Structure
🔹 Structure Types:
● Simple – MS-DOS: minimal separation, tightly coupled
● Layered – Each layer builds on the one below (better modularity)
● Microkernel – Minimal code in the kernel; most services run in user space (uses message passing)
● Modules – Object-oriented, flexible, loadable kernel components
🔹 Examples:
● UNIX: Kernel + System Programs
● Linux: Modular
● Solaris: Modular
● Microkernel: More secure and portable, but slightly slower
🔷 6. Virtual Machines
📌 What Is It?
● Hardware abstraction to run multiple OSes on the same physical system
🔹 Examples:
● VMware – Emulates x86 hardware
● JVM – Runs Java bytecode
● .NET CLR
To the operating system running inside it, each virtual machine appears to be an independent physical system.
🔷 7. System Booting Process
1. Bootstrap Program (Loader) is loaded at startup.
2. It loads the kernel.
3. Kernel starts up the OS environment (GUI or CLI).
🔷 8. OS Debugging & Generation
● Debugging Tools: Logs, core dumps, DTrace, top
● OS Generation: Configure and compile kernel
● Performance Tuning: Monitor usage and optimize
🎯 QUICK FLASHCARD REVIEW
Q1: What is a system call?
A: A method for user programs to request services from the OS.
Q2: What are the 3 common APIs?
A: Win32, POSIX, Java API
Q3: Give 2 examples of system programs.
A: Compiler, File editor
Q4: What’s a microkernel?
A: An OS architecture that moves most services to user space to enhance security and
portability.
Q5: Name 2 OS services for users.
A: File manipulation, Program execution
Q6: What’s the purpose of the bootstrap program?
A: To load the kernel and start the OS
✅ Lecture 3 – Processes (Detailed Breakdown)
🔷 1. What is a Process?
● A process is a program in execution.
● It’s more than just code — it also includes:
○ Program code (text section)
○ Current activity (program counter, registers)
○ Stack (function calls, local variables)
○ Data section (global variables)
○ Heap (dynamic memory)
📌 A program is passive; a process is active.
🔷 2. Process States
A process goes through several states:
1. New – being created
2. Running – instructions are being executed
3. Waiting – waiting for an event (e.g., I/O)
4. Ready – waiting to be assigned to the CPU
5. Terminated – finished execution
🔷 3. Process Control Block (PCB)
A data structure the OS uses to manage each process. It contains:
● Process state
● Program counter
● CPU registers
● CPU scheduling info
● Memory-management info
● I/O status
● Accounting info
📌 Stored in kernel memory and linked into queues.
🔷 4. CPU Scheduling & Queues
🔹 Queues:
● Job queue – All processes in the system
● Ready queue – In memory and ready for CPU
● Device queues – Waiting for I/O
🔹 Schedulers:
● Long-Term Scheduler – Chooses which processes enter ready queue (controls
multiprogramming)
● Short-Term Scheduler – Chooses which ready process gets CPU (very frequent)
● Medium-Term Scheduler – Temporarily removes/resumes processes (swapping)
🔹 CPU Burst:
● Time the process spends in the CPU between I/O
📌 I/O-bound process → Short CPU bursts
📌 CPU-bound process → Long CPU bursts
🔷 5. Context Switch
When the CPU switches from one process to another:
● Save the old process’s state in its PCB
● Load the new process’s state from its PCB
📌 Context switching is overhead (no useful work during the switch).
🔷 6. Operations on Processes
🔹 Process Creation:
● Parent creates child processes
● Children may share:
○ All, some, or none of the parent’s resources
● Execution may be:
○ Concurrent with parent
○ After parent waits
🔹 UNIX Example:
● fork() → creates new process
● exec() → replaces current process with a new program
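🔸 Example (illustrative sketch): the typical fork()-then-exec() pattern in C; running /bin/ls is just an arbitrary choice of program.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();                         /* create a child process */
    if (pid < 0) {
        perror("fork");
        exit(1);
    } else if (pid == 0) {
        execlp("/bin/ls", "ls", (char *)NULL);  /* child: replace its image with a new program */
        exit(1);                                /* reached only if exec fails */
    } else {
        wait(NULL);                             /* parent waits for the child to terminate */
        printf("Child complete\n");
    }
    return 0;
}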
🔷 7. Process Termination
● exit() – Process finishes and is terminated
● Parent may terminate child using abort() for reasons like:
○ Exceeded resources
○ Task no longer needed
○ Parent exiting
🔹 Special Terms:
● Zombie: Child ends, parent hasn’t waited
● Orphan: Parent ends before child
🔷 8. Multiprocess Architectures
Example: Google Chrome
● Browser runs as multiple processes
○ UI, Renderer, Plugins — each in separate process
○ Provides isolation & better security
🔷 9. Interprocess Communication (IPC)
Needed for cooperating processes.
🔹 Reasons for Cooperation:
● Information sharing
● Speedup computation
● Modularity
● Convenience
🔷 10. IPC Models
🔹 1. Shared Memory
● Processes share a memory region
● Fast, but requires synchronization (e.g., semaphores)
🔹 2. Message Passing
● Use send() and receive() to exchange messages
● No shared memory required
● Supports communication over network
🔷 11. Message Passing – Advanced Concepts
Communication link characteristics:
● Direct vs Indirect
● Synchronous vs Asynchronous
● Automatic vs Explicit buffering
● Can be unidirectional or bidirectional
🔷 12. Producer-Consumer Problem
Classic example of IPC (especially with shared memory)
🔹 Producer:
● Generates data and places into buffer
🔹 Consumer:
● Takes data from buffer and uses it
🔹 Buffer Types:
● Unbounded – No size limit
● Bounded – Fixed size (most common)
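🔸 Example (illustrative sketch): the classic bounded (circular) buffer in C, shown with busy waiting only; a real shared-memory version would add synchronization such as semaphores.

#define BUFFER_SIZE 10

int buffer[BUFFER_SIZE];
int in = 0;    /* next free slot   */
int out = 0;   /* next filled slot */

/* Producer: spins while the buffer is full, then stores one item */
void produce(int item) {
    while (((in + 1) % BUFFER_SIZE) == out)
        ;  /* buffer full -- do nothing */
    buffer[in] = item;
    in = (in + 1) % BUFFER_SIZE;
}

/* Consumer: spins while the buffer is empty, then removes one item */
int consume(void) {
    while (in == out)
        ;  /* buffer empty -- do nothing */
    int item = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    return item;
}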
🎯 QUICK FLASHCARD REVIEW
Q1: What is a process?
A: A program in execution with code, stack, data, and heap.
Q2: What are the 5 process states?
A: New, Ready, Running, Waiting, Terminated
Q3: What does a PCB store?
A: Info like state, PC, registers, memory, scheduling info
Q4: What are the three types of schedulers?
A: Long-term, Short-term, Medium-term
Q5: What is a context switch?
A: Switching CPU from one process to another
Q6: What’s the difference between fork() and exec()?
A: fork() creates a new process; exec() replaces a process image
Q7: What is IPC and why is it needed?
A: Interprocess communication, used for data sharing between cooperating processes
Q8: Name two IPC models.
A: Shared Memory, Message Passing
✅ Lecture 4 – Threads (Detailed Breakdown)
🔷 1. What is a Thread?
A thread is the smallest unit of execution within a process.
Each thread has:
● Its own program counter
● A set of registers
● Its own stack
All threads within a process share:
● Code section
● Data section
● Open files
📌 Threads are also called lightweight processes.
🔷 2. Motivation for Using Threads
● Modern applications are multithreaded (e.g., browsers, word processors).
● Allows performing multiple tasks concurrently in a single process (e.g., fetch data,
update UI, handle input).
● More efficient than creating new processes.
🔷 3. Threads vs. Processes
● Process: heavyweight – Thread: lightweight
● Process: has its own memory and resources – Thread: shares memory and resources with sibling threads
● Process: slower context switching – Thread: faster switching
● Process: blocking affects the whole process – Thread: only the blocked thread is affected
🔷 4. Benefits of Multithreading
1. Responsiveness – Other threads run if one blocks
2. Resource Sharing – Share memory, files easily
3. Economy – Less overhead than multiple processes
4. Scalability – Utilizes multiple CPUs efficiently
🔷 5. Multithreading Models
Multithreading models map user-level threads to kernel threads.
🔹 1. Many-to-One
● Many user threads → One kernel thread
● Fast, but can't use multiple cores
● Used in older systems like Solaris Green Threads
🔹 2. One-to-One
● Each user thread → One kernel thread
● More concurrency
● More overhead (system calls)
● Used in Windows, Linux
🔹 3. Many-to-Many
● Many user threads ↔ Many kernel threads
● Flexible, better resource use
● Examples: Solaris ≤8, Windows NT with fibers
🔹 Two-Level Model
● Variation of many-to-many
● Some user threads can bind directly to kernel threads
🔷 6. User vs. Kernel Threads
● User-level threads: managed by a thread library in user space – Kernel-level threads: managed by the OS kernel
● User-level: faster to create and switch – Kernel-level: slower (requires system calls)
● User-level: kernel is unaware of the threads – Kernel-level: kernel is aware and handles their scheduling
● User-level: one blocking thread blocks all – Kernel-level: other threads can run if one blocks
🔷 7. Threading Libraries
Common APIs used for multithreading:
● Pthreads (POSIX)
● Win32 Threads (Windows)
● Java Threads
These APIs let you create, manage, and synchronize threads.
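🔸 Example (illustrative sketch): creating and joining a thread with Pthreads; the worker function and message are placeholders.

#include <pthread.h>
#include <stdio.h>

/* Function executed by the new thread */
void *worker(void *arg) {
    int id = *(int *)arg;
    printf("Hello from thread %d\n", id);
    return NULL;
}

int main(void) {
    pthread_t tid;
    int id = 1;
    pthread_create(&tid, NULL, worker, &id);  /* create the thread */
    pthread_join(tid, NULL);                  /* wait for it to finish */
    return 0;
}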
🔷 8. Threading Issues & Challenges
🔹 fork() and exec() Semantics
● If a multithreaded program calls fork(), does the child get all threads or just one?
● Use fork() carefully — often followed by exec().
🔹 Thread Cancellation
● Asynchronous: the target thread is terminated immediately
● Deferred: the target thread periodically checks (at cancellation points) whether it should terminate
🔹 Signal Handling
● UNIX uses signals for events like Ctrl+C
● Signal delivery options:
○ To one thread
○ To all threads
○ To a specific thread
🔹 Thread Pools
● Pre-create threads and reuse them
● Limits number of threads, improves performance
🔹 Thread-Specific Data
● Threads may need private data (e.g., logging, session info)
🔹 Scheduler Activations
● Efficient communication between kernel and user-level thread manager
🔷 9. Multicore Programming Challenges
● Dividing Activities – Find what can run in parallel
● Balance – Threads should have similar workloads
● Data Splitting – Separate data for parallel threads
● Data Dependency – Avoid conflicts (synchronization needed)
🔷 10. Execution Types
● Single-core system: threads run concurrently (pseudo-parallel), time-sliced by the scheduler
● Multicore system: threads run in true parallel, assigned to different cores
🎯 QUICK FLASHCARD REVIEW
Q1: What is a thread?
A: A lightweight process, the smallest unit of execution within a process.
Q2: Name 2 advantages of multithreading.
A: Responsiveness, resource sharing
Q3: What’s the difference between user and kernel threads?
A: User threads are managed in user space; kernel threads are managed by the OS.
Q4: What is the Many-to-One model?
A: Many user threads mapped to one kernel thread (limited concurrency)
Q5: What is a thread pool?
A: A collection of pre-created threads that wait for tasks
Q6: What are the two types of thread cancellation?
A: Asynchronous and deferred
Q7: Name a threading library used in Linux.
A: POSIX Pthreads
Q8: What does the exec() call do in multithreading?
A: Replaces the current process memory with a new program
✅ Lecture 5 – Process Synchronization
🔷 1. Why Is Synchronization Needed?
When multiple processes or threads share data (like a variable or file), concurrent access
can lead to problems like data inconsistency or race conditions.
🔷 2. Race Condition
A race condition happens when:
● Two or more processes access shared data at the same time
● The final result depends on the execution order
📌 Example: If both producer and consumer try to change the same count variable
without synchronization, the outcome becomes unpredictable.
🔷 3. Critical Section Problem
The critical section is the part of code where a process accesses shared data.
🔸 Goal: Ensure only one process at a time executes its critical section.
🔷 4. Three Requirements for a Good Solution
1. Mutual Exclusion – Only one process in the critical section at a time
2. Progress – If no process is in its critical section, someone waiting must be allowed in
3. Bounded Waiting – Limit the number of times other processes can enter their sections
before one waiting process gets its turn
🔷 5. Software Solutions
✅ 1. Turn Variable
● Two processes take turns using a shared variable turn
● Simple, but forces strict alternation, so a process can be kept out even when the other does not want to enter; it also does not extend cleanly to more than two processes
✅ 2. Flag Variable
● Each process uses a flag to say “I want to enter”
● Risk of deadlock if both flags are true and no one backs off
🔷 6. Peterson’s Solution (for 2 Processes)
Combines turn and flag[] to solve the critical section problem.
do {
    flag[i] = true;                  /* I want to enter */
    turn = j;                        /* but give the other process priority */
    while (flag[j] && turn == j);    /* busy-wait while the other wants in and has priority */
    /* critical section */
    flag[i] = false;                 /* I am done */
    /* remainder section */
} while (true);
✅ Satisfies all 3 conditions (mutual exclusion, progress, bounded waiting)
🔷 7. Synchronization Hardware Solutions
Sometimes the OS uses hardware support for synchronization:
🔸 Options:
● Disable interrupts – Simple, but not for multiprocessors
● Atomic Instructions:
○ TestAndSet()
○ Swap()
These prevent race conditions by performing actions in one uninterruptible step.
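🔸 Example (conceptual sketch): a spinlock built on TestAndSet(). The function body below only models what the hardware instruction does; real hardware executes it as one atomic step.

int lock = 0;   /* 0 = free, 1 = held */

/* Models the atomic instruction: return the old value and set it to 1 in one step */
int TestAndSet(int *target) {
    int old = *target;
    *target = 1;
    return old;
}

void acquire(void) {
    while (TestAndSet(&lock))
        ;   /* spin until the lock was observed free */
}

void release(void) {
    lock = 0;
}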
🔷 8. Locks
Simple mechanism using a lock variable:
do {
    acquire(lock);      /* entry section: obtain the lock */
    /* critical section */
    release(lock);      /* exit section: give up the lock */
    /* remainder section */
} while (true);
❌ If implemented incorrectly, it can cause race conditions.
🔷 9. Semaphores
Semaphores are variables used for process synchronization. They use two atomic operations:
● wait(S) – Decrements S if S > 0, otherwise blocks
● signal(S) – Increments S and unblocks waiting processes if needed
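🔸 Example (illustrative sketch): a POSIX binary semaphore protecting a shared counter updated by two threads; error checking omitted for brevity.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t mutex;        /* binary semaphore guarding the shared counter */
int counter = 0;

void *incrementer(void *arg) {
    for (int i = 0; i < 100000; i++) {
        sem_wait(&mutex);   /* wait(S): enter the critical section */
        counter++;
        sem_post(&mutex);   /* signal(S): leave the critical section */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    sem_init(&mutex, 0, 1);   /* initial value 1 -> acts like a lock */
    pthread_create(&t1, NULL, incrementer, NULL);
    pthread_create(&t2, NULL, incrementer, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d\n", counter);   /* 200000 when properly synchronized */
    sem_destroy(&mutex);
    return 0;
}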
🔷 10. Types of Semaphores
● Counting Semaphore – Integer that can take any non-negative value; used to manage access to a pool of identical resources (e.g., printers)
● Binary Semaphore – Value is only 0 or 1; similar to a lock/mutex (mutual exclusion)
🔷 11. Advantages of Semaphores
✅ Ensures mutual exclusion
✅ More efficient than busy waiting
✅ Machine-independent (can work on any OS)
🔷 12. Disadvantages of Semaphores
❌ Complex to implement correctly
❌ Can cause deadlocks if misused
❌ Can lead to priority inversion (low-priority thread blocks high-priority one)
📝 How to Explain It in Your Own Words:
“Process synchronization is used to manage access to shared resources in a
multiprogramming system. Without it, race conditions and data inconsistency can
occur. The critical section problem arises when multiple processes try to access
shared data at the same time. It can be solved using software solutions like
Peterson’s algorithm or hardware-based techniques like atomic instructions.
Semaphores are commonly used to synchronize processes without busy waiting.”
✅ Lecture 6 – CPU Scheduling
🔷 1. What is CPU Scheduling?
CPU Scheduling is the process of deciding which process in the ready queue
gets to use the CPU next.
This is needed in multiprogrammed systems, where multiple processes compete for CPU
time.
🔷 2. CPU–I/O Burst Cycle
● A process alternates between using the CPU and waiting for I/O.
● This creates a pattern: CPU burst → I/O burst → CPU burst...
● Efficient CPU scheduling tries to keep the CPU busy as much as possible during this
cycle.
🔷 3. CPU Scheduling Types
✅ Non-Preemptive:
● Once a process starts, it keeps the CPU until it finishes or switches to waiting.
● Triggered at: process termination or switch to waiting state.
✅ Preemptive:
● A running process can be interrupted if a higher priority process arrives.
● Triggered at: new arrival, I/O completion, etc.
🔷 4. Dispatcher
The dispatcher gives control of the CPU to the selected process. It performs:
● Context switching
● Switching to user mode
● Jumping to the correct instruction
🕓 Dispatch latency = Time to switch between processes.
🔷 5. Scheduling Criteria
● CPU Utilization – Maximize (keep the CPU busy)
● Throughput – Maximize (more processes completed)
● Turnaround Time – Minimize (submission to completion)
● Waiting Time – Minimize (time spent in the ready queue)
● Response Time – Minimize (time to first response)
✅ CPU Scheduling Algorithms
🔷 6. First-Come, First-Served (FCFS)
● Processes run in order of arrival.
● Non-preemptive.
● Can lead to the convoy effect: short jobs wait behind long ones.
🧮 Average waiting time varies with arrival order.
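🔸 Worked sketch with made-up numbers: if bursts of 24, 3, and 3 ms arrive in that order at time 0, FCFS waiting times are 0, 24, and 27 ms (average 17 ms); if they arrive in the order 3, 3, 24, the average drops to 3 ms. A small C check:

#include <stdio.h>

int main(void) {
    int burst[] = {24, 3, 3};   /* hypothetical burst times in arrival order (ms) */
    int n = 3, wait = 0, total = 0;
    for (int i = 0; i < n; i++) {
        total += wait;          /* each process waits for all earlier bursts */
        wait  += burst[i];
    }
    printf("Average waiting time = %.2f ms\n", (double)total / n);   /* 17.00 */
    return 0;
}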
🔷 7. Shortest Job First (SJF)
● Picks process with the shortest next CPU burst.
● Non-preemptive.
● Gives optimal average waiting time.
❗ Difficult to implement — requires knowing future burst time.
🔷 8. Shortest Remaining Job First (SRJF)
● Preemptive version of SJF.
● If a new process arrives with a shorter remaining time, it preempts the current one.
Provides the lowest average waiting time, but future burst lengths are hard to predict.
🔷 9. Priority Scheduling
● Each process has a priority number.
● The CPU is given to the highest priority process.
Types:
● Preemptive – can interrupt a lower-priority process.
● Non-preemptive – waits for current process to finish.
🛑 Problem: Starvation
✅ Solution: Aging (gradually increase waiting process's priority)
🔷 10. Round Robin (RR)
● Time-sharing system.
● Each process gets a time slice (quantum), e.g., 10ms.
● After quantum, process is preempted and added to the back of the queue.
⏱️ If time quantum is too small → too many context switches
⏱️ If too large → behaves like FCFS
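🔸 Example (illustrative sketch): simulating Round Robin with a 10 ms quantum over three made-up burst times; all processes are assumed to arrive at time 0.

#include <stdio.h>

#define QUANTUM 10

int main(void) {
    int remaining[] = {23, 7, 15};   /* hypothetical remaining burst times (ms) */
    int n = 3, done = 0, time = 0;
    while (done < n) {
        for (int i = 0; i < n; i++) {
            if (remaining[i] == 0) continue;        /* already finished */
            int slice = remaining[i] < QUANTUM ? remaining[i] : QUANTUM;
            time += slice;                          /* run for one quantum or less */
            remaining[i] -= slice;
            if (remaining[i] == 0) {
                done++;
                printf("P%d finishes at t = %d ms\n", i + 1, time);
            }
        }
    }
    return 0;
}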
🔷 11. Multilevel Queue Scheduling
● Multiple ready queues (foreground, background, etc.)
● Each queue has its own algorithm (e.g., RR for foreground, FCFS for background)
● Fixed priority or time-slice decides between queues
❗ Risk of starvation for lower-priority queues
🔷 12. Multilevel Feedback Queue
● Processes move between queues based on behavior.
● New jobs start in higher-priority (faster) queues.
● If they don’t finish quickly, they’re moved to lower-priority queues.
✅ Solves starvation via aging and flexible promotions/demotions.
📝 How to Write in Your Own Words
“CPU scheduling determines which process in the ready queue runs next. It’s
needed when multiple processes want the CPU. Different algorithms like FCFS,
SJF, Priority Scheduling, and Round Robin have their own strengths and
weaknesses. The goal is to keep the CPU busy and minimize waiting, turnaround,
and response times.”
✅ Lecture 7 – Deadlocks (Explained Clearly)
🔷 1. What is a Deadlock?
A deadlock happens when a group of processes are stuck because each one is waiting for a
resource that another is holding.
🔸 Example:
● Two processes (P1 and P2)
● Two resources (R1 and R2)
● P1 holds R1 and needs R2
● P2 holds R2 and needs R1
● Both are stuck → deadlock!
🔷 2. The Bridge Analogy
Cars going from both sides on a one-lane bridge.
If they meet in the middle and no one backs up, they’re stuck — that’s a deadlock!
🔷 3. Conditions for Deadlock (All 4 Must Happen)
1. Mutual Exclusion – Only one process can use a resource at a time
2. Hold and Wait – Process holds one resource, waiting for more
3. No Preemption – Resources can’t be forcibly taken
4. Circular Wait – A cycle of waiting processes exists
🔷 4. Resource Allocation Graph (RAG)
● Processes = circles
● Resources = squares
● Edges show requests or assignments
🔸 Facts:
● No cycle = no deadlock
● Cycle exists =
○ Deadlock (if one instance per resource)
○ Possible deadlock (if multiple instances)
🔷 5. Ways to Handle Deadlocks
1. Prevention – Make sure one of the 4 conditions can’t happen
2. Avoidance – Predict and avoid unsafe situations
3. Detection and Recovery – Let it happen, then fix it
4. Ignore – Most OSs do nothing (e.g., UNIX)
✅ Deadlock Prevention Techniques
🔹 1. Mutual Exclusion
● Not required for sharable resources (e.g., read-only files)
🔹 2. Hold and Wait
● Force processes to request all resources at once
● Can cause low resource use and starvation
🔹 3. No Preemption
● If a process requests a resource and can’t get it, release all held resources and try
again
🔹 4. Circular Wait
● Assign a number to each resource
● Require processes to request in increasing order only
✅ Deadlock Avoidance (Banker’s Algorithm)
🔷 Basic Idea:
● Every process must declare its maximum resource need
● The system only allocates resources if it stays in a safe state
🔷 Safe State:
● There is a sequence in which all processes can finish
● Even in worst-case demand, no deadlock will occur
🔷 Example Snapshot:
Process – Allocation – Max – Need (= Max − Allocation)
● P0 – 0 1 0 – 7 5 3 – 7 4 3
● P1 – 2 0 0 – 3 2 2 – 1 2 2
● ... – ... – ... – ...
🔷 Banker’s Algorithm Steps:
1. Start with Work = Available
2. Find a process P such that Finish[P] = false and Need[P] ≤ Work
3. If found:
○ Mark Finish[P] = true
○ Add its Allocation to Work
4. Repeat until all processes are marked Finish = true
5. If so → safe state
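🔸 Example (illustrative sketch): the safety check at the heart of the Banker's algorithm in C; the matrix sizes and contents are placeholders that the caller would supply.

#include <stdbool.h>

#define P 5   /* number of processes      */
#define R 3   /* number of resource types */

/* Returns true if a safe sequence exists for the given snapshot */
bool is_safe(int available[R], int alloc[P][R], int need[P][R]) {
    int work[R];
    bool finish[P] = {false};
    for (int j = 0; j < R; j++)
        work[j] = available[j];                  /* Step 1: Work = Available */

    for (int count = 0; count < P; ) {
        bool found = false;
        for (int i = 0; i < P; i++) {
            if (finish[i]) continue;
            bool fits = true;
            for (int j = 0; j < R; j++)
                if (need[i][j] > work[j]) { fits = false; break; }   /* Need[i] <= Work ? */
            if (fits) {                          /* pretend process i runs to completion */
                for (int j = 0; j < R; j++)
                    work[j] += alloc[i][j];      /* it releases its allocation */
                finish[i] = true;
                found = true;
                count++;
            }
        }
        if (!found) return false;                /* no candidate found -> unsafe state */
    }
    return true;                                 /* all processes could finish -> safe */
}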
🔷 If No Safe Sequence?
● Then system is in an unsafe state → deadlock may occur
✅ Deadlock Detection
If deadlocks are allowed to happen, OS uses:
● Detection algorithm to find them
● Recovery strategies to fix them
🔹 Detection Using Resource Allocation Graph:
● If cycle appears → deadlock
🔹 Recovery Methods:
● Terminate a process
● Roll back to safe state
● Preempt resources from a process
📝 How to Write It in Your Own Words:
“Deadlock occurs when processes are blocked waiting for resources held by each
other. It happens only if four conditions are true at the same time: mutual exclusion,
hold and wait, no preemption, and circular wait. We can prevent, avoid, or detect
deadlocks. Prevention involves breaking one of the four conditions. Avoidance uses
algorithms like the Banker’s algorithm to ensure a safe state is maintained. If
deadlocks are allowed, we detect them using graphs and recover by killing or rolling
back processes.”
✅ Lecture 8 – Memory Management
🔷 1. What is Memory Management?
Memory management is the function of the operating system that handles how main
memory is used. It decides:
● What memory is in use
● Which process gets memory
● When memory is allocated or freed
● When to move processes between memory and disk (swapping)
📌 Main memory is essential because the CPU can’t directly access programs on
disk.
🔷 2. Goals of Memory Management
● Efficient space usage (avoid fragmentation)
● Protection: Prevent processes from interfering with each other
● Relocation: Allow programs to move in memory
● Execution of large programs using virtual memory
🔷 3. Types of Addresses
● Logical Address – Generated by the CPU (used by programs)
● Physical Address – Actual location in RAM
Logical ≠ Physical if execution-time binding is used.
🔷 4. Address Binding (3 Stages)
1. Compile-time: Hard-coded physical address
2. Load-time: Relative address resolved when program loads
3. Execution-time: Final address determined during runtime (needs MMU)
🔷 5. Memory Management Unit (MMU)
● Hardware that translates logical → physical addresses
● Uses relocation register (base) to add to logical address
● Ensures memory protection and isolation
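🔸 Example (illustrative sketch): the limit check and relocation-register translation the MMU performs; the register values are made up.

#include <stdio.h>

int main(void) {
    unsigned relocation = 14000;   /* base register: where the process sits in RAM */
    unsigned limit      = 3000;    /* limit register: size of its logical space    */
    unsigned logical    = 346;     /* address generated by the CPU                 */

    if (logical >= limit) {
        printf("Trap: addressing error\n");           /* protection violation */
    } else {
        unsigned physical = relocation + logical;     /* MMU adds the base    */
        printf("Logical %u -> physical %u\n", logical, physical);   /* 14346  */
    }
    return 0;
}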
🔷 6. Dynamic Loading & Linking
● Dynamic Loading: Only loads parts of program when needed (saves memory)
● Dynamic Linking: Loads external modules (like DLLs) at runtime
🔷 7. Swapping
● Temporarily moves inactive processes to disk (backing store)
● Frees up memory for active processes
● Roll-out, roll-in for priority scheduling
✅ Memory Allocation Techniques
🔷 8. Contiguous Memory Allocation
Each process is allocated one single contiguous block of memory.
● Fixed Partitioning – Memory divided into fixed, equal-sized partitions
● Variable Partitioning – Partitions created as needed (more flexible)
🔸 Partitioning Strategies:
● First-fit: Use first hole big enough
● Best-fit: Use smallest hole that fits
● Worst-fit: Use largest hole (leaves big leftovers)
🔷 9. Fragmentation
● External – Enough total free memory exists, but it is scattered in non-contiguous holes
● Internal – The allocated block is bigger than needed, leaving unused space inside it
💡 Compaction can reduce external fragmentation (if relocation is dynamic)
🔷 10. Paging (Non-Contiguous Allocation)
Breaks memory into fixed-sized blocks.
● Logical memory is divided into pages
● Physical memory is divided into same-sized frames
🔸 Page Table:
● Keeps mapping from page → frame
● Address = (page number, offset)
🔸 Example:
If the page size is 4 bytes and the logical address is 13:
● Binary = 1101
● Page number = high two bits 11 (page 3); offset = low two bits 01
● If page 3 maps to frame 2 (binary 10), the physical address = 10 01 = 1001 (decimal 9)
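🔸 The same split can be computed with shifts and masks. Illustrative C sketch reusing the 4-byte-page example above; the page table contents are made up except that page 3 maps to frame 2.

#include <stdio.h>

#define PAGE_SIZE   4    /* tiny page size, matching the example above */
#define OFFSET_BITS 2    /* log2(PAGE_SIZE)                            */

int page_table[] = {5, 6, 1, 2};   /* hypothetical page -> frame map (page 3 -> frame 2) */

int main(void) {
    unsigned logical  = 13;                        /* binary 1101           */
    unsigned page     = logical >> OFFSET_BITS;    /* high bits -> page 3   */
    unsigned offset   = logical & (PAGE_SIZE - 1); /* low bits  -> offset 1 */
    unsigned physical = page_table[page] * PAGE_SIZE + offset;
    printf("Logical %u -> page %u, offset %u -> physical %u\n",
           logical, page, offset, physical);       /* 13 -> 3, 1 -> 9       */
    return 0;
}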
🔷 11. TLB (Translation Lookaside Buffer)
● Fast hardware cache that stores recent page translations
● Speeds up paging by avoiding repeated memory access
🔷 12. Segmentation
Memory is divided into logical segments, not fixed sizes.
🔸 Examples:
● Code segment
● Data segment
● Stack
● Global variables
🔸 Address = (Segment Number, Offset)
Each segment has:
● Base → starting physical address
● Limit → size of segment
Provides logical grouping and protection for different parts of a program.
📝 How to Explain in Your Own Words
“Memory management allows the OS to control how RAM is used by different
programs. It ensures that each program runs safely, gets enough memory, and uses
memory efficiently. Techniques like paging and segmentation help run larger or
more flexible programs. Paging breaks memory into equal parts, while
segmentation divides it based on logical sections like functions or data.”
✅ Lecture 10 – File-System Interface
🔷 1. What is a File System?
A file system organizes and manages data on storage devices like hard drives, SSDs, or
USBs.
To the user, it provides a simple way to create, read, write, and manage files.
🔷 2. File Concept
A file is a logical storage unit that:
● Holds data or programs
● Appears as a sequence of bytes or records
File contents can be:
● Text
● Binary
● Executable programs
🔷 3. File Structure
● None – Just a stream of bytes
● Simple Records – Fixed- or variable-length lines/records
● Complex Structures – Formatted documents, load files (rarely used today)
🔷 4. File Attributes
Every file has metadata like:
● Name
● Type
● Size
● Location on disk
● Access permissions
● Timestamps (created, modified)
🔷 5. File Operations
Basic file operations supported by OS:
● Create
● Open / Close
● Read / Write
● Reposition (seek)
● Delete
● Truncate (clear contents)
🔷 6. Open Files & Tables
OS uses two tables to manage open files:
● Per-process table – One per process; tracks that process's open files
● System-wide table – One for the whole system; tracks all open files
These track:
● File position
● Access rights
● Location on disk
● Number of users
🔷 7. File Locking
Locks are used to prevent data corruption when multiple processes access the same file.
● Mandatory – Enforced by the OS
● Advisory – Programs choose whether to check the locks
✅ Access Methods
🔷 8. Access Methods
● Sequential – Read/write in order (e.g., text files)
● Direct (Random) – Jump to any block directly (e.g., databases)
✅ Directory Structure
🔷 9. Directory Structures
Directories help organize and manage files.
● Single-level – One big folder; causes naming conflicts
● Two-level – One directory per user
● Tree-structured – Allows subfolders (what real systems use)
🔷 10. Operations on Directories
● Create/Delete/Rename files
● List contents
● Traverse folders
● Search for a file by name
🔷 11. Mounting a File System
Mounting means connecting a file system (e.g., USB, remote drive) to the current directory tree.
● Makes files on external devices accessible
● Needs a mount point (e.g., /mnt/usb)
✅ File Sharing & Protection
🔷 12. File Sharing
● Files can be shared among multiple users
● OS uses UIDs (User IDs) and GIDs (Group IDs) to define access permissions
● Owner – Full rights (read/write/execute)
● Group – Shared access
● Others – Limited/controlled access
🔷 13. Remote File Sharing
● Manual – FTP, SCP
● Automatic – NFS (UNIX), CIFS (Windows)
● Web-based – HTTP, cloud storage
🔁 In remote systems, failures can occur (e.g., network down), so consistency and state
tracking are important.
🔷 14. Consistency Semantics
Defines what happens when multiple users access a file:
● UNIX semantics – Changes are visible to other users immediately
● AFS (session semantics) – Changes become visible only after the file is closed
🔷 15. Protection & Access Control
Protection ensures files aren’t misused.
Access Rights:
● Read (R)
● Write (W)
● Execute (X)
Permission Format (UNIX style):
● Example: chmod 761 game →
○ Owner: rwx
○ Group: rw-
○ Others: --x
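🔸 Example (illustrative sketch): decoding an octal mode such as 761 into rwx form in C.

#include <stdio.h>

/* Fill out[] with the rwx string for one octal digit (0-7) */
void decode(int digit, char out[4]) {
    out[0] = (digit & 4) ? 'r' : '-';
    out[1] = (digit & 2) ? 'w' : '-';
    out[2] = (digit & 1) ? 'x' : '-';
    out[3] = '\0';
}

int main(void) {
    int mode = 0761;   /* chmod 761: owner=7, group=6, others=1 */
    char owner[4], group[4], others[4];
    decode((mode >> 6) & 7, owner);
    decode((mode >> 3) & 7, group);
    decode(mode & 7, others);
    printf("%s %s %s\n", owner, group, others);   /* prints: rwx rw- --x */
    return 0;
}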
🔷 16. Access Control Lists (ACLs)
ACLs list who can do what with a file — more flexible than simple rwx bits.
📝 In Your Own Words:
“The file system interface is what lets users and applications interact with storage
devices. It includes operations like creating, reading, and deleting files. Files are
organized using directories, which can be simple or complex (like trees). The OS
also ensures secure sharing of files, either locally or across networks. Access
methods (sequential, direct) and protection mechanisms like permissions and file
locks ensure safe and efficient file use.”