
a) Operating System — Definition & Main Functions

Definition (≈ 1 mark)
An Operating System (OS) is system software that acts as an intermediary between
computer hardware and the user/programs, managing all hardware resources and providing a
convenient, efficient environment in which programs can be executed.

Main Functions (explain any six well for full marks, ≈ 5 marks)

1. Process Management
   • Creates, schedules, suspends and terminates processes/threads.
   • Enforces CPU-scheduling algorithms (e.g., Round-Robin, Priority) to maximise CPU utilisation & responsiveness.
   • Provides inter-process communication (pipes, shared memory, message queues) and deadlock handling.

2. Memory Management
   • Keeps track of main-memory allocation (who is using which block).
   • Implements techniques like paging, segmentation and virtual memory—allowing programs to execute even when they are larger than physical RAM.
   • Handles swapping and prevents illegal memory access, thereby ensuring protection and efficient utilisation.

3. File-System Management
   • Creates, organises, reads/writes and deletes files/directories.
   • Maintains metadata (permissions, timestamps, size, location in disk blocks/inodes).
   • Enforces access rights and supports different allocation strategies (contiguous, linked, indexed).

4. Device & I/O Management
   • Provides a uniform device-driver interface so that user-level programs work independently of specific hardware details.
   • Schedules I/O operations (buffering, caching, spooling) to overlap CPU and I/O activities, thus boosting throughput.

5. Security & Protection
   • Authenticates users (login, biometrics) and authorises actions using access-control lists or capability tables.
   • Isolates processes (user mode vs kernel mode) and enforces encryption, auditing and firewall rules.

6. Resource Allocation & Accounting
   • Decides who gets what resource, when (CPU cycles, memory pages, disk bandwidth).
   • Keeps usage statistics for billing or performance tuning in multi-user or cloud systems.

7. User Interface (CLI/GUI)
   • Supplies shells, desktops and system utilities that let humans interact comfortably with the underlying machine.

b) System Calls — Meaning, Types & Examples

Definition (≈ 1 mark)
A system call is a controlled entry-point through which a user-level process requests a
service from the kernel, switching from user mode to kernel mode to execute privileged
operations (e.g., I/O, memory management).

Classification of System Calls (explain any five categories with examples, ≈ 5 marks)
1. Process Control (create, terminate, and manage processes/threads): fork() duplicates the current process; execve() overlays a new program image; exit() terminates; waitpid() synchronises with a child; pthread_create() creates threads.

2. File Management (manipulate files/directories): open(), creat() (create/obtain a file descriptor); read(), write() (I/O); lseek() (reposition); close(); unlink() (delete).

3. Device (I/O) Management (access and control peripheral devices): ioctl() configures device parameters; read()/write() on special files; mmap() maps device memory.

4. Information Maintenance (obtain or alter miscellaneous kernel data): getpid(), gettimeofday(), uname(); settimeofday(); alarm() to schedule signals.

5. Communication / Inter-Process (exchange data between processes, local or remote): pipe(), shmget() (shared memory), msgsnd()/msgrcv() (message queues); sockets: socket(), bind(), send(), recv().

6. Protection & Security (control access permissions): chmod(), chown(), setuid(); umask() sets the default rights mask; capset() adjusts Linux capabilities.

How a System Call Works (mention briefly if space allows)

1. User process fills registers with syscall number + arguments.


2. Executes a trap/interrupt instruction (e.g., int 0x80 or syscall).
3. CPU switches to kernel mode, jumps to syscall handler.
4. Kernel validates parameters, performs requested service.
5. Returns result/error code; CPU switches back to user mode.
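
To make the mode switch concrete, here is a minimal C sketch (assuming a POSIX/Linux environment) that chains three of the process-control calls listed above: fork() to create a child, execvp() (a libc wrapper over the execve() system call) to overlay it with a new program, and waitpid() to synchronise with its termination.

/* Minimal sketch (POSIX/Linux assumed): fork + exec + wait in action. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>    /* waitpid */
#include <unistd.h>      /* fork, execvp, getpid */

int main(void)
{
    pid_t pid = fork();              /* process-control syscall: duplicate this process */

    if (pid < 0) {
        perror("fork");              /* fork failed */
        return 1;
    }
    if (pid == 0) {                  /* child branch */
        char *argv[] = { "ls", "-l", NULL };
        execvp("ls", argv);          /* overlay the child with a new program image */
        perror("execvp");            /* reached only if exec fails */
        _exit(127);
    }

    int status;
    waitpid(pid, &status, 0);        /* parent blocks until the child terminates */
    printf("parent %d: child %d exited with status %d\n",
           (int)getpid(), (int)pid, WEXITSTATUS(status));
    return 0;
}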

Here are the full 6-mark answers for both questions from the subject Operating System:

a) Explain Process States with a State Transition Diagram

Introduction (1 mark)
A process is a program in execution. During its lifetime, a process goes through various
states that define what it is currently doing (e.g., waiting, running, etc.). The Operating
System uses a Process Control Block (PCB) to track the current state of each process.

Five-State Model of a Process:


1. New: The process is being created and has not yet been admitted to the ready queue.

2. Ready: The process is waiting to be assigned to a CPU. It is ready to run but is not running.

3. Running: The process is currently being executed by the CPU. Only one process can be in this state at a time per CPU core.

4. Waiting / Blocked: The process is waiting for an I/O event or some resource to become available.

5. Terminated / Exit: The process has finished execution and is being removed from the system.

Process State Transition Diagram (2 marks)


+--------+ admit +--------+
| New | ------------> | Ready |
+--------+ +--------+
|
dispatch |
v
+--------+
|Running |
+--------+
/ | \
/ | \
I/O request / interrupt \ exit
/ | \
v v v
+--------+ +--------+ +--------+
|Waiting | | Ready | | Termi- |
+--------+ +--------+ | nated |
+--------+

Transitions (2 marks)

New → Ready: Process admitted by the OS scheduler.
Ready → Running: CPU scheduler selects this process for execution.
Running → Waiting: Process requests I/O or waits for a resource.
Running → Ready: Time slice expires or the process is preempted.
Waiting → Ready: I/O or event is completed.
Running → Terminated: Process finishes or is forcefully ended.

Exam Tip: Label the diagram neatly and explain transitions with keywords like admit,
dispatch, exit, I/O wait, interrupt to fetch full marks.

b) Compare FCFS, SJF, RR, and Priority Scheduling with Gantt Chart and Calculation

Let’s assume the following set of processes for comparison:

Process Arrival Time Burst Time Priority (Lower = Higher)

P1 0 ms 5 ms 2

P2 1 ms 3 ms 1

P3 2 ms 8 ms 3

P4 3 ms 6 ms 4

1. FCFS (First-Come, First-Served)

Non-preemptive. Based on arrival order.

Gantt Chart:

| P1 | P2 | P3 | P4 |
0 5 8 16 22

Calculations:

Process Waiting Time Turnaround Time

P1 0 5

P2 4 7

P3 6 14

P4 13 19
Average Waiting Time = (0+4+6+13)/4 = 5.75 ms
Average Turnaround Time = (5+7+14+19)/4 = 11.25 ms

2. SJF (Shortest Job First)

Non-preemptive. Based on shortest burst time.

Order: P1 → P2 → P4 → P3
Gantt Chart:

| P1 | P2 | P4 | P3 |
0 5 8 14 22

Calculations:

Process Waiting Time Turnaround Time

P1 0 5

P2 4 7

P4 5 11

P3 12 20

Avg Waiting = (0+4+5+12)/4 = 5.25 ms
Avg Turnaround = (5+7+11+20)/4 = 10.75 ms

3. Round Robin (RR)

Preemptive. Time Quantum = 4 ms

Execution Order (partial):


P1(0–4) → P2(4–7) → P3(7–11) → P4(11–15) → P1(15–16) → P3(16–20) → P4(20–22)

Gantt Chart:

|P1|P2|P3|P4|P1|P3|P4|
0 4 7 11 15 16 20 22

Waiting Time:

Process Waiting Time

P1 11 ms

P2 3 ms

P3 10 ms

P4 13 ms

Avg Waiting Time = (11+3+10+13)/4 = 9.25 ms

4. Priority Scheduling

Non-preemptive; a lower number means higher priority. (Note: this worked example keeps the CPU idle from t = 0 to t = 1 and dispatches P2 first because it has the highest priority; a scheduler that never idles would run P1 at t = 0 and, for this process set, produce the same schedule as FCFS.)

Order: P2 → P1 → P3 → P4
Gantt Chart:

| P2 | P1 | P3 | P4 |
1 4 9 17 23

Calculations:

Process Waiting Time Turnaround Time

P2 0 3

P1 4 9

P3 7 15

P4 14 20

Avg Waiting = (0+4+7+14)/4 = 6.25 ms
Avg Turnaround = (3+9+15+20)/4 = 11.75 ms

Final Comparison Table:

Criteria      FCFS    SJF     RR      Priority
Avg WT (ms)   5.75    5.25    9.25    6.25
Avg TAT (ms)  11.25   10.75   —       11.75
Preemptive    No      No      Yes     Optional
Starvation    No      Yes     No      Yes
Fairness      Low     Medium  High    Low

Note: Turnaround Time (TAT) = Waiting Time + Burst Time


Waiting Time = Start Time – Arrival Time (or calculated cumulatively based on Gantt chart)
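
The arithmetic above can be cross-checked mechanically. The following C sketch (a rough illustration, with the arrival and burst times of this example hard-coded and processes assumed to be listed in arrival order) computes per-process waiting and turnaround times for non-preemptive FCFS.

/* Sketch: verify FCFS waiting/turnaround times for the example process set. */
#include <stdio.h>

int main(void)
{
    int arrival[] = {0, 1, 2, 3};        /* P1..P4 arrival times (ms) */
    int burst[]   = {5, 3, 8, 6};        /* P1..P4 burst times (ms)   */
    int n = 4, time = 0;
    double total_wt = 0, total_tat = 0;

    for (int i = 0; i < n; i++) {        /* processes already sorted by arrival */
        if (time < arrival[i])
            time = arrival[i];           /* CPU idles until the process arrives */
        int start = time;
        time += burst[i];                /* completion time of process i */
        int tat = time - arrival[i];     /* turnaround = completion - arrival */
        int wt  = start - arrival[i];    /* waiting = start - arrival */
        printf("P%d: WT=%d TAT=%d\n", i + 1, wt, tat);
        total_wt += wt;
        total_tat += tat;
    }
    printf("Average WT = %.2f ms, Average TAT = %.2f ms\n",
           total_wt / n, total_tat / n);   /* prints 5.75 and 11.25 */
    return 0;
}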

a) What is a Deadlock? Explain Necessary Conditions with an Example

Definition of Deadlock (1 mark)

A deadlock is a situation in an operating system where a set of processes become


permanently blocked because each process is waiting for a resource held by another, and
none of them can proceed.

This results in circular waiting, and no process can make progress, even though all of them
are technically "alive."

Necessary Conditions for Deadlock (Coffman Conditions – 4 marks)

All four conditions must hold simultaneously for a deadlock to occur. These are:

1. Mutual Exclusion: At least one resource must be held in a non-shareable mode (only one process can use it at a time).

2. Hold and Wait: A process is holding at least one resource and is waiting to acquire additional resources that are currently being held by other processes.

3. No Preemption: Resources cannot be forcibly taken away; they must be released voluntarily by the process.

4. Circular Wait: A circular chain of two or more processes exists, where each process is waiting for a resource held by the next process in the chain.

Example (1 mark)

Let’s consider two processes P1 and P2 and two resources R1 and R2.

 P1 holds R1 and requests R2


 P2 holds R2 and requests R1

This leads to:

 Mutual exclusion: R1 and R2 are non-shareable


 Hold and wait: P1 and P2 each hold one resource and wait for the other
 No preemption: Resources are not taken away
 Circular wait: P1 → R2 → P2 → R1 → P1 (cycle formed)

Result: Deadlock occurs.

Exam Tip: Always define deadlock, name all four conditions, and illustrate with a simple 2-
process, 2-resource example to secure full marks.

b) Explain Peterson’s Solution to the Critical Section Problem

Critical Section Problem Recap (1 mark)

When multiple processes access shared resources, a Critical Section (CS) is the code
segment where the resource is accessed. To avoid race conditions, only one process must
enter the CS at a time.

Peterson's solution is a software-based algorithm for two processes that ensures:

 Mutual Exclusion
 Progress
 Bounded Waiting
Peterson’s Solution Algorithm (3 marks)

Let the two processes be P0 and P1.

// Shared Variables
boolean flag[2]; // flag[i] = true means Pi wants to enter
int turn; // indicates whose turn it is

// Code for process Pi (i = 0 or 1)


flag[i] = true;
turn = j; // j = 1 - i
while (flag[j] == true && turn == j)
; // busy wait

// Critical Section

flag[i] = false; // Exit section

Explanation (2 marks)

 Mutual Exclusion: Only one process can enter the CS, because the other process waits while flag[j] == true && turn == j.

 Progress: If no process is in the CS and one wants to enter, it is not prevented from doing so by the other.

 Bounded Waiting: No process waits indefinitely; turn alternates, ensuring fairness.

Working Example

Suppose both P0 and P1 want to enter their critical sections:

1. P0 sets flag[0] = true and turn = 1.


2. P1 sets flag[1] = true and turn = 0.

Now both are trying to enter. Because P1 wrote turn last, turn = 0:

 For P0: flag[1] = true but turn = 0, so its wait condition (flag[1] && turn == 1) is false → P0 enters the critical section.

 For P1: flag[0] = true and turn = 0, so its wait condition (flag[0] && turn == 0) is true → P1 busy-waits.

When P0 leaves the CS it sets flag[0] = false, which makes P1's wait condition false, so P1 can then proceed.


Note: Peterson’s solution is mainly of theoretical importance as it assumes atomicity of
simple operations like reading/writing shared variables.
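
For readers who want to try the algorithm, below is a minimal runnable sketch (an assumption-laden illustration, not part of the original answer) that expresses Peterson's entry and exit sections with C11 atomics and two POSIX threads; sequentially consistent atomic operations stand in for the atomic reads/writes the textbook version takes for granted.

/* Sketch: Peterson's algorithm for two threads, using C11 atomics so that the
 * plain reads/writes assumed by the textbook version are well defined.
 * (On real hardware, ordinary non-atomic variables are NOT sufficient.) */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

atomic_bool flag[2];              /* flag[i] = true means Pi wants to enter */
atomic_int  turn;                 /* whose turn it is */
long counter = 0;                 /* shared data protected by the critical section */

void *worker(void *arg)
{
    int i = (int)(long)arg, j = 1 - i;
    for (int k = 0; k < 100000; k++) {
        atomic_store(&flag[i], true);
        atomic_store(&turn, j);
        while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
            ;                          /* busy wait (entry section) */
        counter++;                     /* critical section */
        atomic_store(&flag[i], false); /* exit section */
    }
    return NULL;
}

int main(void)
{
    pthread_t t0, t1;
    pthread_create(&t0, NULL, worker, (void *)0L);
    pthread_create(&t1, NULL, worker, (void *)1L);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("counter = %ld (expected 200000)\n", counter);
    return 0;
}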

Here are the full 6-mark answers for both questions:

a) Differentiate between Paging and Segmentation (Tabular Format)

Paging Segmentation

Memory is divided into fixed-size blocks called Memory is divided into variable-size blocks called
pages segments

Pages are of equal size Segments are of unequal (logical) size

It is mainly for physical memory management It reflects logical division (code, stack, data)

Segment table maps segments to memory


Page table maps logical pages to physical frames
locations

No external fragmentation, but may have internal No internal fragmentation, but may have external
fragmentation fragmentation

Programmer has no control over pages Programmer can define segments

Tip: In exams, keep the points short, specific, and clearly contrasting. Stick to six key points
for full marks.

b) Explain FIFO and LRU Page Replacement Algorithms with Examples

1. FIFO (First-In, First-Out) Page Replacement (3 marks)

Concept:

 The oldest loaded page is removed first when a new page needs to be loaded and
memory is full.
 Easy to implement using a queue.
Example:
Given Page Reference String: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
Assume 3 frames

Step-by-step:

Step Page Frame Contents Page Fault?

1 1 1 Yes

2 2 1, 2 Yes

3 3 1, 2, 3 Yes

4 4 2, 3, 4 Yes

5 1 3, 4, 1 Yes

6 2 4, 1, 2 Yes

7 5 1, 2, 5 Yes

8 1 1, 2, 5 No

9 2 1, 2, 5 No

10 3 2, 5, 3 Yes

11 4 5, 3, 4 Yes

12 5 5, 3, 4 No

Total Page Faults: 9
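
The count is easy to verify by simulation. A short C sketch (frame count and reference string hard-coded to this example) that evicts pages in strict FIFO order:

/* Sketch: count FIFO page faults for the reference string used above. */
#include <stdio.h>

int main(void)
{
    int ref[] = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};
    int n = 12, nframes = 3;
    int frames[3] = {-1, -1, -1};   /* -1 means the frame is empty */
    int next = 0, faults = 0;       /* next = slot holding the oldest page (FIFO pointer) */

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int f = 0; f < nframes; f++)
            if (frames[f] == ref[i]) { hit = 1; break; }
        if (!hit) {
            frames[next] = ref[i];          /* evict the oldest page */
            next = (next + 1) % nframes;
            faults++;
        }
    }
    printf("FIFO page faults = %d\n", faults);   /* prints 9 for this string */
    return 0;
}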

2. LRU (Least Recently Used) Page Replacement (3 marks)

Concept:

 Replaces the page that has not been used for the longest time in the past.
 Uses time-stamp or stack for tracking usage.

Same Reference String: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5


Frames = 3
Step-by-step:

Step Page Frame Contents Page Fault?

1 1 1 Yes

2 2 1, 2 Yes

3 3 1, 2, 3 Yes

4 4 2, 3, 4 Yes

5 1 3, 4, 1 Yes

6 2 4, 1, 2 Yes

7 5 1, 2, 5 Yes

8 1 1, 2, 5 No

9 2 1, 2, 5 No

10 3 1, 2, 3 Yes

11 4 2, 3, 4 Yes

12 5 3, 4, 5 Yes

Total Page Faults: 10 (one more than FIFO on this particular string; the relative performance of FIFO and LRU depends on the reference string)

Key Differences Between FIFO and LRU (Optional to Add)


FIFO LRU

Removes the oldest page Removes the least recently used page

Easier to implement More accurate, but harder to implement

May suffer from Belady’s anomaly Less prone to Belady’s anomaly

Here are the full 6-mark answers for both questions from the Operating System subject:
a) Explain File Allocation Methods: Contiguous, Linked, and Indexed

1. Contiguous Allocation (2 marks)

Concept:

 In contiguous allocation, each file occupies a set of consecutive blocks on the disk.

Example:
If a file requires 5 blocks and starts at block 10, it will occupy blocks 10 to 14.

Diagram:

[10][11][12][13][14] ← All blocks are allocated to the file

Advantages:

 Simple to implement.
 Supports fast sequential and direct access.

Disadvantages:

 Causes external fragmentation.


 File size must be known in advance.
 Hard to extend a file if adjacent blocks are occupied.

2. Linked Allocation (2 marks)

Concept:

 Each file is a linked list of disk blocks. Blocks can be scattered anywhere on disk.

Example:
A file is stored in blocks 9 → 16 → 1 → 10

Diagram:

[9] → [16] → [1] → [10] → NULL

Advantages:

 No external fragmentation.
 Files can easily grow dynamically.

Disadvantages:
 Slow random access (no direct access).
 Extra overhead for pointers in each block.
 Risk of broken links or corruption.

3. Indexed Allocation (2 marks)

Concept:

 An index block contains all the pointers to the actual blocks of the file.

Example:
If file has data blocks at 5, 9, 13, 20, the index block holds those addresses.

Diagram:

[Index Block]
↓ ↓ ↓ ↓
[5] [9] [13] [20]

Advantages:

 Supports both direct and random access.


 Eliminates external fragmentation.

Disadvantages:

 Needs extra space for index blocks.


 For very large files, multi-level indexing may be needed.

Exam Tip: Explain each method clearly, with diagrams and at least one
advantage/disadvantage for 6/6 marks.

b) Explain FCFS, SSTF, and SCAN Disk Scheduling Algorithms with Diagram

Assume:

 Initial disk head position: 53


 Request Queue: 98, 183, 37, 122, 14, 124, 65, 67
1. FCFS (First-Come, First-Served) — 2 marks

Explanation:

 Services requests in the order they arrive, regardless of position.

Order:
53 → 98 → 183 → 37 → 122 → 14 → 124 → 65 → 67

Head Movement Calculation:

|53-98| + |98-183| + |183-37| + |37-122| + |122-14| + |14-124| + |124-65| + |65-67| = 640 tracks

Diagram:

53 → 98 → 183 → 37 → 122 → 14 → 124 → 65 → 67

Disadvantage: Long total head movement; does not minimize seek time.

2. SSTF (Shortest Seek Time First) — 2 marks

Explanation:

 Chooses request closest to current head position.

Order from 53:

 53 → 65 → 67 → 37 → 14 → 98 → 122 → 124 → 183

Diagram:

53 → 65 → 67 → 37 → 14 → 98 → 122 → 124 → 183

Total Head Movement:

|53-65| + |65-67| + |67-37| + |37-14| + |14-98| + |98-122| + |122-124| + |124-183| = 236 tracks

Advantage: Less head movement than FCFS.

Disadvantage: May cause starvation for far-away requests.

3. SCAN (Elevator Algorithm) — 2 marks

Explanation:
 Head moves in one direction (e.g., toward 0), servicing all requests, then reverses.

Assume direction: left (toward 0)


From 53 → 37 → 14 → (reach 0) then reverse → 65 → 67 → 98 → 122 → 124 → 183

Order:
53 → 37 → 14 → 0 → 65 → 67 → 98 → 122 → 124 → 183

Diagram:

← 53 ← 37 ← 14 ← 0 ↑ 65 → 67 → 98 → 122 → 124 → 183

Head Movement:

|53-37| + |37-14| + |14-0| + |0-65| + |65-67| + |67-98| + |98-122| + |122-124| + |124-183| = 236 tracks

(If the head reverses at the last request (14) instead of travelling to cylinder 0, i.e. the LOOK variant, the total is 208 tracks.)

Advantage: More fair and uniform service.

Disadvantage: Initial movement toward end (0) even if few requests are there.

Comparison Table (optional addition):

Algorithm   Approach                                Starvation   Performance
FCFS        Serve in arrival order                  No           Poor
SSTF        Serve the closest request next          Yes          Better
SCAN        Sweep in one direction, then reverse    No           Best overall
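
The totals quoted for FCFS and SSTF can be re-derived mechanically. A small C sketch (head position and request orders taken from this example) that sums absolute head movement over a given service order:

/* Sketch: total head movement for a given disk-service order. */
#include <stdio.h>
#include <stdlib.h>

int total_movement(int head, const int *order, int n)
{
    int total = 0;
    for (int i = 0; i < n; i++) {
        total += abs(order[i] - head);   /* seek distance for this request */
        head = order[i];
    }
    return total;
}

int main(void)
{
    int fcfs[] = {98, 183, 37, 122, 14, 124, 65, 67};
    int sstf[] = {65, 67, 37, 14, 98, 122, 124, 183};
    printf("FCFS: %d tracks\n", total_movement(53, fcfs, 8));   /* 640 */
    printf("SSTF: %d tracks\n", total_movement(53, sstf, 8));   /* 236 */
    return 0;
}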

Most Repeated & Important Questions (Unit-wise)


Unit 1 :

Below are full-length, 6-mark-quality answers for the requested Operating-System questions.

1. What is an Operating System? Explain its Functions

Definition
An Operating System (OS) is system software that manages computer hardware, provides a
set of services to application programs, and acts as a user-friendly interface between users
and the machine.

Major Functions (state and elaborate any six for a 6-mark answer)

1. Process Management: creation, termination, scheduling (FCFS, RR, etc.), context switching, IPC, deadlock handling.

2. Memory Management: allocation & de-allocation, paging/segmentation, virtual memory, swapping, protection.

3. File-System Management: directory & file creation, naming, metadata, access rights, allocation (contiguous, linked, indexed).

4. Device & I/O Management: device drivers, buffering/caching, spooling, I/O scheduling to overlap CPU & I/O.

5. Security & Protection: user authentication, access control lists, isolation of address spaces, auditing.

6. Resource Allocation & Accounting: fair distribution of CPU cycles, memory pages, I/O bandwidth; usage statistics for billing/tuning.

7. User Interface Provision: command-line shells, GUIs, system utilities for configuration and monitoring.

Tip: Start with a crisp two-line definition, then discuss any six distinct functions with 2-3
sub-points each.

2. What are System Calls? Classify with Examples

Definition

A system call is a controlled entry point by which a user-mode process requests a


kernel-mode service (e.g., I/O, memory, process control). Execution switches from user
mode to kernel mode via a software interrupt/trap instruction, ensuring safe access to
privileged instructions.

Classification with Typical POSIX Examples

1. Process Control: manage processes/threads. Examples: fork(), execve(), exit(), waitpid() (create, replace the program image, terminate, synchronise).

2. File Management: manipulate files/directories. Examples: open(), read(), write(), lseek(), close(), unlink() (descriptor handling and file I/O).

3. Device (I/O) Control: access hardware devices. Examples: ioctl(), read()/write() on device files, mmap() (set device parameters, perform raw I/O).

4. Information Maintenance: obtain kernel & process information. Examples: getpid(), gettimeofday(), uname() (retrieve IDs, time, system name).

5. Communication: IPC & networking. Examples: pipe(), shmget(), msgsnd(), socket(), sendto() (data exchange between processes or hosts).

6. Protection: permissions & IDs. Examples: chmod(), setuid(), capset() (change rights or privileges).

Mention that all modern OSes expose dozens of such syscalls, each identified by a unique
number in the syscall table.

3. Architecture of the UNIX Operating System


+-----------------------------+
| User Programs | → cat, gcc, ls, editors …
+--------------+--------------+
|
+--------------v--------------+
| Shell | → sh, bash, csh (command-line interpreter)
+--------------+--------------+
|
+--------------v--------------+
| System Call Interface | → read(), write(), fork(), exec(), …
+--------------+--------------+
| Kernel Proper |
| • Process Scheduler |
| • Memory Manager |
| • VFS & File-system code |
| • IPC (pipes, signals) |
| • Device Drivers |
+--------------+--------------+
| Hardware | → CPU, RAM, Disk, Terminals
+-----------------------------+

Key Points to Explain

1. User Programs run in user mode; they rely on the shell or other utilities to invoke
services.
2. The Shell reads commands, interprets scripts, and uses fork+exec to launch programs.
3. System Call Interface acts as the boundary: switches to kernel mode on a trap,
validates args.
4. Kernel Proper contains schedulers, memory manager, VFS layer, device drivers, and
IPC primitives.
5. Everything-is-a-file philosophy: devices, pipes, and sockets share a common file
descriptor API.
6. Separation of user vs kernel mode ensures protection and stability.

4. Types of Operating-System Structures


1. Monolithic: the entire OS (file system, scheduler, drivers) is compiled into one large kernel image. Advantages: fast, direct function calls; simple build pipeline. Drawbacks: hard to maintain; a bug in any module can crash the whole system. Examples: early UNIX, Linux (traditionally), MS-DOS.

2. Microkernel: a minimal kernel (IPC, scheduling, basic memory); other services (file system, drivers) run as user-space servers. Advantages: high modularity, easier to extend or replace components, increased fault isolation. Drawbacks: higher syscall overhead due to user–kernel switches; more complex IPC design. Examples: QNX, seL4, Minix 3, early macOS (XNU is a hybrid).

3. Layered: the OS is divided into hierarchical layers; each layer uses only lower-level services. Advantages: clear abstraction boundaries; easier verification/testing. Drawbacks: performance hit due to layer crossing; rigid placement of functions may be tricky. Examples: THE system, early Windows NT (hybrid layered).

Below are full-length, 6-mark-style answers for every point you asked.

1. Process States with Diagram

A program goes through well-defined states during its lifetime. The classic five-state model
is enough for most exam answers.

+-----------+ admit +-----------+


| New | --------------> | Ready |
+-----------+ +-----------+
|
dispatch |
v
+-----------+ +-----------+
| Running | ---> | Waiting |
+-----------+ I/O | (Blocked) |
| exit +-----------+
v
+-----------+
| Terminated|
+-----------+

Inter-state events
• New → Ready : admitted by long-term scheduler
• Ready → Running : CPU scheduler selects process
• Running → Waiting : I/O or event wait
• Waiting → Ready : I/O completes (signal/interrupt)
• Running → Ready : time-slice expires (pre-emptive OS)
• Running → Terminated : finishes / killed

Exam hint: Name each state, draw arrows neatly, and label at least four event triggers
(admit, dispatch, I/O wait, interrupt, exit) to lock in full marks.

2. Compare FCFS, SJF, RR, Priority (Non-pre-emptive) Using Gantt Charts

Assumed process set (same burst/arrival values used earlier so answers stay consistent):

Process Arrival Burst Priority (1 = highest)

P1 0 5 2

P2 1 3 1

P3 2 8 3

P4 3 6 4

(a) FCFS

Gantt chart

| P1 | P2 | P3 | P4 |
0 5 8 16 22

Average waiting = (0 + 4 + 6 + 13)/4 = 5.75 ms


Average turnaround = (5 + 7 + 14 + 19)/4 = 11.25 ms

(b) SJF (Non-pre-emptive)


Order : P1 → P2 → P4 → P3

| P1 | P2 | P4 | P3 |
0 5 8 14 22

Average waiting = 5.25 ms Average TAT = 10.75 ms

(c) RR (Time-quantum = 4 ms)


|P1|P2|P3|P4|P1|P3|P4|
0 4 7 11 15 16 20 22

Average waiting = 9.25 ms (turnaround omitted in many syllabi for RR but can be derived
similarly).

(d) Priority (Non-pre-emptive)

Order : P2 → P1 → P3 → P4

| P2 | P1 | P3 | P4 |
1 4 9 17 23

Average waiting = 6.25 ms Average TAT = 11.75 ms

Quick comparative snapshot

Algorithm    Avg Waiting   Pre-emptive?   Starvation Risk
FCFS         5.75 ms       No             No
SJF          5.25 ms       No             Yes
RR (q = 4)   9.25 ms       Yes            No
Priority     6.25 ms       No             Yes (low-priority jobs)

3. Process Control Block (PCB) – Definition & Fields

PCB is the kernel-resident data structure that stores all information required to manage a
particular process.
• Process ID / Parent ID: unique identifiers for tracking & hierarchy (fork trees).

• Process State: New, Ready, Running, Waiting, or Terminated.

• CPU Registers & Program Counter: the saved context, so the process can resume exactly where it left off.

• CPU Scheduling Info: priority, scheduling queue pointers, time-slice counters.

• Memory Management Info: base & limit registers, page tables, segment tables, frame list.

• Accounting Info: CPU time used, job/billing numbers, user/group IDs.

• I/O & File Descriptors: list of open files, outstanding I/O requests, resource locks, signal masks.

Mention at least five distinct fields and their roles for a rock-solid 6-mark answer.

4. Process vs Thread – Six Short Differences


1. Process: owns a separate address space. Thread: shares the address space with peer threads of the same process.

2. Process: switching requires a heavy context save/restore (full PCB, MMU info). Thread: lightweight switch (registers & stack only).

3. Process: inter-process communication (pipes, sockets) is comparatively slow. Thread: natural communication via shared memory (needs synchronisation).

4. Process: a failure usually crashes only that process. Thread: a crash may bring down the entire process with all its threads.

5. Process: has its own PCB, kernel resources, and file-table copy. Thread: shares most kernel resources; each thread has a TCB (Thread Control Block).

6. Process: creation is costlier (fork() / execve()). Thread: creation is cheaper & faster (pthread_create(), clone).
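
To make the sharing point concrete, here is a small C sketch (POSIX threads assumed; the function names are illustrative) in which two threads of one process update the same global variable, something two separate processes could do only through explicit IPC or shared memory:

/* Sketch: threads of one process share its address space. */
#include <pthread.h>
#include <stdio.h>

int shared = 0;                       /* visible to every thread of this process */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *add_one_thousand(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000; i++) {
        pthread_mutex_lock(&lock);    /* sharing requires synchronisation */
        shared++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, add_one_thousand, NULL);
    pthread_create(&t2, NULL, add_one_thousand, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared = %d\n", shared);  /* 2000: both threads updated the same variable */
    return 0;
}
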
5. Types of Schedulers
1. Long-term Scheduler (Job Scheduler): runs infrequently; decides which new jobs/processes are admitted to memory from the job queue, controlling the degree of multiprogramming. Outcome: balances the I/O-bound vs CPU-bound mix; admitting too many jobs leads to thrashing.

2. Short-term Scheduler (CPU Scheduler): runs very frequently (milliseconds); selects which READY process gets the CPU next. Outcome: implements algorithms like RR, SJF, Priority; context switches follow its choice.

3. Medium-term Scheduler (Swapper): runs occasionally; suspends/resumes processes by swapping them out of RAM to disk and back, reducing memory pressure as part of the virtual-memory policy. Outcome: controls the degree of multiprogramming dynamically and improves overall throughput.

Here are the full 6-mark detailed answers for all questions from Unit 3 – Deadlock and
Synchronization in your Operating System syllabus:

1. Define Deadlock. Explain All Four Necessary Conditions

Definition:

A deadlock is a situation in a multiprogramming environment where two or more processes


are unable to proceed because each is waiting for a resource held by another. None of the
involved processes can complete.

Four Necessary Conditions (Coffman Conditions):

All of the following must be true simultaneously for a deadlock to occur:

1. Mutual Exclusion: at least one resource is held in a non-shareable mode; only one process can use it at a time.

2. Hold and Wait: a process is holding at least one resource and waiting to acquire additional resources held by others.

3. No Preemption: resources cannot be forcibly removed from a process; they must be released voluntarily.

4. Circular Wait: a circular chain exists: P1 waits for a resource held by P2, P2 for P3, ..., Pn waits for P1.

Example:

Let P1 hold resource R1 and wait for R2; P2 holds R2 and waits for R1.
This forms a circular chain: P1 → R2 → P2 → R1 → P1 ⇒ Deadlock.

2. What is a Critical Section? Explain Peterson’s Solution

Definition:

The critical section is a part of the code where a process accesses shared resources (like
variables, files, etc.), and only one process should execute in it at a time to avoid race
conditions.

Peterson’s Solution (for 2 processes, say P0 and P1):

Shared variables:

boolean flag[2]; // flag[i] is true if Pi wants to enter CS


int turn; // Whose turn is it

Algorithm for Pi (i = 0 or 1, j = 1 − i):

flag[i] = true;
turn = j;
while (flag[j] && turn == j)
; // wait

// Critical Section

flag[i] = false; // Exit section

Properties Ensured:

 Mutual Exclusion: Only one process enters CS at a time.


 Progress: No process is blocked unnecessarily.
 Bounded Waiting: No process waits indefinitely.
3. Semaphore: Types and Use in Process Synchronization

Definition:

A semaphore is a synchronization tool used to control access to shared resources by


multiple processes. It uses an integer variable and atomic operations.

Types of Semaphores:

• Binary Semaphore: value is only 0 or 1 (acts like a mutex lock). Used for mutual exclusion.

• Counting Semaphore: value can be any non-negative integer. Used to control access to a resource with multiple instances.

Operations:

Operation Meaning

wait(S) or P(S) Decreases S by 1. If S < 0, the process is blocked.

signal(S) or V(S) Increases S by 1. If there are blocked processes, one is unblocked.

Example – Mutual Exclusion Using Binary Semaphore:


semaphore S = 1;

wait(S);
// critical section
signal(S);

Semaphores solve the critical section problem and avoid busy waiting in process
synchronization.
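
The same wait/signal pattern with a real API, as a sketch using POSIX unnamed semaphores (sem_init / sem_wait / sem_post, assuming a Linux pthreads environment):

/* Sketch: mutual exclusion with a POSIX binary semaphore. */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t s;                      /* binary semaphore, initialised to 1 */
int balance = 0;              /* shared resource */

void *deposit(void *arg)
{
    (void)arg;
    for (int i = 0; i < 10000; i++) {
        sem_wait(&s);         /* P(s): enter critical section */
        balance++;
        sem_post(&s);         /* V(s): leave critical section */
    }
    return NULL;
}

int main(void)
{
    sem_init(&s, 0, 1);       /* 0 = shared between threads, initial value 1 */
    pthread_t a, b;
    pthread_create(&a, NULL, deposit, NULL);
    pthread_create(&b, NULL, deposit, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("balance = %d\n", balance);   /* 20000 when mutual exclusion holds */
    sem_destroy(&s);
    return 0;
}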

4. Banker’s Algorithm for Deadlock Avoidance

Goal:

Avoid deadlock by ensuring that resource allocation always leads to a safe state.
Data Structures:

 Available[]: Available instances of each resource.


 Max[][]: Maximum demand of each process.
 Allocation[][]: Currently allocated resources.
 Need[][] = Max - Allocation

Algorithm Steps:

1. Safety Algorithm:
o Work ← Available
o Finish[i] ← false for all i
o Find a process Pi such that:
 Finish[i] == false
 Need[i] ≤ Work
o If found:
 Work += Allocation[i]
 Finish[i] = true
o Repeat until all processes are finished or no such Pi exists.
2. If all Finish[i] == true: the system is in a safe state.

Resource Request Algorithm:

When a process Pi requests Request[i]:

 If Request[i] ≤ Need[i] and Request[i] ≤ Available


 Pretend to allocate and run safety algorithm.
 If safe, grant the request. Else, Pi must wait.

Example:

If a process requests 1 unit of R1, 2 of R2, the algorithm checks:

 If request is within its declared max.


 If resources are available.
 If post-allocation state is safe.
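
A compact C sketch of the safety algorithm described above (the matrices and resource counts are illustrative placeholders, not taken from any particular question): it repeatedly looks for a process whose Need fits in Work, pretends that process finishes and releases its allocation, and reports whether every process can finish.

/* Sketch: Banker's safety algorithm for P processes and R resource types. */
#include <stdbool.h>
#include <stdio.h>

#define P 3
#define R 2

bool is_safe(int available[R], int alloc[P][R], int need[P][R])
{
    int work[R];
    bool finish[P] = { false };
    for (int r = 0; r < R; r++) work[r] = available[r];

    for (int done = 0; done < P; ) {
        bool progress = false;
        for (int i = 0; i < P; i++) {
            if (finish[i]) continue;
            bool fits = true;
            for (int r = 0; r < R; r++)
                if (need[i][r] > work[r]) { fits = false; break; }
            if (fits) {                      /* pretend Pi runs to completion */
                for (int r = 0; r < R; r++) work[r] += alloc[i][r];
                finish[i] = true;
                progress = true;
                done++;
            }
        }
        if (!progress) return false;         /* no runnable process left: unsafe */
    }
    return true;                             /* all processes can finish: safe */
}

int main(void)
{
    /* Illustrative numbers only. Need = Max - Allocation. */
    int available[R] = {3, 2};
    int alloc[P][R]  = {{0, 1}, {2, 0}, {1, 1}};
    int need[P][R]   = {{3, 1}, {1, 2}, {0, 1}};
    printf("state is %s\n", is_safe(available, alloc, need) ? "SAFE" : "UNSAFE");
    return 0;
}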

5. Resource Allocation Graph and Detection Methods

Resource Allocation Graph (RAG):


 Nodes: Processes (circles), Resources (squares).
 Edges:
o Request edge: P → R (waiting for resource)
o Assignment edge: R → P (resource assigned to process)

Deadlock Detection using RAG:

 If a cycle exists in the RAG:


o With single instances per resource → Deadlock.
o With multiple instances → May or may not be deadlock.

Deadlock Detection Algorithm (for multiple instances):

1. Maintain Available[], Allocation[][], Request[][]


2. Initialize Finish[i] = false for processes with Allocation ≠ 0.
3. Find a process where:
o Finish[i] == false
o Request[i] ≤ Available[]
4. If found:
o Available += Allocation[i]
o Finish[i] = true
5. Repeat until no such process exists.
6. If any Finish[i] == false → those processes are in deadlock.

Advantages of RAG:

 Helps visualize resource dependencies.


 Useful in debugging and designing deadlock avoidance strategies.

Here are the full 6-mark answers for all key questions in Unit 4 – Memory Management of
your Operating Systems syllabus:

1. Paging vs Segmentation (with Diagrams)


• Division type: paging uses fixed-size blocks; segmentation uses variable-size blocks.

• Units: paging works with pages (logical) and frames (physical); segmentation works with segments such as code, stack, and heap.

• Size: all pages are of equal size; segments are of unequal size.

• Address format: page number + offset (paging) vs segment number + offset (segmentation).

• Fragmentation: paging may cause internal fragmentation; segmentation may cause external fragmentation.

• Logical meaning: pages carry no logical meaning; each segment represents a logical unit of the program.

Paging Diagram:
Logical Address

+------------+
| Page No. | Offset |
+------------+

Page Table

+------------+
| Frame No. | Offset |
+------------+

Physical Memory Accessed

Segmentation Diagram:
Logical Address

+----------------+
| Segment No. | Offset |
+----------------+

Segment Table

+------------------+
| Base + Offset |
+------------------+

Physical Memory Accessed

2. Logical vs Physical Address


• Generated by: a logical address is generated by the CPU during program execution; the physical address is produced by the MMU (Memory Management Unit).

• Used in: logical addresses are seen by user-level programs; physical addresses are used for actual RAM access.

• Mapping: logical → physical via a page/segment table; the physical address is already a real memory location.

• Accessibility: a logical address cannot access memory directly; a physical address refers to the real location in RAM.

• Security: logical addressing provides protection and isolation; physical addresses are exposed only after address translation.

• Example: 0x0003:0040 (segment:offset) vs 0x0040A3 (actual location in memory).
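
The logical-to-physical translation under paging is simple arithmetic on the two address parts. A minimal C sketch (page size, page-table contents and the example address are made-up values for illustration):

/* Sketch: translating a logical address under paging (4 KB pages assumed). */
#include <stdio.h>

#define PAGE_SIZE 4096                     /* 4 KB pages -> 12-bit offset */

int main(void)
{
    /* Toy page table: page_table[page number] = frame number (illustrative). */
    unsigned page_table[] = {5, 9, 13, 20};

    unsigned logical  = 0x2A7C;            /* example logical address */
    unsigned page     = logical / PAGE_SIZE;   /* = logical >> 12 */
    unsigned offset   = logical % PAGE_SIZE;   /* = logical & 0xFFF */
    unsigned frame    = page_table[page];
    unsigned physical = frame * PAGE_SIZE + offset;

    printf("logical 0x%X -> page %u, offset 0x%X -> frame %u -> physical 0x%X\n",
           logical, page, offset, frame, physical);
    return 0;
}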

3. FIFO, LRU, Optimal Page Replacement (with Example)

Given Page Reference String:


7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2
Assume 3 frames

A) FIFO (First-In, First-Out)

 Replace the oldest page first (queue-like).

Step-by-step trace (3 frames):

7, 0, 1 → three page faults
2 → fault, 7 replaced
0 → no fault
3 → fault, 0 replaced
0 → fault, 1 replaced
4 → fault, 2 replaced
2 → fault, 3 replaced
3 → fault, 0 replaced
0 → fault, 4 replaced
3 → no fault
2 → no fault

Page Faults: 10

B) LRU (Least Recently Used)

 Replace page that was least recently used.

Track usage timestamps or stack of recently used pages.

Page Faults: 9 (as fewer older pages are reused)


C) Optimal Algorithm

 Replace the page that will not be used for the longest time in the future.

Page Faults: 7
(Minimum possible faults for this reference string with 3 frames)

4. Demand Paging and Page Fault Explanation

Demand Paging:

 Technique in virtual memory systems where pages are not loaded into RAM until
they are needed.
 Reduces memory usage and load time.

Page Fault:

Occurs when:

 A process tries to access a page that is not currently in physical memory.


 The OS pauses the process, loads the required page from secondary storage,
updates the page table, and resumes execution.

Steps of Handling a Page Fault:

1. Check if the memory reference is valid.


2. If valid but not in memory:
o Trigger a trap to OS (page fault interrupt).
o Find a free frame.
o Load the required page from disk.
o Update page table.
o Resume process from where it was interrupted.

Performance Impact:

 Frequent page faults slow down the system (thrashing).


 Goal is to minimize page faults using proper replacement and allocation algorithms.
5. Fragmentation: Internal vs External
• Cause: internal fragmentation arises from fixed-size memory allocation (e.g., paging); external fragmentation arises from variable-size allocation (e.g., segmentation).

• Unused space: internal fragmentation wastes space inside an allocated block; external fragmentation wastes space between allocated blocks.

• Example: allocating 100 bytes in a 128-byte page wastes 28 bytes internally; three free blocks of 200 KB that are not contiguous cannot hold a 300 KB process (external).

• Can be solved by: using a smaller block size (internal); compaction or paging (external).

• Occurs in: paging and fixed partitions (internal); segmentation and variable partitions (external).

Below are complete, exam-length answers for every point in Unit 5 – File and Disk Management.

1. File-Allocation Methods

Explain what, how, pros, cons—that is what a 6-mark answer needs.

• Contiguous: the file occupies one continuous run of blocks ([Start … End]). Advantages: simple; fast sequential & direct access. Drawbacks: external fragmentation; file size must be known in advance, so the file is hard to extend.

• Linked: each block points to the next; blocks may be anywhere on disk ([9]→[16]→[1]→NULL). Advantages: no external fragmentation; easy growth. Drawbacks: slow random access (must follow links); pointer overhead.

• Indexed: a special index block holds the block numbers of the file (Index → {5, 9, 13, 20}). Advantages: both direct and sequential access are fast; no external fragmentation. Drawbacks: extra space for the index; a multi-level index is needed for huge files.
2. Directory Structures
• Single-Level: all files in one directory. Strength: very simple. Weakness: name clashes; no grouping.

• Two-Level: one directory per user under a master directory. Strength: users are isolated. Weakness: no sub-subdirectories; poor sharing.

• Tree: a hierarchy of arbitrary depth (UNIX style). Strengths: natural grouping; pathnames. Weakness: must manage permissions & navigation.

• Acyclic Graph: a tree plus shared subtrees using links. Strength: allows sharing (libraries, common data). Weaknesses: must avoid cycles; link-maintenance overhead.

Tip: Sketch tiny diagrams in the margin—exam evaluators love that.

3. Inode & File-System Implementation

Inode (Index Node)


A small, fixed-size data structure used in UNIX-like file systems.

Key Fields

 Metadata: owner UID/GID, permissions, timestamps (mtime, atime, ctime).


 Size: file length in bytes.
 Block Pointers: typically 12 direct + single, double, triple indirect.
 Link Count: number of directory entries pointing to this inode.
 Type & flags: regular file, dir, symlink, device, etc.

How It Works

1. Directory entry stores just the filename + inode number.


2. To open a file, OS reads the inode, follows its block pointers to fetch data.
3. Block allocator (e.g., ext4’s buddy allocator) chooses free blocks, updates bitmaps &
inode fields.
4. Because data and metadata are separate, the same data blocks can be referenced by
multiple pathnames (hard links) without duplication.
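
Much of this metadata is visible from user space through the stat() system call. A small C sketch (POSIX assumed; the default path is just an example):

/* Sketch: reading inode metadata with stat() (POSIX). */
#include <stdio.h>
#include <sys/stat.h>

int main(int argc, char *argv[])
{
    const char *path = (argc > 1) ? argv[1] : "/etc/passwd";
    struct stat sb;

    if (stat(path, &sb) == -1) {
        perror("stat");
        return 1;
    }
    printf("inode number : %lu\n", (unsigned long)sb.st_ino);
    printf("size (bytes) : %ld\n", (long)sb.st_size);
    printf("link count   : %lu\n", (unsigned long)sb.st_nlink);
    printf("permissions  : %o\n", (unsigned)(sb.st_mode & 0777));
    printf("owner uid/gid: %u/%u\n", (unsigned)sb.st_uid, (unsigned)sb.st_gid);
    return 0;
}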

4. Disk-Scheduling Algorithms

Assume initial head at 53; queue: 98, 183, 37, 122, 14, 124, 65, 67.
• FCFS: serve requests in arrival order. Movement pattern: 53→98→183→37→… Pros: simple, fair. Cons: long seek time.

• SSTF: always serve the closest request next. Movement pattern: 53→65→67→37→14… Pros: reduces average seek. Cons: starvation of far cylinders.

• SCAN (Elevator): sweep in one direction to the end of the disk, then reverse. Movement pattern: 53→37→14→0 then 65→67→… Pros: good throughput & fairness. Cons: end cylinders wait longer.

• C-SCAN: sweep in one direction only; jump back to the start without servicing requests on the return. Movement pattern: 53→65→67→98→…→183→4999→0→14… Pros: uniform response time. Cons: large jump overhead.

• LOOK / C-LOOK: like SCAN/C-SCAN but stop at the last request before turning or jumping. Movement pattern: 53→37→14 then reverse 65→67→… Pros: avoids an unnecessary full-disk sweep. Cons: similar trade-offs to SCAN/C-SCAN, but slightly less travel.

In answers, draw a line with tick marks (0–4999) and plot the head paths—it instantly earns clarity marks.

5. Buffering vs Spooling
• Definition: buffering uses memory buffers to hold data during I/O to smooth a speed mismatch; spooling puts I/O jobs on disk (a queue) so that devices (printers, plotters) work asynchronously.

• Storage location: buffering uses main memory (sometimes cache); spooling uses secondary storage (magnetic disk).

• Lifetime: a buffer is transient and released after the I/O completes; spooled data persists until the device finishes the entire job.

• Purpose: buffering smooths out short-term speed differences and allows overlap of CPU and I/O; spooling shares a single device among multiple processes and provides scheduling & re-ordering.

• Example: double buffering for disk reads or network packets; a print queue where many users "print", files are spooled, and the printer works through them one by one.
