Operating Systems Units 1 & 2 - Application-Based

The document provides comprehensive answers to application-based questions on Operating Systems Units 1 and 2, targeting second-year university students. It covers key concepts such as system calls, batch processing, time-sharing, kernel roles, RTOS, distributed systems, and process/thread management, with examples and diagrams for clarity. The content is structured in a clean Markdown format, ready for PDF export.


Operating Systems Units 1 & 2 – Application-Based Answers (PDF-ready)

Below is the complete content of “OS_Unit1_Unit2_Answers.md” in clean Markdown, with simple ASCII diagrams you can keep or replace with your own figures. Save this as OS_Unit1_Unit2_Answers.md and export to PDF using any Markdown-to-PDF tool.

Operating Systems — Unit 1 and Unit 2 Application Answers

Audience: 2nd-year university students
Style: Simple language, examples, and diagrams

Unit 1: Introduction to Operating Systems

1) Why are system calls necessary in an operating system?


System calls are the interface between user programs and the operating system (OS). They let applications request services from the OS safely, like opening files, creating processes, or using the network.

- Without system calls, user programs would need direct hardware access, which is unsafe and non-portable.
- System calls switch the CPU from user mode to kernel mode so the OS can execute privileged operations securely.

Example: Opening a file
- A text editor calls the OS “open” system call to access a file.
- The OS checks permissions, locates the file on disk, and returns a file descriptor (a small integer).
- The editor then uses “read” and “write” system calls with that descriptor.

Common system calls: open, read, write, close, fork, exec, wait, socket, bind, connect, ioctl.
Diagram: user program → system call → OS

User Program (App)
        |
        | open("notes.txt")
        v
System Call Interface
        |
        | Trap/Interrupt -> Switch to Kernel Mode
        v
Operating System (Kernel)
        |
        | Check permissions, find file, allocate fd=3
        v
Return to User Mode <-- file descriptor 3
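The flow above can be exercised directly: Python’s os module exposes thin wrappers over these system calls. A minimal sketch (the filename notes.txt is just an example):

```python
import os

# Sketch of the open/read/write/close system calls via Python's os
# module, which wraps the real syscalls on POSIX systems.
fd = os.open("notes.txt", os.O_CREAT | os.O_RDWR, 0o644)  # open -> fd

os.write(fd, b"hello, kernel\n")   # write() on the descriptor
os.lseek(fd, 0, os.SEEK_SET)       # rewind to the start of the file
data = os.read(fd, 100)            # read() back what was written
os.close(fd)                       # close() releases the descriptor

print(data)                        # -> b'hello, kernel\n'
os.remove("notes.txt")             # clean up the temporary file
```

Descriptors 0, 1, and 2 are already taken by stdin, stdout, and stderr, which is why a freshly opened file typically gets fd 3, as in the diagram.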

2) Why is batch processing best suited for transactional operations?


Batch processing runs a group of similar tasks together without user interaction. It’s ideal for transactions like payroll, billing, or data import.

Benefits:
- Efficiency: process large volumes in one go (reduced overhead).
- Automation: scheduled runs (e.g., nightly), less human error.
- Resource utilization: better throughput; jobs can be optimized for I/O and CPU.
- Reliability: repeatable, auditable runs.

Why it fits transactional tasks:
- Transactions often repeat with similar structure (e.g., compute salary, apply tax).
- No need for immediate user input once data is collected.
Diagram: batch workflow

Input Queue (transactions) --> Batch Processor --> Output Reports/Updates
[Payroll.csv]                    [Nightly]          [Pay slips, DB updates]
[Invoices.csv]
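As a rough illustration of the idea, here is a Python sketch of a nightly payroll batch; the record fields (name, hours, rate) and the flat 10% tax are invented for the example:

```python
# Minimal batch-processing sketch: all queued payroll transactions are
# handled in one unattended pass, then a report is emitted.
transactions = [
    ("alice", 160, 25.0),   # (name, hours worked, hourly rate)
    ("bob",   150, 30.0),
    ("carol", 120, 40.0),
]

def process_batch(records):
    """Compute net pay for every record in a single scheduled run."""
    report = []
    for name, hours, rate in records:
        gross = hours * rate
        net = gross * 0.90          # flat 10% tax, for illustration only
        report.append((name, round(net, 2)))
    return report

print(process_batch(transactions))
# -> [('alice', 3600.0), ('bob', 4050.0), ('carol', 4320.0)]
```

One scheduled run drains the whole input queue with no user interaction, which is exactly what makes batch processing efficient for transactional workloads.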

3) Why is a time-sharing OS referred to as a multitasking OS?


Time-sharing systems divide CPU time into small time slices (quanta) and give each process a slice in turn. This creates the illusion that many programs run at the same time (multitasking).

- The CPU rapidly switches between processes.
- The scheduler enforces fairness and responsiveness.

Example: running a browser, a music player, and a code editor at once; each gets CPU slices in rotation, so the user experiences smooth multitasking.

Diagram: time slices

Time:    |--P1--|--P2--|--P3--|--P1--|--P2--|--P3--|
Quantum:  10ms   10ms   10ms   10ms   10ms   10ms

4) Why is the kernel not considered the “brain” of the OS?


The kernel manages resources (CPU, memory, devices) and provides core services, but it doesn’t decide what tasks a user should do or make high-level choices on their behalf.

- Kernel = resource manager + protection + low-level services.
- User-level components (shells, apps) drive the system’s “behavior” and decisions.
- The kernel enforces policies but is not the “user’s brain.”
Diagram: roles

+-------------------------------+
| User Applications (decisions) |
+---------------+---------------+
|
v
+---------------+---------------+
| Kernel (resource management) |
| - Scheduling |
| - Memory mgmt |
| - I/O, Filesystems |
+---------------+---------------+
|
v
+---------------+---------------+
| Hardware (CPU, RAM, Devices) |
+-------------------------------+

5) Why is an RTOS important for time-constrained jobs?


A Real-Time Operating System (RTOS) guarantees that tasks meet strict deadlines (deterministic behavior).

- Features: predictable scheduling (often priority-based preemptive), bounded interrupt latency, fast context switching.
- Hard real-time: missing a deadline can be catastrophic (e.g., a pacemaker).
- Soft real-time: occasional misses degrade quality but not safety (e.g., multimedia playback).

Example: an industrial robot arm must process a sensor reading within 2 ms and update the actuator within 5 ms; the RTOS guarantees this timing.

Diagram: deadlines

Timeline (ms):  0        2          5            8       10
Tasks:          [Sensor]->[Control]->[Actuator]->[Log]

Each task must finish within its deadline window.

6) How do distributed operating systems emphasize transparency and scalability?


- Transparency: the system hides the complexity of multiple machines; resources appear unified.
  - Forms include access transparency (same API), location transparency (you don’t care where data lives), migration transparency (processes/data can move), and replication transparency.
- Scalability: the system can grow by adding nodes without major redesign; load balancing and fault tolerance are built in.
Diagram: single-system view over many nodes

+-----------+     +-----------+     +-----------+
|  Node A   |-----|  Node B   |-----|  Node C   |
+-----------+     +-----------+     +-----------+
       \                |                /
        \               |               /
         \              |              /
          +-------------------------+
          |   Distributed OS View   |
          |   (Unified resources)   |
          +-------------------------+

7) How is the mobility of an OS exemplified by macOS?


macOS runs across different CPU architectures (Intel x86_64 and Apple Silicon ARM64) while keeping a consistent user experience.

- Apple transitioned macOS from Intel to ARM (Apple Silicon) with tools like Rosetta 2 (binary translation) and Universal apps (fat binaries containing both architectures).
- The same UI, apps, and services run across Mac devices with different hardware.

Diagram: architecture transition

[macOS Intel] --Universal Binary--> [macOS ARM]
        \                              /
         ---- Rosetta 2 (translates) --

8) Analyze operations of system calls: fork(), socket(), ioctl(), and pipe()

- fork(): creates a new process by duplicating the calling process (the child gets a copy of the parent’s memory space).
  - Example: a shell uses fork() to create a child, then exec() to run a command.
- socket(): creates an endpoint for network communication (TCP/UDP).
  - Example: a server calls socket(), bind(), listen(); a client calls socket() + connect().
- ioctl(): device-specific control operations beyond read/write (e.g., set terminal modes, network interface flags).
  - Example: set a serial port’s baud rate via ioctl().
- pipe(): creates a unidirectional communication channel between related processes.
  - Example: ls | grep ".txt" uses a pipe: the output of ls becomes the input of grep.
Diagram: process communication

Shell
|-- fork() --> Child
| |
| `-- exec("ls")
|
`-- fork() --> Child2
|
`-- exec("grep .txt")

Pipe:
[ls stdout] --> [pipe] --> [grep stdin]

Sockets:
[Client] <---- TCP/IP ----> [Server]
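The pipe-and-fork pattern in the diagram can be sketched in Python (POSIX-only, since os.fork is unavailable on Windows); the filenames written into the pipe are invented stand-ins for ls output:

```python
import os

# Sketch of pipe() + fork(): the mechanism behind pipelines like
# `ls | grep txt`. The child writes into the pipe; the parent reads.
read_end, write_end = os.pipe()   # unidirectional channel: write -> read

pid = os.fork()                   # duplicate the calling process
if pid == 0:                      # child process: plays the role of `ls`
    os.close(read_end)            # child only writes
    os.write(write_end, b"notes.txt\nreport.pdf\n")
    os.close(write_end)
    os._exit(0)                   # child is done

os.close(write_end)               # parent: plays the role of `grep`
chunks = []
while True:
    chunk = os.read(read_end, 1024)
    if not chunk:                 # writer closed the pipe: end of stream
        break
    chunks.append(chunk)
os.close(read_end)
os.waitpid(pid, 0)                # wait() system call: reap the child

lines = [l for l in b"".join(chunks).decode().splitlines() if "txt" in l]
print(lines)                      # -> ['notes.txt']
```

Closing the unused pipe ends in each process matters: the parent’s read only sees end-of-stream once every write end is closed.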

9) Evaluate the “thali” analogy for monolithic kernels


Monolithic kernel: all core OS services (scheduling, memory, drivers, filesystems, networking) run in kernel space as one large unit.

Thali analogy:
- Like a thali (many dishes on one plate), a monolithic kernel has many services tightly integrated in one space.
- Pros: high performance (no extra message-passing overhead), fast calls between services.
- Cons: a bug in one component can affect the whole kernel; modular isolation is harder to maintain.
Diagram: monolithic “thali”

+---------------- Monolithic Kernel (one big plate) ----------------+
| Sched | MM | FS | Net | Drivers | IPC | VFS | Cache | Security    |
+-------------------------------------------------------------------+

10) Why does Android prefer a monolithic architecture?


Android uses the Linux kernel (monolithic) for performance and device support.

- Performance: in-kernel drivers and subsystems give low overhead for I/O, memory, and scheduling.
- Ecosystem: a huge set of drivers (SoCs, modems, sensors) already exists in Linux.
- Efficiency: tight integration improves power and latency on mobile hardware.
Diagram: Android stack over monolithic kernel

Apps (Java/Kotlin)
|
Android Framework & Services
|
Android Runtime (ART)
|
HAL (Hardware Abstraction Layer)
|
Linux Monolithic Kernel (drivers, fs, net, sched, mm)

11) Why is a simple layered architecture foundational for OS design?


Layered architecture organizes the OS into layers, each with specific responsibilities and clear interfaces.

Advantages:
- Modularity and clarity.
- Easier debugging and maintenance.
- Abstraction barriers: upper layers don’t need hardware details.
- Security: lower layers can enforce protections.
Diagram: layered OS

+-------------------------+
| User Applications |
+-------------------------+
| System Libraries/Runtime|
+-------------------------+
| OS Services (FS, Net) |
+-------------------------+
| Kernel Core (sched, mm) |
+-------------------------+
| Hardware |
+-------------------------+

12) How does the Nintendo Switch utilize a microkernel architecture?


The Switch’s OS (Horizon) uses microkernel-like ideas: critical services are minimized in kernel space, and many services run as user-space servers communicating via IPC.

Benefits:
- Reliability and isolation: a failure in a user-space service is less likely to crash the whole system.
- Modularity: services can be updated or restarted independently.
- Security: least-privilege separation.
Diagram: microkernel separation

+------------------------------+
|     User Apps / Games        |
+--------------+---------------+
               | IPC
+--------------v---------------+
| User-space Services (FS, UI, |
| Audio, Net, etc.)            |
+--------------+---------------+
               | IPC
+--------------v---------------+
| Microkernel (sched, IPC,     |
| minimal drivers)             |
+--------------+---------------+
               |
           Hardware

Unit 2: Process and Thread Management

1) “Program -> Process -> Thread” — chronology


- Program: a passive file (code on disk).
- Process: a running instance of a program with its own memory and resources.
- Thread: a unit of execution inside a process; multiple threads share process memory.

Example:
- A browser program (file) starts → becomes a process.
- It creates threads: UI thread, network thread, rendering thread.
Diagram: transitions
Program (file) --load/exec--> Process --create--> Threads (T1, T2, T3)

2) Why is the 5-state model more efficient than the 2-state model?

- 2-state model: Ready and Running only; too simplistic.
- 5-state model: New, Ready, Running, Waiting/Blocked, Terminated.
- Gives finer control and visibility:
  - Waiting separates I/O-bound processes from those ready to run.
  - New/Terminated handle creation and cleanup.
- Schedulers can make better decisions; this improves throughput and responsiveness.
Diagram: 5-state vs 2-state

2-state: Ready <--> Running

5-state:
New --> Ready --> Running --> Terminated
          ^          |
          |          v
          +------ Waiting

3) “Many-to-One” kernel model is risky for thread execution — analyze


Many-to-One: many user threads map to a single kernel thread.

Risks:
- If one user thread makes a blocking call (e.g., blocking I/O), the entire process blocks.
- No true parallelism on multicore systems; there is only one kernel-scheduled entity.
- A user-level scheduler can’t preempt a thread while it is executing inside the kernel.

Diagram: mapping

User Threads:  T1   T2   T3   T4
                 \   |   |   /
                  \  |   |  /
            Single Kernel Thread (K1)

4) Impacts of multithreading on systems (with examples)
Benefits:
- Concurrency: overlap I/O and computation.
- Responsiveness: the UI stays responsive by moving heavy work to background threads.
- Resource sharing: threads share memory; lower overhead than processes.

Examples:
- Web server: a thread per connection (or a thread pool) handles many clients concurrently.
- Video editor: separate threads for decoding, effects, and UI.
- Game engine: rendering thread, physics thread, AI thread.

Trade-offs:
- Synchronization complexity (locks, races, deadlocks).
- Debugging is harder.
- Context switching between threads still has a cost.

Diagram: threads in a process

Process Memory (Code/Data/Heap)
|-- Thread A (UI)
|-- Thread B (I/O)
|-- Thread C (Worker)
Shared address space; separate stacks and registers.
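A small Python sketch of the shared-memory model above, with a lock guarding the shared list (the thread names and work items are illustrative):

```python
import threading

# Several worker threads append to one shared list; a lock prevents
# races on the shared data that lives in the process heap.
results = []                      # shared: visible to every thread
lock = threading.Lock()

def worker(name, items):
    """Each thread runs this with its own stack but shares `results`."""
    for i in items:
        with lock:                # synchronization: one writer at a time
            results.append((name, i))

threads = [
    threading.Thread(target=worker, args=("io", range(3))),
    threading.Thread(target=worker, args=("ui", range(3))),
]
for t in threads:
    t.start()                     # threads run concurrently...
for t in threads:
    t.join()                      # ...and the process waits for them all

print(len(results))               # -> 6 (all items, from both threads)
```

Without the lock the appends could interleave unsafely, which is exactly the synchronization complexity listed under trade-offs.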

5) “Long-Term Scheduler operates slowly” — explain


Schedulers:
- Long-term (admission): decides which jobs (programs) are brought into memory as processes; it runs infrequently to control the degree of multiprogramming.
- Because its decisions are occasional (e.g., when new jobs arrive), it “operates slowly” compared to the other schedulers.

Effect:
- Controls system load; prevents memory thrashing by limiting how many processes are admitted.

Diagram: role

Job Queue --(Long-term)--> Ready Queue --(Short-term)--> CPU
                                ^
                                |
                    (Mid-term: swap out/in)

6) Impacts of context switching on performance
- Context switch: saving the state of one thread or process and loading another’s.
- Impacts:
  - Overhead: CPU time is spent switching instead of running user code.
  - Cache/TLB misses: the new thread may not benefit from the previous thread’s warm caches.
  - Frequent switches reduce throughput; too few switches hurt responsiveness.
Diagram: context switch

[Save P1 registers, PC, stack] -> [Load P2 state] -> Run P2

7) Impacts of Round Robin on CPU efficiency


Round Robin: each ready process gets a fixed time quantum in a cycle.

Pros:
- Fairness and good responsiveness for interactive tasks.
- Simple to implement.

Cons:
- Too small a quantum → many context switches (overhead).
- Too large a quantum → behaves like FCFS (poor response).
- Not optimal for CPU-bound tasks with long bursts.

Diagram: RR cycle

Ready Queue: P1 -> P2 -> P3 -> P4 -> ...
Timeline:    |P1|P2|P3|P4|P1|P2|P3|P4| ...
Quantum:     fixed
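The RR cycle can be simulated in a few lines; the burst times and quantum below are illustrative, not from the text:

```python
from collections import deque

# Round-robin simulation sketch: each process runs for at most one
# quantum, then rejoins the back of the ready queue if unfinished.
def round_robin(bursts, quantum):
    """bursts: {name: total burst} -> list of (name, slice) in run order."""
    queue = deque(bursts.items())        # ready queue: (name, remaining)
    timeline = []
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)    # run for one quantum at most
        timeline.append((name, run))
        if remaining > run:              # unfinished: back of the queue
            queue.append((name, remaining - run))
    return timeline

print(round_robin({"P1": 5, "P2": 3, "P3": 1}, quantum=2))
# -> [('P1', 2), ('P2', 2), ('P3', 1), ('P1', 2), ('P2', 1), ('P1', 1)]
```

Note how the short job P3 finishes after a single slice while the long job P1 is interrupted twice, which is the fairness/overhead trade-off described above.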

8) SJF is both preemptive and non-preemptive


Shortest Job First (SJF): prefer the job with the shortest CPU burst.

- Non-preemptive SJF: once a job starts, it runs to completion or until it blocks.
- Preemptive SJF (Shortest Remaining Time First, SRTF): a new arrival with a shorter remaining time can preempt the running job.

Trade-offs:
- Minimizes average waiting time (optimal under perfect knowledge of burst lengths).
- Requires knowing or estimating CPU burst length.

Diagram: preemptive vs non-preemptive

Non-preemptive:
P1(6) starts -> runs to completion -> P2(3) -> P3(2)

Preemptive (SRTF):
Start P1(6); P2(3) arrives -> preempt to P2
Then P3(2) arrives -> preempt to P3
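A unit-by-unit SRTF simulation makes the preemption concrete; the arrival and burst values below are illustrative:

```python
# SRTF (preemptive SJF) simulation sketch. At every time unit, run the
# arrived job with the least remaining time, so a shorter new arrival
# preempts the currently running job.
def srtf(jobs):
    """jobs: {name: (arrival, burst)} -> {name: completion time}."""
    remaining = {name: burst for name, (arrival, burst) in jobs.items()}
    time, done = 0, {}
    while remaining:
        ready = [n for n in remaining if jobs[n][0] <= time]
        if not ready:                  # nothing has arrived yet: CPU idles
            time += 1
            continue
        name = min(ready, key=lambda n: remaining[n])  # shortest remaining
        remaining[name] -= 1           # run it for one time unit
        time += 1
        if remaining[name] == 0:       # finished: record completion time
            done[name] = time
            del remaining[name]
    return done

print(srtf({"P1": (0, 6), "P2": (1, 3), "P3": (2, 2)}))
# -> {'P2': 4, 'P3': 6, 'P1': 11}
```

P1 starts first but is preempted as soon as P2 arrives; P1 only finishes after both shorter jobs are done, mirroring the timeline in the diagram.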

9) Roles of Short-term, Mid-term, and Long-term schedulers


- Long-term (job/admission): controls how many processes enter the ready pool; affects the degree of multiprogramming (runs infrequently).
- Mid-term (swapper): temporarily removes processes from memory (swap out) and later brings them back (swap in) to relieve memory pressure (runs occasionally).
- Short-term (CPU scheduler): chooses which ready process or thread gets the CPU next (runs very frequently).

Diagram: three schedulers

Jobs --> [Long-term] --> Ready Queue --> [Short-term] --> CPU
                              ^   |
                              |   v
                    [Mid-term] (swap out/in to/from backing store)

10) “SJF is superior to FCFS yet has starvation issue” — inspect


Why SJF is superior:
- SJF minimizes average waiting time by always running the shortest job first.
- FCFS can suffer from the convoy effect: one long job blocks many short ones.

Starvation issue:
- If short jobs keep arriving, long jobs may wait indefinitely (starvation).
- Mitigation: aging (increasing the priority of waiting jobs over time) or hybrid policies.

Diagram: FCFS vs SJF

FCFS order: P1(20), P2(2), P3(3) -> long average wait
SJF order:  P2(2), P3(3), P1(20) -> lower average wait
But long P1 could starve if more short jobs keep arriving.

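The gap can be checked with a quick calculation (all three jobs assumed to arrive at t = 0):

```python
# Average waiting time for the FCFS vs SJF example above.
def avg_wait(order):
    """order: burst times in execution order -> average waiting time."""
    waits, elapsed = [], 0
    for burst in order:
        waits.append(elapsed)     # a job waits for everything before it
        elapsed += burst
    return sum(waits) / len(waits)

print(avg_wait([20, 2, 3]))       # FCFS: (0 + 20 + 22) / 3 = 14.0
print(avg_wait([2, 3, 20]))       # SJF:  (0 + 2 + 5) / 3  ~= 2.33
```

Running the short jobs first cuts the average wait roughly sixfold here, but nothing in the SJF policy itself guarantees P1 ever runs if shorter jobs keep arriving.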
11) Compare single-threaded vs multi-threaded
Single-threaded:
- One sequence of execution.
- Simple design; no synchronization issues.
- Poor responsiveness for I/O-heavy or compute-heavy tasks.

Multi-threaded:
- Multiple threads within a process; concurrent tasks.
- Better responsiveness and resource use.
- Requires careful synchronization; risk of races and deadlocks.

Use-cases:
- Single-threaded: small scripts, simple CLI tools.
- Multi-threaded: servers, GUI apps, media processing, games.

Diagram: comparison

Single-threaded:
[Work A -> Work B -> Work C]

Multi-threaded (shared memory):
Thread1: Work A -----
Thread2: Work B -----
Thread3: Work C -----

How to export this to PDF


- Option A: paste this Markdown into VS Code, install the “Markdown PDF” extension, then run “Markdown PDF: Export (pdf)”.
- Option B: use an online Markdown-to-PDF converter.
- Option C: convert to HTML (using any Markdown viewer) and print to PDF.

If a version with richer vector diagrams is preferred, ask for a version with Mermaid diagrams or embedded SVGs, and this content can be upgraded accordingly.
