
1.​ Write any two services provided by the OS.​


—> An operating system provides several important services to users and programs.
Two major services are Program Execution and File System Manipulation. In Program
Execution, the OS loads programs into memory, executes them, and manages their
termination. It handles various tasks like memory allocation, scheduling, and resource
access during execution. In File System Manipulation, the OS provides mechanisms to
create, open, read, write, and delete files. It also manages file permissions and
organization on storage devices. Apart from these, other services include input/output
operations, error detection, communication among processes, resource allocation, and
security management to protect system resources from unauthorized access.

2.​ What is meant by System Call?​


—> A System Call is a mechanism that allows user-level programs to request services
from the operating system. It acts as an interface between a running program and the
OS to perform operations that require privileged access to hardware resources. For
example, operations like reading from a file, writing data to a disk, creating processes, or
communicating with hardware are done through system calls. Without system calls, user
applications cannot directly access hardware or critical OS resources. Examples of
system calls include open(), read(), write(), fork(), and exit(). System calls play a critical
role in ensuring system security, stability, and smooth program execution.​
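As an illustration of how such calls are used from a C program, here is a minimal sketch (the file name example.txt is an assumed placeholder) that opens a file, reads it block by block, and copies it to standard output entirely through system calls:

    #include <fcntl.h>    /* open()                   */
    #include <unistd.h>   /* read(), write(), close() */

    int main(void) {
        char buf[512];
        ssize_t n;
        int fd = open("example.txt", O_RDONLY);      /* system call: open the file   */
        if (fd < 0)
            return 1;                                 /* open failed                  */
        while ((n = read(fd, buf, sizeof buf)) > 0)   /* system call: read a block    */
            write(STDOUT_FILENO, buf, (size_t)n);     /* system call: write to stdout */
        close(fd);                                    /* system call: release the fd  */
        return 0;
    }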

3.​ What is Process?​


—> A process is defined as a program in execution. It consists of the program code,
current activity, a set of resources like CPU registers, open files, and memory assigned
by the operating system. Each process is uniquely identified by a Process ID (PID) and
can exist in different states such as New, Ready, Running, Waiting, or Terminated. The
operating system manages processes using a data structure known as the Process
Control Block (PCB), which holds important information about the process's state and
resources. Processes are fundamental to multitasking, allowing multiple programs to run
simultaneously and efficiently manage system resources.​

4.​ Define a safe state.​


—> A system is said to be in a safe state if there exists a sequence of execution of all
processes such that each process can complete its execution successfully even if all
processes request their maximum resources. In a safe state, the system can avoid
deadlocks by carefully allocating resources. The Banker's Algorithm is widely used to
determine if a system is in a safe state. In simple words, if a safe sequence exists, the
system guarantees that no deadlock will occur. If a system is unable to find any safe
sequence for resource allocation, it is considered unsafe. Thus, a safe state ensures
smooth and deadlock-free system operation.​
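As an illustration, a minimal sketch of the safety check used by the Banker's Algorithm is given below; the number of processes and resource types and the matrices in main() are assumed example values, chosen only to show how a safe sequence is searched for:

    #include <stdbool.h>
    #include <stdio.h>

    #define P 3   /* number of processes (assumed example size)      */
    #define R 2   /* number of resource types (assumed example size) */

    /* Returns true if a safe sequence exists for the given state. */
    bool is_safe(int avail[R], int alloc[P][R], int need[P][R]) {
        int work[R];
        bool finished[P] = { false };
        for (int j = 0; j < R; j++) work[j] = avail[j];

        for (int done = 0; done < P; ) {
            bool progressed = false;
            for (int i = 0; i < P; i++) {
                if (finished[i]) continue;
                bool can_run = true;
                for (int j = 0; j < R; j++)
                    if (need[i][j] > work[j]) { can_run = false; break; }
                if (can_run) {                       /* process i can finish ...      */
                    for (int j = 0; j < R; j++)
                        work[j] += alloc[i][j];      /* ... and returns its resources */
                    finished[i] = true;
                    progressed = true;
                    done++;
                }
            }
            if (!progressed) return false;           /* no process can finish: unsafe */
        }
        return true;                                 /* a safe sequence was found     */
    }

    int main(void) {
        int avail[R]    = { 3, 2 };
        int alloc[P][R] = { { 1, 0 }, { 2, 1 }, { 0, 1 } };
        int need[P][R]  = { { 2, 2 }, { 1, 1 }, { 3, 1 } };
        printf(is_safe(avail, alloc, need) ? "Safe state\n" : "Unsafe state\n");
        return 0;
    }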

5.​ Define Dispatcher​


—> A Dispatcher is a component of the operating system that is responsible for
transferring control of the CPU to the process selected by the short-term scheduler. It
performs important tasks such as saving the state of the currently running process and
loading the saved state of the new process. Dispatcher also switches the CPU mode
from kernel mode to user mode before executing the user process. It performs context
switching, ensuring a smooth transition between processes. The efficiency of a
dispatcher directly affects the overall system performance, and a fast dispatcher helps in
reducing the CPU idle time. Thus, the dispatcher plays a critical role in process
management.​

6.​ What are semaphores?​


—> Semaphores are synchronization tools used to control access to shared resources
by multiple processes in a concurrent system. They help prevent race conditions by
ensuring that only a limited number of processes can access a resource at a time. A
semaphore is an integer variable that can only be accessed using two atomic operations:
wait() and signal(). wait() decreases the value of the semaphore, and signal() increases
it. If a process tries to wait() on a semaphore that is already zero, it gets blocked until the
semaphore becomes positive. Semaphores are widely used in process synchronization,
mutual exclusion, and avoiding deadlocks in operating systems.​
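A minimal sketch of wait() and signal() in practice, using POSIX semaphores (sem_wait plays the role of wait() and sem_post the role of signal(); the two-thread shared counter is an assumed example, not part of the definition above):

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    sem_t mutex;        /* semaphore used for mutual exclusion, initial value 1 */
    long counter = 0;   /* shared resource                                      */

    void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            sem_wait(&mutex);   /* wait(): decrement, block if the value is 0  */
            counter++;          /* critical section                            */
            sem_post(&mutex);   /* signal(): increment, wake a waiting process */
        }
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        sem_init(&mutex, 0, 1);              /* binary use of a counting semaphore */
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld\n", counter);  /* always 200000 with the semaphore   */
        sem_destroy(&mutex);
        return 0;
    }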

7.​ What do you mean by Rollback?​


—> Rollback is a recovery technique used in database systems and operating systems
to undo or reverse a set of operations when a failure or error occurs. When a process or
transaction fails, the system rolls back all changes made by that process to bring the
system back to a consistent state. Rollback ensures that partial operations do not corrupt
the system's integrity. In transaction management, rollback is used to undo changes if a
transaction cannot be completed successfully. It plays a critical role in error recovery,
deadlock handling, and maintaining the consistency of data. Rollback helps systems
remain stable even in the case of unexpected failures.

8.​ What is meant by Address Binding?​


—> Address Binding refers to the process of mapping logical addresses generated by a
program to physical addresses in memory. Programs typically use logical addresses
because they are independent of physical memory locations. Address binding can
happen at different stages: compile-time, load-time, or execution-time. If binding is done
at compile-time, then the final memory location must be known beforehand. Load-time
binding allows flexibility by assigning memory addresses when the program is loaded
into memory. Execution-time binding provides the most flexibility by allowing addresses
to change during program execution, using hardware support like the Memory
Management Unit (MMU). Address binding is essential for efficient memory
management.​

9.​ List various operations on File.​


—> Various operations can be performed on files in an operating system. These include
creating a new file, opening an existing file, and reading data from a file. Users can also
write new data to a file or append data to the end of a file. Files can be deleted when
they are no longer needed. Another operation is repositioning within a file (also known
as seek), which moves the file pointer to a specific location. Files can also have
permissions changed, such as read, write, and execute access. Lastly, copying or
renaming files are also common file operations handled by the operating system.​

10.​What do you mean by Deadlock?​


—> Deadlock is a situation in operating systems where two or more processes are
unable to proceed because each process is waiting for a resource that the other process
holds. In simple words, deadlock is a standstill where processes are stuck, causing the
system to freeze or become unresponsive. Deadlock occurs when four conditions are
true simultaneously: mutual exclusion, hold and wait, no preemption, and circular wait. It
can happen in resource allocation like printer usage, file locks, or memory access.
Deadlock prevention, avoidance, detection, and recovery techniques are used by
operating systems to handle such problems and maintain system stability.​

11.​What is Context Switch?​


—> A context switch is the process of saving the state of a currently running process so
that the CPU can switch to executing another process. It involves storing the process’s
information such as program counter, registers, and memory management information
into its Process Control Block (PCB) and loading the information of the next process to
be executed. Context switching is essential for multitasking, allowing multiple processes
to share the CPU effectively. However, it adds overhead to the system because time is
spent saving and loading process states instead of doing productive work. Fast context
switching improves system responsiveness and performance.​

12.​What is Page Frame?​


—> A page frame is a fixed-size block of physical memory into which pages from the
logical memory of a process are loaded. In a paging memory management system, both
main memory and logical memory are divided into blocks of the same size — logical
blocks are called pages, and physical blocks are called frames. When a process is
executed, its pages are loaded into any available page frames in memory. The operating
system maintains a page table to map pages to frames. Page frames help efficiently
utilize memory and reduce fragmentation, allowing processes to be executed even if
they are not entirely loaded into memory.​

13.​What is meant by rotational latency in disk scheduling?​


—> Rotational latency refers to the time delay experienced while waiting for the desired
sector of a disk to rotate under the read/write head. It is an important component of total
disk access time, along with seek time and transfer time. The disk continuously spins,
and after the read/write head is positioned over the correct track, it must wait for the
correct sector to appear under it. Faster disk rotation speeds result in lower rotational
latency. Disk scheduling algorithms such as SSTF (Shortest Seek Time First) and SCAN mainly reduce seek time; together with rotational latency and transfer time, this determines the overall disk access time.​

14.​Define the critical section.​


—> A critical section is a part of a program where shared resources such as data
structures, files, or devices are accessed. Since multiple processes may try to access
shared resources simultaneously, critical sections must be carefully managed to prevent
race conditions. Only one process should be allowed in the critical section at a time to
ensure data consistency and system stability. Entry and exit from the critical section
must be controlled using synchronization mechanisms like mutexes, semaphores, or
monitors. Proper handling of critical sections is essential in concurrent programming and
operating system design to maintain reliable and predictable system behavior.​

15.​State Belady’s anomaly.​


—> Belady’s Anomaly is a surprising phenomenon in operating systems where
increasing the number of page frames results in an increase in the number of page
faults, instead of reducing them. This behavior is generally observed in certain page
replacement algorithms like FIFO (First In, First Out). Normally, it is expected that more
memory should decrease page faults, but due to the nature of some algorithms, certain
memory access patterns cause worse performance. Belady’s Anomaly demonstrates
that not all page replacement algorithms behave intuitively and highlights the need for
more efficient algorithms like LRU (Least Recently Used) and Optimal Page
Replacement.

16.​List any 4 characteristics of an Operating system.​


—> An operating system has several important characteristics. ​
- Firstly, it acts as a resource manager by managing hardware and software resources.
- Secondly, it provides a user interface that allows interaction between the user and the
hardware. ​
- Thirdly, it enables multitasking, allowing multiple programs to run simultaneously by
efficiently sharing CPU time. ​
- Fourthly, the OS ensures system security by protecting data and resources from
unauthorized access. ​
​ In addition to these, it handles file management, error detection, device control,
and communication between different software and hardware components, making it an
essential part of any computing system.

17.​What is the role of an Operating System?​


—> The operating system plays a crucial role as the intermediary between the user and
the computer hardware. It manages all hardware resources such as the CPU, memory,
storage devices, and input/output devices. It provides an environment where users can
execute programs conveniently and efficiently. The OS handles tasks like process
management, memory management, device management, and file management. It
ensures security, provides networking capabilities, and manages system errors. By
abstracting the complexities of the hardware, the operating system enables users and
application programs to interact with the computer system easily and reliably.

18.​Explain Operating system structure.​


—> The structure of an operating system (OS) is critical in determining how efficiently it
can manage hardware resources and execute various system tasks. An operating
system can be structured in several ways, depending on its functionality and design
goals. Some common structures include monolithic, microkernel, layered, and modular
designs.
1.​ Monolithic Structure: In a monolithic OS, the entire operating system runs as a single
program in kernel space. It has direct access to hardware and system resources, and all
OS services, such as process management, file management, and device handling, are
tightly integrated. This makes monolithic systems efficient but difficult to maintain and
scale, as any changes to one component can potentially affect the entire system.​

2.​ Microkernel Structure: Microkernel architecture divides the operating system into
smaller, isolated components, with only essential services like memory management,
process scheduling, and communication provided in the kernel. Other services, such as
device drivers and file systems, are implemented as user-level processes. This structure
improves system stability and modularity, but communication between components can
incur overhead, making it slower than monolithic systems.​

3.​ Layered Structure: In a layered OS, the system is divided into layers, each providing
services to the layer above it while receiving services from the layer below. The top layer
is the user interface, and the bottom layer interacts directly with the hardware. This
structure improves modularity and maintainability, as each layer can be developed and
tested independently, but it might introduce performance inefficiencies due to inter-layer
communication.​

4.​ Modular Structure: In a modular OS, the kernel is designed as a collection of modules,
where each module is responsible for a specific task (e.g., file system, network
protocols). These modules can be loaded or unloaded dynamically based on the
system’s needs. This structure allows flexibility and extensibility, providing a balance
between performance and maintainability.​

19.​Explain ‘Dining Philosopher’ synchronization problem.​


—> The Dining Philosophers Problem is a classic synchronization problem used to
demonstrate the challenges of managing concurrency in multi-threaded systems,
particularly when sharing limited resources. The problem involves five philosophers
sitting at a round table. Each philosopher thinks deeply and occasionally gets hungry. To
eat, they need two forks placed between them and their neighbors. However, there are
only five forks available, and each philosopher needs both forks to eat. The key
challenge is to develop a strategy that allows all philosophers to eat without leading to a
deadlock or a situation where some philosophers starve because others are always
eating.
The problem highlights several issues:

1.​ Deadlock: If all philosophers simultaneously pick up one fork and wait for the second
one, no one will ever be able to eat. This situation is called a deadlock.​

2.​ Starvation: If one philosopher is constantly unable to acquire both forks due to the
actions of others, they may starve while others eat.​

To solve the Dining Philosophers Problem, different synchronization techniques are used:
1.​ Resource Hierarchy Solution: Philosophers pick up the lower-numbered fork first,
which prevents circular wait conditions. This strategy ensures that deadlock does not
occur but still allows for resource sharing.​

2.​ Locking and Semaphores: Another solution involves using semaphores or mutexes to
manage the fork resources. Philosophers must acquire a lock on both forks before eating
and release them afterward.​

3.​ Chandy/Misra Solution: This approach uses a token system and can avoid both
deadlock and starvation, allowing philosophers to eat in a fair manner.​
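A compressed sketch of the resource-hierarchy idea from solution 1, using one pthread mutex per fork; the number of eating rounds is an assumed value used only for illustration:

    #include <pthread.h>
    #include <stdio.h>

    #define N 5
    pthread_mutex_t fork_lock[N];

    void *philosopher(void *arg) {
        int id = *(int *)arg;
        int left = id, right = (id + 1) % N;
        int first  = left < right ? left : right;    /* lower-numbered fork first     */
        int second = left < right ? right : left;    /* higher-numbered fork second   */
        for (int round = 0; round < 3; round++) {
            pthread_mutex_lock(&fork_lock[first]);   /* fixed acquisition order ...   */
            pthread_mutex_lock(&fork_lock[second]);  /* ... so no circular wait forms */
            printf("Philosopher %d is eating\n", id);
            pthread_mutex_unlock(&fork_lock[second]);
            pthread_mutex_unlock(&fork_lock[first]);
        }
        return NULL;
    }

    int main(void) {
        pthread_t t[N];
        int id[N];
        for (int i = 0; i < N; i++) pthread_mutex_init(&fork_lock[i], NULL);
        for (int i = 0; i < N; i++) { id[i] = i; pthread_create(&t[i], NULL, philosopher, &id[i]); }
        for (int i = 0; i < N; i++) pthread_join(t[i], NULL);
        return 0;
    }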

20.​Explain different methods for recovery from a deadlock.​


—> Deadlock recovery is essential in an operating system to ensure that processes can
continue their execution even when a deadlock occurs. There are several methods for
recovery from a deadlock, each designed to break the cycle of resource contention and
allow processes to proceed. These methods can be broadly categorized into four main approaches: process termination, resource preemption, combined strategies, and timeouts.
1.​ Process Termination:​
In this method, one or more processes involved in the deadlock are terminated to break
the circular wait. This can be done in two ways:​
a) Abort all deadlocked processes: This is the simplest approach but can lead to
significant overhead, especially if many processes are involved in the deadlock.​
b) Abort one process at a time: This is a more selective approach, where the system
terminates one process at a time and checks if the deadlock is resolved after each
termination. The process that is selected for termination could be chosen based on
criteria such as priority, resource usage, or the amount of computation completed.​

2. Resource Preemption: Resource preemption involves temporarily taking resources from processes involved in
the deadlock and reallocating them to other processes to allow progress. This can be
done in two ways:​
Preempt resources from processes: A process’s resources (e.g., memory, CPU time)
are taken away and given to other processes that are not part of the deadlock cycle.
Preempted processes are then put on hold until their required resources are available
again.​
Rollback: In some cases, the system might choose to roll back a process to a safe state
where the resources it was using are released, and the process can be restarted.​

3. Combination of Methods: In some cases, a combination of process termination and resource preemption is used to handle deadlock recovery more effectively. For instance,
the system may first terminate the least important processes and then preempt
resources from remaining processes to avoid further deadlock occurrences.​

4. Timeouts: In some systems, a process is automatically aborted or rolled back if it is waiting too long for resources. This approach helps in systems where deadlock
situations are rare but still possible.​

21.​What is Fragmentation? Explain types of Fragmentation in detail.​


—> Fragmentation refers to the inefficient use of memory that occurs when free memory
is broken into small, non-contiguous blocks, making it difficult to allocate large blocks of
memory to processes. There are two main types of fragmentation: internal fragmentation
and external fragmentation.
1. Internal Fragmentation: Internal fragmentation occurs when memory is allocated in
fixed-sized blocks or pages, and a process does not use the entire block or page
assigned to it. This leads to wasted memory within the allocated block. For example, if a
system allocates a 4 KB memory block to a process that only requires 2 KB, the
remaining 2 KB becomes wasted and cannot be used by other processes. While the
memory block is technically available, its unused portion remains inaccessible. Internal
fragmentation is typically addressed by using smaller allocation units or better memory
management strategies like paging.​

2.​ External Fragmentation: External fragmentation occurs when free memory is scattered
across different locations in the system, but there is insufficient contiguous space to
allocate memory for a new process. Even though there may be enough total free
memory, the lack of a contiguous block means that the operating system cannot allocate
memory efficiently. External fragmentation is most commonly found in systems using
dynamic memory allocation strategies, such as those with variable-sized memory blocks.
One approach to combat external fragmentation is memory compaction, where the OS
reorganizes memory so that free space is gathered into one large block.​

22.​List and explain system calls related to Process and Job control.​
—> System calls are the interface between a running program and the operating system
kernel, allowing programs to request various services. Process and job control system
calls are particularly important in managing the lifecycle of processes and their execution
on the CPU. Some of the common system calls related to process and job control
include:
1.​ fork(): The fork() system call is used to create a new process. It duplicates the calling
process, creating a child process. The child process is identical to the parent process
except for the process ID. After a fork(), both the parent and child processes continue
execution from the point where fork() was called, and each can execute different tasks.​

2. exec(): The exec() system call replaces the current process image with a new process
image. When a process calls exec(), it loads a new program into its address space,
effectively replacing the existing program with a new one. This is commonly used after
fork() to run a different program in the child process.​

3. wait(): The wait() system call is used by a parent process to wait for its child process to
finish execution. It ensures that the parent process can obtain information about the
child’s termination status, such as whether it terminated normally or was terminated by a
signal. This is important for proper process synchronization.​

4.​ exit(): The exit() system call is used by a process to terminate itself. It takes an integer
exit status code, which is returned to the parent process. This helps the parent process
understand the reason for the child’s termination (whether it completed successfully or
encountered an error).​

5.​ kill(): The kill() system call sends a signal to a process to request its termination or to
notify it of an event. The process may choose to handle the signal in various ways, such
as terminating, ignoring it, or handling the signal with a custom handler. Signals are a
form of inter-process communication (IPC).​

6.​ nice(): The nice() system call is used to adjust the priority of a process. It can increase
or decrease the "niceness" of a process, influencing how much CPU time the process
gets compared to other processes. This allows for better control over process scheduling
and resource allocation.​
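A minimal sketch showing fork(), exec(), wait(), and exit() working together; running the ls command in the child is just an assumed example:

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();                          /* create a child process                  */
        if (pid < 0) {
            perror("fork");
            exit(1);
        } else if (pid == 0) {
            execlp("ls", "ls", "-l", (char *)NULL);  /* replace the child's image               */
            perror("execlp");                        /* reached only if exec fails              */
            exit(1);
        } else {
            int status;
            wait(&status);                           /* parent waits for the child to terminate */
            printf("Child %d finished with status %d\n", (int)pid, WEXITSTATUS(status));
        }
        return 0;
    }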

23.​State and explain Critical Section Problem.​


—> The Critical Section Problem is a fundamental problem in the field of concurrent
programming and process synchronization. It arises when multiple processes share
resources (such as memory, CPU, or files) and need to execute critical sections of their
code that access these resources. A critical section refers to a part of a process that
accesses shared resources and must not be executed concurrently by more than one
process to avoid inconsistent or incorrect results. The problem is to design a mechanism
that ensures mutual exclusion, meaning only one process can execute its critical section
at any given time.
The Critical Section Problem consists of three primary requirements:
1.​ Mutual Exclusion: Only one process can be inside its critical section at any time. If one
process is executing in its critical section, others must wait for it to finish before entering
their critical sections.​

2.​ Progress: If no process is in its critical section and one or more processes wish to enter
their critical section, the selection of the process to enter the critical section must be
made without unnecessary delay. It should not be blocked forever.​

3.​ Bounded Waiting: There must be a limit to how many times other processes are
allowed to enter the critical section before a particular process gets its turn to execute.
This prevents starvation, where a process is indefinitely postponed in favor of others.​
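One classical software solution that satisfies these three requirements for two processes is Peterson's algorithm. The sketch below is a simplified illustration using two threads and a shared counter as the assumed shared resource; it relies on C11 sequentially consistent atomics so the store/load ordering needed by the algorithm is preserved:

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>

    atomic_bool flag[2];   /* flag[i]: thread i wants to enter its critical section */
    atomic_int turn;       /* whose turn it is to wait                              */
    long counter = 0;      /* shared resource                                       */

    void *worker(void *arg) {
        int i = *(int *)arg, j = 1 - i;
        for (int k = 0; k < 100000; k++) {
            atomic_store(&flag[i], true);    /* entry section: I want to enter      */
            atomic_store(&turn, j);          /* but let the other thread go first   */
            while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
                ;                            /* busy-wait; waiting is bounded       */
            counter++;                       /* critical section (mutual exclusion) */
            atomic_store(&flag[i], false);   /* exit section                        */
        }
        return NULL;
    }

    int main(void) {
        pthread_t t0, t1;
        int id0 = 0, id1 = 1;
        pthread_create(&t0, NULL, worker, &id0);
        pthread_create(&t1, NULL, worker, &id1);
        pthread_join(t0, NULL);
        pthread_join(t1, NULL);
        printf("counter = %ld\n", counter);  /* always 200000: no update is lost    */
        return 0;
    }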

24.​Explain different methods for recovery from deadlock.​


—> Deadlock occurs in a system when a set of processes are blocked because each
process is holding a resource and waiting for another resource held by another process.
Recovery from deadlock involves breaking the deadlock cycle and allowing processes to
continue. The main methods for deadlock recovery are:
1.​ Process Termination:​
Abort all deadlocked processes: In this approach, all processes involved in the
deadlock are terminated. This is a simple method but can cause significant loss of work
and resources.​
Abort one process at a time: Rather than aborting all processes, one process is
terminated at a time, and the system checks whether the deadlock is resolved after each
termination. This allows more selective recovery.
2.​ Resource Preemption:​
Preempt resources from processes: This method involves taking resources from one
or more processes involved in the deadlock and allocating them to other processes.
Preempted processes are typically rolled back to a safe state and restarted later.​
Rollback: A process is rolled back to a point before it enters the deadlock, and its
resources are freed. This allows the system to break the deadlock cycle and continue
processing.
3.​ Combination of Methods: A combination of process termination and resource
preemption is often used to resolve deadlocks more efficiently. The system may first
terminate one or more processes to reduce the complexity, and then preempt resources
to ensure smooth progress.
4.​ Timeouts: In some systems, a process may be automatically aborted or rolled back if it
waits too long for a resource. The operating system uses a timeout mechanism to detect
deadlock conditions and recover from them.​

25.​What is meant by Shortest Seek Time First? Explain in detail.​


—> Shortest Seek Time First (SSTF) is a disk scheduling algorithm used to minimize the
total seek time of a disk arm by selecting the next disk request that is closest to the
current position of the disk head. The goal is to reduce the time the disk arm spends
moving between different requests, thereby increasing the overall performance of the
disk.
How SSTF Works:
1.​ Request Queue: When a disk request is made, it is placed in a queue. Each request
specifies the cylinder (track) on which data is to be read or written.​

2.​ Disk Head Position: The disk has a read/write head that moves across a series of
concentric tracks on a disk. The current position of the disk head is noted.
3.​ Selecting the Next Request: The SSTF algorithm selects the request that is closest to
the current position of the disk head. This minimizes the distance the head has to travel
and reduces seek time.
4.​ Execution: After serving the closest request, the disk head moves to the corresponding
cylinder and serves the next closest request, and the process repeats.​
Example:
Consider a disk with 100 cylinders and the following disk request queue: 40, 10, 60, 90, 20.
●​ Assume the disk head is currently at cylinder 50.
●​ The nearest request is at cylinder 40, so the head will move there first.
●​ After serving the request at 40, the nearest request is at 60, and so on.
Advantages of SSTF:
1.​ Reduces average seek time as it minimizes the distance between consecutive requests.
2. Gives better throughput than FCFS (First-Come-First-Serve), as it prioritizes requests based on proximity to the current head position.
Disadvantages of SSTF:
1.​ Starvation: Some requests can be delayed indefinitely if there are always closer requests
to the current position of the disk head.
2.​ Non-Optimal: SSTF does not guarantee the minimum total seek time in the long run, as
it may result in the head moving back and forth over a limited portion of the disk.​
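The behaviour described above can be reproduced with a small simulation; the sketch below uses the example queue (40, 10, 60, 90, 20) and the initial head position 50 from this answer:

    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        int req[] = { 40, 10, 60, 90, 20 };          /* request queue from the example */
        int n = sizeof req / sizeof req[0];
        bool served[5] = { false };
        int head = 50, total = 0;

        for (int k = 0; k < n; k++) {
            int best = -1;
            for (int i = 0; i < n; i++)              /* pick the closest pending request */
                if (!served[i] && (best < 0 || abs(req[i] - head) < abs(req[best] - head)))
                    best = i;
            total += abs(req[best] - head);
            head = req[best];
            served[best] = true;
            printf("Serve cylinder %d (movement so far: %d)\n", head, total);
        }
        printf("Total head movement = %d cylinders\n", total);
        return 0;
    }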

26.​Define the terms : ​


1. Logical Address - A logical address is the address generated by the CPU during a
program's execution. It is also referred to as a virtual address because it is part of the
program's address space. The logical address does not correspond directly to a physical
location in memory but is instead mapped to a physical address by the memory
management unit (MMU). Logical addresses are used by programs to access variables,
functions, and other data.​
2. Physical Address - A physical address refers to an actual location in the system's
main memory (RAM). It represents a physical address on the hardware and corresponds
to a specific location in memory where data is stored. When a program accesses
memory, the logical address is translated to a physical address by the operating
system’s memory manager, which then retrieves the data from the appropriate physical
location in memory.​

27.​Explain Resource Allocation Graph in detail.​


—> A Resource Allocation Graph (RAG) is a directed graph used to represent the
allocation of resources in a system and the processes that are requesting them. It is a
graphical tool that helps detect and prevent deadlock by visually showing how resources
are allocated and requested by processes.




Components of a Resource Allocation Graph:
1.​ Processes: Represented by circles (nodes) in the graph. Each process is connected to
the resources it holds and requests.
2.​ Resources: Represented by squares (nodes) in the graph. Each resource is connected
to the processes that request it and those that currently hold it.
3.​ Edges (Arrows):
Request Edge: An arrow from a process node to a resource node indicates that the process is
requesting the resource.
Assignment Edge: An arrow from a resource node to a process node indicates that the
resource has been allocated to that process.​

How RAG Helps Detect Deadlock:


● Deadlock: If every resource has only a single instance, a deadlock occurs if and only if there is a cycle in the Resource Allocation Graph; when resources have multiple instances, a cycle is necessary but not sufficient for deadlock. A cycle indicates that a set of processes are each holding resources while waiting for other resources held by the other processes in the cycle.
●​ No Deadlock: If there is no cycle in the graph, the system is in a safe state and
deadlock has not occurred.​

Example:
Consider a system with two processes (P1 and P2) and two resources (R1 and R2).
●​ Process P1 requests resource R1, and Process P2 requests resource R2.
●​ Resource R1 is allocated to P1, and resource R2 is allocated to P2.
●​ Now, if P1 requests R2 and P2 requests R1, the graph will form a cycle, indicating a
deadlock situation.
Deadlock Prevention using RAG:
●​ If the Resource Allocation Graph is acyclic, it indicates that no deadlock can occur.
●​ The graph can also be used for deadlock avoidance by ensuring that allocation requests
that could lead to cycles are not granted.​

28.​What are differences between Preemptive and Non-Preemptive scheduling?​


—> Preemptive Scheduling and Non-Preemptive Scheduling are two types of CPU
scheduling algorithms used by operating systems to manage the execution of processes.
They differ in how processes are interrupted or allowed to continue their execution.
Preemptive Scheduling:
1.​ Definition: In preemptive scheduling, the operating system can suspend or "preempt" a
running process in favor of another process, even if the first process has not finished its
execution. This happens based on a predefined policy or when a higher-priority process
becomes ready to execute.​

2.​ Interruptibility: Processes can be interrupted during their execution and rescheduled by
the OS, leading to more responsive systems, especially for time-sharing.​

3.​ Context Switching: Preemptive scheduling involves frequent context switching, as the
system can switch between processes before the current process has completed its
execution.​

4.​ Fairness: This type of scheduling ensures fairness as all processes, regardless of their
length, get a chance to execute.​

5.​ Examples: Round Robin (RR), Shortest Job First (SJF) with preemption, and Priority
Scheduling are common preemptive scheduling algorithms.​

6. Efficiency: Preemptive scheduling is suitable for systems where responsiveness is crucial, such as interactive or real-time systems.​

7.​ Complexity: The operating system needs to manage additional complexity due to
frequent context switching and resource allocation.​

Non-Preemptive Scheduling:
1.​ Definition: In non-preemptive scheduling, once a process starts executing, it runs to
completion unless it voluntarily yields control, such as when it is waiting for I/O
operations.​

2.​ Interruptibility: Processes cannot be interrupted mid-execution; they are allowed to run
until they complete or switch to a waiting state.​

3. Context Switching: Non-preemptive scheduling results in less frequent context switching since a process is only switched out when it finishes execution or enters a
waiting state.​

4.​ Fairness: Processes that are CPU-bound can monopolize the CPU, causing other
processes to wait, leading to possible starvation.​

5.​ Examples: First-Come-First-Serve (FCFS), Shortest Job First (SJF) without preemption.​

6. Efficiency: Non-preemptive scheduling is simpler but may be inefficient in situations where a process hogs the CPU and causes delays for others.​

7.​ Complexity: This method is simpler to implement and requires less overhead than
preemptive scheduling.​

29.​‘Operating system is like a manager of the computer system’. Explain.​


—> The statement "Operating System is like a manager of the computer system" is a
metaphor that helps explain the critical role the operating system (OS) plays in managing
and coordinating the resources and processes in a computer system. Just as a manager oversees and coordinates various tasks in an organization to ensure smooth operation, the OS does the same within a computer.
1.​ Resource Management: Like a manager allocates tasks and resources to employees
based on priorities, the OS allocates system resources (like CPU time, memory, and I/O
devices) to different processes based on their needs and priorities.​

2.​ Process Coordination: The OS manages processes, ensuring that they are executed in
the correct order, prevents conflicts, and allows multiple processes to run concurrently
without interfering with each other. This is similar to a manager delegating tasks to
workers and ensuring everyone is working efficiently.​

3.​ Scheduling: Just as a manager decides when and how tasks should be performed, the
OS schedules processes for execution, prioritizing critical tasks while ensuring that less
important tasks get executed as well.​

4.​ Security and Access Control: Like a manager protects the integrity of the company by
controlling access to sensitive information, the OS secures the system by managing user
access, controlling file permissions, and ensuring that data is protected from
unauthorized access.​

5.​ Error Handling: A manager is responsible for resolving conflicts and addressing any
issues that arise in the workplace. Similarly, the OS handles errors, manages system
faults, and ensures that programs continue to run smoothly even in case of failures.​

6.​ User Interface: Just as a manager communicates with employees and stakeholders, the
OS provides the user interface, allowing users to interact with the system through
command-line interfaces or graphical user interfaces (GUI).​

7.​ Efficiency and Optimization: A manager aims to improve the efficiency of the
workplace, just as the OS optimizes the system's performance by managing memory,
CPU usage, and I/O operations effectively.​

30.​What is Scheduling? Compare short term scheduler with medium term scheduler.​
—> Scheduling is the process by which the operating system decides which process or
task should be executed by the CPU at any given time. It is a crucial function of the OS,
enabling multitasking and the efficient use of system resources. The scheduler
determines the order of execution for processes and manages the execution flow based
on scheduling algorithms.
There are different types of schedulers used in an operating system, and two of the most
important are the short-term scheduler and medium-term scheduler.
Short-Term Scheduler (CPU Scheduler):
1.​ Function: The short-term scheduler is responsible for selecting processes from the
ready queue to execute on the CPU. It decides which process will run next when the
CPU becomes available.​

2.​ Frequency: The short-term scheduler is invoked frequently, typically many times per
second (milliseconds).​

3.​ Action: It selects processes that are in the ready state and allocates CPU time to them
based on the chosen scheduling algorithm (e.g., Round Robin, FCFS, SJF).​

4.​ Duration: The duration of the scheduling decision is very short (milliseconds to
seconds).​

5.​ Goals: The short-term scheduler aims to maximize CPU utilization, reduce wait times for
processes, and ensure fair distribution of CPU time among processes.​

6.​ Example: In a time-sharing system, the short-term scheduler ensures that each user or
process gets a fair share of the CPU time, allowing interactive use of the system.​

Medium-Term Scheduler:
1.​ Function: The medium-term scheduler is responsible for managing processes that are
in the swapped out or blocked state, and it decides which processes should be swapped
in or out of the main memory.​

2.​ Frequency: The medium-term scheduler is invoked less frequently than the short-term
scheduler but more often than the long-term scheduler.​

3.​ Action: It handles swapping processes between the main memory and secondary
storage (usually a disk) to optimize memory usage. If a process is not currently active, it
may be swapped out to free up memory for other processes.​

4.​ Duration: The medium-term scheduler runs at a slower pace than the short-term
scheduler (minutes or longer).​

5.​ Goals: The goal of the medium-term scheduler is to improve system performance by
balancing memory usage and ensuring that the most critical or active processes remain
in memory for quick execution.​

6.​ Example: In systems with limited physical memory, the medium-term scheduler might
swap out a background process to make room for a high-priority task, such as a user
request.​

Comparison:
●​ Focus: The short-term scheduler focuses on CPU scheduling, while the medium-term
scheduler deals with memory management and swapping processes in and out of the
main memory.
●​ Time Interval: The short-term scheduler is called much more frequently (milliseconds),
while the medium-term scheduler operates at a less frequent pace (minutes).
●​ Interaction with Processes: The short-term scheduler works with processes in the ready
queue, whereas the medium-term scheduler handles processes that are not actively
running but may need to be swapped into memory.
●​ Impact on System: The short-term scheduler has an immediate effect on the system’s
responsiveness and CPU utilization, whereas the medium-term scheduler has a
longer-term effect on overall system performance and memory management.​

31.​Draw and explain process control block (PCB).​


—> A Process Control Block (PCB) is a vital data structure maintained by the operating
system for every process. It contains important information that is needed to manage the
execution of processes effectively. Whenever a process is created, the OS creates a
corresponding PCB to store all necessary data for process management, scheduling,
and execution control. The PCB is crucial during context switching because it allows the
operating system to save and restore process information when switching between
processes.
The main components of a PCB are:
●​ Pointer: It points to the next PCB in the list. It is used to maintain a list of all PCBs.
●​ Process State: Indicates the current state of the process, such as New, Ready, Running,
Waiting, or Terminated.
●​ Process Number (PID): A unique identification number assigned to each process.
●​ Program Counter: Contains the address of the next instruction that needs to be executed
by the process.
●​ CPU Registers: Includes all processor registers that the process was using before it was
interrupted.
●​ Memory Limits: Information about the memory allocated to the process, such as base
and limit registers.
●​ List of Open Files: Includes information about files opened by the process like file
descriptors or pointers to file tables.​

Thus, the PCB acts like the "identity card" of a process, carrying all essential information for
managing a process during its entire life cycle. It is saved when the process is not running and
restored when the process is scheduled again. PCBs are stored in a special part of memory and
accessed frequently by the scheduler and resource manager of the operating system.
Neat Diagram of PCB:

+----------------------------------+
| Pointer         | Process State  |
+----------------------------------+
| Process Number (PID)             |
+----------------------------------+
| Program Counter                  |
+----------------------------------+
| CPU Registers                    |
+----------------------------------+
| Memory Limits                    |
+----------------------------------+
| List of Open Files               |
+----------------------------------+
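The PCB can also be pictured as a structure in C; the sketch below is an assumed, simplified layout (field names and array sizes are illustrative and do not correspond to any particular kernel):

    enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

    struct pcb {
        struct pcb     *next;            /* pointer to the next PCB in the scheduler's list */
        enum proc_state state;           /* current process state                           */
        int             pid;             /* process number                                  */
        unsigned long   program_counter; /* address of the next instruction to execute      */
        unsigned long   registers[16];   /* saved CPU registers (illustrative count)        */
        unsigned long   mem_base;        /* memory limits: base register                    */
        unsigned long   mem_limit;       /* memory limits: limit register                   */
        int             open_files[16];  /* open file descriptors of the process            */
    };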

32.​Compare multiprogramming with a multiprocessing system.​


—> Multiprogramming and multiprocessing are two distinct concepts in operating system
design, both aimed at improving the overall performance and efficiency of a system.
While they share the objective of maximizing resource utilization and system throughput,
they differ in how they achieve this. Below is a detailed comparison of both concepts:
Multiprogramming:
1.​ Definition: Multiprogramming is a technique that allows multiple programs (or
processes) to be loaded into memory and executed by the CPU. The operating system
manages these processes by switching between them to ensure the CPU is always
utilized, even if one process is waiting for I/O operations.​

2.​ CPU Utilization: In a multiprogramming system, multiple processes are loaded into
memory, but only one process can run at any given time on a single CPU. The CPU is
kept busy by switching between processes, minimizing idle time.​

3.​ Execution: The CPU executes one process at a time, and when a process requires I/O,
the CPU switches to another process, allowing it to continue executing. This technique is
primarily used in systems with a single processor.​

4.​ Memory Management: The operating system uses various memory management
techniques, such as partitioning or paging, to keep multiple programs in memory. The
OS ensures that there is enough memory for all the programs and that they don't
overwrite each other.​

5.​ Concurrency: Multiprogramming does not achieve true parallel execution. Although
multiple programs appear to be running simultaneously, they are actually taking turns on
the CPU (time-sharing), with only one process running at a time.​

6.​ Example: Traditional mainframe systems or early personal computers that run multiple
programs, where the OS switches between programs to maintain a high level of CPU
utilization.​

7.​ Advantages:
○​ Increases CPU utilization by keeping the processor busy.
○​ Reduces CPU idle time by overlapping I/O and computation.
○​ Improves system throughput by running multiple programs.​

8.​ Disadvantages:
○​ No true parallelism; only one process runs at a time.
○​ CPU time is shared, which might lead to slower individual process execution.
○​ Managing multiple programs requires efficient memory and resource allocation
techniques.
Multiprocessing:
1.​ Definition: Multiprocessing is a system that uses more than one CPU (or processor) to
execute multiple processes simultaneously. It involves multiple processors working
together to execute different parts of a program or different programs concurrently,
achieving true parallelism.​

2.​ CPU Utilization: Multiprocessing systems utilize multiple CPUs or cores, which allows
them to execute processes in parallel. This means that multiple processes can run at the
same time on different processors, leading to improved system performance and faster
execution times.​

3. Execution: In a multiprocessing system, multiple processes can be executed simultaneously on different CPUs, leading to true concurrency. These systems can
perform real-time operations, as processes are distributed across processors, allowing
for more efficient handling of tasks.​

4.​ Memory Management: Each processor in a multiprocessing system may have its own
local memory (in some cases) or share a common global memory. The operating system
must ensure that memory access is properly managed and synchronized across the
multiple processors.​

5. Concurrency: Multiprocessing achieves true parallel execution by utilizing multiple processors. Multiple tasks or parts of a task can run in parallel, leading to faster
execution and improved performance, especially for compute-intensive tasks.​

6. Example: Modern multi-core processors in personal computers, supercomputers, and server systems that support multiple CPUs or cores working together to execute tasks.​

7.​ Advantages:
○​ True parallelism leads to faster processing and better performance.
○​ Can handle more intensive and computationally heavy workloads by distributing
tasks across processors.

○​ Ideal for real-time systems, scientific computing, and other applications requiring
high processing power.​

8.​ Disadvantages:
○​ Requires a more complex system architecture and sophisticated operating
system support.
○​ The system must handle synchronization between processors and memory
management.
○​ Not all applications are designed to take advantage of multiprocessing, meaning
some processes may not benefit from multiple CPUs.​

33.​Draw and explain the process state diagram.​


—> In an operating system, a process passes through different states from its creation
to its completion. These states help the operating system manage the execution of
processes efficiently. The transition between these states is controlled by the process
scheduler, depending on various events like scheduling, I/O operations, and interrupts.
The major states of a process are:
●​ New: The process is being created.
●​ Ready: The process is waiting to be assigned to a processor.
●​ Running: Instructions are being executed on the processor.
●​ Waiting (Blocked): The process is waiting for some event (such as I/O completion) to
occur.
●​ Terminated: The process has finished execution.​

The transitions between these states occur as follows:


●​ From New to Ready when the process is admitted into the system.
●​ From Ready to Running when the scheduler selects the process for execution.
●​ From Running to Waiting if the process needs to wait for a resource.
●​ From Running to Ready if it is preempted by the scheduler for time-sharing systems.
●​ From Waiting to Ready when the event for which the process was waiting occurs.
●​ From Running to Terminated when the process completes its execution.​

These states and transitions ensure that processes are efficiently managed, and CPU utilization
is maximized by quickly switching between processes whenever necessary.

Process State Diagram:

34.​Compare internal and external fragmentation.​


—> Internal fragmentation and external fragmentation are two types of memory wastage
problems that occur in memory management. Both affect the efficient utilization of
memory resources but happen due to different reasons.
Internal fragmentation occurs when memory is allocated in fixed-sized blocks and the
allocated memory may be slightly larger than the requested memory. The unused space within
the allocated block leads to internal fragmentation. In other words, memory is wasted inside an
allocated partition because of the difference between the allocated and the actual required
memory.
External fragmentation occurs when free memory is separated into small blocks and is
scattered throughout the system. When memory is allocated and deallocated dynamically, small
holes or gaps are created. Even though there may be enough total free memory to satisfy a
request, it may not be contiguous, leading to failure of allocation.
Key differences: Internal fragmentation happens inside a block, whereas external
fragmentation happens between blocks. Internal fragmentation generally occurs in systems with
fixed-sized partitioning, while external fragmentation occurs in systems with dynamic memory
allocation.
Example: In internal fragmentation, if a process requires 27 KB and is allocated a 32 KB block,
5 KB remains unused. In external fragmentation, after many allocations and deallocations, free
spaces like 4 KB, 8 KB, 2 KB may not fit a 10 KB process even if total free space is available.
Thus, both types of fragmentation decrease memory efficiency, but are managed using
techniques like paging (to eliminate external fragmentation) and better partitioning schemes (to
reduce internal fragmentation).​

35.​Explain semaphores and its types.​


—> A semaphore is a synchronization tool used to manage concurrent processes and
prevent race conditions in an operating system. It is a variable or abstract data type used
to control access to a common resource by multiple processes in a concurrent system
such as a multitasking operating system. Semaphores are used to signal and wait for
resource availability and ensure that processes execute in a synchronized manner.​
Semaphores use two atomic operations: wait (P operation) and signal (V operation).
The wait operation decreases the semaphore value, while the signal operation increases
it. If the value is positive, the process continues; otherwise, it waits.​
Types of semaphores:
1. Binary Semaphore: Can only take the values 0 and 1 and is used for mutual exclusion, so only one process can access the critical section at a time. Binary semaphores are also known as mutex locks.
2. Counting Semaphore: Can take non-negative integer values and is used for managing multiple instances of a resource. Counting semaphores are used when multiple resources of the same type are available, like multiple printers or identical devices.​
Semaphores help in process synchronization and prevent issues like deadlocks and
starvation if used properly.​
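To illustrate the counting case, the following sketch assumes a pool of two identical printers guarded by a semaphore initialised to 2, with four competing jobs (both numbers are assumed example values):

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>
    #include <unistd.h>

    sem_t printers;   /* counting semaphore: value = free printers in the pool */

    void *job(void *arg) {
        int id = *(int *)arg;
        sem_wait(&printers);              /* acquire a printer (blocks if both are busy) */
        printf("Job %d is printing\n", id);
        sleep(1);                         /* simulate the print job                      */
        printf("Job %d released a printer\n", id);
        sem_post(&printers);              /* return the printer to the pool              */
        return NULL;
    }

    int main(void) {
        pthread_t t[4];
        int id[4];
        sem_init(&printers, 0, 2);        /* initial value = number of resource instances */
        for (int i = 0; i < 4; i++) { id[i] = i; pthread_create(&t[i], NULL, job, &id[i]); }
        for (int i = 0; i < 4; i++) pthread_join(t[i], NULL);
        sem_destroy(&printers);
        return 0;
    }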

36.​What is a deadlock? Explain various deadlock handling techniques.​


—> A deadlock is a situation in an operating system where a group of processes is
blocked because each process is holding a resource and waiting for another resource
held by some other process. In simple words, a deadlock is a state where processes are
stuck, waiting for each other indefinitely.​
Deadlock can occur when four conditions hold simultaneously: Mutual Exclusion, Hold
and Wait, No Preemption, and Circular Wait.​
Deadlock handling techniques are:
1.​ Deadlock Prevention: Proactively ensures that at least one of the necessary conditions
for deadlock does not hold. For example, by preventing hold and wait by making a
process request all resources at once.​

2.​ Deadlock Avoidance: Requires information about future resource requests. The
Banker's Algorithm is a common example of deadlock avoidance that checks the
system’s safe state before resource allocation.​

3.​ Deadlock Detection and Recovery: Allows deadlocks to occur but periodically checks
for them. When detected, it recovers by aborting one or more processes or preempting
resources.​

4.​ Ignoring Deadlock: Some systems, like many UNIX systems, ignore deadlocks under
the assumption that they are rare. It is known as the Ostrich algorithm. Handling
deadlocks is crucial for maintaining the system’s stability and ensuring that processes
continue to make progress.​

37.​What are different types of directory structure? Explain​


—> Directory structure organizes files in a computer system. Different types of directory
structures are designed based on user needs and system efficiency.​


The main types are:
1.​ Single-Level Directory: All files are stored in a single directory. It is simple but leads to
problems when the number of files increases, causing naming conflicts.​

2.​ Two-Level Directory: Each user has their own separate directory under the master
directory. It solves naming conflicts but does not allow for subdirectories.​

3. Tree-Structured Directory: Allows directories to contain subdirectories. It is hierarchical, providing better organization and easy navigation.​

4.​ Acyclic Graph Directory: Allows sharing of files and directories among users by using
links without forming any cycles. Useful for collaboration.​

5.​ General Graph Directory: Even more flexible than an acyclic graph, but can cause
complications like cycles, which require careful management to avoid infinite loops. Each
directory structure type offers different levels of complexity, access control, and efficiency
based on the system's requirements and the users' needs.​

38.​Explain linked allocation in files.​


—> Linked allocation is a file allocation method used by operating systems to organize
and store files on disk. In this method, each file is represented as a linked list of disk
blocks scattered anywhere on the disk.​
Each block contains a pointer to the next block, thus maintaining a chain or link of
blocks. Only the starting address (pointer) of the first block is stored in the directory.​
Advantages:
●​ No external fragmentation, since any free block on the disk can be utilized.
●​ Files can easily grow dynamically, simply by adding a new block and linking it to the
previous block.
Disadvantages:
●​ Random access is difficult and inefficient, as to access a block in the middle, all previous
blocks must be traversed sequentially.
●​ The pointer itself takes up some space in each block, leading to slight internal overhead.
Linked allocation is particularly suited for sequential access files where performance for
random access is not critical.​

39.​Compare paging and segmentation.​


—> Paging and segmentation are memory management techniques used by operating
systems to manage and allocate memory.​
Paging:
●​ Divides the physical memory into fixed-sized blocks called frames and logical memory
into blocks of the same size called pages.
●​ Pages are mapped to frames through a page table.
●​ It eliminates external fragmentation but may suffer from internal fragmentation.

●​ All pages are of the same size, leading to efficient use of memory.​
Segmentation:
●​ Divides memory into segments of variable length based on logical divisions like code,
stack, data, etc.
●​ Each segment has a segment number and offset, managed through a segment table.
●​ Segmentation reflects the user's view of memory, making it easier for programmers.
●​ Leads to external fragmentation, as free memory may not be contiguous. ​
Key differences: Paging is concerned with fixed-size divisions, while segmentation is
based on logical variable-sized divisions. Paging is hardware managed, whereas
segmentation is more user-view oriented.​
In modern systems, both techniques are often combined to gain the advantages of both.​

40.​Explain file structure with the help of a diagram.​


—> In an operating system, a file structure defines how information is logically stored
and organized within a file. Files are the basic units of storage and data management,
and their structure determines how efficiently information can be accessed, modified,
and interpreted.​
There are mainly three types of file structures:
1.​ Stream of Bytes:​
In this structure, a file is treated as a continuous stream of bytes with no specific
structure imposed by the operating system. It is the responsibility of the application
programs to interpret the bytes correctly. Text files and executable files are examples of
streams of bytes.​
Example: A text document where each character is treated as a byte.​

2. Sequence of Records: In this structure, a file is organized as a collection of records, where each record
contains a fixed or variable number of fields. This is commonly used in applications like
databases where structured data needs to be stored.​
Example: A payroll file where each record contains employee ID, name, salary, etc.​

3.​ Tree of Records:​


In this structure, records are organized hierarchically in the form of a tree, where each
record may have pointers to other related records. This structure supports faster
searching and efficient management of complex relationships between records.​
Example: Directory structures in a file system.

Diagram:​
    Stream of bytes:      b0 b1 b2 b3 b4 ...
    Sequence of records:  [record 1][record 2][record 3] ...
    Tree of records:            root
                               /    \
                         record A   record B
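To make the "sequence of records" structure concrete, the following sketch packs and unpacks fixed-length payroll-style records with Python's struct module; the field layout (a 4-byte ID, a 10-byte name, a 32-bit salary) is an assumed example, not a standard format.

    import struct

    RECORD_FMT = "<i10sf"                   # employee ID, 10-byte name, salary (assumed layout)
    RECORD_SIZE = struct.calcsize(RECORD_FMT)

    def pack_record(emp_id, name, salary):
        return struct.pack(RECORD_FMT, emp_id, name.encode(), salary)   # struct NUL-pads the name

    def unpack_record(raw):
        emp_id, name, salary = struct.unpack(RECORD_FMT, raw)
        return emp_id, name.rstrip(b"\x00").decode(), salary

    data = pack_record(101, "Alice", 52000.0) + pack_record(102, "Bob", 48000.0)

    # Because every record has the same size, record i starts at offset i * RECORD_SIZE.
    for i in range(2):
        chunk = data[i * RECORD_SIZE:(i + 1) * RECORD_SIZE]
        print(unpack_record(chunk))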

41.​Calculate average turn around time and average waiting time for all set of
processes using the FCFS algorithm.​

Processes Burst Time Arrival Time

P1 5 1

P2 6 0

P3 2 2

P4 4 0
—> ​
Step 1: Arrange processes according to Arrival Time.
At time 0, both P2 and P4 have arrived. In FCFS, the process that arrives first is executed first.
Since both P2 and P4 arrive at the same time, we execute P2 first (based on the given order).​
Thus, the execution order is:​
P2 → P4 → P1 → P3
Step 2: Construct Gantt Chart.
The Gantt Chart based on the order is:
|  P2  |  P4  |  P1  |  P3  |
0      6      10     15     17
Explanation:
●​ P2 starts at 0, runs for 6 units (ends at 6).

●​ P4 starts at 6, runs for 4 units (ends at 10).


●​ P1 starts at 10, runs for 5 units (ends at 15).
●​ P3 starts at 15, runs for 2 units (ends at 17).​

Step 3: Calculate Completion Time (CT), Turn Around Time (TAT), and Waiting Time (WT).
Formulas used:
●​ Turn Around Time (TAT) = Completion Time (CT) - Arrival Time (AT)
●​ Waiting Time (WT) = Turn Around Time (TAT) - Burst Time (BT)​

Process   AT   BT   CT   TAT = CT - AT   WT = TAT - BT
P2        0    6    6    6               0
P4        0    4    10   10              6
P1        1    5    15   14              9
P3        2    2    17   15              13

Step 4: Calculate Average Turn Around Time and Average Waiting Time.
●​ Average Turn Around Time (TAT) = (6 + 10 + 14 + 15) / 4​
= 45 / 4​
= 11.25 units
●​ Average Waiting Time (WT) = (0 + 6 + 9 + 13) / 4​
= 28 / 4​
= 7 units​
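The same FCFS computation can be cross-checked with a short script. This is only a sketch: it assumes each process is a (name, arrival, burst) tuple and that ties on arrival time are broken by the given order, exactly as in the calculation above.

    # (name, arrival time, burst time) -- ties on arrival broken by the given order
    procs = [("P1", 1, 5), ("P2", 0, 6), ("P3", 2, 2), ("P4", 0, 4)]

    def fcfs(processes):
        order = sorted(processes, key=lambda p: p[1])   # stable sort keeps the given order for ties
        time, rows = 0, []
        for name, at, bt in order:
            time = max(time, at) + bt                   # completion time of this process
            tat = time - at                             # turnaround = completion - arrival
            wt = tat - bt                               # waiting = turnaround - burst
            rows.append((name, time, tat, wt))
        return rows

    rows = fcfs(procs)
    for name, ct, tat, wt in rows:
        print(name, ct, tat, wt)
    print("Average TAT =", sum(r[2] for r in rows) / len(rows))   # 11.25
    print("Average WT  =", sum(r[3] for r in rows) / len(rows))   # 7.0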


42.​Consider the following page reference string:​


4,6,7,8,4,6,9,7,8,4,6,7,9.​
The number of frames is 3. Show the page trace and calculate the page faults for the
following page replacement schemes.​
1. FIFO ​
2. LRU​
—> 1. FIFO (First-In-First-Out) Page Replacement:

●​ In FIFO, the oldest page in memory is replaced first.​

Step   Reference   Frames (3)             Page Fault (Y/N)
1      4           4 - -                  Yes
2      6           4 6 -                  Yes
3      7           4 6 7                  Yes
4      8           8 6 7 (4 replaced)     Yes
5      4           8 4 7 (6 replaced)     Yes
6      6           8 4 6 (7 replaced)     Yes
7      9           9 4 6 (8 replaced)     Yes
8      7           9 7 6 (4 replaced)     Yes
9      8           9 7 8 (6 replaced)     Yes
10     4           4 7 8 (9 replaced)     Yes
11     6           4 6 8 (7 replaced)     Yes
12     7           4 6 7 (8 replaced)     Yes
13     9           9 6 7 (4 replaced)     Yes

Note: between any two consecutive references to the same page, at least three other distinct pages are referenced, so with only 3 frames each page has already been evicted before it is needed again; every reference therefore causes a fault.

Total Page Faults using FIFO = 13


2. LRU (Least Recently Used) Page Replacement:
●​ In LRU, the least recently used page is replaced first.​

Step   Reference   Frames (3)             Page Fault (Y/N)
1      4           4 - -                  Yes
2      6           4 6 -                  Yes
3      7           4 6 7                  Yes
4      8           8 6 7 (4 replaced)     Yes
5      4           8 4 7 (6 replaced)     Yes
6      6           8 4 6 (7 replaced)     Yes
7      9           9 4 6 (8 replaced)     Yes
8      7           9 7 6 (4 replaced)     Yes
9      8           9 7 8 (6 replaced)     Yes
10     4           4 7 8 (9 replaced)     Yes
11     6           4 6 8 (7 replaced)     Yes
12     7           4 6 7 (8 replaced)     Yes
13     9           9 6 7 (4 replaced)     Yes

Note: for this reference string every access is a fault, so the recency order is the same as the loading order and LRU ends up evicting exactly the same pages as FIFO.

Total Page Faults using LRU = 13​
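Hand-traced page tables are easy to get wrong, so a small simulation is a useful cross-check. The sketch below, with invented function names, reproduces the 13 faults for both policies on this reference string with 3 frames.

    from collections import OrderedDict

    def fifo_faults(refs, capacity):
        frames, queue, faults = set(), [], 0
        for page in refs:
            if page not in frames:
                faults += 1
                if len(frames) == capacity:
                    frames.remove(queue.pop(0))     # evict the page loaded earliest
                frames.add(page)
                queue.append(page)
        return faults

    def lru_faults(refs, capacity):
        frames, faults = OrderedDict(), 0           # ordered from least to most recently used
        for page in refs:
            if page in frames:
                frames.move_to_end(page)            # a hit refreshes the page's recency
            else:
                faults += 1
                if len(frames) == capacity:
                    frames.popitem(last=False)      # evict the least recently used page
                frames[page] = True
        return faults

    refs = [4, 6, 7, 8, 4, 6, 9, 7, 8, 4, 6, 7, 9]
    print(fifo_faults(refs, 3), lru_faults(refs, 3))   # -> 13 13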

43.​ Assume tracks 0-199 are present on each surface of the disk. If the request
queue is 68, 172, 4, 178, 130, 40, 118, 136 and the initial position of the
head is 25. Apply FCFS disk scheduling algorithm & calculate total head
movement.​
The FCFS Disk Scheduling algorithm serves the disk requests in the order in
which they arrive.
—> Steps to Calculate Total Head Movement:
1.​ Start at the initial head position, which is 25.
2.​ Serve the requests in the order they are given.​

Step-by-Step Movement:
●​ Initial Head Position = 25
●​ First request = 68 → Move from 25 to 68 → Head movement = |68 - 25| = 43
●​ Second request = 172 → Move from 68 to 172 → Head movement = |172 - 68| = 104
●​ Third request = 4 → Move from 172 to 4 → Head movement = |172 - 4| = 168
●​ Fourth request = 178 → Move from 4 to 178 → Head movement = |178 - 4| = 174
●​ Fifth request = 130 → Move from 178 to 130 → Head movement = |178 - 130| = 48
●​ Sixth request = 40 → Move from 130 to 40 → Head movement = |130 - 40| = 90
●​ Seventh request = 118 → Move from 40 to 118 → Head movement = |118 - 40| = 78

●​ Eighth request = 136 → Move from 118 to 136 → Head movement = |136 - 118| = 18
Total Head Movement Calculation:
●​ Total Head Movement = 43 + 104 + 168 + 174 + 48 + 90 + 78 + 18
●​ Total Head Movement = 723​
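The arithmetic can be verified by summing the absolute differences programmatically; the helper below is a sketch with an invented name.

    def fcfs_head_movement(start, requests):
        total, position = 0, start
        for track in requests:
            total += abs(track - position)   # distance moved to service this request
            position = track
        return total

    print(fcfs_head_movement(25, [68, 172, 4, 178, 130, 40, 118, 136]))   # -> 723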

44.​Consider the following set of processes with the length of the CPU burst time
given in milliseconds.​

Process Burst Time

P1 10

P2 1

P3 2

P4 1

P5 5
All processes arrive at time 0 in the order P1, P2, P3, P4, P5.
1.​ Draw Gantt chart using SJF method
2.​ Calculate average turnaround time and average waiting time.
—> ​
Step 1: Process Scheduling Order Using SJF
The Shortest Job First (SJF) method schedules processes based on the burst time, with the
shortest burst time getting executed first.
●​ At time 0, the processes have the following burst times:​
P1 = 10, P2 = 1, P3 = 2, P4 = 1, P5 = 5.
●​ The shortest burst time is 1, which is shared by P2 and P4; since both arrive at time 0, the tie is broken by the given order, so P2 is executed first.
●​ After P2, we have P4 (with burst time 1), followed by P3 (with burst time 2), then P5 (with
burst time 5), and finally P1 (with burst time 10).
So, the order of execution is P2, P4, P3, P5, P1.
Step 2: Gantt Chart
The Gantt chart for SJF scheduling will look as follows:
|  P2  |  P4  |  P3  |  P5  |  P1  |
0      1      2      4      9      19
Step 3: Calculate Turnaround Time (TAT) and Waiting Time (WT)
1.​ Turnaround Time (TAT) = Completion Time - Arrival Time (Arrival Time is 0 for all
processes).
○​ TAT for P2 = 1 - 0 = 1
○​ TAT for P4 = 2 - 0 = 2
○​ TAT for P3 = 4 - 0 = 4

○​ TAT for P5 = 9 - 0 = 9
○​ TAT for P1 = 19 - 0 = 19​

2.​ Waiting Time (WT) = Turnaround Time - Burst Time.


○​ WT for P2 = 1 - 1 = 0
○​ WT for P4 = 2 - 1 = 1
○​ WT for P3 = 4 - 2 = 2
○​ WT for P5 = 9 - 5 = 4
○​ WT for P1 = 19 - 10 = 9
Step 4: Calculate Average Turnaround Time and Average Waiting Time
●​ Average Turnaround Time (TAT_avg) = (1 + 2 + 4 + 9 + 19) / 5 = 35 / 5 = 7 ms​
Average Waiting Time (WT_avg) = (0 + 1 + 2 + 4 + 9) / 5 = 16 / 5 = 3.2 ms

Gantt Chart:​

P2 P4 P3 P5 P1

0 1 2 4 9 19

●​ Average Turnaround Time (TAT_avg) = 7 ms


●​ Average Waiting Time (WT_avg) = 3.2 ms
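The non-preemptive SJF schedule above can be checked with a short sketch. It assumes all arrival times are 0 (so the processes are simply sorted by burst time, with ties broken by the given order) and uses an invented function name.

    def sjf(processes):
        """Non-preemptive SJF with all processes arriving at time 0."""
        time, results = 0, []
        for name, burst in sorted(processes, key=lambda p: p[1]):   # shortest burst first
            time += burst                                            # completion time
            results.append((name, time, time, time - burst))         # CT, TAT (= CT here), WT
        return results

    procs = [("P1", 10), ("P2", 1), ("P3", 2), ("P4", 1), ("P5", 5)]
    for name, ct, tat, wt in sjf(procs):
        print(name, ct, tat, wt)
    # Average TAT = 35 / 5 = 7 ms, Average WT = 16 / 5 = 3.2 ms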

45. Assume there are a total of 200 tracks present on the disk. If the request queue is:
84, 145, 89, 168, 128, 100, 68 and initial position of head is 125.​
Apply FCFS disk scheduling algorithm and calculate total head movement.​
—> FCFS Disk Scheduling:
The FCFS (First-Come, First-Served) algorithm handles disk requests in the order they arrive.
Given:
●​ Initial Head Position = 125
●​ Request Queue = [84, 145, 89, 168, 128, 100, 68]
Step 1: Process Scheduling Order
The requests will be served in the order they arrive:
●​ 84, 145, 89, 168, 128, 100, 68
Step 2: Calculate Head Movement for Each Request
1.​ Move from initial position (125) to 84:
○​ Head movement = |125 - 84| = 41 tracks
2.​ Move from 84 to 145:
○​ Head movement = |145 - 84| = 61 tracks
3.​ Move from 145 to 89:
○​ Head movement = |145 - 89| = 56 tracks
4.​ Move from 89 to 168:
○​ Head movement = |168 - 89| = 79 tracks​

5.​ Move from 168 to 128:


○​ Head movement = |168 - 128| = 40 tracks
6.​ Move from 128 to 100:
○​ Head movement = |128 - 100| = 28 tracks
7.​ Move from 100 to 68:
○​ Head movement = |100 - 68| = 32 tracks

Step 3: Total Head Movement


●​ Total head movement = 41 + 61 + 56 + 79 + 40 + 28 + 32
●​ Total head movement = 337 tracks
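Reusing the hypothetical fcfs_head_movement helper sketched under question 43, the result can be confirmed in one line:

    print(fcfs_head_movement(125, [84, 145, 89, 168, 128, 100, 68]))   # -> 337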

46. Consider the following page reference string 9,2,3,4,2,5,2,6,4,5,2,5,4,3,4,2,3,9,2,3 The


number of page frames is 4. Calculate the page faults for the given page replacement
scheme using FIFO ​
—> FIFO (First-In, First-Out) Page Replacement Algorithm:
●​ FIFO replaces the oldest page in the memory when a page fault occurs.
●​ We have 4 page frames and will replace pages in the order they were added.
Step 1: Initialize the frames
We have 4 frames, initially empty: [ ]
Step 2: Process the page reference string
Now, let's go step by step and fill the frames as per the FIFO algorithm.
1.​ Page Reference 9:
○​ Page Fault.
○​ Frames: [9, - , - , -]
○​ Total page faults: 1​

2.​ Page Reference 2:


○​ Page Fault.
○​ Frames: [9, 2, - , -]
○​ Total page faults: 2​

3.​ Page Reference 3:


○​ Page Fault.
○​ Frames: [9, 2, 3, -]
○​ Total page faults: 3​

4.​ Page Reference 4:


○​ Page Fault.
○​ Frames: [9, 2, 3, 4]
○​ Total page faults: 4​

5.​ Page Reference 2:


○​ No Page Fault (2 is already in frames).
○​ Frames remain: [9, 2, 3, 4]

○​ Total page faults: 4​

6.​ Page Reference 5:


○​ Page Fault.
○​ Replace 9 (oldest).
○​ Frames: [5, 2, 3, 4]
○​ Total page faults: 5​

7.​ Page Reference 2:​

○​ No Page Fault (2 is already in frames).


○​ Frames remain: [5, 2, 3, 4]
○​ Total page faults: 5​

8.​ Page Reference 6:
○​ Page Fault.
○​ Replace 2 (the oldest page in memory; in FIFO, the hits at steps 5 and 7 do not refresh page 2's position in the queue).
○​ Frames: [5, 6, 3, 4]
○​ Total page faults: 6​

9.​ Page Reference 4:
○​ No Page Fault (4 is already in frames).
○​ Frames remain: [5, 6, 3, 4]
○​ Total page faults: 6​

10.​Page Reference 5:
○​ No Page Fault (5 is already in frames).
○​ Frames remain: [5, 6, 3, 4]
○​ Total page faults: 6​

11.​Page Reference 2:
○​ Page Fault.
○​ Replace 3 (oldest).
○​ Frames: [5, 6, 2, 4]
○​ Total page faults: 7​

12.​Page Reference 5:
○​ No Page Fault (5 is already in frames).
○​ Frames remain: [5, 6, 2, 4]
○​ Total page faults: 7​

13.​Page Reference 4:
○​ No Page Fault (4 is already in frames).
○​ Frames remain: [5, 6, 2, 4]
○​ Total page faults: 7​

14.​Page Reference 3:
○​ Page Fault.
○​ Replace 4 (oldest).
○​ Frames: [5, 6, 2, 3]
○​ Total page faults: 8​

15.​Page Reference 4:
○​ Page Fault.
○​ Replace 5 (oldest).
○​ Frames: [4, 6, 2, 3]
○​ Total page faults: 9​

16.​Page Reference 2:
○​ No Page Fault (2 is already in frames).
○​ Frames remain: [4, 6, 2, 3]
○​ Total page faults: 9​

17.​Page Reference 3:
○​ No Page Fault (3 is already in frames).
○​ Frames remain: [4, 6, 2, 3]
○​ Total page faults: 9​

18.​Page Reference 9:
○​ Page Fault.
○​ Replace 6 (oldest).
○​ Frames: [4, 9, 2, 3]
○​ Total page faults: 10​

19.​Page Reference 2:
○​ No Page Fault (2 is already in frames).
○​ Frames remain: [4, 9, 2, 3]
○​ Total page faults: 10​

20.​Page Reference 3:
○​ No Page Fault (3 is already in frames).
○​ Frames remain: [4, 9, 2, 3]
○​ Total page faults: 10
Final Answer:
●​ Total Page Faults = 10 using the FIFO page replacement algorithm.​
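Reusing the hypothetical fifo_faults helper sketched under question 42, the count for this string with 4 frames can be confirmed directly:

    refs = [9, 2, 3, 4, 2, 5, 2, 6, 4, 5, 2, 5, 4, 3, 4, 2, 3, 9, 2, 3]
    print(fifo_faults(refs, 4))   # -> 10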

47. Spooling
—> Spooling stands for Simultaneous Peripheral Operations On-Line and refers to a method of
managing data by storing it temporarily in a buffer or queue before sending it to a device for
processing, typically a printer or disk. It allows multiple tasks to be processed sequentially
without blocking the system. The spooling process involves storing data (like print jobs, emails,
or file outputs) in a queue, allowing the system to continue with other tasks while waiting for the
device to become available. The primary advantage of spooling is efficient utilization of resources,
particularly the CPU and peripheral devices. Spooling decouples the time-sensitive operations
(like printing) from the execution of programs, preventing any conflict between jobs. For
instance, in printing, the print jobs are stored in a spool file before the printer starts printing. If
the printer is busy, the job is queued and processed when the printer becomes available. This
improves system performance, prevents the CPU from being idle while waiting for peripheral
devices, and ensures that tasks are handled in an orderly fashion.
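The buffering idea behind spooling can be imitated in user space with a producer/consumer queue. The sketch below is only an analogy, with a thread standing in for the slow printer and all names invented for the example.

    import queue, threading, time

    spool = queue.Queue()                    # the spool: jobs wait here until the device is free

    def printer_daemon():
        while True:
            job = spool.get()                # take the next queued job
            if job is None:                  # sentinel value: stop the daemon
                break
            time.sleep(0.1)                  # pretend the slow device is printing
            print("printed:", job)

    worker = threading.Thread(target=printer_daemon)
    worker.start()

    for doc in ["report.pdf", "invoice.txt", "photo.png"]:
        spool.put(doc)                       # programs return immediately after spooling the job

    spool.put(None)                          # shut the daemon down after the queue drains
    worker.join()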

48. Dining Philosophers Problem


—> The Dining Philosophers Problem is a classical synchronization problem used to illustrate
the challenges of managing concurrency and shared resources in computer systems. The
problem consists of five philosophers seated at a round table. Each philosopher has a plate of
food, and a single fork is placed between each pair of adjacent philosophers. Philosophers alternate between thinking and
eating. In order to eat, a philosopher needs two forks—one from each side. However, if all the
philosophers try to pick up the forks simultaneously, they can end up in a deadlock situation
where no philosopher can proceed to eat because each one is holding one fork and waiting for
the other. This results in starvation, where some philosophers may never get a chance to eat.
To solve this, several synchronization techniques are employed, such as using semaphores,
mutexes, or locks to control access to the forks. These solutions aim to ensure that the
philosophers can safely pick up forks without causing deadlock or starving others. One solution
is to allow a philosopher to pick up forks only when both are available, acquiring them as a single
atomic action. Another approach uses a protocol that permits only a limited number of philosophers
(for example, four of the five) to try to pick up forks at any given time, so at least one of them can
always obtain both. A third approach imposes a fixed global order in which forks must be acquired,
which removes the possibility of a circular wait. The
Dining Philosophers Problem highlights the challenges of designing safe and efficient
multi-threaded programs, where shared resources must be managed carefully to avoid conflicts
and ensure fair access.
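One common deadlock-free variant, acquiring the lower-numbered fork first so that no circular wait can form, can be sketched with Python threads as follows; the structure and names are illustrative only.

    import threading

    N = 5
    forks = [threading.Lock() for _ in range(N)]

    def philosopher(i, meals=3):
        left, right = i, (i + 1) % N
        first, second = min(left, right), max(left, right)   # global ordering breaks circular wait
        for _ in range(meals):
            with forks[first]:
                with forks[second]:
                    print(f"philosopher {i} is eating")
            # both forks are released here; the philosopher goes back to thinking

    threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
    for t in threads: t.start()
    for t in threads: t.join()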

49. Contiguous Memory Allocation
—> Contiguous Memory Allocation is a memory management technique in which the operating
system allocates a single contiguous block of memory to each process. It is one of the simplest
and oldest memory allocation methods. In this scheme, the entire process is loaded into a single
contiguous block of memory, starting at a specific location, and it occupies that space throughout
its lifetime. The method is simple and efficient: the operating system only needs to record a base
address and a size for each process, and the process can access its memory directly without any
complex address translation. However, the method has several drawbacks, the main one being
external fragmentation. As processes are loaded and removed from memory, the free space
becomes fragmented into smaller chunks, making it difficult to find a contiguous block large
enough for a new process, which leads to inefficient memory use. Another issue is internal
fragmentation, where a process may not use the entire allocated memory, wasting available
space. To reduce fragmentation, techniques such as compaction, which rearranges the memory
contents to merge the free holes, are sometimes employed. Overall, while contiguous memory
allocation is easy to implement and access, its efficiency can be limited, especially in systems
with dynamic memory usage patterns.
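A tiny first-fit allocator over a list of free holes is one way to picture contiguous allocation and the external fragmentation it produces. The hole list, sizes, and function name below are assumptions made only for this sketch.

    # free memory holes as (start, size) pairs -- assumed initial layout
    holes = [(0, 100), (150, 300), (600, 50)]

    def first_fit(size):
        """Allocate a contiguous block using the first hole that is large enough."""
        for i, (start, hole_size) in enumerate(holes):
            if hole_size >= size:
                if hole_size == size:
                    holes.pop(i)                                    # the hole is used exactly
                else:
                    holes[i] = (start + size, hole_size - size)     # shrink the hole
                return start
        return None                                                 # no contiguous block big enough

    print(first_fit(200))   # -> 150 (the second hole is the first that fits)
    print(first_fit(80))    # -> 0
    print(holes)            # the leftover small holes illustrate external fragmentation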

50. Interrupts
—> An interrupt is a mechanism that allows the operating system to temporarily stop the current
process and handle an event or request. The current execution is suspended, and control is
transferred to an interrupt service routine (ISR), which handles the event or request. Once the
interrupt is processed, control is returned to the interrupted process. Interrupts play a crucial
role in managing asynchronous events, allowing the system to respond to hardware or software
signals without having to continuously check for them.
Interrupts can be broadly categorized into two types: hardware interrupts and software
interrupts. Hardware interrupts are generated by external devices, such as keyboards, mice, or
network cards, to signal the CPU to process an event (like a key press or a data packet arrival).
Software interrupts are triggered by programs to request a service from the operating system,
such as system calls for file operations or memory management.
The primary advantage of interrupts is that they allow the CPU to perform other tasks while
waiting for events to occur, ensuring efficient utilization of system resources. Without interrupts,
the CPU would have to waste cycles constantly polling devices for events. Interrupts allow the
CPU to switch between processes and respond quickly to high-priority tasks, such as handling
I/O operations or network traffic. However, managing interrupts also requires careful handling to
prevent conflicts, ensure data consistency, and avoid issues like interrupt storms, where too
many interrupts can overwhelm the system.
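As a loose user-space analogy, an operating-system signal behaves much like a software interrupt: normal execution is suspended, a handler runs, and control returns. The sketch below uses Python's signal module (SIGALRM, so it assumes a Unix-like system) purely to illustrate that idea; it is not how kernel interrupt handlers are actually written.

    import signal, time

    def handler(signum, frame):
        # plays the role of an interrupt service routine (ISR)
        print(f"interrupt received: signal {signum}")

    signal.signal(signal.SIGALRM, handler)   # register the handler
    signal.alarm(1)                          # ask for an "interrupt" in 1 second

    print("main program keeps working...")
    time.sleep(2)                            # the handler fires during this wait
    print("back to normal execution")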
