
Operating System (BCAC 302)

GROUP-B
1. Differentiate logical and physical address space. ***

 Definition: Logical addresses are generated by a process (also known as virtual addresses); physical addresses are the actual addresses in RAM where data is stored.

 Visibility: The logical address space is seen by the process and used by the CPU during program execution; the physical address space is not directly visible to the process.

 Size: The logical address space can be larger and span the entire range of addressable memory; the physical address space is smaller, limited to the size of physical RAM.

 Protection: Each process sees only its own logical address space, giving isolation between processes; the OS controls access to physical addresses, ensuring memory protection.

 Role: Logical addresses are used for memory management, process isolation, and portability; physical addresses represent the actual hardware memory where data is stored.

2. Explain with examples the difference between preemptive and non-preemptive priority
scheduling. ***

 Interruption: Preemptive scheduling allows higher-priority tasks to interrupt lower-priority ones; under non-preemptive scheduling, once a task is scheduled it runs until completion or voluntarily yields the CPU.

 Example: In a preemptive system, if a higher-priority task becomes available, it can interrupt the execution of a lower-priority task; in a non-preemptive system, the higher-priority task has to wait until the lower-priority task finishes its execution.

 Responsiveness: Preemptive scheduling offers better responsiveness and ensures that critical tasks are executed promptly; non-preemptive scheduling may lead to longer response times for higher-priority tasks if lower-priority tasks are long-running.

 Example scenario: Imagine a real-time system where an emergency-brake task must be executed immediately when an event occurs; preemptive priority scheduling would be suitable. In batch processing, where it is essential to complete lower-priority tasks before moving to higher-priority ones, non-preemptive priority scheduling may be used.

3. What is an Operating System? What are the functions of Operating System?


An Operating System (OS) is system software that acts as an intermediary between the hardware and software
applications, managing computer hardware resources and providing various services to users and applications. Its
main functions include:

1. Process Management:
 Creating, scheduling, and terminating processes.
 Managing process synchronization and communication.
2. Memory Management:
 Allocating and deallocating memory to processes.
 Handling memory protection and virtual memory.
3. File System Management:
 Managing files, directories, and file operations.
 Providing file access control and security.
4. Device Management:
 Managing input and output devices.
 Handling device drivers and I/O operations.
5. Security and Access Control:
 Ensuring user and data security.
 Implementing access control mechanisms.
6. User Interface:
 Providing a user-friendly interface for interaction.
 Managing command interpretation and GUI components.
7. Networking:
 Facilitating network communication and protocols.
 Managing network connections and resources.
8. Error Handling:
 Detecting and handling hardware and software errors.
 Ensuring system reliability and fault tolerance.

4. What is “thrashing”?
Thrashing is a situation in which a computer's operating system spends a significant amount of time swapping data
between main memory (RAM) and secondary storage (usually a hard drive) due to excessive paging or swapping. It
occurs when the system is overloaded with too many processes or when the processes' memory requirements
exceed the available physical RAM. As a result, the system becomes extremely slow, and the CPU spends more time
swapping data in and out of RAM than executing useful tasks, leading to a severe drop in performance.

Thrashing is detrimental to system performance and can be mitigated by optimizing resource allocation, reducing
the number of running processes, or increasing the amount of physical memory available to the system.

5. Differentiate between external fragmentation and internal fragmentation.

 Definition: External fragmentation is unallocated free memory in the form of small, non-contiguous chunks between allocated blocks; internal fragmentation is unused memory inside an allocated block that the process does not need.

 Occurrence: External fragmentation occurs when processes are loaded into and removed from memory, leaving gaps that are too small to be used; internal fragmentation occurs when the allocated block is larger than the process actually requires, so the remainder of the block is wasted.

 Impact: External fragmentation reduces the overall efficiency of memory utilization, as free space may be fragmented and unusable; internal fragmentation reduces the efficiency of individual memory blocks, since part of them remains unused by the process.

 Solution: Compaction techniques can reduce external fragmentation by relocating processes; internal fragmentation can be reduced by using smaller allocation units that match actual data sizes more closely.

 Common in: External fragmentation is typical of variable-size (dynamic) allocation schemes such as segmentation; internal fragmentation is common in fixed-size allocation schemes such as paging and fixed-size partitions.

6. Define thread and its life cycle.


A thread is a basic unit of CPU utilization within a process. It represents an independent sequence of instructions
that can be scheduled and executed by the CPU. Threads within a process share the same memory space and
resources, such as file handles and open sockets. The life cycle of a thread typically consists of the following states:

1. New: In this state, the thread is created, but it has not yet started executing.

2. Runnable (Ready): The thread is ready to run but is waiting for CPU time. It's in a queue and competing for
CPU time with other runnable threads.

3. Running: The thread is actively executing its instructions on the CPU.

4. Blocked (Waiting): The thread is waiting for an event, such as I/O completion, and is not using CPU time
during this period.

5. Terminated (Dead): The thread has finished its execution or has been explicitly terminated. At this point, it
no longer exists.

Thread management involves transitioning between these states and synchronization to ensure proper coordination
between threads sharing resources.
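
These states can be observed in a minimal POSIX-threads sketch (the thread body below is illustrative): the thread is created (New), becomes Runnable, runs, blocks while sleeping (Blocked/Waiting), and terminates when its function returns:

#include <stdio.h>
#include <pthread.h>
#include <unistd.h>

// Worker thread: runs, blocks briefly on a timer, then terminates.
void *worker(void *arg) {
    printf("Thread is running.\n");      // Running
    sleep(1);                            // Blocked (waiting on a timer event)
    printf("Thread is finishing.\n");
    return NULL;                         // Terminated on return
}

int main(void) {
    pthread_t tid;
    pthread_create(&tid, NULL, worker, NULL); // New -> Runnable
    pthread_join(tid, NULL);                  // main waits until the thread terminates
    return 0;
}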

7. Explain demand paging in the memory management scheme. What is a multilevel feedback queue?
 Demand Paging: Demand paging is a memory management scheme in which not all pages of a process are
loaded into main memory (RAM) at the start. Instead, only the pages that are needed for the current
execution of the program are loaded. This reduces the initial loading time and conserves memory. When a
page is accessed and it is not in memory, a page fault occurs, causing the operating system to fetch the
required page from secondary storage (usually a disk) into RAM. Demand paging allows for more efficient
memory usage but can lead to page faults, which can temporarily slow down program execution.

 Multilevel Feedback Queue: The multilevel feedback queue is a scheduling algorithm used in operating
systems to manage the execution of processes based on their priority and behavior. It employs multiple
queues with different priority levels. Processes start in the highest-priority queue and move to lower-priority
queues if they don't complete within a certain time quantum. This approach allows the scheduler to handle
both CPU-bound and I/O-bound processes effectively. Over time, processes that use less CPU time may get a
higher priority, ensuring fairness in resource allocation. It is a dynamic scheduling algorithm that can adapt
to changing process behavior and system load.

8. What is a page fault? When does it occur?


A page fault is an event that occurs in a demand-paging memory management scheme when a program or process
tries to access a page of data that is not currently in physical memory (RAM). Page faults occur when a program
references a page that has been swapped out to secondary storage (typically a disk) to free up space in RAM. When
a page fault happens, the operating system must fetch the required page from secondary storage into RAM before
allowing the program to continue executing. Page faults can significantly slow down program execution, especially if
disk access is slow.

Page faults occur when:

 A process attempts to access a memory location that is not currently in RAM (not resident).

 The operating system's memory management unit detects that the required page is not present in RAM.

9. What are the necessary and sufficient conditions for deadlock to occur? What is
thrashing?
Necessary conditions for deadlock:

1. Mutual Exclusion: At least one resource must be non-shareable, meaning only one process can use it at a
time.

2. Hold and Wait: Processes must hold resources while waiting for additional ones, creating a situation where
they cannot release resources.

3. No Preemption: Resources cannot be preempted from processes; they must be released voluntarily.

4. Circular Wait: There must be a circular chain of two or more processes, each waiting for a resource held by
the next process in the chain.

All four conditions must hold simultaneously for a deadlock to occur; however, they are necessary rather than sufficient, since their joint presence makes deadlock possible but does not guarantee it.

Thrashing: Thrashing is a situation in virtual memory systems where the CPU spends the majority of its time
swapping pages between RAM and secondary storage (e.g., the hard drive) instead of executing actual program
instructions. It typically happens when the system is heavily overcommitted with too many processes, and there is
insufficient physical memory to accommodate their working sets. As a result, the system becomes extremely slow,
and overall throughput decreases significantly. Thrashing can be alleviated by reducing the degree of
multiprogramming, adding more physical memory, or optimizing page replacement algorithms.

10. What do you mean by Race Condition with respect to Producer – Consumer Problem?
Explain how Race Condition can be avoided.
In the context of the Producer-Consumer Problem, a race condition occurs when multiple threads (producers and
consumers) access shared data or resources concurrently without proper synchronization. This can lead to
unexpected and undesirable outcomes because the order of execution is unpredictable, and multiple threads may
interfere with each other's operations.

For example, in the Producer-Consumer Problem, if multiple producers are simultaneously adding items to a shared
buffer while multiple consumers are simultaneously removing items, a race condition may lead to problems such as
data corruption, lost data, or buffer overflows.

To avoid race conditions in the Producer-Consumer Problem (and similar scenarios), you can use synchronization
mechanisms such as mutexes (mutual exclusion) and semaphores. Here's how it can be done:

1. Mutex (Mutual Exclusion): Protect the shared buffer with a mutex. A producer should acquire the mutex
before adding an item, and a consumer should acquire the mutex before removing an item. This ensures
that only one thread can access the shared buffer at a time, preventing race conditions.

2. Semaphores: Use semaphores to control the number of items in the buffer. Create two semaphores: one (empty) to track available slots in the buffer and another (full) to track the number of items in the buffer. A producer waits on (decrements) the empty semaphore before inserting an item and signals (increments) the full semaphore afterwards; a consumer does the opposite. This controls access to the buffer and prevents overflows or underflows.
By properly using mutexes and semaphores, you can synchronize producer and consumer threads, avoiding race
conditions and ensuring safe access to shared resources.
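
A minimal C sketch combining both mechanisms, assuming a fixed-size circular buffer protected by a mutex plus the two counting semaphores (buffer size and item count are illustrative):

#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

#define BUF_SIZE 8

int buffer[BUF_SIZE];
int in = 0, out = 0;                 // insert and remove positions
sem_t empty_slots;                   // free slots, initialized to BUF_SIZE
sem_t full_slots;                    // filled slots, initialized to 0
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *producer(void *arg) {
    for (int i = 0; i < 100; i++) {
        sem_wait(&empty_slots);          // block while the buffer is full
        pthread_mutex_lock(&lock);       // mutual exclusion on the buffer
        buffer[in] = i;
        in = (in + 1) % BUF_SIZE;
        pthread_mutex_unlock(&lock);
        sem_post(&full_slots);           // one more item available
    }
    return NULL;
}

void *consumer(void *arg) {
    for (int i = 0; i < 100; i++) {
        sem_wait(&full_slots);           // block while the buffer is empty
        pthread_mutex_lock(&lock);
        int item = buffer[out];
        out = (out + 1) % BUF_SIZE;
        pthread_mutex_unlock(&lock);
        sem_post(&empty_slots);          // one more free slot
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    pthread_t p, c;
    sem_init(&empty_slots, 0, BUF_SIZE);
    sem_init(&full_slots, 0, 0);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}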

11. Explain PCB with a neat diagram. Write down the different process states.
A Process Control Block (PCB) is a data structure used by the operating system to store information about a process.
It contains various pieces of information that help the operating system manage and control the process. Pictured as
a diagram, the PCB is a record with fields for the process ID, process state, program counter, CPU registers,
scheduling information, memory-management information, accounting information, and I/O status (a C sketch of these
fields follows the list of states below).

Different Process States:

1. New: The process is being created or initialized.

2. Ready: The process is ready to run and waiting for CPU time.

3. Running: The process is currently executing on the CPU.

4. Blocked (Waiting): The process is waiting for an event (e.g., I/O completion) before it can continue.

5. Terminated (Exit): The process has finished execution or has been forcibly terminated.

The PCB contains information about each of these states, including the program counter (PC), CPU registers, priority,
memory information, open files, and other relevant data for process management.
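
As a rough illustration, the PCB can be sketched as a C structure; the field names and sizes below are hypothetical simplifications (a real kernel structure, such as Linux's task_struct, is far larger):

// A hypothetical, simplified PCB layout.
enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    int             pid;              // process identifier
    enum proc_state state;            // one of the five states above
    unsigned long   program_counter;  // next instruction to execute
    unsigned long   registers[16];    // saved CPU register contents
    int             priority;         // scheduling information
    void           *page_table;       // memory-management information
    int             open_files[16];   // file and I/O status information
    struct pcb     *next;             // link for the ready or wait queue
};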

12. Describe thrashing. Explain the demand paging in memory management scheme.
Thrashing: Thrashing is a situation in virtual memory systems where the CPU spends the majority of its time
swapping pages between RAM and secondary storage (e.g., the hard drive) instead of executing actual program
instructions. It typically occurs when the system is heavily overcommitted with too many processes, and there is
insufficient physical memory to accommodate their working sets. As a result, the system becomes extremely slow,
and overall throughput decreases significantly. Thrashing can be alleviated by reducing the degree of
multiprogramming, adding more physical memory, or optimizing page replacement algorithms.

Demand Paging: Demand paging is a memory management scheme used to efficiently use physical memory by
loading only the necessary pages of a program into RAM as they are needed. In this scheme, not all pages of a
process are loaded into main memory initially. Instead, only the pages that are required for the current execution of
the program are loaded. When a page is accessed and it is not in memory, a page fault occurs, causing the operating
system to fetch the required page from secondary storage (usually a disk) into RAM. Demand paging allows for more
efficient memory utilization but can lead to page faults, which can temporarily slow down program execution.

13. “Multi-programming implies multi-tasking, but the vice-versa is not true” – Explain.
 Multi-Programming: Multi-programming refers to the technique where multiple programs are loaded into
memory simultaneously and share the CPU. The CPU switches between these programs, giving the illusion of
concurrent execution, even though only one program is actively executing at any given time. Multi-
programming is primarily about optimizing CPU utilization and reducing idle time.

 Multi-Tasking: Multi-tasking, on the other hand, is a more advanced form of multi-programming. It not only
involves running multiple programs concurrently but also allows for true parallel execution of tasks. In multi-
tasking, each program or task runs in its own thread or process, and multiple tasks can execute
simultaneously on multi-core processors or in parallel on multi-processor systems. Multi-tasking provides
true concurrent execution and is about improving overall system responsiveness and user experience.

So, while multi-programming only requires that multiple programs reside in memory and share CPU time, multi-tasking
encompasses multi-programming and goes beyond it by adding rapid switching between tasks and, on multi-core
hardware, true parallel execution. Strictly speaking, then, every multi-tasking system is multi-programmed, while a
multi-programmed system (such as a simple batch system that merely keeps several jobs in memory) need not provide
multi-tasking.

14. When does a page-fault occur?


A page fault occurs in a demand-paging memory management scheme when a program or process attempts to
access a page of data that is not currently in physical memory (RAM). Page faults occur when:

 A process references a memory location that is not in RAM (not resident).

 The operating system's memory management unit detects that the required page is not present in RAM.

When a page fault happens, the operating system must fetch the required page from secondary storage (typically a
disk) into RAM before allowing the program to continue executing. Page faults can significantly slow down program
execution, especially if disk access is slow.

15. Describe the action taken by the operating system when a page fault occurs.
When a page fault occurs in a demand-paging memory management scheme, the operating system needs to take
specific actions to resolve it. Here are the typical steps:

1. Page Fault Trap: When the CPU detects a page fault while trying to access a memory location that is not in
physical memory (RAM), it generates a page fault exception or interrupt. This signals the operating system
that a page fault has occurred.

2. Handling the Page Fault: The operating system's page fault handler takes control. It performs the following
actions:

a. Check Validity: Verify if the memory access that caused the page fault is legitimate and not due to a program error
(e.g., accessing an invalid address).

b. Locate the Page: Determine which page or data block is needed but is not in physical memory.

c. Fetch the Page: Retrieve the required page from secondary storage (typically a disk) into an available page frame
in RAM. This involves disk I/O operations to read the page from storage.

d. Update Page Table: Update the process's page table to indicate that the required page is now in physical memory
and is marked as valid.

e. Resume Execution: Return control to the interrupted process, allowing it to continue its execution from where it
left off. The instruction that caused the page fault is re-executed, now that the required page is in RAM.

The page fault handler ensures that the program can access the required data in a transparent and efficient manner.
If the system is heavily thrashing (experiencing frequent page faults), performance can degrade significantly.
16. Explain PCB with neat diagram. *** (See Question 11.)

17. Explain the demand paging in memory management scheme.


Demand paging is a memory management scheme used to efficiently utilize physical memory by loading only the
necessary pages of a program into RAM as they are needed. Here's how it works:

1. Initial Loading: When a program is initially loaded into memory, only a small portion of it, typically the
essential parts, is loaded into RAM. This reduces the initial loading time and conserves memory.

2. Page Fault Handling: When a process tries to access a page of data that is not currently in physical memory
(RAM), a page fault occurs. The operating system then:

 Identifies which page is needed.

 Retrieves the required page from secondary storage (e.g., a disk) into an available page frame in
RAM.

 Updates the process's page table to indicate that the page is now in physical memory and is valid.

 Allows the process to continue its execution from where it left off.

3. Page Replacement: If physical memory becomes full, the operating system must select a page to replace. It
often uses page replacement algorithms like LRU (Least Recently Used) or FIFO (First-In, First-Out) to choose
which page to evict from RAM and replace with the required page.

Demand paging improves memory utilization and overall system performance because it loads only the portions of a
program that are actively in use, rather than loading the entire program into memory at once.

18. Distinguish between ‘starvation’ and ‘deadlock’. **


Starvation:

 Definition: Starvation occurs in a resource allocation system when a process or a thread is unable to make
progress or receive the resources it needs to complete its execution due to resource allocation policies or
scheduling decisions.

 Cause: Starvation can happen when some processes or threads receive preferential treatment, repeatedly
acquiring resources, leaving others waiting for an extended period.

 Outcome: Starved processes may never complete their tasks, leading to unfair resource distribution and
potentially reduced system efficiency.

Deadlock:

 Definition: Deadlock is a specific situation where two or more processes or threads are unable to proceed
because each is waiting for a resource held by the other(s), resulting in a circular waiting condition.

 Cause: Deadlock arises when processes hold resources and wait for additional ones to be released, creating
a cycle where none of the processes can release their held resources.

 Outcome: Deadlock leads to a complete standstill in the affected processes, causing a significant disruption
to system operation and requiring intervention by the operating system to resolve the deadlock.

In summary, starvation is a condition where some processes are denied access to resources for an extended period,
while deadlock is a specific situation where processes are mutually blocked and cannot proceed due to circular
resource dependencies. Both are undesirable scenarios in resource management but have different causes and
implications.
19. What do you mean by critical section?
A critical section is a section of a program or code that accesses shared resources or variables that must not be
concurrently accessed by multiple threads or processes. It is a part of a program where data consistency and
integrity must be maintained, and concurrent access could lead to race conditions, data corruption, or incorrect
results.

The critical section problem refers to the challenge of coordinating and controlling access to these shared resources
to ensure that only one thread or process can execute the critical section at a time. Synchronization mechanisms like
mutexes, semaphores, or other locking mechanisms are typically used to enforce mutual exclusion and manage
critical sections effectively.

20. Describe thrashing. Explain the demand paging in memory management scheme. (See Question 12.)

21. What is virtual memory?


Virtual memory is a memory management technique used by modern operating systems to provide the illusion of a
larger, contiguous, and more extensive address space to programs than the physical memory (RAM) available in a
computer. It allows programs to access memory addresses that may not necessarily correspond to physical RAM
locations.

Key characteristics of virtual memory:

 Address Space: Each process is provided with its own virtual address space, which can be larger than the
actual physical memory.

 Page/File-Based: Virtual memory is typically organized into pages or blocks. Data can be stored in RAM or on
secondary storage (e.g., a hard disk) as needed.

 Page Faults: When a program accesses data that is not in physical memory, a page fault occurs, prompting
the operating system to bring the required data into RAM from secondary storage.

 Memory Protection: Virtual memory provides memory protection, preventing one process from accessing
the memory of another process, enhancing security and stability.

 Improved Resource Utilization: Virtual memory allows for efficient use of physical memory by swapping
data in and out as needed.

22. What is thread? Compare it with process.


Thread:

 A thread is the smallest unit of a CPU's execution.

 Threads within the same process share the same memory space and resources.

 Threads within a process are lighter in terms of resource overhead and context switching time.

 Threads are suitable for tasks that can be parallelized and require shared memory access.

 Threads are more efficient for multitasking within a single process.

Process:

 A process is a standalone program with its own memory space, resources, and state.
 Processes do not share memory space with other processes by default.

 Processes are heavier in terms of resource overhead and context switching time.

 Processes provide better isolation between tasks and are suitable for independent tasks or applications.

 Processes provide better fault tolerance, as a failure in one process typically does not affect others.

In summary, threads are lighter-weight units of execution that share resources within a process, while processes are
separate, independent programs with their own memory and resources. Threads are used for concurrent execution
within a single program, while processes are used for running separate, independent tasks or programs.

23. Explain multilevel feedback queue.


A multilevel feedback queue is a scheduling algorithm used in operating systems to manage the execution of
processes by assigning them to one of several priority queues and adjusting their priorities dynamically based on
their behavior. The multilevel feedback queue scheduling scheme is designed to optimize the execution of different
types of processes, such as interactive, batch, or real-time, by allowing processes to move between queues based on
their resource requirements and execution characteristics.

Here's how a multilevel feedback queue typically works:

1. Multiple Queues: The scheduler maintains multiple priority queues, each with a different priority level.
Typically, there are three or more queues, with the highest-priority queue assigned to time-sensitive or
interactive tasks and the lowest-priority queue for CPU-bound or batch jobs.

2. Initial Assignment: When a process enters the system or is created, it is initially assigned to the highest-
priority queue.

3. Priority Adjustment: The scheduler monitors the behavior of processes in each queue. If a process uses up
its time quantum without completing or if it waits for I/O, its priority may be lowered. Conversely, processes
that voluntarily yield the CPU or experience frequent I/O operations may have their priority increased.

4. Queue Migration: If a process's priority falls below a certain threshold, it is moved to a lower-priority queue.
Conversely, if a process's priority increases, it may be promoted to a higher-priority queue. This dynamic
adjustment helps optimize resource allocation for different types of processes.

5. Execution: The scheduler selects a process for execution from the highest-priority non-empty queue. If a
higher-priority queue becomes non-empty, the scheduler may preempt the currently executing process and
switch to the higher-priority process.

By using multiple queues and dynamically adjusting priorities, the multilevel feedback queue scheduler can provide
good response times for interactive tasks while efficiently utilizing CPU resources for CPU-bound tasks.
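
A highly simplified single-CPU simulation of the demotion behaviour is sketched below, assuming three queues with quanta of 2, 4, and 8 ticks; it implements demotion only, whereas a real scheduler would also boost priorities as described in step 3:

#include <stdio.h>

#define NQ 3                               // queue 0 = highest priority

struct task { int id, remaining, queue; };

int quantum[NQ] = { 2, 4, 8 };             // quantum doubles at each lower level

int main(void) {
    struct task tasks[] = { {1, 5, 0}, {2, 12, 0}, {3, 3, 0} };
    int n = 3, done = 0;

    while (done < n) {
        // Pick the first unfinished task in the highest non-empty queue.
        struct task *t = NULL;
        for (int q = 0; q < NQ && !t; q++)
            for (int i = 0; i < n; i++)
                if (tasks[i].remaining > 0 && tasks[i].queue == q) { t = &tasks[i]; break; }

        int slice = quantum[t->queue];
        int run = t->remaining < slice ? t->remaining : slice;
        t->remaining -= run;
        printf("task %d ran %d ticks at level %d\n", t->id, run, t->queue);

        if (t->remaining == 0)
            done++;                        // task finished
        else if (t->queue < NQ - 1)
            t->queue++;                    // used its full quantum: demote
    }
    return 0;
}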

24. Explain the difference between process and program.

 Definition: A process is a running instance of a program in execution; a program is a set of instructions stored on disk or in memory.

 Dynamic vs. static: A process is dynamic, with a runtime state including memory, CPU registers, and other resources; a program is static, consisting only of code and data.

 Resource utilization: A process uses system resources such as CPU, memory, I/O devices, and files during execution; a program uses no system resources until it is loaded into memory and executed as a process.

 Independence: Processes are independent of each other and can run concurrently; programs are independent of each other but must be executed as processes to run concurrently.

 Interaction: Processes can interact with each other through inter-process communication (IPC) mechanisms; programs do not inherently interact, since interaction occurs only when they run as processes.

 Lifecycle: A process has a lifecycle, including creation, execution, suspension, resumption, and termination; a program has no distinct lifecycle but is loaded into memory for execution as needed.

 Examples: A running web browser, word processor, or spreadsheet application is a process; a text editor, a game application, or a compiler stored on disk is a program.

In summary, a program is a static set of instructions, while a process is a dynamic instance of a program that is
loaded into memory and executed. Processes have their own memory space, resources, and runtime state, making
them capable of independent execution and interaction with other processes. Programs, on the other hand, are
passive until they are executed as processes.

25. What is the difference between a long-term scheduler and a short-term scheduler?

 Frequency of execution: The long-term scheduler runs infrequently (typically seconds to minutes); the short-term scheduler runs very frequently (milliseconds or less).

 Objective: The long-term scheduler selects processes from the job queue and loads them into memory to create new processes; the short-term scheduler selects the next process to execute from the ready queue.

 Focus: The long-term scheduler focuses on process selection based on job characteristics and system resources; the short-term scheduler focuses on CPU allocation and process execution order.

 Number of processes: The long-term scheduler deals with a large number of processes, often from a job pool; the short-term scheduler deals with a relatively smaller number of ready processes.

 Time horizon: The long-term scheduler operates on a longer time horizon, optimizing overall system throughput and resource utilization; the short-term scheduler operates on a very short time horizon, optimizing CPU efficiency and responsiveness.

 Examples: The long-term scheduler decides when to start a new interactive user session or batch job; the short-term scheduler decides which process currently in memory should run next, typically based on priorities or time-sharing algorithms.

26. Differentiate Process vs Threads.

 Definition: A process is a standalone program with its own memory space, resources, and state; a thread is a lightweight unit of execution within a process, sharing the same memory space and resources.

 Isolation: Processes are isolated from each other, so one process cannot directly access another's memory; threads within a process share the same memory space and resources and can communicate directly.

 Creation overhead: Creating and managing processes is more resource-intensive and time-consuming; creating and managing threads is more efficient in terms of resource overhead and time.

 Communication: Inter-process communication (IPC) is required for processes to communicate with each other; threads within the same process can communicate directly through shared memory.

 Fault tolerance: If one process fails or crashes, it does not directly affect other processes; a failure in one thread can potentially affect the entire process and all its threads.

 Resource allocation: Processes have their own system resources, including file handles and sockets; threads within a process share the same system resources, reducing duplication.

 Example: Web browsers, word processors, or any standalone application are processes; multithreading in a web server, where each thread handles a client request, illustrates threads.

27. Differentiate Single partition allocation vs multiple partition allocation.

 Memory usage: In single partition allocation, a process occupies a single, contiguous block of memory; in multiple partition allocation, memory is divided into multiple partitions, each allocated to a separate process.

 Process size: Single partition allocation is limited to the size of available physical memory; multiple partition allocation can accommodate a mix of large and small processes, subject to available partitions.

 Fragmentation: In single partition allocation, internal fragmentation can occur if a process does not use all the allocated memory; in multiple partition allocation, external fragmentation may occur due to varying process sizes and allocation patterns.

 Allocation flexibility: Single partition allocation is less flexible in accommodating varying process sizes; multiple partition allocation is more flexible, adapting to different-sized processes, but may lead to fragmentation.

 Wastage: Single partition allocation can waste memory if the process is smaller than the allocated block; multiple partition allocation may waste memory through fragmentation of non-contiguous free space.

 Implementation complexity: Single partition allocation is simpler to implement due to its fixed allocation strategy; multiple partition allocation is more complex, since partitions must be managed and allocated dynamically.

 Example: Single partition allocation appears in older systems with limited memory where one process runs at a time; multiple partition allocation is used in modern systems supporting multiple concurrent processes of varying sizes.

28. What do you mean by process? Draw the block diagram of process control block. Write down the
different process states.
A process is a running instance of a program in execution. It is the smallest unit of execution in an operating system
and consists of its own memory space, resources, and execution context. Processes may run concurrently, and each
has its own program counter, registers, and memory allocation.

Block diagram of a Process Control Block (PCB): the PCB is commonly drawn as a stacked record with fields for the process ID, process state, program counter, CPU registers, scheduling information, memory-management information, and I/O status (see the C sketch in Question 11).

Different Process States:

1. New: The process is being created or initialized.

2. Ready: The process is ready to run and waiting for CPU time.

3. Running: The process is actively executing on the CPU.

4. Blocked (Waiting): The process is waiting for an event (e.g., I/O completion) before it can continue.

5. Terminated (Exit): The process has finished execution or has been forcibly terminated.

The Process Control Block (PCB) contains information about each of these states, including the program counter
(PC), CPU registers, scheduling information, memory management data, file and I/O information, and more. It is
crucial for the operating system to manage and control processes effectively.

GROUP-C
1. What is mutual exclusion problem concerning to concurrent process? Explain with example.
The mutual exclusion problem is a fundamental issue in concurrent computing, where multiple processes or threads
share resources, and there is a need to ensure that only one process at a time can access a particular resource or a
critical section of code. The goal is to prevent interference or conflicts that may arise when multiple processes
attempt to access the same resource simultaneously.

Example of Mutual Exclusion Problem:

Consider a scenario where two concurrent processes, Process A and Process B, need to access and update a shared
bank account. The account balance is a shared resource that must be protected to ensure data consistency and
integrity. Without proper mutual exclusion, a race condition may occur, leading to incorrect results. Here's how the
mutual exclusion problem arises and can be solved:

1. Without Mutual Exclusion:

 Process A and Process B both access the shared bank account concurrently without any
synchronization mechanism.

 Process A reads the current balance, $1000.

 Process B also reads the current balance, $1000.

 Process A calculates a new balance, subtracts $200 for a withdrawal, and writes the result back,
setting the balance to $800.

 Process B calculates a new balance, adds $300 for a deposit, and writes the result back, setting the
balance to $1300.

 The final balance should be $1100 ($1000 - $200 + $300), but it is $1300 due to the lack of mutual
exclusion.

2. With Mutual Exclusion (Using Locks):

 Both Process A and Process B are required to acquire a lock (mutex) before accessing the shared
bank account.

 Process A acquires the lock, enters the critical section, updates the balance, and releases the lock.

 Process B attempts to acquire the lock but is blocked until Process A releases it.

 Process A completes its operation and releases the lock.

 Process B now acquires the lock, enters the critical section, updates the balance, and releases the
lock.

 With proper mutual exclusion, the final balance is correctly calculated as $1100.

In this example, mutual exclusion is crucial to ensure that only one process can access the shared bank account at a
time, preventing conflicts and maintaining data consistency.
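
A minimal pthreads sketch of the locked version of this scenario (the amounts follow the example above; real banking code would of course do much more):

#include <stdio.h>
#include <pthread.h>

long balance = 1000;
pthread_mutex_t account_lock = PTHREAD_MUTEX_INITIALIZER;

void *withdraw(void *arg) {
    pthread_mutex_lock(&account_lock);   // enter critical section
    long b = balance;                    // read...
    balance = b - 200;                   // ...modify and write as one protected step
    pthread_mutex_unlock(&account_lock); // leave critical section
    return NULL;
}

void *deposit(void *arg) {
    pthread_mutex_lock(&account_lock);
    long b = balance;
    balance = b + 300;
    pthread_mutex_unlock(&account_lock);
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, withdraw, NULL);
    pthread_create(&b, NULL, deposit, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("final balance: %ld\n", balance); // always 1100 with the lock held
    return 0;
}

Without the mutex, the read-modify-write steps in the two threads could interleave and one update could overwrite the other, reproducing the $1300 result from the unsynchronized case.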

2. Describe critical section problem.


The critical section problem is a fundamental synchronization problem in concurrent computing. It pertains to
situations where multiple processes or threads share resources, and there is a need to ensure that only one process
at a time can execute a specific section of code, known as the critical section. The goal is to prevent race conditions
and conflicts that may arise when multiple processes attempt to access shared resources concurrently.

Key requirements of the critical section problem:

1. Mutual Exclusion: Only one process should be allowed to enter its critical section at any given time.

2. Progress: If no process is currently executing in its critical section, and one or more processes want to enter
their critical sections, then only those processes not in the remainder section can participate in deciding
which will enter the critical section next.

3. Bounded Waiting: There exists a bound on the number of times other processes can enter their critical
sections after a process has made a request to enter its critical section and before that request is granted.

Solutions to the critical section problem involve using synchronization mechanisms like semaphores, mutexes, or
locks to ensure that processes or threads can coordinate their access to shared resources and execute their critical
sections safely and in a controlled manner. These mechanisms help in preventing data corruption, race conditions,
and other concurrency-related issues.
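
One classical software solution that satisfies all three requirements for two threads is Peterson's algorithm. A minimal sketch in C follows; C11 atomics are used because plain loads and stores may be reordered on modern hardware:

#include <stdatomic.h>
#include <stdbool.h>

// Peterson's algorithm for two threads, numbered 0 and 1.
atomic_bool flag[2];   // flag[i]: thread i wants to enter its critical section
atomic_int  turn;      // whose turn it is to yield

void enter_critical(int i) {
    int other = 1 - i;
    atomic_store(&flag[i], true);   // announce intent to enter
    atomic_store(&turn, other);     // politely give priority to the other thread
    while (atomic_load(&flag[other]) && atomic_load(&turn) == other)
        ;                           // busy-wait until it is safe to proceed
}

void exit_critical(int i) {
    atomic_store(&flag[i], false);  // leaving: allow the other thread in
}

A thread i brackets its critical section with enter_critical(i) and exit_critical(i). Because the algorithm busy-waits, it is mainly of pedagogical interest; production code normally uses mutexes or semaphores instead.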

3. What are the necessary conditions for deadlock?


Deadlock in a concurrent system can occur under the following necessary conditions, often referred to as the Coffman
conditions:

1. Mutual Exclusion: At least one resource must be non-shareable, meaning only one process can use it at a
time. If multiple processes are allowed to share a resource, deadlock cannot occur.

2. Hold and Wait: Processes must hold resources while waiting for additional ones, creating a situation where
they cannot release resources. In other words, a process must be holding at least one resource and waiting
for another.

3. No Preemption: Resources cannot be preempted from processes; they must be released voluntarily. If a
resource can be forcibly taken away from a process, deadlock is less likely.

4. Circular Wait: There must be a circular chain of two or more processes, each waiting for a resource held by
the next process in the chain. In other words, Process A is waiting for a resource held by Process B, Process B
is waiting for a resource held by Process C, and so on until one process in the chain is waiting for a resource
held by Process A.

If all four of these conditions hold simultaneously, a deadlock can occur. It's important to note that these conditions
are necessary but not sufficient; the presence of these conditions does not guarantee that a deadlock will always
happen, but they are prerequisites for it.

4. Differentiate between internal and external fragmentation. Compare Best fit and Worst fit searching
strategy.

 Definition: Internal fragmentation is wasted memory within allocated blocks, where a portion of the block is unused; external fragmentation is wasted memory outside allocated blocks, in gaps between them.

 Cause: Internal fragmentation occurs when allocated memory is larger than what the process needs; external fragmentation occurs when free memory exists but is split into small, non-contiguous blocks.

 Impact: Internal fragmentation reduces the efficiency of memory usage within individual allocated blocks; external fragmentation reduces the overall memory available for allocation, affecting system efficiency.

 Location: Internal fragmentation lies within allocated memory blocks, affecting actual data storage; external fragmentation lies between allocated memory blocks, affecting future allocations.

 Elimination: Reducing block size to match actual data sizes can reduce internal fragmentation; memory compaction or better memory allocation algorithms can reduce external fragmentation.

Comparison of Best Fit and Worst Fit Memory Allocation Strategies:

 Search strategy: Best fit allocates the smallest available block that satisfies the process's memory requirement; worst fit allocates the largest available block.

 Fragmentation: Best fit tends to produce less external fragmentation because small gaps are used effectively; worst fit can result in more external fragmentation because larger gaps are left behind.

 Efficiency: Best fit often leads to efficient memory utilization by minimizing wasted memory; worst fit may lead to less efficient utilization due to larger gaps between allocated blocks.

 Allocation speed: Best fit may require more time to search for the best-fitting block among the free blocks; worst fit is typically faster, as it simply selects the largest available block.

 Defragmentation difficulty: Best fit is easier to defragment due to smaller gaps and less external fragmentation; worst fit is harder to defragment because of larger, scattered gaps.

In summary, Best Fit searches for the smallest available block that fits the process's requirements, minimizing
internal fragmentation but potentially causing some external fragmentation. Worst Fit searches for the largest
available block, which can lead to more external fragmentation but simpler allocation. The choice between the two
strategies depends on the specific memory allocation requirements and trade-offs in a given system.

5. What is demand paging?


Demand paging is a memory management technique used by modern operating systems to efficiently utilize physical
memory (RAM) by loading only the necessary pages of a program into RAM as they are needed. Instead of loading
the entire program into memory when it starts, demand paging loads only the portions of a program that are
actively being used or referenced by the currently executing processes.

Key features of demand paging:

 Page-Based: Memory is divided into fixed-size pages, and data is loaded into these pages as needed.

 Page Faults: When a process attempts to access a page that is not currently in RAM (a page fault occurs), the
operating system retrieves the required page from secondary storage (usually a disk) and loads it into an
available page frame in RAM.

 Efficient Use of Memory: Demand paging allows for more efficient use of physical memory by swapping data
in and out as needed, reducing the amount of memory required to run multiple processes simultaneously.

 Improved Responsiveness: It enhances system responsiveness by loading only the actively used portions of
a program into memory, enabling faster process startup times.

Demand paging is a significant improvement over earlier memory management techniques as it reduces memory
waste and allows for more efficient multitasking in modern computer systems.

6. What is critical section problem? What are the requirements that the solution to critical section
problem must satisfy?
The critical section problem is a fundamental synchronization problem in concurrent computing, particularly in
multi-process or multi-threaded systems. It pertains to situations where multiple processes or threads share
resources, and there is a need to ensure that only one process at a time can execute a specific section of code,
known as the critical section. The primary goal is to prevent race conditions and conflicts that may arise when
multiple processes attempt to access shared resources or data concurrently.

Requirements for a Solution to the Critical Section Problem:


1. Mutual Exclusion: Only one process can be inside the critical section at any given time. This ensures that
processes do not interfere with each other while accessing shared resources.

2. Progress: If no process is currently in its critical section and some processes want to enter, only those
processes not in their remainder section can participate in deciding which one will enter next. This ensures
that processes do not starve and eventually make progress toward entering their critical sections.

3. Bounded Waiting: There exists a bound on the number of times other processes can enter their critical
sections after a process has made a request to enter its critical section and before that request is granted.
This prevents processes from waiting indefinitely.

Solutions to the critical section problem involve using synchronization mechanisms like semaphores, mutexes, or
locks to ensure that processes or threads can coordinate their access to shared resources and execute their critical
sections safely and in a controlled manner. These mechanisms help in preventing data corruption, race conditions,
and other concurrency-related issues.

7. What is semaphore? How is it accessed? Explain the Dining philosopher’s problem and give the
solution of it using monitor.
 Semaphore: A semaphore is a synchronization primitive used in concurrent programming to control access
to a shared resource or critical section by multiple processes or threads. It consists of an integer variable and
two atomic operations: wait (P) and signal (V). The wait operation decrements the semaphore value and
waits if it becomes negative, while the signal operation increments the semaphore value.

 Accessing Semaphore: Semaphores are accessed through the wait and signal operations. When a process or
thread wants to enter a critical section, it performs a wait operation on the semaphore. When it exits the
critical section, it performs a signal operation to release the semaphore.

Dining Philosophers Problem:

The Dining Philosophers problem is a classic synchronization and concurrency problem that illustrates issues related
to resource allocation and deadlock prevention. It involves five philosophers sitting at a dining table, where each
philosopher alternates between thinking and eating. To eat, a philosopher needs two forks (resources), one on each
side of their plate.

The problem arises: If all philosophers simultaneously pick up their left forks and then attempt to pick up their right
forks, they can become deadlocked, with each philosopher holding one fork and waiting for another.

Solution using a Monitor:

A monitor is a high-level synchronization construct that encapsulates shared data and the operations that can be
performed on that data. It provides mutual exclusion and condition variables for synchronization. Here's a solution
to the Dining Philosophers problem using a monitor:

1. Create a monitor that encapsulates the shared forks (resources) and defines operations like pickup and
putdown for philosophers.

2. Each philosopher calls pickup to acquire the two forks on their sides. If both forks are not available, they
wait using a condition variable.

3. After eating, the philosopher calls putdown to release the forks. This operation notifies any waiting
philosophers that the forks are available.

This solution ensures that philosophers can only pick up forks if both are available, preventing deadlock. The monitor
enforces mutual exclusion and prevents multiple philosophers from attempting to pick up the same fork
simultaneously.
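
C has no built-in monitor construct, so a common approach emulates one with a mutex (the monitor's implicit lock) and condition variables. The sketch below follows the classic state-array solution outlined above; the names pickup, putdown, and test mirror the steps just described:

#include <pthread.h>

#define N 5
#define LEFT(i)  (((i) + N - 1) % N)
#define RIGHT(i) (((i) + 1) % N)

enum { THINKING, HUNGRY, EATING } state[N];
pthread_mutex_t monitor_lock = PTHREAD_MUTEX_INITIALIZER; // the monitor's lock
pthread_cond_t can_eat[N];                                // one condition per philosopher

void init_monitor(void) {            // call once before starting the threads
    for (int i = 0; i < N; i++)
        pthread_cond_init(&can_eat[i], NULL);
}

// Grant the forks to philosopher i only if neither neighbour is eating.
static void test(int i) {
    if (state[i] == HUNGRY &&
        state[LEFT(i)] != EATING && state[RIGHT(i)] != EATING) {
        state[i] = EATING;
        pthread_cond_signal(&can_eat[i]);  // wake i if it is waiting
    }
}

void pickup(int i) {
    pthread_mutex_lock(&monitor_lock);     // enter the monitor
    state[i] = HUNGRY;
    test(i);                               // try to take both forks at once
    while (state[i] != EATING)
        pthread_cond_wait(&can_eat[i], &monitor_lock);
    pthread_mutex_unlock(&monitor_lock);   // leave the monitor
}

void putdown(int i) {
    pthread_mutex_lock(&monitor_lock);
    state[i] = THINKING;
    test(LEFT(i));                         // a neighbour may now be able to eat
    test(RIGHT(i));
    pthread_mutex_unlock(&monitor_lock);
}

Because a philosopher moves to EATING only when neither neighbour is eating, both forks are always acquired together, and the circular wait that causes deadlock cannot form.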
8. What do you mean by long-term, short-term, and medium-term scheduler?
 Long-Term Scheduler (Job Scheduler): The long-term scheduler is responsible for selecting processes from
the job queue and loading them into memory to create new processes. It determines which processes are
admitted to the system and allocates resources to them. This scheduler runs infrequently, typically in
seconds to minutes.

 Short-Term Scheduler (CPU Scheduler): The short-term scheduler selects the next process to execute from
the ready queue based on priority or time-sharing algorithms. It decides which process gets CPU time and
how long it runs. This scheduler operates very frequently, in milliseconds or less, to ensure efficient CPU
allocation and responsiveness.

 Medium-Term Scheduler: The medium-term scheduler is not present in all operating systems, but when
used, it deals with the swapping of processes in and out of memory. It can suspend processes, freeing up
memory for other processes. The medium-term scheduler runs less frequently than the short-term scheduler
but more frequently than the long-term scheduler. It helps manage memory utilization and system
performance.

9. What are the necessary conditions for Deadlock? Describe a system model for deadlock. Explain the
resource allocation graph for deadlock avoidance. Discuss different deadlock recovery techniques. **
Necessary Conditions for Deadlock (the Coffman Conditions):

1. Mutual Exclusion: At least one resource must be non-shareable, meaning only one process can use it at a
time.

2. Hold and Wait: Processes must hold resources while waiting for additional ones.

3. No Preemption: Resources cannot be preempted from processes; they must be released voluntarily.

4. Circular Wait: There must be a circular chain of two or more processes, each waiting for a resource held by
the next process in the chain.

System Model for Deadlock:

 A system model for deadlock typically includes processes, resources (e.g., printers, CPUs), and the allocation
of resources to processes. The model also includes a wait-for graph or resource allocation graph to represent
the relationships between processes and resources.

Resource Allocation Graph (for Deadlock Avoidance):

 In a resource allocation graph, nodes represent processes and resources, and edges represent the allocation
of resources to processes and the wait-for relationships. There are two types of nodes: P-nodes (for
processes) and R-nodes (for resources).

 Two types of edges are used in the graph:

1. Request Edge (P -> R): Represents a process requesting, and waiting for, a resource.

2. Assignment Edge (R -> P): Represents a resource being allocated to a process.

 In deadlock avoidance, a dashed claim edge (P -> R) may also be drawn to indicate a resource the process may request in the future. The related wait-for graph, used for deadlock detection, collapses the resource nodes, so an edge P1 -> P2 means P1 is waiting for a resource held by P2.

Deadlock Avoidance and Recovery Techniques:

 Banker's Algorithm: a resource allocation algorithm that checks whether the system would remain in a safe state before granting a request. If an allocation would lead to an unsafe state, it is delayed until it can be granted safely (a minimal sketch of the safety check follows this list).
 Resource Allocation Graph: Detects deadlock by checking for cycles in the resource allocation graph. If a
cycle exists, deadlock is possible.

 Wait-Die and Wound-Wait Schemes: Used in database systems to prevent deadlock by allowing processes
to wait or be aborted based on their age and priority.

 Timeouts and Reclamation: Timeouts can be set for resource requests, and resources can be forcefully
reclaimed from processes that exceed their allotted time.
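
A minimal sketch of the safety check at the core of the Banker's algorithm, assuming fixed sizes (5 processes, 3 resource types) and that the Need matrix (Max minus Allocation) has already been computed:

#include <stdbool.h>
#include <string.h>

#define P 5   // number of processes
#define R 3   // number of resource types

// Returns true if some ordering lets every process acquire its remaining
// Need, run to completion, and release its Allocation (a safe state).
bool is_safe(int available[R], int alloc[P][R], int need[P][R]) {
    int work[R];
    bool finished[P] = { false };
    memcpy(work, available, sizeof work);

    for (int done = 0; done < P; ) {
        bool progressed = false;
        for (int i = 0; i < P; i++) {
            if (finished[i]) continue;
            bool can_run = true;
            for (int j = 0; j < R; j++)
                if (need[i][j] > work[j]) { can_run = false; break; }
            if (can_run) {                     // pretend process i finishes...
                for (int j = 0; j < R; j++)
                    work[j] += alloc[i][j];    // ...and releases its resources
                finished[i] = true;
                progressed = true;
                done++;
            }
        }
        if (!progressed) return false;         // unsafe: someone can never finish
    }
    return true;
}

Before granting a request, the Banker's algorithm tentatively applies the allocation and grants it only if is_safe still returns true.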

10. Explain Belady’s anomaly for page replacement algorithm.


Belady's Anomaly is an interesting phenomenon in the context of page replacement algorithms used in virtual
memory management. It occurs when increasing the number of page frames (physical memory) available in a system
does not necessarily result in a reduction of page faults, but instead, the page fault rate may increase.

Belady's Anomaly contradicts the common intuition that more memory should lead to better performance. It can
happen with certain page replacement algorithms, such as the FIFO (First-In-First-Out) page replacement algorithm.

Explanation:

 In Belady's Anomaly, as the number of page frames increases, you might expect fewer page faults because
there's more room to keep frequently used pages in memory.

 However, for some page reference patterns, adding more page frames can result in the eviction of pages
that would have otherwise stayed in memory, causing additional page faults.

Belady's Anomaly illustrates that the performance of page replacement algorithms can be counterintuitive and that
adding more memory doesn't always guarantee improved system performance, depending on the specific algorithm
used.
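
The anomaly can be demonstrated with the classic reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5: under FIFO it causes 9 page faults with 3 frames but 10 with 4 frames. A small C program that counts the faults:

#include <stdio.h>
#include <string.h>

// Count page faults for FIFO replacement with the given number of frames.
int fifo_faults(const int *refs, int n, int frames) {
    int frame[16], next = 0, faults = 0;
    memset(frame, -1, sizeof frame);           // all frames start empty
    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < frames; j++)
            if (frame[j] == refs[i]) { hit = 1; break; }
        if (!hit) {
            frame[next] = refs[i];             // evict the oldest page
            next = (next + 1) % frames;
            faults++;
        }
    }
    return faults;
}

int main(void) {
    int refs[] = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};
    int n = sizeof refs / sizeof refs[0];
    printf("3 frames: %d faults\n", fifo_faults(refs, n, 3)); // prints 9
    printf("4 frames: %d faults\n", fifo_faults(refs, n, 4)); // prints 10
    return 0;
}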

11. Describe producer and consumers problem with an unbounded buffer with a sample program.
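In the unbounded-buffer variant of the problem, the buffer can grow without limit, so the producer never blocks; only the consumer must wait, when the buffer is empty. A sample C program is sketched below, using a linked list as the unbounded buffer, a mutex to protect the list, and a single counting semaphore for the item count (names and iteration counts are illustrative):

#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>
#include <semaphore.h>

struct node { int item; struct node *next; };

struct node *head = NULL, *tail = NULL;     // the unbounded buffer (a list)
pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;
sem_t items;                                // number of items, starts at 0

void *producer(void *arg) {
    for (int i = 0; i < 10; i++) {
        struct node *n = malloc(sizeof *n);
        n->item = i;
        n->next = NULL;
        pthread_mutex_lock(&list_lock);
        if (tail) tail->next = n; else head = n;   // append; never blocks
        tail = n;
        pthread_mutex_unlock(&list_lock);
        sem_post(&items);                   // signal: one more item
    }
    return NULL;
}

void *consumer(void *arg) {
    for (int i = 0; i < 10; i++) {
        sem_wait(&items);                   // wait until an item exists
        pthread_mutex_lock(&list_lock);
        struct node *n = head;              // remove from the front
        head = n->next;
        if (!head) tail = NULL;
        pthread_mutex_unlock(&list_lock);
        printf("consumed %d\n", n->item);
        free(n);
    }
    return NULL;
}

int main(void) {
    pthread_t p, c;
    sem_init(&items, 0, 0);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}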

12. Write and explain the logic of the “Bully algorithm for election of a successor” in a distributed
system.
The Bully Algorithm is a leader election algorithm used in distributed systems to elect a coordinator or leader among
a group of processes. The algorithm ensures that a single process becomes the leader, and it can be used in
scenarios where only one process should perform certain tasks or make decisions.

Logic of the Bully Algorithm:

1. When a process realizes that the current leader is no longer responsive (e.g., crashed or failed), it initiates an
election by sending an "election" message to all processes with higher priorities.

2. If no higher-priority process responds within a timeout, the initiating process becomes the new leader and
sends a "coordinate" message to all lower-priority processes to inform them of its leadership.

3. If a higher-priority process responds, it stops the election and announces its leadership by sending a
"coordinate" message to all lower-priority processes.

4. The process with the highest priority wins the election and becomes the leader.

The Bully Algorithm ensures that a new leader is elected when the current leader becomes unavailable, maintaining
system continuity and coordination in a distributed environment.
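
A toy single-process simulation of this logic is sketched below; it assumes five processes in which a higher ID means higher priority, models crashed processes with a boolean array, and treats replies as instantaneous (a real implementation would exchange messages over the network and rely on timeouts):

#include <stdio.h>
#include <stdbool.h>

#define N 5    // process IDs 0..4; higher ID = higher priority

bool alive[N] = { true, true, true, false, false }; // P3 and P4 have crashed
int coordinator = -1;

// Process id starts an election by asking every higher-ID process to take
// over; if none is alive, it wins and announces itself as coordinator.
void election(int id) {
    for (int p = id + 1; p < N; p++) {
        if (alive[p]) {
            printf("P%d -> P%d: ELECTION\n", id, p);
            election(p);                    // the higher process takes over
            return;
        }
    }
    coordinator = id;                       // no higher process answered
    for (int p = 0; p < N; p++)
        if (alive[p] && p != id)
            printf("P%d -> P%d: COORDINATOR\n", id, p);
}

int main(void) {
    election(0);                            // P0 notices the old leader is down
    printf("New coordinator: P%d\n", coordinator); // P2 wins in this setup
    return 0;
}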
13. What are the necessary conditions for deadlock? *******
The necessary conditions for deadlock in a concurrent system, often referred to as the Coffman conditions, are as
follows:

1. Mutual Exclusion: At least one resource must be non-shareable, meaning only one process can use it at a
time. If multiple processes are allowed to share a resource, deadlock cannot occur.

2. Hold and Wait: Processes must hold resources while waiting for additional ones, creating a situation where
they cannot release resources. In other words, a process must be holding at least one resource and waiting
for another.

3. No Preemption: Resources cannot be preempted from processes; they must be released voluntarily. If a
resource can be forcibly taken away from a process, deadlock is less likely.

4. Circular Wait: There must be a circular chain of two or more processes, each waiting for a resource held by
the next process in the chain. In other words, Process A is waiting for a resource held by Process B, Process B
is waiting for a resource held by Process C, and so on until one process in the chain is waiting for a resource
held by Process A.

If all four of these conditions hold simultaneously, a deadlock can occur. It's important to note that these conditions
are necessary but not sufficient; the presence of these conditions does not guarantee that a deadlock will always
happen, but they are prerequisites for it.

14. Explain the difference between process and program. Briefly discuss about process creation and
termination.
 Program: A program is a set of instructions written in a programming language that can be executed by a
computer. It is a static entity, typically stored on secondary storage (e.g., a hard disk), and doesn't have an
associated state. A program is the source code or binary code of an application.

 Process: A process is a dynamic entity that represents the execution of a program. It includes not only the
program's code but also its associated data, execution context, and system resources. Multiple processes
can run concurrently, each with its own memory space and system resources. Processes are the actual
instances of programs in execution.

Process Creation and Termination:

 Process Creation:

 Forking: In Unix-like operating systems, a new process can be created by using the fork() system call.
The new process is a copy of the parent process, and they run independently.

 Executing: After forking, the child process often uses the exec() system call to load a new program
into its address space, effectively replacing the program it inherited from the parent.

 Other Methods: Process creation can also occur through other mechanisms, such as CreateProcess()
in Windows or spawn() in Unix.

 Process Termination:

 Processes can terminate voluntarily by calling an exit system call (e.g., exit() in C).

 Processes can be terminated by the operating system due to errors or violations of system policies.

 Parent processes can also terminate child processes using specific system calls or signals.
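
A minimal sketch of this life cycle on a Unix-like system — the parent forks a child, the child replaces its image
with exec, and the parent collects the child's exit status (the choice of "ls -l" is illustrative):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();                    // create a child process
    if (pid == 0) {
        // Child: replace its image with a new program
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");                  // reached only if exec fails
        exit(1);
    } else if (pid > 0) {
        int status;
        waitpid(pid, &status, 0);          // parent waits for the child to terminate
        printf("Child %d exited with status %d\n", (int)pid, WEXITSTATUS(status));
    } else {
        perror("fork");                    // fork failed
    }
    return 0;
}
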
15. What is Critical section problem? What are the requirements that the solution to critical section
problem must satisfy?
 Critical Section Problem: The critical section problem is a fundamental synchronization problem in
concurrent computing. It involves multiple processes trying to access a shared resource or a critical section
of code in a way that ensures mutual exclusion, progress, and bounded waiting.

 Requirements for a Solution:

1. Mutual Exclusion: Only one process should be allowed to enter its critical section at any given time.

2. Progress: If no process is currently executing in its critical section and some processes want to enter
their critical sections, only those processes not in their remainder section can participate in deciding
which one will enter next. This ensures that processes do not starve and eventually make progress
toward entering their critical sections.

3. Bounded Waiting: There exists a bound on the number of times other processes can enter their
critical sections after a process has made a request to enter its critical section and before that
request is granted. This prevents processes from waiting indefinitely.

16. What is semaphore? How is it accessed? Explain the dining Philosopher’s problem and give the
solution of it, using semaphore.
 Semaphore: A semaphore is a synchronization primitive used in concurrent programming to control access
to shared resources. It can be accessed using two atomic operations: wait (P) and signal (V). Semaphores can
be used to manage concurrent access to resources and solve synchronization problems.

 Dining Philosophers Problem: The Dining Philosophers problem is a classic synchronization problem where
several philosophers sit around a dining table with a bowl of spaghetti and forks. To eat, a philosopher needs
two forks, one on each side of their plate. Philosophers alternate between thinking and eating, but they
must avoid conflicts to prevent deadlock.

 Semaphore Solution: The Dining Philosophers problem can be solved using semaphores to control access to
forks. Each fork is represented by a semaphore. Philosophers acquire forks by performing wait operations on
the corresponding semaphore and release forks by performing signal operations.

Here's a simplified example of solving the Dining Philosophers problem using semaphores in C. To avoid the
classic deadlock in which every philosopher holds their left fork and waits forever for the right one, odd-numbered
philosophers pick up their forks in the opposite order:
#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

#define NUM_PHILOSOPHERS 5

sem_t forks[NUM_PHILOSOPHERS];

void *philosopher(void *arg) {
    int id = *(int *)arg;
    int left_fork = id;
    int right_fork = (id + 1) % NUM_PHILOSOPHERS;

    while (1) {
        // Thinking
        printf("Philosopher %d is thinking.\n", id);

        // Acquire forks; the asymmetric order breaks the circular wait
        if (id % 2 == 0) {
            sem_wait(&forks[left_fork]);
            sem_wait(&forks[right_fork]);
        } else {
            sem_wait(&forks[right_fork]);
            sem_wait(&forks[left_fork]);
        }

        // Eating
        printf("Philosopher %d is eating.\n", id);

        // Release forks
        sem_post(&forks[left_fork]);
        sem_post(&forks[right_fork]);
    }
}

int main() {
    pthread_t philosophers[NUM_PHILOSOPHERS];
    int ids[NUM_PHILOSOPHERS];

    for (int i = 0; i < NUM_PHILOSOPHERS; i++) {
        sem_init(&forks[i], 0, 1); // Initialize forks (binary semaphores)
        ids[i] = i;
        pthread_create(&philosophers[i], NULL, philosopher, &ids[i]);
    }

    for (int i = 0; i < NUM_PHILOSOPHERS; i++) {
        pthread_join(philosophers[i], NULL);
        sem_destroy(&forks[i]);
    }

    return 0;
}
In this program, each philosopher is represented by a thread and each fork by a binary semaphore. Because
odd-numbered philosophers reverse their pickup order, no cycle of philosophers each holding one fork can form,
so the program is deadlock-free. (If every philosopher picked up the left fork first, all five could each hold one
fork and wait forever for the other.)

17. What is system deadlock? Explain necessary conditions of deadlock? *****


 System Deadlock: System deadlock, often referred to simply as deadlock, is a state in which two or more
processes or threads are unable to proceed because they are each waiting for the other(s) to release
resources. In a deadlock, processes become stuck in a cyclic dependency, preventing any of them from
making progress. Deadlocks can occur in various systems, including operating systems, database systems,
and distributed systems.

Necessary Conditions for Deadlock (Coffman Conditions): To have a deadlock situation, the following four
necessary conditions must be met simultaneously:

1. Mutual Exclusion: At least one resource must be non-shareable, meaning only one process can use it at a
time. If multiple processes are allowed to share a resource, deadlock cannot occur.

2. Hold and Wait: Processes must hold resources while waiting for additional ones. In other words, a process
must be holding at least one resource and waiting for another.

3. No Preemption: Resources cannot be preempted from processes; they must be released voluntarily. If a
resource can be forcibly taken away from a process, deadlock is less likely.

4. Circular Wait: There must be a circular chain of two or more processes, each waiting for a resource held by
the next process in the chain. In other words, Process A is waiting for a resource held by Process B, Process B
is waiting for a resource held by Process C, and so on until one process in the chain is waiting for a resource
held by Process A.

These four conditions, when met together, create a situation where processes are deadlocked, and none can make
progress until one or more of the conditions are broken.

18. Explain the difference between process and program. Briefly discuss creation and termination.
 Program: A program is a set of instructions written in a programming language that can be executed by a
computer. It is a static entity, typically stored on secondary storage (e.g., a hard disk), and doesn't have an
associated state. A program is the source code or binary code of an application.

 Process: A process is a dynamic entity that represents the execution of a program. It includes not only the
program's code but also its associated data, execution context, and system resources. Multiple processes
can run concurrently, each with its own memory space and system resources. Processes are the actual
instances of programs in execution.

Process Creation and Termination:

 Process Creation:

 Forking: In Unix-like operating systems, a new process can be created by using the fork() system call.
The new process is a copy of the parent process, and they run independently.

 Executing: After forking, the child process often uses the exec() system call to load a new program
into its address space, effectively replacing the program it inherited from the parent.

 Other Methods: Process creation can also occur through other mechanisms, such as CreateProcess()
in Windows or spawn() in Unix.

 Process Termination:

 Processes can terminate voluntarily by calling an exit system call (e.g., exit() in C).

 Processes can be terminated by the operating system due to errors or violations of system policies.

 Parent processes can also terminate child processes using specific system calls or signals.

19. What is critical section problem? What are the requirements that the solution to critical section
problem must satisfy?
 Critical Section Problem: The critical section problem is a fundamental synchronization problem in
concurrent computing. It involves multiple processes trying to access a shared resource or a critical section
of code in a way that ensures mutual exclusion, progress, and bounded waiting.

Requirements for a Solution:

1. Mutual Exclusion: Only one process should be allowed to enter its critical section at any given time.

2. Progress: If no process is currently executing in its critical section and some processes want to enter their
critical sections, only those processes not in their remainder section can participate in deciding which one
will enter next. This ensures that processes do not starve and eventually make progress toward entering
their critical sections.

3. Bounded Waiting: There exists a bound on the number of times other processes can enter their critical
sections after a process has made a request to enter its critical section and before that request is granted.
This prevents processes from waiting indefinitely.

Solutions to the critical section problem involve using synchronization mechanisms like semaphores, mutexes, or
locks to ensure that processes or threads can coordinate their access to shared resources and execute their critical
sections safely and in a controlled manner. These mechanisms help in preventing data corruption, race conditions,
and other concurrency-related issues.
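
As an illustrative sketch (assuming POSIX threads), the fragment below wraps a shared counter update in a mutex
so that the entry and exit sections enforce mutual exclusion; without the lock, the final value of counter would be
unpredictable:

#include <pthread.h>
#include <stdio.h>

long counter = 0;                            // shared resource
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg) {
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);           // entry section: acquire mutual exclusion
        counter++;                           // critical section
        pthread_mutex_unlock(&lock);         // exit section: release the lock
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);      // always 200000 with the lock held
    return 0;
}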

20. What is deadlock? Write down necessary conditions for deadlock? ****
(See the answers to Q.13 and Q.17 above: mutual exclusion, hold and wait, no preemption, and circular wait.)

21. What is process? Explain Process State and Process Control Block.
 Process: A process is a fundamental concept in operating systems and represents a program in execution. It
is a dynamic entity that includes the program code, data, execution context, system resources, and a
program counter. A process can run independently and can have multiple instances (multiple processes can
execute the same program concurrently).
 Process State: The process state represents the current condition or phase of a process during its execution.
Common process states include:

1. New: The process is being created but has not yet started executing.

2. Ready: The process is ready to execute but is waiting for the CPU to be assigned.

3. Running: The process is currently being executed by the CPU.

4. Blocked (or Waiting): The process is temporarily halted and is waiting for a particular event or
resource (e.g., I/O completion).

5. Terminated (or Exit): The process has finished execution and has been terminated.

 Process Control Block (PCB): The Process Control Block is a data structure used by the operating system to
manage and maintain information about each process. It contains various pieces of information about the
process, including:

 Process ID (PID)

 Program counter (PC)

 CPU registers

 Process state (e.g., running, ready, blocked)

 Priority

 Memory information (e.g., memory limits)

 Open file descriptors

 CPU scheduling information

 Pointers to parent and child processes

The PCB allows the operating system to save and restore the state of a process during context switches, manage
process scheduling, and keep track of process-related information. It plays a crucial role in process management and
ensures that processes can be managed effectively in a multitasking environment.

22. What is semaphore? How can semaphore be used to enforce mutual exclusion? Explain Producer-
Consumer problem. Explain Dining Philosopher problem.
 Semaphore: A semaphore is a synchronization primitive used in concurrent programming to control access
to shared resources. It can be used to enforce mutual exclusion, synchronization, and coordination among
multiple processes or threads. Semaphores can be accessed using two fundamental operations: wait (P) and
signal (V).

 Enforcing Mutual Exclusion with Semaphores: Semaphores can be used to ensure that only one process or
thread can access a critical section of code or a shared resource at a time. By initializing a semaphore to 1
(a binary semaphore), you can create a mutual exclusion mechanism, as the sketch after this list shows.

 Producer-Consumer Problem: The Producer-Consumer problem is a classic synchronization problem in
which two types of processes, producers and consumers, share a common, finite-size buffer. Producers add
data to the buffer, while consumers remove data from it. The problem is to ensure that producers and
consumers operate safely and efficiently without violating mutual exclusion or causing buffer overflows or
underflows.

 Dining Philosophers Problem: The Dining Philosophers problem is another classic synchronization problem
involving a group of philosophers sitting around a dining table. Each philosopher alternates between thinking
and eating. To eat, a philosopher needs two forks, one on each side of their plate. The problem is to design a
solution that prevents deadlocks, ensures fairness, and allows philosophers to take turns eating.
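
A minimal sketch of the binary-semaphore mechanism mentioned above, assuming POSIX semaphores (the names
worker and shared are illustrative):

#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

sem_t mutex;                          // binary semaphore guarding the shared variable
int shared = 0;

void *worker(void *arg) {
    sem_wait(&mutex);                 // P(mutex): blocks if another thread is inside
    shared++;                         // critical section: only one thread mutates at a time
    sem_post(&mutex);                 // V(mutex): release, waking one waiter if any
    return NULL;
}

int main(void) {
    sem_init(&mutex, 0, 1);           // initial value 1 => mutual exclusion
    pthread_t t[4];
    for (int i = 0; i < 4; i++) pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < 4; i++) pthread_join(t[i], NULL);
    printf("shared = %d\n", shared);  // always 4
    sem_destroy(&mutex);
    return 0;
}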

23. Differentiate process switching and context switching. **

Aspect-by-aspect comparison:

 Definition: Process switching is a transition between processes; context switching is a transition within a
process.

 Involves: Process switching moves the CPU between different processes in a multi-process system; context
switching moves it between different threads or tasks within a single process.

 Granularity: Process switching occurs at the process level; context switching occurs at a finer level within a
process.

 Overhead: Process switching typically involves higher overhead because it switches between separate memory
spaces; context switching involves lower overhead because the threads share the same memory space.

 Common in: Process switching is common in multi-process systems where multiple independent processes run
concurrently; context switching is common in multi-threaded applications, where threads share resources
within a single process.

 State preservation: Process switching preserves and restores the entire process state, including CPU registers,
program counter, and memory mappings; context switching preserves and restores only the execution context
of the current thread, such as CPU registers and the program counter.

 Frequency: Process switches occur less frequently than context switches, since threads within a process may
frequently be scheduled to run and yield the CPU.

 Synchronization: Inter-process communication requires mechanisms like semaphores or message passing;
inter-thread communication uses mechanisms such as mutexes or condition variables.

Process switching involves transitioning between different processes, each with its own memory space and
resources. Context switching occurs within a process, typically between threads, where threads share the same
memory space but have their own execution contexts. Context switching is generally more efficient due to its lower
overhead.

24. Under which condition does page fault occur?


A page fault occurs in virtual memory systems when a program or process attempts to access a page of memory that
is currently not in physical RAM (main memory). In other words, a page fault occurs when a required page is not
present in the RAM, and the operating system needs to load the page from secondary storage (e.g., a hard disk) into
RAM before the program can continue its execution.

A page fault can happen under the following conditions:

1. Page Not in RAM: The referenced memory page is not currently resident in physical memory (RAM). This can
occur when a program initially starts, or when it accesses a part of its address space that has been swapped
out or not yet loaded into RAM.

2. Protection Violation: The program attempts to access a memory page for which it lacks proper permissions
(e.g., writing to a read-only page). In this case, the operating system may terminate the program or raise an
exception.
3. Address Space Exceeds Physical Memory: In a virtual memory system, the total addressable space may
exceed the available physical memory. When the system runs out of physical memory and needs to make
space for new pages, it may perform page replacement, swapping some pages out to secondary storage to
make room for others.

25. Explain mutual exclusion.

a) Write the first algorithm of mutual exclusion.

b) What are their problems?


Mutual exclusion is a fundamental concept in concurrent programming and operating systems. It refers to the
property that ensures only one process or thread can access a shared resource, critical section, or piece of data at
any given time. The goal of mutual exclusion is to prevent multiple processes or threads from concurrently modifying
shared data, which can lead to data corruption and race conditions.

a) First Algorithm of Mutual Exclusion (Peterson's Algorithm):

Peterson's Algorithm is one of the earliest mutual exclusion algorithms designed for two processes. It uses two
shared variables, flag[0] and flag[1], and a turn variable to coordinate access to a critical section. Here's the
algorithm in pseudocode:

Initialization:
flag[0] = flag[1] = false; // Initially, neither process is interested.
turn = 0; // Let process 0 go first.

Process 0:
flag[0] = true; // Process 0 is interested.
turn = 1; // Pass the turn to process 1.
while (flag[1] && turn == 1); // Wait while process 1 is interested and it's their turn.
// Critical Section (Process 0 accesses the shared resource)
flag[0] = false; // Process 0 is done.

Process 1:
flag[1] = true; // Process 1 is interested.
turn = 0; // Pass the turn to process 0.
while (flag[0] && turn == 0); // Wait while process 0 is interested and it's their turn.
// Critical Section (Process 1 accesses the shared resource)
flag[1] = false; // Process 1 is done.

b) Problems with Peterson's Algorithm:

 Limited to Two Processes: Peterson's Algorithm is designed for two processes only and cannot be directly
extended to handle more than two processes.

 Busy-Waiting: The algorithm involves busy-waiting (spinning) while a process waits for its turn and for the
other process to finish its critical section. This consumes CPU cycles and is not an efficient use of resources.

 Relies on Strict Memory Ordering: Peterson's Algorithm is only correct under a sequentially consistent
memory model; on modern processors that reorder memory operations, it can fail unless memory barriers
are added.

 Not Suitable for Modern Systems: For modern multi-core systems and more complex scenarios, other
synchronization primitives like semaphores and mutexes are typically used, as they offer more flexibility and
efficiency.
26) Define a process. Describe the life cycle of a process.
 Process Definition: In computing, a process is a fundamental concept that represents the execution of a
program in a computer system. It includes not only the program's code but also its associated data,
execution context, and system resources. A process runs independently, with its own memory space, and
can be seen as a program in execution.

 Life Cycle of a Process:

The life cycle of a process typically consists of several states, and a process can transition between these states
during its execution. The common process states include:

1. New: In this state, a process is being created but has not yet started executing. The operating system is
allocating resources and setting up the process's initial state.

2. Ready: A process enters the ready state when it is prepared to execute but is waiting for the CPU to be
assigned to it. Processes in the ready state are waiting in a queue to be scheduled for execution.

3. Running: When the CPU scheduler selects a process from the ready queue to execute, it enters the running
state. In this state, the process's instructions are being executed on the CPU.

4. Blocked (Waiting): A process may enter the blocked state when it is waiting for an event or resource, such as
I/O completion or user input. While blocked, the process is not using CPU time and remains in a blocked
queue.

5. Terminated (Exit): When a process completes its execution or is terminated by the operating system, it
enters the terminated state. In this state, the process's resources are released, and its exit status is typically
reported.

The life cycle of a process involves transitions between these states. For example, a process may transition from the
ready state to the running state when it is scheduled to execute and back to the ready state when it yields the CPU
or is interrupted. Similarly, a blocked process can transition back to the ready state when the event it is waiting for
occurs.

Processes can also be created, forked, or spawned, and they may communicate with each other through inter-
process communication mechanisms. The life cycle of a process is managed and controlled by the operating system
to ensure efficient resource allocation and execution.

27) Write the difference between partition allocation and multiple partition allocation.

Aspect-by-aspect comparison:

 Allocation of space: In partition allocation, a single partition is allocated to each process, which occupies the
entire partition; in multiple partition allocation, several partitions are available, and multiple processes can be
allocated to different partitions simultaneously.

 Size of partitions: Partitions are typically of fixed sizes in the first scheme; in the second, partitions can have
variable sizes to accommodate processes of different sizes.

 Wastage of memory: Fixed-size partitions may lead to internal fragmentation, where some memory within a
partition remains unused; variable-size partitions aim to minimize internal fragmentation, though some
fragmentation may still occur.

 Resource utilization: Fixed partitions may use memory inefficiently, especially if processes are small compared
to the partition size; variable partitions offer better utilization because they can be sized to match the memory
requirements of processes more closely.

 Allocation flexibility: Fixed partitioning is less flexible in accommodating processes with varying memory
requirements; variable partitioning can handle processes of different sizes without significant wastage.

 Fragmentation types: Fixed partitioning commonly exhibits internal fragmentation (unused memory within a
partition); variable partitioning may exhibit external fragmentation (unused gaps between partitions).

 Examples: Fixed-size partitioning is commonly found in older operating systems like MS-DOS; variable-size
partitioning is used in modern systems with demand paging and virtual memory.

28) Under what conditions do page faults occur?


A page fault occurs in virtual memory systems when a program or process attempts to access a page of memory that
is currently not in physical RAM (main memory). Page faults can occur under the following conditions:

1. Page Not in RAM: The referenced memory page is not currently resident in physical memory (RAM). This can
occur when a program initially starts, or when it accesses a part of its address space that has been swapped
out or not yet loaded into RAM.

2. Protection Violation: The program attempts to access a memory page for which it lacks proper permissions
(e.g., writing to a read-only page). In this case, the operating system may terminate the program or raise an
exception.

3. Address Space Exceeds Physical Memory: In a virtual memory system, the total addressable space may
exceed the available physical memory. When the system runs out of physical memory and needs to make
space for new pages, it may perform page replacement, swapping some pages out to secondary storage to
make room for others.

In summary, page faults occur when the operating system needs to bring a page into physical memory from
secondary storage or when a program attempts to access memory it's not allowed to access. Handling page faults is
a crucial part of virtual memory management.

29) What is semaphore? Write down the algorithm, using semaphore to solve the producer-consumer (finite
buffer) problem.
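
A semaphore is a synchronization primitive accessed through the atomic operations wait (P) and signal (V), as
defined in Q.16 and Q.22. The finite-buffer (bounded-buffer) producer-consumer problem can be solved with three
semaphores: mutex (binary, initialized to 1) for mutual exclusion on the buffer, empty (counting, initialized to the
buffer size N) for free slots, and full (counting, initialized to 0) for filled slots. A standard algorithm sketch, in the
same pseudocode style used elsewhere in this document:

Semaphore mutex = 1;   // Mutual exclusion on the buffer
Semaphore empty = N;   // Number of empty slots
Semaphore full = 0;    // Number of filled slots

Producer:
while (true) {
    item = produce_item();
    P(empty);              // Wait for a free slot
    P(mutex);              // Lock the buffer
    insert(item);          // Add the item to the buffer
    V(mutex);              // Unlock the buffer
    V(full);               // One more filled slot
}

Consumer:
while (true) {
    P(full);               // Wait for a filled slot
    P(mutex);              // Lock the buffer
    item = remove_item();  // Take an item from the buffer
    V(mutex);              // Unlock the buffer
    V(empty);              // One more free slot
    consume_item(item);
}

Note that the producer performs P(empty) before P(mutex); reversing that order can deadlock when the buffer is
full, because the producer would hold the buffer lock while waiting for a slot that only a consumer can free.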

30) Describe a system model for deadlock. Explain the combined approach to deadlock handling. Explain
Banker’s algorithm for deadlock avoidance.
 System Model for Deadlock:

 A system model for deadlock typically includes a set of processes, a set of resources (e.g., CPU,
memory, devices), and a set of resource types.

 Processes can request resources, use resources, and release resources.

 Resources are categorized into different types, and each resource type has a limited number of
instances.

 Processes can hold resources while waiting for additional resources to be allocated.

 Deadlock can occur when processes are unable to proceed because they are waiting for resources
that are held by other processes.

 Combined Approach to Deadlock Handling:


 A combined approach to deadlock handling involves prevention, avoidance, detection, and recovery
strategies.

 Prevention: Structurally eliminate one or more necessary conditions for deadlock (e.g., removing the
"circular wait" condition).

 Avoidance: Dynamically allocate resources to processes in a way that ensures that deadlock cannot
occur. This is typically done using resource allocation graphs or Banker's algorithm.

 Detection: Periodically check the system for the presence of a deadlock. If detected, take corrective
action, such as killing processes.

 Recovery: After detecting a deadlock, take steps to recover from it. This may involve killing
processes involved in the deadlock to free up resources.

 Banker's Algorithm for Deadlock Avoidance:

 Banker's algorithm is a resource allocation and deadlock avoidance algorithm.

 It keeps track of the maximum resource needs, current resource allocations, and available resources.

 It checks whether granting a resource request will result in a safe state (a state where deadlock
cannot occur).

 If a request will leave the system in a safe state, the resource is allocated; otherwise, the process
must wait.

 Banker's algorithm ensures that processes do not enter a state where they can be deadlocked.
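
A compact sketch of the safety check at the core of Banker's algorithm; the matrices in main are the classic
five-process, three-resource textbook example, used here purely for illustration:

#include <stdio.h>
#include <stdbool.h>

#define P 5   // number of processes (illustrative)
#define R 3   // number of resource types (illustrative)

// Returns true if the state described by avail/max/alloc is safe.
bool is_safe(int avail[R], int max[P][R], int alloc[P][R]) {
    int work[R];
    bool finish[P] = { false };
    for (int j = 0; j < R; j++) work[j] = avail[j];

    for (int count = 0; count < P; ) {
        bool progressed = false;
        for (int i = 0; i < P; i++) {
            if (finish[i]) continue;
            bool can_run = true;
            for (int j = 0; j < R; j++) {
                int need = max[i][j] - alloc[i][j];   // Need = Max - Allocation
                if (need > work[j]) { can_run = false; break; }
            }
            if (can_run) {                            // pretend process i runs to completion
                for (int j = 0; j < R; j++) work[j] += alloc[i][j];
                finish[i] = true;
                count++;
                progressed = true;
            }
        }
        if (!progressed) return false;                // no safe sequence exists
    }
    return true;                                      // all processes can finish
}

int main(void) {
    int avail[R] = {3, 3, 2};
    int max[P][R]   = {{7,5,3},{3,2,2},{9,0,2},{2,2,2},{4,3,3}};
    int alloc[P][R] = {{0,1,0},{2,0,0},{3,0,2},{2,1,1},{0,0,2}};
    printf("State is %s\n", is_safe(avail, max, alloc) ? "safe" : "unsafe");
    return 0;
}

A request is granted only if, after pretending to allocate it, is_safe still returns true; otherwise the requesting
process must wait.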

31) What is semaphore? How can semaphore be used to enforce mutual exclusion? Explain readers and
writers problem. Explain dining philosopher problem.
 Semaphore: A semaphore is a synchronization primitive used in concurrent programming to control access
to shared resources. It provides two atomic operations: wait (P) and signal (V). Semaphores are used to
enforce mutual exclusion and coordinate access to resources.

 Enforcing Mutual Exclusion with Semaphores: Semaphores can be used to enforce mutual exclusion by
allowing only one process or thread to access a critical section of code or shared resource at a time. By
acquiring and releasing semaphores, processes can coordinate their access, ensuring that only one process
enters the critical section while others wait.

 Readers and Writers Problem: The Readers and Writers problem is a synchronization problem where
multiple readers and writers access a shared data resource. The goal is to ensure that readers can access the
resource simultaneously for reading, while writers can access it exclusively for writing, ensuring data
consistency.

 Solution Using Semaphores:

Semaphore mutex = 1;      // Guards the shared data (held by a writer, or by the readers as a group)
Semaphore rw_mutex = 1;   // Protects the readers counter
int readers = 0;          // Number of active readers

Reader:
P(rw_mutex);              // Lock the readers counter
readers++;
if (readers == 1)
    P(mutex);             // First reader locks the data against writers
V(rw_mutex);              // Release the counter

// Read data

P(rw_mutex);              // Lock the readers counter
readers--;
if (readers == 0)
    V(mutex);             // Last reader unlocks the data for writers
V(rw_mutex);              // Release the counter

Writer:
P(mutex);                 // Lock the data exclusively for writing

// Write data

V(mutex);                 // Unlock the data after writing

 Dining Philosophers Problem: The Dining Philosophers problem is a classic synchronization problem where
several philosophers sit around a dining table with a bowl of spaghetti and forks. To eat, a philosopher needs
two forks, one on each side of their plate. The problem is to find a way for the philosophers to dine without
encountering deadlocks or conflicts when trying to eat.

 Solution Using Semaphores: Semaphore-based solutions can prevent deadlock, for example by serializing
the act of picking up forks so that no circular wait can ever form:

Semaphore forks[N];       // One semaphore per fork, each initialized to 1
Semaphore mutex = 1;      // Serializes the act of picking up forks

Philosopher(i):
while (true) {
    think();                   // Philosopher thinks
    P(mutex);                  // Only one philosopher may pick up forks at a time
    P(forks[i]);               // Pick up left fork
    P(forks[(i + 1) % N]);     // Pick up right fork
    V(mutex);                  // Release the mutex before eating, so others may pick up forks
    eat();                     // Philosopher eats (neighbours permitting, in parallel)
    V(forks[i]);               // Put down left fork
    V(forks[(i + 1) % N]);     // Put down right fork
}
32) Explain the following file access methods with examples: i. Direct ii. Sequential iii. Indexed Sequential
i. Direct Access Method:

 In the direct access method, data can be retrieved or written directly by specifying its logical or physical
address.

 This method is typically used with indexed files, databases, or data structures that allow random access.

 An example is accessing elements in an array using an index.

// Example of direct access to an array
int data[10];              // Array with 10 elements
int index = 5;             // Index of the element to access
int value = data[index];   // Retrieve the value at the specified index

ii. Sequential Access Method:

 In the sequential access method, data is accessed in a linear or sequential order from the beginning to the
end.
 This method is common in reading and writing data from files such as text files.

 Data is read or written sequentially, and the file pointer moves sequentially from one record to the next.

// Example of sequential file access (C code for reading a text file)
FILE *file = fopen("data.txt", "r");
if (file) {
    char buffer[256];
    while (fgets(buffer, sizeof(buffer), file)) {
        printf("%s", buffer);   // Process each line sequentially
    }
    fclose(file);
}

iii. Indexed Sequential Access Method:

 In the indexed sequential access method, data is organized into blocks or pages, and an index or directory
provides access to these blocks.

 The index allows for direct access to blocks, making it a combination of direct and sequential access.

 This method is often used in database systems where records are stored in data blocks, and an index helps
locate specific blocks quickly.

// Example of indexed sequential access in a database:
// an index provides direct access to the data blocks containing records;
// records within a block may then be accessed sequentially.
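
As an illustrative, self-contained sketch (all names, keys, and block sizes invented for the example), the lookup
below jumps directly to a block via a small index and then scans that block sequentially:

#include <stdio.h>

#define RECORDS_PER_BLOCK 4

struct record { int key; char name[16]; };

struct record blocks[2][RECORDS_PER_BLOCK] = {
    { {10,"A"}, {20,"B"}, {30,"C"}, {40,"D"} },   // block 0: keys 10..40
    { {50,"E"}, {60,"F"}, {70,"G"}, {80,"H"} },   // block 1: keys 50..80
};
int index_first_key[2] = { 10, 50 };              // index: first key in each block

struct record *find(int key) {
    int b = (key >= index_first_key[1]) ? 1 : 0;  // direct step: pick block via index
    for (int i = 0; i < RECORDS_PER_BLOCK; i++)   // sequential step: scan the block
        if (blocks[b][i].key == key) return &blocks[b][i];
    return NULL;
}

int main(void) {
    struct record *r = find(60);
    if (r) printf("key %d -> %s\n", r->key, r->name);   // prints: key 60 -> F
    return 0;
}
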
These access methods are chosen based on the specific requirements of the application and the data structure used
for storage. Each method offers advantages and trade-offs in terms of efficiency, complexity, and suitability for
different types of data.

Short Notes
i. Process Control Block (PCB):

 A Process Control Block is a data structure maintained by the operating system for each process.

 It contains essential information about a process, including process ID, program counter, CPU registers, and
scheduling information.

 PCBs are used to manage and control the execution of processes, allowing for context switching and
resource allocation.

ii. Scheduler:

 A scheduler is a component of the operating system responsible for selecting and assigning processes or
threads to the CPU for execution.

 Schedulers ensure efficient utilization of CPU resources by determining which process should run next based
on scheduling algorithms.

 Common scheduling algorithms include First-Come, First-Served (FCFS), Round Robin, and Priority
Scheduling.
iii. Paging:

 Paging is a memory management scheme used in operating systems with virtual memory.

 It divides physical memory and logical memory into fixed-size pages and frames, respectively.

 Paging allows for efficient memory allocation and facilitates the handling of page faults.

iv. Segmentation:

 Segmentation is another memory management scheme that divides memory into segments of varying sizes,
each representing a logical unit of a program.

 Segmentation is more flexible than paging and suits programs with different memory requirements.

 It's commonly used in combination with paging in modern operating systems.

v. Optimal Page Replacement Algorithm:

 The Optimal page replacement algorithm selects the page for replacement that will not be used for the
longest time in the future.

 While it provides the best possible page replacement performance, it's impractical because it requires
knowledge of future memory references.
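
For example, with three frames and the reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1, the
optimal algorithm incurs only 9 page faults: at each fault it evicts the resident page whose next use lies farthest
in the future (on the first reference to 2, for instance, it evicts 7, which is not needed again until near the end of
the string).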

vi. Virtual Machine:

 A virtual machine is an emulation of a physical computer that runs an operating system and applications.

 It allows multiple operating systems to run concurrently on a single physical machine.

 Virtualization technology provides isolation and resource sharing between virtual machines.

vii. Monitor:

 A monitor is a high-level synchronization construct used in concurrent programming.

 It provides a structured way for processes or threads to synchronize access to shared resources.

 Monitors ensure mutual exclusion and condition variables for inter-process communication.

viii. Thrashing:

 Thrashing occurs in virtual memory systems when excessive paging activity leads to a significant decrease in
system performance.

 It results from processes constantly swapping pages in and out of RAM, causing the CPU to spend more time
on page swapping than actual computation.

ix. Distributed OS:

 A Distributed Operating System is designed to run on multiple interconnected computers and provides a
cohesive environment for distributed computing.

 It enables resource sharing, communication, and coordination among networked machines.

 Distributed OSs are commonly used in cloud computing and large-scale server environments.

x. RAID (Redundant Array of Independent Disks):

 RAID is a technology that combines multiple physical hard drives into a single logical unit for data storage
and redundancy.
 Various RAID levels offer different features, including data striping, mirroring, and parity for performance
improvement and fault tolerance.

 RAID is used to enhance data reliability and speed in storage systems.

xi. Round Robin Scheduling:

 Round Robin is a pre-emptive scheduling algorithm used by operating systems to allocate CPU time to
multiple processes.

 Each process is assigned a fixed time quantum, and the CPU scheduler rotates among processes, allowing
each to execute for a time slice.

 Round Robin ensures fair sharing of CPU time among processes and is commonly used in time-sharing
systems.
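
For example, with a quantum of 4 ms and processes P1 (24 ms), P2 (3 ms), and P3 (3 ms) arriving together, the
schedule is P1 (0-4), P2 (4-7), P3 (7-10), after which P1 runs its remaining 20 ms in successive quanta until time
30; the waiting times are 6, 4, and 7 ms, giving an average of about 5.67 ms.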

xii. Virtual Memory:

 Virtual memory is a memory management technique used by operating systems to provide the illusion of a
larger memory space than physically available.

 It allows processes to use more memory than what is physically installed by using a combination of RAM and
disk space for data storage.

 Virtual memory facilitates multitasking, memory protection, and efficient memory allocation.

xiii. Paging and Segmentation:

 Paging and segmentation are memory management techniques used in virtual memory systems.

 Paging divides memory into fixed-size pages and is efficient for managing memory allocation and page faults.

 Segmentation divides memory into variable-sized segments, each representing a logical unit, making it more
flexible for various memory requirements.

xiv. Remote Procedure Call (RPC):

 Remote Procedure Call is a protocol that allows one program or process to execute code on a remote server
or another address space as if it were a local procedure call.

 RPC enables distributed computing by invoking functions or procedures on remote machines, providing a
way for processes to communicate and share resources.

xv. Virus and Worms:

 Viruses and worms are types of malicious software (malware) that can harm computer systems.

 A virus is a program that attaches itself to other executable files and can spread when those files are
executed.

 A worm is a self-replicating program that can spread independently over a network or through email
attachments, often without user intervention.

xvi. File Access Methods:

 File access methods define how data is read from or written to files in a computer system.

 Different methods include:

 Direct Access: Data can be retrieved or written directly by specifying its logical or physical address.

 Sequential Access: Data is accessed linearly from the beginning to the end of a file, with a file
pointer moving sequentially.
 Indexed Sequential Access: Data is organized into blocks or pages, and an index provides direct
access to these blocks, allowing for a combination of direct and sequential access.

xvii. Priority Scheduling:

 Priority scheduling is a scheduling algorithm used by operating systems where each process is assigned a
priority value.

 The process with the highest priority is scheduled to execute first.

 Priority scheduling can be either preemptive (processes can be interrupted) or non-preemptive (a process
runs until it voluntarily releases the CPU).

xviii. FIFO Disk Scheduling Algorithm:

 The FIFO (First-In, First-Out) disk scheduling algorithm is a simple and non-preemptive method for managing
requests to access data on a disk drive.

 Requests are serviced in the order they arrive, with the earliest request being served first.

 While simple, FIFO may not provide optimal disk access performance and can lead to longer seek times for
certain workloads.
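
For example, with the head at cylinder 53 and the request queue 98, 183, 37, 122 serviced in arrival order, the
head travels |98-53| + |183-98| + |37-183| + |122-37| = 45 + 85 + 146 + 85 = 361 cylinders, even though
reordering the requests could substantially reduce the total seek distance.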

xix. Process State Diagram:

 A Process State Diagram is a graphical representation of the various states a process can transition through
during its lifetime.

 Common states include "New," "Ready," "Running," "Blocked," and "Terminated."

 Process state diagrams help in understanding the life cycle of processes and the transitions between states.

xx. Context Switch:

 A context switch is the process of saving the current state of a running process or thread and loading the
saved state of another process or thread.

 Context switches are essential for multitasking, allowing multiple processes to share the CPU.

 They involve saving and restoring CPU registers, program counter, and other process-specific information.

xxi. The Take-Grant Model:

 The Take-Grant Model is a security model used in computer security to analyze access control and
permissions.

 It represents the granting and taking of access rights (e.g., read, write) between subjects (users or processes)
and objects (resources or data).

 The Take-Grant Model helps assess the flow of privileges and potential vulnerabilities in a system and is used
for access control policy analysis.

xxii. Multiprocessor Scheduling:

 Multiprocessor scheduling is the process of allocating and managing tasks or processes on multiple
processors or CPU cores in a multiprocessor system.

 The goal is to optimize processor utilization, minimize execution time, and ensure efficient load balancing
among processors.

xxiii. Artifact-based Authentication:


 Artifact-based authentication is an authentication method that relies on physical or digital artifacts to verify
the identity of an individual or entity.

 Examples of artifacts include smart cards, biometric data, digital certificates, and hardware tokens.

 This method enhances security by requiring possession or knowledge of a specific artifact to gain access.

xxiv. DES (Data Encryption Standard):

 DES is a widely used symmetric-key encryption algorithm designed to secure data transmission and storage.

 It operates on 64-bit blocks of data and uses a 56-bit secret key.

 DES applies multiple rounds of permutation, substitution, and transposition to encrypt and decrypt data.

xxv. Digital Signature:

 A digital signature is a cryptographic technique used to verify the authenticity and integrity of a digital
message or document.

 It involves generating a digital signature using a private key, which can be verified by anyone with the
corresponding public key.

 Digital signatures are crucial for secure communication and data integrity in electronic transactions.

xxvi. Multi-Queue Scheduling:

 Multi-queue scheduling is a scheduling strategy used in operating systems to manage processes or threads.

 It involves categorizing processes into multiple queues based on their characteristics or priorities.

 Each queue may use a different scheduling algorithm to optimize resource allocation.

xxvii. Resource Allocation Graph (RAG):

 A Resource Allocation Graph (RAG) is a graphical representation used in deadlock detection and resource
allocation management.

 It shows the relationships between processes and resources, with nodes representing processes and
resources and edges indicating resource requests and allocations.

 RAGs help identify potential deadlock situations and are used in resource allocation algorithms.

xxviii. Reader-Writer Problem:

 The Reader-Writer Problem is a classic synchronization problem in concurrent programming.

 It involves multiple readers and writers accessing a shared resource (e.g., a file or database).

 Readers can access the resource simultaneously for reading, but writers must have exclusive access to write.

 Solutions to this problem ensure data consistency and avoid conflicts between readers and writers.
