Operating System
Group-A (Very Short Answer Type Question)
7.
2. Increased Throughput
Group-B (Short Answer Type Question)
[5×3=15]
A process is a program in execution; it is more than the program code, which is called the text section. This concept applies across all operating systems, because every task the operating system performs requires a process to carry it out.
As a process executes, it changes state. The state of a process is defined by its current activity.
• New − The process is being created.
• Ready − The process is waiting to be assigned to a processor.
• Running − Instructions are being executed.
• Waiting − The process remains in the waiting state until an event occurs, such as completion of an I/O operation or receipt of a signal.
• Terminated − The process has finished execution.
It is important to know that only one process can be running on any processor at any
instant. Many processes may be ready and waiting.
Step 1 − When a new process is created, it is admitted into the new state and then moved to the ready state.
Step 2 − When the CPU scheduler dispatches a process from the ready queue, it enters the running state.
Step 3 − If a higher-priority process becomes ready, the uncompleted process is preempted and sent back to the ready state; if the running process requests I/O or waits for an event, it moves to the waiting state.
Step 4 − Whenever the I/O operation or awaited event completes, the process is moved from the waiting state back to the ready state, triggered by an interrupt.
Step 5 − When a process finishes execution in the running state, it exits to the terminated state, which marks the completion of the process.
First Come First Serve (FCFS)
FCFS is the simplest of all disk scheduling algorithms. In FCFS, the requests are addressed in the order they arrive in the disk queue. Let us understand this with the help of an example.
Example: Suppose the order of requests is 82, 170, 43, 140, 24, 16, 190 and the initial position of the read/write head is 50.
So, total overhead movement (total distance covered by the disk arm) =
(82-50)+(170-82)+(170-43)+(140-43)+(140-24)+(24-16)+(190-16) =642
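This calculation can be reproduced with a short C sketch; the request queue and head position are taken from the example above.

#include <stdio.h>
#include <stdlib.h>

/* Total head movement for FCFS disk scheduling:
 * requests are served strictly in arrival order. */
int fcfs_seek_total(int head, const int *req, int n) {
    int total = 0;
    for (int i = 0; i < n; i++) {
        total += abs(req[i] - head);  /* distance to the next request */
        head = req[i];                /* arm moves to that cylinder   */
    }
    return total;
}

int main(void) {
    int queue[] = {82, 170, 43, 140, 24, 16, 190};
    int n = sizeof queue / sizeof queue[0];
    printf("Total head movement = %d\n", fcfs_seek_total(50, queue, n)); /* 642 */
    return 0;
}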
4. Searching the TLB takes 20 ns and a memory access takes 100 ns. If a page is not found in the TLB, the page table in memory must be accessed first, costing an extra 100 ns. The hit ratio is 80%. Find the effective memory access time.
Given:
• TLB search time = 20 ns
• Memory access time = 100 ns
• Hit ratio = 80% (miss ratio = 20%)
Formula:
EMAT = (Hit ratio × (TLB time + Memory access)) +
(Miss ratio × (TLB time + 2 × Memory access))
= 0.8 × (20 + 100) + 0.2 × (20 + 2 × 100)
= 96 + 44 = 140 ns
a. First Fit
b. Best Fit
c. Worst Fit
Conclusion:
Best Fit makes the most efficient use of memory in this case: it successfully allocates memory to all 4 processes.
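A minimal C sketch of the three placement strategies; the block and process sizes below are hypothetical stand-ins, since the original figures are not in the extract.

#include <stdio.h>

/* Return the index of the chosen free block, or -1 if none fits. */
int first_fit(int blocks[], int n, int size) {
    for (int i = 0; i < n; i++)
        if (blocks[i] >= size) return i;       /* first block big enough */
    return -1;
}

int best_fit(int blocks[], int n, int size) {
    int best = -1;
    for (int i = 0; i < n; i++)
        if (blocks[i] >= size && (best == -1 || blocks[i] < blocks[best]))
            best = i;                           /* smallest block that fits */
    return best;
}

int worst_fit(int blocks[], int n, int size) {
    int worst = -1;
    for (int i = 0; i < n; i++)
        if (blocks[i] >= size && (worst == -1 || blocks[i] > blocks[worst]))
            worst = i;                          /* largest block that fits */
    return worst;
}

int main(void) {
    int blocks[] = {100, 500, 200, 300, 600};   /* hypothetical free blocks */
    int procs[]  = {212, 417, 112, 426};        /* hypothetical requests    */
    for (int p = 0; p < 4; p++) {
        int i = best_fit(blocks, 5, procs[p]);
        if (i >= 0) { printf("P%d -> block %d\n", p + 1, i); blocks[i] -= procs[p]; }
        else printf("P%d -> not allocated\n", p + 1);
    }
    return 0;
}

With these sample sizes, Best Fit places every request, matching the conclusion above; First Fit and Worst Fit can leave one request unallocated.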
Group-C (Long Answer Type Question)
[15×3=45]
Context Switching in an operating system is a critical function that allows the CPU to
efficiently manage multiple processes. By saving the state of a currently active process and
loading the state of another, the system can handle various tasks simultaneously without
losing progress. This switching mechanism ensures optimal use of the CPU, enhancing the
system’s ability to perform multitasking effectively.
Suppose the operating system has (N) processes stored in a Process Control Block (PCB).
Each process runs using the CPU to perform its task. While a process is running, other
processes with higher priorities queue up to use the CPU and complete their tasks.
Switching the CPU to another process requires saving the state of the current process and
restoring the state of a different process. This task is known as a context switch. When a
context switch occurs, the kernel saves the context of the old process in its PCB and loads
the saved context of the new process scheduled to run. Context-switch time is pure
overhead because the system does no useful work while switching. The switching speed
varies from machine to machine, depending on factors such as memory speed, the number
of registers that need to be copied, and the existence of special instructions (such as a single
instruction to load or store all registers). A typical context switch takes a few milliseconds.
Context-switch times are highly dependent on hardware support. For example, some
processors (such as the Sun UltraSPARC) provide multiple sets of registers. In this case, a
context switch simply requires changing the pointer to the current register set. However, if
there are more active processes than available register sets, the system resorts to copying
register data to and from memory, as before. Additionally, the more complex the operating
system, the greater the amount of work that must be done during a context switch.
The operating system’s need for context switching is explained by the reasons listed below.
• One process does not directly switch to another within the system. Context switching makes it easier for the operating system to use the CPU's resources to carry out tasks and to store the context of a process while switching between multiple processes.
• Context switching enables all processes to share a single CPU to finish their execution while the status of each task is stored. When a process is reloaded, its execution resumes from the exact point at which it was switched out.
• Context switching allows a single CPU to handle multiple process requests concurrently without the need for any additional processors.
Context switching is triggered by:
• Interrupts
• Multitasking
• User/Kernel switch
Interrupts: When an interrupt occurs, for example while the CPU is waiting for data to be read from disk, the CPU saves the current context and switches to the interrupt handler so the event can be serviced quickly.
Multitasking: The ability for a process to be switched off the CPU so that another process can run is the essence of context switching. When a process is switched out, its previous state is retained so that it can later continue from the same point.
Kernel/User Switch: This trigger occurs when the operating system needs to switch between user mode and kernel mode.
The Process Control Block (PCB) is also known as a Task Control Block; it represents a process in the operating system. A PCB is a data structure used by the OS to store all information about a process, and is also called the process descriptor. When a process is created (started or installed), the operating system creates a corresponding PCB.
Working of Context Switching
When switching between two processes, the process to run next is selected (for example, by priority) from the ready queue using its process control block. The steps are as follows (a conceptual sketch in code follows this list):
• The PCB of each process is stored in kernel memory (or in a custom OS file).
• A handle to the PCB marks the process as ready to run.
• The operating system pauses the execution of the current process and selects a process from the ready queue by consulting its PCB.
• The PCB's program counter is loaded and execution continues in the selected process.
• Process/thread priority values can affect which process is selected from the queue, so they can be important.
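A conceptual C sketch of the save/restore steps. This is illustrative only: real kernels do the switch in architecture-specific assembly, and the PCB fields shown here are assumptions.

#include <string.h>

/* Minimal mock of a PCB holding a saved CPU context. */
typedef struct {
    int pid;
    unsigned long pc;          /* saved program counter           */
    unsigned long regs[16];    /* saved general-purpose registers */
} pcb_t;

typedef struct {               /* the CPU's visible register state */
    unsigned long pc;
    unsigned long regs[16];
} cpu_ctx_t;

void context_switch(cpu_ctx_t *cpu, pcb_t *prev, pcb_t *next) {
    /* 1. save the running process's CPU context into its PCB */
    prev->pc = cpu->pc;
    memcpy(prev->regs, cpu->regs, sizeof cpu->regs);
    /* 2. restore the next process's saved context onto the CPU */
    cpu->pc = next->pc;
    memcpy(cpu->regs, next->regs, sizeof next->regs);
}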
7. Define process contention scope and system contention scope. Write the benefits of
threads.
Under process contention scope, scheduling is priority-driven: the scheduler selects the runnable thread with the highest priority.
User-level thread priorities are set by the programmer and are not adjusted by the thread library, although some thread libraries do allow the programmer to change a thread's priority.
Systems using the one-to-one model, such as Windows and Linux, schedule threads using only system contention scope.
Process Contention Scope vs System Contention Scope:
• Process contention scope uses the many-to-many and many-to-one models; system contention scope uses the one-to-one model.
• In process contention scope, the scheduling mechanism for threads is local to the process; in system contention scope, it is global across processes.
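POSIX threads expose this choice directly; a minimal sketch using the standard pthread_attr_setscope call (PTHREAD_SCOPE_SYSTEM requests system contention scope, PTHREAD_SCOPE_PROCESS requests process contention scope where supported):

#include <pthread.h>
#include <stdio.h>

void *worker(void *arg) { (void)arg; return NULL; }

int main(void) {
    pthread_t tid;
    pthread_attr_t attr;
    pthread_attr_init(&attr);
    /* Ask for system contention scope (one-to-one, kernel-scheduled).
     * Linux supports only PTHREAD_SCOPE_SYSTEM; requesting
     * PTHREAD_SCOPE_PROCESS fails there with ENOTSUP. */
    int rc = pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);
    if (rc != 0)
        fprintf(stderr, "pthread_attr_setscope: error %d\n", rc);
    pthread_create(&tid, &attr, worker, NULL);
    pthread_join(tid, NULL);
    pthread_attr_destroy(&attr);
    return 0;
}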
Given the request queue 86, 47, 93, 174, 98, 150, 102, 170, 130 and an initial head position of 143, compute the total head movement for:
o FCFS
o SSTF
o SCAN
FCFS:
Order:
143 → 86 → 47 → 93 → 174 → 98 → 150 → 102 → 170 → 130
Head movements:
|143-86| + |86-47| + |47-93| + |93-174| + |174-98| + |98-150| + |150-102| + |102-170| +
|170-130|
= 57 + 39 + 46 + 81 + 76 + 52 + 48 + 68 + 40 = 507 cylinders
SSTF:
Start at 143
Next closest = 150 (distance 7)
From 150, requests 130 and 170 are both 20 away; taking 130 first:
150 → 130 (20)
→ 102 (28)
→ 98 (4)
→ 93 (5)
→ 86 (7)
→ 47 (39)
→ 170 (123)
→ 174 (4)
Total head movement = 7 + 20 + 28 + 4 + 5 + 7 + 39 + 123 + 4 = 237 cylinders
(Taking 170 first at the tie gives 143 → 150 → 170 → 174 → 130 → 102 → 98 → 93 → 86 → 47, a total of 158 cylinders.)
SCAN (moving toward cylinder 0 first):
Order:
143 → 130 → 102 → 98 → 93 → 86 → 47 → 0 (end of disk)
Then reverse direction: 150 → 170 → 174
Movement:
|143-130| + |130-102| + |102-98| + |98-93| + |93-86| + |86-47| + |47-0| + |0-150| + |150-
170| + |170-174|
= 13+28+4+5+7+39+47+150+20+4 = 317 cylinders
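A short C sketch of SSTF for the queue above. Note the tie-break: when two requests are equidistant, which one is served first changes the total, so this sketch breaks ties toward the later request in the queue to match the worked answer (130 before 170).

#include <stdio.h>
#include <stdlib.h>

/* SSTF: repeatedly service the pending request closest to the head. */
int sstf_seek_total(int head, int *req, int n) {
    int total = 0;
    for (int served = 0; served < n; served++) {
        int best = -1;
        for (int i = 0; i < n; i++)   /* nearest unserved; ties -> later index */
            if (req[i] >= 0 &&
                (best < 0 || abs(req[i] - head) <= abs(req[best] - head)))
                best = i;
        total += abs(req[best] - head);
        head = req[best];
        req[best] = -1;               /* mark as served */
    }
    return total;
}

int main(void) {
    int queue[] = {86, 47, 93, 174, 98, 150, 102, 170, 130};
    printf("SSTF total = %d\n", sstf_seek_total(143, queue, 9)); /* 237 */
    return 0;
}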
A directory is a container that is used to contain folders and files. It organizes files and
folders in a hierarchical manner. In other words, directories are like folders that help
organize files on a computer. Just like you use folders to keep your papers and documents in
order, the operating system uses directories to keep track of files and where they are stored.
Different structures of directories can be used to organize these files, making it easier to find
and manage them.
In an operating system, there are different types of directory structures that help organize
and manage files efficiently.
Each type of directory has its own way of arranging files and directories, offering unique benefits and features. These are:
• Single-Level Directory
• Two-Level Directory
• Tree-Structured Directory
• Acyclic Graph Directory
• General Graph Directory
1) Single-Level Directory
The single-level directory is the simplest directory structure. In it, all files are contained in
the same directory which makes it easy to support and understand.
A single-level directory has a significant limitation, however, when the number of files increases or when the system has more than one user. Since all files are in the same directory, they must have unique names. If two users name their data file "test", the unique-name rule is violated.
Advantages
• The operations like file creation, searching, deletion, updating are very easy in such a
directory structure.
• Logical Organization : Directory structures help to logically organize files and directories
in a hierarchical structure. This provides an easy way to navigate and manage files,
making it easier for users to access the data they need.
• Increased Efficiency: Directory structures can increase the efficiency of the file system by
reducing the time required to search for files. This is because directory structures are
optimized for fast file access, allowing users to quickly locate the file they need.
• Improved Security : Directory structures can provide better security for files by allowing
access to be restricted at the directory level. This helps to prevent unauthorized access
to sensitive data and ensures that important files are protected.
• Facilitates Backup and Recovery : Directory structures make it easier to backup and
recover files in the event of a system failure or data loss. By storing related files in the
same directory, it is easier to locate and backup all the files that need to be protected.
• Scalability: Directory structures are scalable, making it easy to add new directories and
files as needed. This helps to accommodate growth in the system and makes it easier to
manage large amounts of data.
Disadvantages
• There is a chance of name collision, because no two files can have the same name.
2) Two-Level Directory
As we have seen, a single level directory often leads to confusion of files names among
different users. The solution to this problem is to create a separate directory for each user.
In the two-level directory structure, each user has their own user files directory (UFD). The
UFDs have similar structures, but each lists only the files of a single user. System’s master
file directory (MFD) is searched whenever a new user id is created.
Two-Level Directory Structure
Advantages
• The main advantage is that different users can have files with the same name, which is very helpful when there are multiple users.
• Security is improved, since one user cannot access another user's files.
Disadvantages
• The flip side of the security advantage is that a user cannot share files with other users.
• Although users can create their own files, they do not have the ability to create subdirectories.
• Scalability is poor, because a user cannot group files of the same type together.
3) Tree-Structured Directory
The tree directory structure is the one most commonly used in personal computers. Users can create files and subdirectories, which was a disadvantage of the previous directory structures.
This directory structure resembles a real tree upside down, where the root directory is at the peak. The root contains all the directories for each user, and the users can create subdirectories and even store files in their directory.
A user does not have access to the root directory's data and cannot modify it. Likewise, a user does not have access to other users' directories. Within each user's directory there can be files and further subdirectories.
Advantages
• This directory is more scalable than the other two directory structures explained.
Disadvantages
• As the user isn’t allowed to access other user’s directory, this prevents the file sharing
among users.
• As the user has the capability to make subdirectories, if the number of subdirectories
increase the searching may become complicated.
• If files do not fit in one directory, they may have to be placed in other directories.
4) Acyclic Graph Directory
As we have seen, none of the three directory structures above can make one file accessible from multiple directories; a file or subdirectory can be reached only through the directory it resides in.
This problem is solved by the acyclic graph directory structure, where a file in one directory can be accessed from multiple directories. In this way files can be shared between users: multiple directories point to the same directory or file with the help of links.
When a file is shared between multiple users and one of them makes a change, the change is reflected for all users of that file.
Advantages
• Flexibility is increased, as file sharing and editing access is available to multiple users.
Disadvantages
• The user must be very cautious when editing or even deleting a file, as it may be accessed by multiple users.
• If we need to delete a file, all references (links) to it must be deleted for it to be removed permanently.
5) General Graph Directory
Unlike the acyclic graph directory, which avoids loops, the general graph directory can have cycles, meaning a directory can contain paths that loop back to the starting point. This can make navigating and managing files more complex.
Disadvantages
• Requires garbage collection to manage and clean up unused files and directories.
10. Explain paging with TLB with a suitable diagram. What is the disadvantage of using TLB?
In operating systems that use paging for memory management, a page table is created for each process; it contains a page table entry (PTE) for each page. A PTE holds the frame number (the address in main memory where the page resides) and some other useful bits (e.g., valid/invalid bit, dirty bit, protection bits). The page table entry thus tells where in main memory the actual page resides.
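A sketch of what a PTE might look like as a C bitfield; the field widths here are illustrative assumptions, as real layouts are architecture-specific.

#include <stdint.h>

/* Illustrative page table entry (widths are examples, not a real layout). */
typedef struct {
    uint32_t frame_number : 20;  /* which physical frame holds the page */
    uint32_t valid        : 1;   /* page is resident in main memory     */
    uint32_t dirty        : 1;   /* page was modified since it was loaded */
    uint32_t protection   : 3;   /* read/write/execute permission bits  */
    uint32_t unused       : 7;
} pte_t;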
Now the question is where to place the page table so that the overall access (reference) time is minimized. The original problem was to quickly access main memory content based on the address generated by the CPU (the logical/virtual address). One early idea was to store the page table in registers, since registers are high-speed memory with low access time.
Under this scheme, the page table entries are placed in registers; for each request generated by the CPU (a virtual address), the page number is matched against the page table, which tells where in main memory the corresponding page resides.
Everything seems right here, but the problem is that the register set is small (in practice it can accommodate at most about 0.5K to 1K page table entries), while the process, and hence its page table, may be big (say, 1M entries). Registers cannot hold all the PTEs of such a page table, so this is not a practical approach.
To overcome this size issue, the entire page table was kept in main memory. But then two main memory references are required for every access: one to fetch the page table entry and one to fetch the actual data.
To overcome this problem, a high-speed cache for page table entries, called a Translation Lookaside Buffer (TLB), is introduced. The TLB is a special cache that keeps track of recently used address translations; it contains the page table entries that have been most recently used. Given a virtual address, the processor examines the TLB. If the page table entry is present (a TLB hit), the frame number is retrieved and the real address is formed. If the entry is not found in the TLB (a TLB miss), the page number is used as an index into the page table in main memory. If the page is not in main memory, a page fault is issued; the TLB is then updated to include the new page entry.
Steps in a TLB hit:
1. The CPU generates a virtual (logical) address.
2. The page number is looked up in the TLB and found (hit).
3. The corresponding frame number is retrieved, which now tells where in main memory the page lies.
Steps in a TLB miss:
1. The CPU generates a virtual (logical) address.
2. The page number is looked up in the TLB but not found (miss).
3. The page number is matched against the page table residing in main memory (assuming the page table contains all PTEs).
4. The corresponding frame number is retrieved, which now tells where in main memory the page lies.
5. The TLB is updated with the new PTE; if there is no space, a replacement policy (FIFO, LRU, MFU, etc.) evicts an entry.
The TLB reduces the effective memory access time because it is a high-speed associative cache.
EMAT = h × (c + m) + (1 − h) × (c + n × m)
where:
• h = TLB hit ratio
• c = TLB access time
• m = main memory access time
• n = number of memory accesses required on a TLB miss (2 for single-level paging)
Disadvantage of using a TLB: it adds hardware cost and complexity, a miss is slower than a plain page-table lookup (the TLB is searched first, then memory), and the TLB must be flushed or tagged per process on a context switch.
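The formula can be checked with a small C sketch; the numbers reuse the earlier worked example (20 ns TLB, 100 ns memory, 80% hit ratio).

#include <stdio.h>

/* Effective memory access time with a TLB:
 * EMAT = h*(c + m) + (1 - h)*(c + n*m)
 * h = hit ratio, c = TLB access time, m = memory access time,
 * n = memory accesses needed on a miss (2 for single-level paging). */
double emat(double h, double c, double m, int n) {
    return h * (c + m) + (1.0 - h) * (c + n * m);
}

int main(void) {
    printf("EMAT = %.1f ns\n", emat(0.80, 20, 100, 2)); /* 140.0 ns */
    return 0;
}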
• FIFO
• LRU
• Optimal
MCA 2nd Semester Operating System (MCAN-202), 2024
Group-A (Very Short Answer Type Question)
I. What is a Mutex?
A mutex (mutual exclusion lock) is a synchronization primitive that allows only one thread at a time to hold it and enter the critical section it protects.
• When a thread locks a mutex, others must wait until it's released.
Example:
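The original example is missing from the extract; a minimal POSIX-threads sketch of mutex locking:

#include <pthread.h>
#include <stdio.h>

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
int shared = 0;

void *worker(void *arg) {
    (void)arg;
    pthread_mutex_lock(&lock);    /* other threads block here        */
    shared++;                     /* critical section: one at a time */
    pthread_mutex_unlock(&lock);  /* release so a waiter can enter   */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared = %d\n", shared);  /* always 2 */
    return 0;
}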
Types:
Solution:
Segmentation is a memory management scheme where memory is divided into logical units
called segments (code, stack, data).
o Process ID
o Program Counter
o CPU Registers
o Process state
o Memory limits
o I/O status
• Waiting time
• Execution time
• I/O time
Example:
Pages are fixed-size units of logical memory in the paging memory management scheme.
Page Table:
Access control is a security technique to regulate who or what can access system resources.
Types:
Structure:
A table with subjects (users) as rows and objects (resources) as columns; each cell lists the operations the subject may perform on the object.
Example
Subject File1 File2 Printer
UserA R/W R -
UserB R W Print
X. What is the Purpose of a System Call?
System Calls are the interface between user programs and the OS kernel.
• I/O operations
Example:
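A minimal sketch using two POSIX system-call wrappers, write() and getpid():

#include <unistd.h>   /* write(), getpid() */
#include <stdio.h>

int main(void) {
    /* write() is a thin wrapper over the kernel's I/O system call */
    write(STDOUT_FILENO, "hello via system call\n", 22);
    /* getpid() asks the kernel for this process's ID */
    printf("pid = %d\n", (int)getpid());
    return 0;
}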
A binary semaphore is a synchronization tool that can take only two values: 0 and 1.
Operations:
Example:
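A minimal sketch using a POSIX semaphore initialized to 1 as a binary semaphore:

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

sem_t bin;          /* binary semaphore: value is only ever 0 or 1 */
int counter = 0;

void *worker(void *arg) {
    (void)arg;
    sem_wait(&bin);   /* P(): 1 -> 0, other threads block */
    counter++;        /* critical section */
    sem_post(&bin);   /* V(): 0 -> 1, wakes a waiter      */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    sem_init(&bin, 0, 1);              /* initial value 1 = unlocked */
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d\n", counter); /* 2 */
    sem_destroy(&bin);
    return 0;
}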
XII. Explain TLB Hit and TLB Miss
TLB (Translation Lookaside Buffer) is a high-speed cache for page table entries.
TLB Hit: the page number is found in the TLB, so the frame number is obtained directly and only one memory access is needed.
TLB Miss: the page number is not in the TLB, so the page table in main memory must be consulted (an extra memory access) and the TLB is updated with the entry.
Group-B (Short Answer Type Question)
[5×3=15]
Peterson’s Algorithm is a classical software-based solution for the Critical Section Problem
for two processes (P0 and P1).
Assumptions:
How it Works:
• Waits only if the other process wants to enter and it’s their turn.
Satisfies:
• Mutual Exclusion
• Progress
• Bounded Waiting
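A sketch of Peterson's algorithm in C11; atomics stand in for the sequentially consistent shared variables the algorithm assumes.

#include <stdatomic.h>
#include <stdbool.h>

/* Peterson's algorithm for two processes (0 and 1).
 * flag[i] = process i wants to enter; turn = who must yield. */
atomic_bool flag[2];
atomic_int  turn;

void enter_region(int i) {          /* i is 0 or 1 */
    int j = 1 - i;
    atomic_store(&flag[i], true);   /* announce intent        */
    atomic_store(&turn, j);         /* let the other go first */
    /* wait only while the other wants in AND it is its turn */
    while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
        ;  /* busy-wait */
}

void leave_region(int i) {
    atomic_store(&flag[i], false);  /* no longer interested */
}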
Security and Protection in OS are mechanisms to safeguard data and resources from
unauthorized access and misuse.
Aspect Description
Given:
• P1 needs 21 units
• P2 needs 31 units
• P3 needs 41 units
So, the minimum number of units for the system to be deadlock-free is
(sum of maximum needs) - (number of processes) + 1
= (21 + 31 + 41) - 3 + 1
= 93 - 3 + 1 = 91 units
(Each process can hold one unit less than its maximum without finishing; one extra unit guarantees that some process can always complete.)
Given: TLB access time = 20 ns, memory access time = 100 ns, hit ratio = 80%.
TLB Hit: 0.80 × (20 + 100) = 96 ns
TLB Miss: 0.20 × (20 + 100 + 100) = 44 ns
EMAT = 96 + 44 = 140 ns
The UNIX File System (UFS) is a hierarchical file system used in UNIX and UNIX-like OS.
Structure:
1. Superblock: Metadata about the file system (size, block size, etc.).
File Types:
Advantages:
7. (a) Consider the following set of processes, with the length of the CPU-burst time given in milliseconds:
Process Burst Time Arrival Time
P1 4 ms 0 ms
P2 7 ms 2 ms
P3 5 ms 3 ms
P4 2 ms 4 ms
a) Use Shortest-Remaining-Time-First (SRTF), (b) SJF, (c) FCFS scheduling algorithms.
i) Draw the Gantt chart to show the execution.
ii) Find the average turnaround time.
iii) Find the average waiting time.
(b). Explain race condition in critical section problem.
ANSWER-
7 (a). CPU Scheduling Algorithms (Detailed Explanation for 15 Marks)
Given:
Process Burst Time Arrival Time
P1 4 ms 0 ms
P2 7 ms 2 ms
P3 5 ms 3 ms
P4 2 ms 4 ms
• FCFS: processes run in arrival order.
Gantt chart (FCFS): | P1 0-4 | P2 4-11 | P3 11-16 | P4 16-18 |
• SJF (non-preemptive) explanation:
o P1 executes first.
o At time 4 ms, P2, P3 and P4 are in the queue; P4 (2 ms) has the shortest burst → executes next.
o P3 (5 ms) next, then P2 (7 ms).
Gantt chart (SJF): | P1 0-4 | P4 4-6 | P3 6-11 | P2 11-18 |
• SRTF (preemptive) explanation:
o P1 runs from 0-4 ms; P2 (arriving at 2 ms) and P3 (at 3 ms) do not preempt it, because P1's remaining time stays the shortest.
o At 4 ms, P1 finishes and P4 arrives; P4 (2 ms) has the shortest remaining time and runs 4-6 ms.
o The scheduler then picks P3 (5 ms, 6-11 ms) and finally P2 (7 ms, 11-18 ms).
o For this workload, SRTF yields the same schedule as SJF.
Gantt chart (SRTF): | P1 0-4 | P4 4-6 | P3 6-11 | P2 11-18 |
ii) Turnaround Time (TAT)
TAT = Completion Time - Arrival Time
Process FCFS SJF SRTF
P1 4 4 4
P2 9 16 16
P3 13 8 8
P4 14 2 2
Avg 10 7.5 7.5
iii) Waiting Time (WT = TAT - Burst Time)
Process FCFS SJF SRTF
P1 0 0 0
P2 2 9 9
P3 8 3 3
P4 12 0 0
Avg 5.5 3 3
7 (b). Race condition: consider two threads that each increment a shared counter; an increment is a read-modify-write sequence. If both threads read the counter before either writes it back, they might both write back 1, even though the correct result should be 2. This happens due to a race condition.
Why it occurs:
• No proper synchronization.
• Shared variables.
Solution:
• Use synchronization mechanisms like mutex, semaphores, monitors.
• Enforce Mutual Exclusion: Only one process can enter its critical section at a time.
Importance:
Avoiding race conditions is crucial in concurrent programming to ensure data consistency and
correctness.
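A minimal sketch of this scenario with POSIX threads; removing the mutex calls reintroduces the race described above.

#include <pthread.h>
#include <stdio.h>

int counter = 0;                    /* shared variable */
pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

void *increment(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&m);     /* without these two lines, the       */
        counter++;                  /* read-modify-write races: updates   */
        pthread_mutex_unlock(&m);   /* get lost and the total drops below */
    }                               /* 200000                             */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d\n", counter);  /* 200000 with the mutex */
    return 0;
}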
8. (a) Consider the following segment table:
Segment Base Limit
0 219 600
1 2300 14
2 90 100
3 1327 580
4 1952 96
What are the physical address for the following logical address specified by segment number
and displacement within segment? State for which address it will generate segmentation fault.
(a) 0,430, (b) 1,10 (c) 2,500 (d) 3,400 (e) 4,112
(b) Explain segmentation technique in detail.
• (a) 0,430 → 430 < 600 → physical address 219 + 430 = 649 → Valid
• (b) 1,10 → 10 < 14 → physical address 2300 + 10 = 2310 → Valid
• (c) 2,500 → 500 > 100 → Segmentation fault
• (d) 3,400 → 400 < 580 → physical address 1327 + 400 = 1727 → Valid
• (e) 4,112 → 112 > 96 → Segmentation fault
Segmentation is a memory management technique in which each job is divided into several
segments of different sizes, one for each module that contains pieces that perform related
functions.
Key Concepts:
Advantages:
Disadvantages:
Segmentation allows more flexibility than paging and aligns with the logical structure of
programs.
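The translation rule can be expressed as a short C sketch using the segment table from part (a):

#include <stdio.h>

/* Segment translation: physical = base + offset,
 * valid only if offset < limit. */
typedef struct { int base, limit; } segment_t;

int translate(const segment_t *st, int seg, int offset) {
    if (offset >= st[seg].limit) {
        printf("(%d,%d) -> segmentation fault\n", seg, offset);
        return -1;
    }
    int phys = st[seg].base + offset;
    printf("(%d,%d) -> %d\n", seg, offset, phys);
    return phys;
}

int main(void) {
    segment_t st[] = {{219,600},{2300,14},{90,100},{1327,580},{1952,96}};
    translate(st, 0, 430);   /* 649   */
    translate(st, 1, 10);    /* 2310  */
    translate(st, 2, 500);   /* fault */
    translate(st, 3, 400);   /* 1727  */
    translate(st, 4, 112);   /* fault */
    return 0;
}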
(a) Consider the following snapshot of a system with resource types A, B, C and D:
Process Allocation (A B C D) Max (A B C D)
P0 0 0 1 2 0 0 1 2
P1 1 0 0 0 1 7 5 0
P2 1 3 5 4 2 3 5 6
P3 0 6 3 2 0 6 5 2
P4 0 0 1 4 0 6 5 6
Available: 1 5 2 0
i) What is the content of need matrix?
ii) Is the system in safe state? If yes find the safe sequence.
b) Explain deadlock prevention techniques.
Process Need (A B C D)
P0 0 0 0 0
P1 0 7 5 0
P2 1 0 0 2
P3 0 0 2 0
P4 0 6 4 2
Safe Sequence: P0 → P2 → P3 → P1 → P4
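A C sketch of the Banker's safety algorithm using the matrices above; note that it may print a different but equally valid safe sequence (P0 P2 P3 P4 P1), since several orders satisfy the safety condition.

#include <stdio.h>
#include <stdbool.h>

#define P 5   /* processes */
#define R 4   /* resource types A..D */

/* Safety algorithm: repeatedly find a process whose Need fits in
 * Work; pretend it finishes and release its Allocation. */
int main(void) {
    int alloc[P][R] = {{0,0,1,2},{1,0,0,0},{1,3,5,4},{0,6,3,2},{0,0,1,4}};
    int need[P][R]  = {{0,0,0,0},{0,7,5,0},{1,0,0,2},{0,0,2,0},{0,6,4,2}};
    int work[R]     = {1,5,2,0};          /* Available */
    bool finish[P]  = {false};
    int seq[P], k = 0;

    bool progress = true;
    while (progress) {
        progress = false;
        for (int i = 0; i < P; i++) {
            if (finish[i]) continue;
            bool fits = true;
            for (int j = 0; j < R; j++)
                if (need[i][j] > work[j]) { fits = false; break; }
            if (fits) {                    /* Pi can run to completion */
                for (int j = 0; j < R; j++) work[j] += alloc[i][j];
                finish[i] = true;
                seq[k++] = i;
                progress = true;
            }
        }
    }
    if (k == P) {
        printf("Safe sequence:");
        for (int i = 0; i < P; i++) printf(" P%d", seq[i]);
        printf("\n");
    } else printf("System is not in a safe state\n");
    return 0;
}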
• Mutual Exclusion: make resources sharable wherever possible (e.g., read-only files) so exclusive access is not required.
• Hold and Wait: prevent by requiring processes to request all resources at once, or to release held resources before requesting new ones.
• No Preemption: if a process holding resources requests one that cannot be granted immediately, its held resources are preempted (released) until it can obtain everything it needs.
• Circular Wait: impose a total ordering on resource types and require each process to request resources in increasing order.
(b) State the differences between blocking and non blocking IO.
DMA allows devices to transfer data to/from memory without involving the CPU for each byte.
Working:
Reference string: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1
i) With 3 frames (FIFO)
1. When a page is referenced, check if it is already in a frame (no page fault).
2. If it is not, a page fault occurs: place the page in a free frame, or replace the page that entered memory earliest (FIFO).
Reference Frame 1 Frame 2 Frame 3 Page Fault? Replace Oldest
7 7 - - Yes -
0 7 0 - Yes -
1 7 0 1 Yes -
2 2 0 1 Yes 7
0 2 0 1 No -
3 2 3 1 Yes 0
0 2 3 0 Yes 1
4 4 3 0 Yes 2
2 4 2 0 Yes 3
3 4 2 3 Yes 0
0 0 2 3 Yes 4
3 0 2 3 No -
2 0 2 3 No -
1 0 1 3 Yes 2
2 0 1 2 Yes 3
0 0 1 2 No -
1 0 1 2 No -
7 7 1 2 Yes 0
0 7 0 2 Yes 1
1 7 0 1 Yes 2
ii) With 4 frames
Reference Frame 1 Frame 2 Frame 3 Frame 4 Page Fault? Replace Oldest
7 7 - - - Yes -
0 7 0 - - Yes -
1 7 0 1 - Yes -
2 7 0 1 2 Yes -
0 7 0 1 2 No -
3 3 0 1 2 Yes 7
0 3 0 1 2 No -
4 3 4 1 2 Yes 0
2 3 4 1 2 No -
3 3 4 1 2 No -
0 3 4 0 2 Yes 1
3 3 4 0 2 No -
2 3 4 0 2 No -
1 3 4 0 1 Yes 2
2 2 4 0 1 Yes 3
0 2 4 0 1 No -
1 2 4 0 1 No -
7 2 7 0 1 Yes 4
0 2 7 0 1 No -
1 2 7 0 1 No -
iii) With 5 frames
Reference Frame 1 Frame 2 Frame 3 Frame 4 Frame 5 Page Fault? Replace Oldest
7 7 - - - - Yes -
0 7 0 - - - Yes -
1 7 0 1 - - Yes -
2 7 0 1 2 - Yes -
0 7 0 1 2 - No -
3 7 0 1 2 3 Yes -
0 7 0 1 2 3 No -
4 4 0 1 2 3 Yes 7
2 4 0 1 2 3 No -
3 4 0 1 2 3 No -
0 4 0 1 2 3 No -
3 4 0 1 2 3 No -
2 4 0 1 2 3 No -
1 4 0 1 2 3 No -
2 4 0 1 2 3 No -
0 4 0 1 2 3 No -
1 4 0 1 2 3 No -
7 4 7 1 2 3 Yes 0
0 4 7 0 2 3 Yes 1
1 4 7 0 1 3 Yes 2
Frames Page Faults
3 15
4 10
5 9
Observations:
• As the number of frames increases, the number of page faults decreases (due to fewer
replacements).
• FIFO may suffer from Belady’s Anomaly (where increasing frames can sometimes
increase page faults), but in this case, it behaves normally.
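A short C sketch that reproduces the fault counts in the summary table above:

#include <stdio.h>

/* FIFO page replacement: count faults for a given frame count.
 * The victim is the page that has been resident longest; since
 * frames fill in order, a round-robin index tracks the oldest slot. */
int fifo_faults(const int *ref, int n, int frames) {
    int mem[16];                       /* resident pages; assumes frames <= 16 */
    int next = 0, used = 0, faults = 0;
    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < used; j++)
            if (mem[j] == ref[i]) { hit = 1; break; }
        if (hit) continue;
        faults++;
        if (used < frames) mem[used++] = ref[i];                 /* free frame  */
        else { mem[next] = ref[i]; next = (next + 1) % frames; } /* evict oldest */
    }
    return faults;
}

int main(void) {
    int ref[] = {7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1};
    for (int f = 3; f <= 5; f++)
        printf("%d frames -> %d faults\n", f, fifo_faults(ref, 20, f));
    /* prints 15, 10 and 9 faults respectively */
    return 0;
}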