Operating System

The document is a study guide for an Operating Systems course, covering key concepts such as components of an OS, process management, memory management, and various scheduling algorithms. It includes very short answer questions, short answer questions, and long answer questions, addressing topics like process states, disk scheduling algorithms, context switching, and thread contention scopes. The guide provides definitions, examples, and explanations to aid in understanding the fundamental principles of operating systems.


mca-2-sem-operating-system-mcan-202-2023.pdf
Group-A (Very Short Answer Type Question)

1. Answer any ten of the following:

I. What are the components of an Operating System?


The main components of an OS are:

1. Kernel – Core part that manages CPU, memory, and devices.

2. Process Management – Handles process scheduling and execution.

3. Memory Management – Allocates and tracks memory usage.

4. File System – Manages file storage and access.

5. Device Drivers – Interfaces for hardware devices.

6. User Interface – Provides CLI or GUI for user interaction.


II. What is the function of Program Counter in PCB?


The Program Counter (PC) in a Process Control Block (PCB) holds the address of the next
instruction to be executed for the process.

III. What is swapping?


Swapping is the process of moving a process from main memory to disk (and back) to free up
memory or manage multiple processes.

IV. What are types of accessing a file?


Types of file access include:

1. Sequential Access – Read/write in order.

2. Direct (Random) Access – Access any part of the file directly.

3. Indexed Access – Uses an index for faster access.

V. What is seek time?


Seek time is the time taken by the read/write head to move to the correct track on a disk
where data is located.
VI. What is an Operating System?
An Operating System (OS) is system software that acts as an interface between the user and
hardware, managing resources like CPU, memory, and I/O devices.

VII. What are frames?


Frames are fixed-size blocks of physical memory into which pages of a process are loaded in a
paging memory management system.

VIII. What is busy-waiting?


Busy-waiting is when a process continuously checks for a condition (like resource availability)
without releasing the CPU, wasting resources.

IX. What is rotational latency?


Rotational latency is the time it takes for the desired disk sector to rotate under the read/write
head.

X. Write main advantages of multiprogramming.


Advantages of multiprogramming:

1. Better CPU Utilization

2. Increased Throughput

3. Reduced Waiting Time

4. Efficient Resource Usage

XI. What is dispatch latency?


Dispatch latency is the time taken by the OS to stop one process and start another, including
context switch time.

XII. What do you mean by page fault?


A page fault occurs when a process accesses a page not currently in main memory, causing the
OS to load it from disk.
Group-B (Short Answer Type Question)

Answer any three of the following:

[5×3=15]

1. Explain process states with a suitable diagram.

A process is a program in execution; it is more than the program code (known as the text
section). Every task the operating system performs is carried out by a process.

As a process executes, it changes state. The state of a process is defined by the current
activity of that process.

Each process may be in any one of the following states −

• New − The process is being created.

• Running − In this state the instructions are being executed.

• Waiting − The process is in waiting state until an event occurs like I/O operation
completion or receiving a signal.

• Ready − The process is waiting to be assigned to a processor.

• Terminated − the process has finished execution.

It is important to know that only one process can be running on any processor at any
instant. Many processes may be ready and waiting.

Now let us see the state diagram of these process states −

[State diagram: New → Ready (admit), Ready → Running (dispatch), Running → Waiting (I/O
wait), Waiting → Ready (I/O complete), Running → Ready (preempt), Running → Terminated
(exit)]
Explanation

Step 1 − Whenever a new process is created, it is admitted into the ready state.

Step 2 − The scheduler's dispatcher selects one of the ready processes and moves it to the
running state.

Step 3 − If the running process must wait for I/O or an event, it moves from the running
state to the waiting state; if it is preempted by a higher-priority process, it returns to
the ready state.

Step 4 − Whenever the I/O or event completes, an interrupt moves the process from the
waiting state back to the ready state.

Step 5 − Whenever a process finishes execution in the running state, it exits to the
terminated state, which completes the process.
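The lifecycle above can be sketched as a tiny state machine (an illustration only; the state and event names here are invented for this example, not OS APIs):

```python
# Minimal process state machine mirroring the five states above.
TRANSITIONS = {
    ("new", "admit"): "ready",
    ("ready", "dispatch"): "running",
    ("running", "wait"): "waiting",        # blocked on I/O or an event
    ("waiting", "io_complete"): "ready",   # interrupt signals completion
    ("running", "preempt"): "ready",       # higher-priority process arrives
    ("running", "exit"): "terminated",
}

def step(state, event):
    """Apply one event; raise if the transition is not allowed."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"illegal transition: {event} in state {state}")

# Walk one process through a typical lifetime.
state = "new"
for event in ["admit", "dispatch", "wait", "io_complete", "dispatch", "exit"]:
    state = step(state, event)
print(state)  # terminated
```

Note that only one process can be in the running state per processor, which is why "dispatch" is the single entry point into running.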

2. Explain FCFS disk scheduling with a suitable example.

FCFS (First Come First Serve)

FCFS is the simplest of all Disk Scheduling Algorithms. In FCFS, the requests are addressed in
the order they arrive in the disk queue. Let us understand this with the help of an example.

Example:

Suppose the order of request is- (82,170,43,140,24,16,190)


And current position of Read/Write head is: 50

So, total overhead movement (total distance covered by the disk arm) =
|82-50| + |170-82| + |170-43| + |140-43| + |140-24| + |24-16| + |190-16|
= 32 + 88 + 127 + 97 + 116 + 8 + 174 = 642
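A few lines of Python can verify the arithmetic (a quick sketch; the request list and head position are taken from the example above):

```python
# Total head movement for FCFS: service requests strictly in arrival order.
requests = [82, 170, 43, 140, 24, 16, 190]
head = 50

total = 0
pos = head
for r in requests:
    total += abs(r - pos)  # distance moved to reach this request
    pos = r

print(total)  # 642
```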

3. What is the most important system program and why?

The most important system program is the Operating System (OS).

Why?

• It manages hardware and software resources.

• Provides a user interface and system services.

• Controls processes, memory, file system, and I/O devices.

• Without it, a computer system is unusable for applications or users.

4. A page takes 20 ns to search in the TLB and 100 ns to access memory. If the page is not
found in the TLB, the page table in memory must be accessed first (an extra 100 ns). The
hit ratio is 80%. Find the effective memory access time.

Given:

• TLB search time = 20 ns

• Memory access time = 100 ns

• Hit ratio = 80% = 0.8

Formula:
EMAT = (Hit ratio × (TLB time + Memory access)) +
(Miss ratio × (TLB time + 2 × Memory access))

= 0.8 × (20 + 100) + 0.2 × (20 + 2×100)


= 0.8 × 120 + 0.2 × 220
= 96 + 44 = 140 ns
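The same calculation in code (a minimal sketch of the formula above):

```python
# Effective memory access time with a TLB and a single-level page table.
tlb_time = 20      # ns, TLB search time
mem_time = 100     # ns, one main-memory access
hit_ratio = 0.8

# Hit:  TLB lookup + one memory access.
# Miss: TLB lookup + page-table access + memory access (two memory accesses).
emat = (hit_ratio * (tlb_time + mem_time)
        + (1 - hit_ratio) * (tlb_time + 2 * mem_time))
print(emat)  # 140.0
```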
5. Given memory partitions of 100 KB, 500 KB, 200 KB, 300 KB and 600 KB (in order). How
would each of the first fit, best fit, worst fit algorithms place processes of 212 KB, 417
KB, 112 KB, and 426 KB (in order)? Which algorithm makes the most efficient use of
memory?

Given Memory Partitions:


100 KB, 500 KB, 200 KB, 300 KB, 600 KB
Processes: 212 KB, 417 KB, 112 KB, 426 KB

a. First Fit

• 212 → 500 KB → remaining: 288 KB

• 417 → 600 KB → remaining: 183 KB

• 112 → 288 KB (leftover of the 500 KB block, the first hole large enough) → remaining: 176 KB

• 426 → Not allocated (no block left large enough)

b. Best Fit

• 212 → 300 KB → remaining: 88 KB

• 417 → 500 KB → remaining: 83 KB

• 112 → 200 KB → remaining: 88 KB

• 426 → 600 KB → remaining: 174 KB

c. Worst Fit

• 212 → 600 KB → remaining: 388 KB

• 417 → 500 KB → remaining: 83 KB

• 112 → 388 KB → remaining: 276 KB

• 426 → Not allocated

Conclusion:

Best Fit makes the most efficient use of memory in this case: it is the only algorithm that
successfully allocates memory to all 4 processes.
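The three placements can be simulated directly (a sketch under the usual textbook assumption that leftover space in a block remains usable; `allocate` returns, for each process, the index of the chosen block, or None if the process must wait):

```python
def allocate(partitions, processes, strategy):
    """Place each process into a hole chosen by the given strategy."""
    holes = list(partitions)
    placements = []
    for p in processes:
        candidates = [i for i, h in enumerate(holes) if h >= p]
        if not candidates:
            placements.append(None)                       # process must wait
            continue
        if strategy == "first":
            i = candidates[0]                             # first adequate hole
        elif strategy == "best":
            i = min(candidates, key=lambda i: holes[i])   # smallest adequate hole
        else:  # "worst"
            i = max(candidates, key=lambda i: holes[i])   # largest hole
        holes[i] -= p                                     # shrink the chosen hole
        placements.append(i)
    return placements

partitions = [100, 500, 200, 300, 600]
processes = [212, 417, 112, 426]
for s in ("first", "best", "worst"):
    print(s, allocate(partitions, processes, s))
```

Only the best-fit run returns a block index for every process; first fit and worst fit both leave the 426 KB process unallocated.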
Group-C (Long Answer Type Question)

Answer any three of the following:

[15×3=45]

6. Explain how CPU switches process to process with a suitable diagram.

Context Switching in an operating system is a critical function that allows the CPU to
efficiently manage multiple processes. By saving the state of a currently active process and
loading the state of another, the system can handle various tasks simultaneously without
losing progress. This switching mechanism ensures optimal use of the CPU, enhancing the
system’s ability to perform multitasking effectively.

Example of Context Switching

Suppose the operating system has (N) processes stored in a Process Control Block (PCB).
Each process runs using the CPU to perform its task. While a process is running, other
processes with higher priorities queue up to use the CPU and complete their tasks.

Switching the CPU to another process requires saving the state of the current process and
restoring the state of a different process. This task is known as a context switch. When a
context switch occurs, the kernel saves the context of the old process in its PCB and loads
the saved context of the new process scheduled to run. Context-switch time is pure
overhead because the system does no useful work while switching. The switching speed
varies from machine to machine, depending on factors such as memory speed, the number
of registers that need to be copied, and the existence of special instructions (such as a single
instruction to load or store all registers). A typical context switch takes on the order of
microseconds to a few milliseconds, depending on the hardware.

Context-switch times are highly dependent on hardware support. For example, some
processors (such as the Sun UltraSPARC) provide multiple sets of registers. In this case, a
context switch simply requires changing the pointer to the current register set. However, if
there are more active processes than available register sets, the system resorts to copying
register data to and from memory, as before. Additionally, the more complex the operating
system, the greater the amount of work that must be done during a context switch.

Need of Context Switching

Context switching enables all processes to share a single CPU. Because the context is saved,
a reloaded process resumes execution at exactly the point where it was interrupted.

The operating system's need for context switching is explained by the reasons listed below.

• One process cannot directly hand over the CPU to another within the system. Context
switching lets the operating system save the context of the outgoing process and apply the
CPU's resources to another process's tasks.

• It allows all processes to share a single CPU to finish their execution while the status
of each task is preserved.

• It allows a single CPU to handle requests from multiple processes concurrently, without
the need for any additional processors.

Context Switching Triggers

The three different categories of context-switching triggers are as follows.

• Interrupts

• Multitasking

• User/Kernel switch

Interrupts: When an interrupt occurs (for example, a disk controller signals that requested
data is ready), the CPU saves the current context, switches to the interrupt handler, and
later resumes the interrupted process.

Multitasking: Context switching is what allows one process to be switched off the CPU so
that another process can run. When a process is switched out, its state is retained so that
it can later continue from the same point.

Kernel/User Switch: Operating systems use this trigger when they need to switch between user
mode and kernel mode.

What is Process Control Block (PCB)?

The Process Control Block (PCB), also known as a Task Control Block, represents a process in
the operating system. It is a data structure used to store all the information about a
process. When a process is created, the operating system creates a PCB for it.
Working of Process Context Switching

When the CPU switches between two processes, the following steps take place:

• The state of the current process is saved so that it can be rescheduled later.

• The saved state (program counter, CPU registers, and operating-system-specific
information) is recorded in the process's PCB.

• The PCB is kept in kernel memory.

• The PCB of the outgoing process is placed on the appropriate queue (for example, the ready
queue or a device wait queue).

• The scheduler selects the next process to run and retrieves its PCB.

• The program counter and registers from that PCB are loaded, and execution continues in the
selected process.

• Process priorities and the scheduling policy determine which process is selected from the
queue.
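The save/restore sequence can be illustrated with a toy PCB (the field names are illustrative; a real kernel saves hardware registers and a hardware program counter, not Python attributes):

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Toy Process Control Block: just enough state to demonstrate a switch."""
    pid: int
    program_counter: int = 0
    registers: dict = field(default_factory=dict)
    state: str = "ready"

def context_switch(cpu, old, new):
    """Save the CPU context into old's PCB, then load new's context."""
    old.program_counter = cpu["pc"]       # save outgoing context
    old.registers = dict(cpu["regs"])
    old.state = "ready"
    cpu["pc"] = new.program_counter       # restore incoming context
    cpu["regs"] = dict(new.registers)
    new.state = "running"

p1 = PCB(pid=1, program_counter=100, registers={"r0": 7})
p2 = PCB(pid=2, program_counter=200, registers={"r0": 42})
cpu = {"pc": p1.program_counter, "regs": dict(p1.registers)}
p1.state = "running"

cpu["pc"] += 8                            # p1 executes a few instructions
context_switch(cpu, p1, p2)               # kernel switches to p2
print(cpu["pc"], p1.program_counter, p1.state)  # 200 108 ready
```

The time spent inside `context_switch` corresponds to the pure overhead described above: no user work happens while the contexts are copied.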
7. Define process contention scope and system contention scope. Write the benefits of
threads.

1. Process Contention Scope :

In the many-to-one and many-to-many threading models, the thread library schedules
user-level threads to run on an available lightweight process (LWP). Because competition for
the CPU takes place among threads belonging to the same process, this scheme is known as
process contention scope (PCS).

PCS scheduling is done according to priority: the scheduler selects the runnable thread with
the highest priority. User-level thread priorities are set by the programmer and are not
adjusted by the thread library, although some thread libraries allow the programmer to
change the priority of a thread.

2. System Contention Scope :

System contention scope (SCS) is the other thread scheduling scheme used in operating
systems. It is the scheme the kernel uses to decide which kernel-level thread to schedule
onto the CPU; competition for the CPU takes place among all threads in the system, hence the
name system contention scope.

Systems using the one-to-one model, such as Windows and Linux, schedule threads using only
system contention scope.

Difference Between Process Contention Scope And System Contention Scope :

1. PCS is commonly known as local scheduling of unbound threads; SCS is commonly known as
global scheduling of bound threads.

2. PCS is used with the many-to-many and many-to-one models; SCS is used with the one-to-one
model.

3. In PCS, competition for the CPU takes place among the threads of the same process; in
SCS, competition takes place among all the threads in the system.

4. PCS uses an N:1 or N:M thread model relationship; SCS uses a 1:1 relationship with a
kernel thread.

5. PCS is mostly used in Linux threads; SCS is mostly used in Windows, Linux, and Solaris
threads.

6. PCS (local contention scope) means a user-level thread shares a kernel thread with other
user threads within the process; SCS (global contention scope) means a user thread is mapped
directly to one kernel thread.

7. In PCS, the thread library has real control over which user-level thread is scheduled
onto a lightweight process; in SCS, the kernel decides which thread is scheduled onto a CPU.

8. In PCS, the scheduling mechanism for threads is local to the process; in SCS, it is
global across processes.

9. PCS is much cheaper than SCS; SCS is more predictable but has a higher processing cost.

10. In PCS, threads share one or more available lightweight processes; in SCS, each thread
has its own lightweight process.

Benefits of Threads :

• Responsiveness – an application can keep running even while part of it is blocked.

• Resource sharing – threads share the memory and resources of their process.

• Economy – creating and context-switching threads is much cheaper than creating and
switching processes.

• Scalability – threads of one process can run in parallel on multiple processors.
8. Suppose that a disk drive has 200 cylinders, numbered 0 to 199. The drive is currently
serving a request at cylinder 143, and the previous request was at cylinder 125. The
queue of pending requests, in FIFO order is 86, 47, 93, 174, 98, 150, 102, 170, 130.
Starting from the current head position what is the total distance that the disk arm
moves to satisfy all the pending requests for the following disk scheduling algorithms?

o FCFS

o SSTF

o SCAN

A. FCFS (First Come First Serve):

Order:
143 → 86 → 47 → 93 → 174 → 98 → 150 → 102 → 170 → 130

Head movements:
|143-86| + |86-47| + |47-93| + |93-174| + |174-98| + |98-150| + |150-102| + |102-170| +
|170-130|
= 57 + 39 + 46 + 81 + 76 + 52 + 48 + 68 + 40 = 507 cylinders

B. SSTF (Shortest Seek Time First):

Start at 143.
Next closest = 150 (7)
From 150, cylinders 130 and 170 are both 20 away; the tie is broken in the current direction
of travel (upward, since the previous request was at 125):
150 → 170 (20)
→ 174 (4)
→ 130 (44)
→ 102 (28)
→ 98 (4)
→ 93 (5)
→ 86 (7)
→ 47 (39)

Total = 7 + 20 + 4 + 44 + 28 + 4 + 5 + 7 + 39 = 158 cylinders


C. SCAN (Elevator Algorithm):

The previous request was at cylinder 125 and the current head position is 143, so the head
is moving toward higher cylinder numbers. SCAN services requests in that direction up to the
last cylinder (199), then reverses.

Order:
143 → 150 → 170 → 174 → 199 (end)
Then reverse: 130 → 102 → 98 → 93 → 86 → 47

Movement:
(199 - 143) + (199 - 47) = 56 + 152 = 208 cylinders
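A short script can double-check these totals (a sketch; it assumes SSTF ties are broken toward the current direction of travel, and that SCAN runs to the last cylinder before reversing):

```python
def fcfs(head, reqs):
    """Total head movement servicing requests in arrival order."""
    total, pos = 0, head
    for r in reqs:
        total += abs(r - pos)
        pos = r
    return total

def sstf(head, reqs, upward=True):
    """Greedy nearest-request; ties broken toward the direction of travel."""
    pending, pos, total = list(reqs), head, 0
    while pending:
        d = min(abs(r - pos) for r in pending)
        ties = [r for r in pending if abs(r - pos) == d]
        nxt = max(ties) if upward else min(ties)
        total += abs(nxt - pos)
        pos = nxt
        pending.remove(nxt)
    return total

def scan(head, reqs, upward=True, max_cyl=199):
    """Sweep to the end of the disk in the current direction, then reverse."""
    above = sorted(r for r in reqs if r >= head)
    below = sorted(r for r in reqs if r < head)
    if upward:
        edge = max_cyl if above else head
        return (edge - head) + ((edge - below[0]) if below else 0)
    edge = 0 if below else head
    return (head - edge) + ((above[-1] - edge) if above else 0)

queue = [86, 47, 93, 174, 98, 150, 102, 170, 130]
print(fcfs(143, queue), sstf(143, queue), scan(143, queue))  # 507 158 208
```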

9. Explain Directory Structures.

A directory is a container that is used to contain folders and files. It organizes files and
folders in a hierarchical manner. In other words, directories are like folders that help
organize files on a computer. Just like you use folders to keep your papers and documents in
order, the operating system uses directories to keep track of files and where they are stored.
Different structures of directories can be used to organize these files, making it easier to find
and manage them.

Understanding these directory structures is important because it helps in efficiently
organizing and accessing files on your computer. Following are the logical structures of a
directory, each providing a solution to a problem faced by the previous type of directory
structure.
Different Types of Directory in OS

In an operating system, there are different types of directory structures that help organize
and manage files efficiently. Directories in an OS can be single-level, two-level, or
hierarchical.

Each type of directory has its own way of arranging files and directories, offering unique
benefits and features. These are:

• Single-Level Directory

• Two-Level Directory

• Tree Structure/ Hierarchical Structure

• Acyclic Graph Structure

• General-Graph Directory Structure

1) Single-Level Directory

The single-level directory is the simplest directory structure. In it, all files are contained in
the same directory which makes it easy to support and understand.

A single-level directory has a significant limitation, however, when the number of files
increases or when the system has more than one user. Since all the files are in the same
directory, they must have unique names. If two users name their file test, the unique-name
rule is violated.
Advantages

• Since there is only one directory, its implementation is very easy.

• If the directory is small, searching is faster.

• Operations like file creation, searching, deletion, and updating are very easy in such a
directory structure.

• Logical Organization : Directory structures help to logically organize files and directories
in a hierarchical structure. This provides an easy way to navigate and manage files,
making it easier for users to access the data they need.

• Increased Efficiency: Directory structures can increase the efficiency of the file system by
reducing the time required to search for files. This is because directory structures are
optimized for fast file access, allowing users to quickly locate the file they need.

• Improved Security : Directory structures can provide better security for files by allowing
access to be restricted at the directory level. This helps to prevent unauthorized access
to sensitive data and ensures that important files are protected.

• Facilitates Backup and Recovery : Directory structures make it easier to backup and
recover files in the event of a system failure or data loss. By storing related files in the
same directory, it is easier to locate and backup all the files that need to be protected.

• Scalability: Directory structures are scalable, making it easy to add new directories and
files as needed. This helps to accommodate growth in the system and makes it easier to
manage large amounts of data.

Disadvantages

• Name collisions can occur, since no two files may have the same name.

• Searching becomes time-consuming if the directory is large.

• Files of the same type cannot be grouped together.

2) Two-Level Directory

As we have seen, a single-level directory often leads to confusion of file names among
different users. The solution to this problem is to create a separate directory for each user.

In the two-level directory structure, each user has their own user file directory (UFD). The
UFDs have similar structures, but each lists only the files of a single user. The system's
master file directory (MFD) is searched whenever a new user ID is created.
Two-Levels Directory Structure

Advantages

• The main advantage is that two users can have files with the same name, which is very
helpful when there are multiple users.

• It provides security, since one user cannot access another user's files.

• Searching for files becomes very easy in this directory structure.

Disadvantages

• The flip side of the security advantage is that a user cannot share files with other
users.

• Although users can create their own files, they cannot create subdirectories.

• It does not scale well, because a user cannot group files of the same type together.

3) Tree Structure/ Hierarchical Structure

The tree directory structure is the one most commonly used in our personal computers. Users
can create files and subdirectories, which was not possible in the previous directory
structures.

This directory structure resembles an upside-down tree, with the root directory at the top.
The root contains the directories of all users. Users can create subdirectories and store
files inside their own directory.

A user does not have access to the root directory's data and cannot modify it. Likewise, a
user does not have access to other users' directories. The structure of a tree directory is
shown below, with files and subdirectories in each user's directory.

Tree/Hierarchical Directory Structure

Advantages

• This directory structure allows subdirectories inside a directory.

• The searching is easier.

• Sorting files into important and unimportant ones becomes easier.

• This directory is more scalable than the other two directory structures explained.

Disadvantages

• As the user isn’t allowed to access other user’s directory, this prevents the file sharing
among users.
• As the user has the capability to make subdirectories, if the number of subdirectories
increase the searching may become complicated.

• Users cannot modify the root directory data.

• If files do not fit in one, they might have to be fit into other directories.

4) Acyclic Graph Structure

None of the three directory structures above allows a single file to be accessed from
multiple directories; a file or subdirectory can be reached only through the directory that
contains it.

This problem is solved by the acyclic-graph directory structure, where a file in one
directory can be accessed from multiple directories. In this way files can be shared between
users: multiple directories point to the same directory or file with the help of links.

The figure below illustrates this, with a file shared between multiple users. If any user
makes a change, it is reflected for all users sharing the file.

Acyclic Graph Structure


Advantages

• Sharing of files and directories is allowed between multiple users.

• Searching becomes easier.

• Flexibility is increased as file sharing and editing access is there for multiple users.

Disadvantages

• Because of its complex structure, this directory structure is difficult to implement.

• Users must be cautious when editing or deleting a file, since it may be accessed by
multiple users.

• To delete a file permanently, all references (links) to it must be deleted.

5) General-Graph Directory Structure

Unlike the acyclic-graph directory, which avoids loops, the general-graph directory can have
cycles, meaning a directory can contain paths that loop back to the starting point. This can
make navigating and managing files more complex.

General Graph Directory Structure


In the above image, you can see that a cycle is formed in the User 2 directory. While this
structure offers more flexibility, it is also more complicated to implement.

Advantages of General-Graph Directory

• More flexible than other directory structures.

• Allows cycles, meaning directories can loop back to each other.

Disadvantages of General-Graph Directory

• More expensive to implement compared to other solutions.

• Requires garbage collection to manage and clean up unused files and directories.

Conclusion

Understanding the different directory structures in an operating system is important for
efficient file organization and management. Each structure, whether single-level, two-level,
tree-structured, acyclic graph, or general graph, offers unique ways to arrange and access
files. Choosing the right directory structure helps ensure that files are easy to find, use,
and maintain.

10. Explain paging with TLB with a suitable diagram. What is the disadvantage of using TLB?

In Operating System (Memory Management Technique: Paging), for each process page table
will be created, which will contain a Page Table Entry (PTE). This PTE will contain information
like frame number (The address of the main memory where we want to refer), and some
other useful bits (e.g., valid/invalid bit, dirty bit, protection bit, etc). This page table entry
(PTE) will tell where in the main memory the actual page is residing.

Now the question is where to place the page table, such that overall access time (or
reference time) will be less. The problem initially was to fast access the main memory
content based on the address generated by the CPU (i.e. logical/virtual address). Initially,
some people thought of using registers to store page tables, as they are high-speed memory
so access time will be less.

The idea here is to place the page table entries in registers: for each request (virtual
address) generated by the CPU, the appropriate page number is matched in the page table,
which tells where in main memory the corresponding page resides. Everything seems right, but
the problem is that register capacity is small (in practice it can accommodate a maximum of
0.5K to 1K page table entries), while the process, and hence its page table, may be big
(say, 1M entries), so the registers may not hold all the PTEs of the page table. This is not
a practical approach.

The entire page table was kept in the main memory to overcome this size issue. but the
problem here is two main memory references are required:

1. To find the frame number

2. To go to the address specified by frame number

To overcome this problem, a high-speed cache for page table entries, called a Translation
Lookaside Buffer (TLB), is set up. The TLB is a special cache that holds the most recently
used page table entries. Given a virtual address, the processor examines the TLB: if the
entry is present (TLB hit), the frame number is retrieved and the physical address is
formed. If the entry is not found (TLB miss), the page number is used as an index into the
page table in main memory. If the page itself is not in main memory, a page fault is raised;
once the page is loaded, the TLB is updated to include the new page entry.
Steps in TLB hit

1. CPU generates a virtual (logical) address.

2. It is checked in TLB (present).

3. The corresponding frame number is retrieved, which now tells where the main memory
page lies.

Steps in TLB miss

1. CPU generates a virtual (logical) address.

2. It is checked in TLB (not present).

3. Now the page number is matched to the page table residing in the main memory
(assuming the page table contains all PTE).

4. The corresponding frame number is retrieved, which now tells where the main memory
page lies.

5. The TLB is updated with the new PTE (if there is no free slot, a replacement policy such
as FIFO, LRU, or MFU is used).
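The hit/miss steps above can be modeled with a tiny software TLB (illustrative only; real TLBs are associative hardware caches, and FIFO replacement is just one possible policy):

```python
from collections import OrderedDict

class TLB:
    """Tiny software model of a TLB with FIFO replacement."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()       # page number -> frame number

    def translate(self, page, page_table):
        """Return (frame, hit?) for a page, updating the TLB on a miss."""
        if page in self.entries:           # TLB hit: frame found immediately
            return self.entries[page], True
        frame = page_table[page]           # TLB miss: walk the page table in memory
        if len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)   # evict the oldest entry (FIFO)
        self.entries[page] = frame         # update the TLB with the new PTE
        return frame, False

page_table = {0: 5, 1: 9, 2: 3, 3: 7}     # hypothetical page -> frame mapping
tlb = TLB(capacity=2)
print(tlb.translate(0, page_table))  # (5, False) -- first access misses
print(tlb.translate(0, page_table))  # (5, True)  -- now cached
print(tlb.translate(1, page_table))  # (9, False)
print(tlb.translate(2, page_table))  # (3, False) -- evicts page 0
print(tlb.translate(0, page_table))  # (5, False) -- miss again after eviction
```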

Effective memory access time (EMAT)

The TLB is used to reduce the effective memory access time, as it is a high-speed
associative cache.

EMAT = h × (c + m) + (1 − h) × (c + n·m)

where:

• h is the hit ratio of the TLB,

• m is the memory access time,

• c is the TLB access time, and

• n is the number of main-memory accesses needed on a TLB miss.

The value of n depends on the paging scheme:

n = 1 –> no page table.

n = 2 –> one page table.

n = 3 –> two page tables (two-level paging).


11. Find the number of page faults and determine which algorithm is best suited for the
following reference string: 7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1 with 3 frames, using
the following algorithms:

• FIFO

• LRU

• Optimal
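Answer (by simulation, counting the initial frame fills as faults): FIFO gives 15 page faults, LRU gives 12, and Optimal gives 9. Optimal is best suited (fewest faults), although it requires future knowledge; among realizable algorithms, LRU performs best here. A sketch of the simulation:

```python
def fifo(ref, frames):
    """FIFO replacement: evict the page that entered memory earliest."""
    mem, faults = [], 0
    for p in ref:
        if p not in mem:
            faults += 1
            if len(mem) == frames:
                mem.pop(0)                 # evict the oldest page
            mem.append(p)
    return faults

def lru(ref, frames):
    """LRU replacement: evict the least recently used page."""
    mem, faults = [], 0
    for p in ref:
        if p in mem:
            mem.remove(p)                  # refresh recency on a hit
        else:
            faults += 1
            if len(mem) == frames:
                mem.pop(0)                 # front of the list is least recent
        mem.append(p)
    return faults

def optimal(ref, frames):
    """Optimal replacement: evict the page not needed for the longest time."""
    mem, faults = [], 0
    for i, p in enumerate(ref):
        if p in mem:
            continue
        faults += 1
        if len(mem) < frames:
            mem.append(p)
            continue
        future = ref[i + 1:]
        victim = max(mem, key=lambda q: future.index(q) if q in future
                     else len(future) + 1)
        mem[mem.index(victim)] = p
    return faults

ref = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(fifo(ref, 3), lru(ref, 3), optimal(ref, 3))  # 15 12 9
```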
mca-2-sem-operating-system-mcan-202-2024.pdf
Group-A (Very Short Answer Type Question)

1. Answer any ten of the following:

I. What is a Mutex?

Mutex (Mutual Exclusion Object) is a synchronization mechanism used to protect shared
resources from being simultaneously accessed by multiple threads or processes.

• Only one thread can acquire a mutex at a time.

• Used to prevent race conditions.

• When a thread locks a mutex, others must wait until it’s released.

Example:
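A minimal sketch in Python (using the standard `threading.Lock` as the mutex; the counts and number of threads are arbitrary):

```python
import threading

counter = 0
lock = threading.Lock()       # the mutex protecting `counter`

def increment(n):
    global counter
    for _ in range(n):
        with lock:            # only one thread may hold the lock at a time
            counter += 1      # critical section: no race condition here

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 400000 -- always correct because of the mutex
```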

II. What is Fragmentation?

Fragmentation occurs when memory is used inefficiently, reducing performance.

Types:

1. Internal Fragmentation – Wasted space within allocated memory blocks.

2. External Fragmentation – Wasted space between allocated memory blocks (not contiguous).

Solution:

• Paging and Segmentation reduce fragmentation.


III. What are Different File Operations?

Common file operations provided by an OS include:

1. Create – Make a new file.

2. Open – Access an existing file.

3. Read – Retrieve data from a file.

4. Write – Store data in a file.

5. Close – End file access.

6. Delete – Remove file from system.

7. Seek – Move file pointer to desired location.

8. Rename, Copy, Truncate

IV. What is Segmentation?

Segmentation is a memory management scheme where memory is divided into logical units
called segments (code, stack, data).

• Each segment has a segment number and offset.

• Provides logical view of memory, useful for modular programming.

• Avoids internal fragmentation.

Diagram:

Logical Address = (Segment Number, Offset)

MMU translates to Physical Address

V. Process Control Block is also called ______.

Answer: Task Control Block (TCB)

• PCB contains process-specific information:

o Process ID

o Program Counter
o CPU Registers

o Process state

o Memory limits

o I/O status

VI. What is Turnaround Time?

Turnaround Time = Completion Time – Arrival Time

It represents the total time taken to execute a process including:

• Waiting time

• Execution time

• I/O time

Example:

If a process arrives at time 0 and completes at 20 seconds, Turnaround Time = 20 seconds.

VII. What are Pages?

Pages are fixed-size units of logical memory in the paging memory management scheme.

• Page size is typically 4KB.

• Loaded into frames in physical memory.

• Used to eliminate external fragmentation.

Page Table:

Maps page numbers to frame numbers.


VIII. What are the Different Access Controls?

Access control is a security technique to regulate who or what can access system resources.

Types:

1. Read (R) – View data

2. Write (W) – Modify data

3. Execute (X) – Run programs

4. Delete (D) – Remove files

5. Access Control Lists (ACL) – List of permissions for users

6. Role-Based Access Control (RBAC) – Based on user roles

IX. What is Access Control Matrix?

It is a security model that defines the rights of subjects (users/processes) over objects (files/resources).

Structure:

A table with:

• Rows = Subjects (users/processes)

• Columns = Objects (files/printers)

• Cells = Access rights (Read, Write, Execute)

Example

        File1   File2   Printer
UserA   R/W     R       -
UserB   R       W       Print
X. What is the Purpose of a System Call?

System Calls are the interface between user programs and the OS kernel.

They allow programs to request services like:

• File operations (open, read, write)

• Process control (fork, exec)

• Memory management (allocate, deallocate)

• I/O operations

Example:
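A sketch of the idea in Python, whose os module exposes thin wrappers over the underlying system calls; the filename is illustrative:

```python
import os

# os.open/os.write/os.close wrap the open()/write()/close() system calls
# and work with raw file descriptors rather than Python file objects.
fd = os.open("sys_demo.txt", os.O_CREAT | os.O_WRONLY | os.O_TRUNC)
os.write(fd, b"hello kernel")   # write() system call
os.close(fd)                    # close() system call

fd = os.open("sys_demo.txt", os.O_RDONLY)
data = os.read(fd, 100)         # read() system call
os.close(fd)
os.remove("sys_demo.txt")       # unlink() system call
print(data)  # b'hello kernel'
```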

XI. Explain Binary Semaphore

A binary semaphore is a synchronization tool that can take only two values: 0 and 1.

• Used for mutual exclusion.

• Works like a mutex, but more general.

• Only one process can enter the critical section at a time.

Operations:

• wait(S) – Decrement; if S = 0, wait.

• signal(S) – Increment; if any process is waiting, wake it up.

Example:
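The wait/signal pair corresponds to acquire/release on Python's threading.Semaphore initialized to 1. A minimal sketch; the worker function and item values are illustrative:

```python
import threading

s = threading.Semaphore(1)  # binary semaphore: S starts at 1
shared = []                 # resource protected by the semaphore

def worker(item):
    s.acquire()             # wait(S): decrement; block while S == 0
    shared.append(item)     # critical section
    s.release()             # signal(S): increment; wake one waiting thread

threads = [threading.Thread(target=worker, args=(i,)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(shared))  # [0, 1, 2, 3, 4]
```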
XII. Explain TLB Hit and TLB Miss

TLB (Translation Lookaside Buffer) is a high-speed cache for page table entries.

TLB Hit:

• The page number is found in the TLB.

• Address translation is fast.

• Reduces memory access time.

TLB Miss:

• Page number not in TLB.

• CPU must access the page table in memory.

• Slower than a hit; TLB may be updated afterward.

Group-B (Short Answer Type Question)

Answer any three of the following:

[5×3=15]

1. Explain Peterson's solution to the critical section problem.

Peterson’s Algorithm is a classical software-based solution for the Critical Section Problem
for two processes (P0 and P1).

Assumptions:

• Two processes (P0 and P1) may both want to enter their critical sections at the same time.

• Requires two shared variables: a boolean array flag[2] (whether each process wants to enter) and an integer turn (which process must yield when both want to enter).


Algorithm (for Process Pi):
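A runnable sketch in Python (variable names follow the standard formulation; the iteration count is illustrative). Note this relies on CPython's effectively sequentially consistent memory behaviour; on real hardware with weaker memory models, Peterson's algorithm additionally needs memory barriers:

```python
import threading

flag = [False, False]  # flag[i]: process i wants to enter its critical section
turn = 0               # which process must yield when both want to enter
counter = 0            # shared resource
N = 20_000

def process(i):
    global turn, counter
    j = 1 - i                        # index of the other process
    for _ in range(N):
        flag[i] = True               # entry section: announce intent
        turn = j                     # politely let the other go first
        while flag[j] and turn == j:
            pass                     # busy-wait while the other is inside
        counter += 1                 # critical section
        flag[i] = False              # exit section

threads = [threading.Thread(target=process, args=(i,)) for i in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000: mutual exclusion preserved every increment
```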

How it Works:

• flag[i] = true means process i wants to enter.

• turn = j allows the other process to go first if both want to enter.

• Waits only if the other process wants to enter and it’s their turn.

Satisfies:

• Mutual Exclusion: Only one process in the critical section.

• Progress: Selection not postponed indefinitely.

• Bounded Waiting: No starvation.

2. Explain the goal of security and protection.

Security and Protection in OS are mechanisms to safeguard data and resources from
unauthorized access and misuse.

Aspect Description

Confidentiality Prevent unauthorized access to data.

Integrity Ensure data is not altered maliciously or accidentally.

Availability Ensure resources are available to authorized users.

Authentication Verify user identity before granting access.

Authorization Grant specific permissions to users based on their roles.

Protection        Mechanisms within the OS to isolate processes and manage access control.
Mechanisms:

• Passwords, encryption, firewalls

• Access control lists

• User privilege levels

3. A system has 3 user processes, where P1 requires 21 units of resource R, P2 requires
31 units, and P3 requires 41 units to complete execution. What is the minimum number
of units of resource R that ensures no deadlock when the three processes execute
concurrently?

Given:

• P1 needs 21 units

• P2 needs 31 units

• P3 needs 41 units

To guarantee no deadlock, allocate enough units that even if every process holds one unit
less than its maximum, one extra unit remains, so some process can always finish and
release its resources:

Minimum resources = (Sum of max demands - Number of processes) + 1

So:


= (21 + 31 + 41) - 3 + 1

= 93 - 3 + 1 = 91 units

Answer: 91 units ensure no deadlock.
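The same reasoning in code: in the worst case each process holds one unit less than its maximum and is blocked; one extra unit breaks the tie (function name illustrative):

```python
def min_units_no_deadlock(max_demands):
    # each process may hold (demand - 1) units and still be blocked;
    # one additional unit guarantees some process can finish and release
    return sum(d - 1 for d in max_demands) + 1

print(min_units_no_deadlock([21, 31, 41]))  # 91
```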


4. A page takes 20 ns to search in the TLB and 100 ns to access primary memory. If the
page is not found in the TLB, the page table in primary memory must be accessed first.
The hit ratio of the system is 80%. Find the effective memory access time.

Effective Memory Access Time with TLB

Given:

• TLB access time = 20 ns

• Memory access time = 100 ns

• Hit ratio = 80% = 0.8

TLB Hit:

• Time = TLB + Memory = 20 + 100 = 120 ns

TLB Miss:

• Time = TLB + 2 × Memory = 20 + 200 = 220 ns

Effective Memory Access Time (EMAT):


EMAT = (Hit ratio × Hit time) + (Miss ratio × Miss time)

= (0.8 × 120) + (0.2 × 220)

= 96 + 44 = 140 ns

Effective Memory Access Time = 140 ns

5. Explain UNIX file system.

The UNIX File System (UFS) is a hierarchical file system used in UNIX and UNIX-like OS.

Structure:

• Everything is a file: text files, directories, devices.

• Root directory (/) is the starting point.


Components:

1. Superblock: Metadata about the file system (size, block size, etc.).

2. Inode Table: Each file has an inode that stores:

o File type, size, permissions

o Timestamps (created, modified)

o Pointers to data blocks

3. Data Blocks: Where actual file content is stored.

4. Directory Structure: Maps filenames to inode numbers.

5. Boot Block: Contains boot loader (optional).

6. Free Block List: Manages free space on the disk.

File Types:

• Regular Files – User data

• Directories – Special files that hold file lists

• Symbolic Links – Pointers to other files

• Device Files – Represent hardware devices

Advantages:

• Secure and reliable

• Hierarchical and modular

• Efficient space management with inodes
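Inode metadata can be inspected from Python via os.stat, which wraps the stat() system call; the filename is illustrative:

```python
import os
import stat

with open("inode_demo.txt", "w") as f:
    f.write("hello")                  # 5 bytes of file content

st = os.stat("inode_demo.txt")        # reads the file's inode
print(st.st_ino)                      # inode number
print(st.st_size)                     # file size from the inode: 5
print(stat.filemode(st.st_mode))      # permissions, e.g. '-rw-r--r--'
print(st.st_mtime)                    # last-modified timestamp

os.remove("inode_demo.txt")
```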


Group-C (Long Answer Type Questions)

Answer any three of the following:

7. (a) Consider the following set of processes, with the length of the CPU-burst time given in
milliseconds :

Process Burst Time Arrival Time

P1 4 ms 0 ms

P2 7 ms 2 ms

P3 5 ms 3 ms

P4 2 ms 4 ms

Consider the following scheduling algorithms: (a) Shortest-Remaining-Time-First (SRTF), (b) SJF, (c) FCFS.
i) Draw the Gantt chart to show the execution.
ii) Find the average turnaround time.
iii) Find the average waiting time.
(b). Explain race condition in critical section problem.

ANSWER-
7 (a). CPU Scheduling Algorithms (Detailed Explanation for 15 Marks)
Given:
Process Burst Time Arrival Time
P1 4 ms 0 ms
P2 7 ms 2 ms
P3 5 ms 3 ms
P4 2 ms 4 ms

i) Gantt Charts and Explanation


a) FCFS (First Come First Serve)
• Process execution order: Based on arrival time.
• Gantt Chart: P1 [0-4] | P2 [4-11] | P3 [11-16] | P4 [16-18]
• Explanation:
o P1 arrives at 0ms and finishes at 4ms.
o P2 starts at 4ms (arrived at 2ms) and runs for 7ms → finishes at 11ms.
o P3 (arrived at 3ms) starts at 11ms → finishes at 16ms.
o P4 (arrived at 4ms) starts at 16ms → finishes at 18ms.

b) SJF (Shortest Job First – Non-preemptive)


• Process execution order: P1 → P4 → P3 → P2
• Gantt Chart: P1 [0-4] | P4 [4-6] | P3 [6-11] | P2 [11-18]

• Explanation:
o P1 executes first.
o At time 4ms, P2, P3, and P4 are in queue. P4 (2ms) has shortest burst → executes
next.
o P3 (5ms) next, then P2 (7ms).

c) SRTF (Shortest Remaining Time First – Preemptive)

• Gantt Chart: P1 [0-4] | P4 [4-6] | P3 [6-11] | P2 [11-18]

• Explanation:
o P1 executes from 0-4 ms. When P2 arrives at 2 ms (burst 7 ms) and P3 at 3 ms (burst
5 ms), P1's remaining time (2 ms, then 1 ms) is still the shortest, so P1 is not preempted.
o At 4 ms P1 completes and P4 arrives; P4 (2 ms) has the shortest remaining time and
runs from 4-6 ms.
o P3 (5 ms) runs next from 6-11 ms, then P2 (7 ms) from 11-18 ms.
o For this workload SRTF yields the same schedule as non-preemptive SJF, since no
arriving process ever has a shorter burst than the running process's remaining time.
ii) Turnaround Time (TAT)
TAT = Completion Time - Arrival Time
Process FCFS SJF SRTF
P1 4 4 4
P2 9 16 16
P3 13 8 8
P4 14 2 2
Avg 10.0 7.5 7.5

iii) Waiting Time (WT)

WT = Turnaround Time - Burst Time
Process FCFS SJF SRTF
P1 0 0 0
P2 2 9 9
P3 8 3 3
P4 12 0 0
Avg 5.5 3.0 3.0
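The turnaround and waiting times follow mechanically from the completion times read off the Gantt charts. A sketch that recomputes the averages (here SJF and SRTF produce the same completion times):

```python
arrival = {"P1": 0, "P2": 2, "P3": 3, "P4": 4}
burst   = {"P1": 4, "P2": 7, "P3": 5, "P4": 2}
# completion times read off the Gantt charts
completion = {
    "FCFS": {"P1": 4, "P2": 11, "P3": 16, "P4": 18},
    "SJF":  {"P1": 4, "P2": 18, "P3": 11, "P4": 6},
    "SRTF": {"P1": 4, "P2": 18, "P3": 11, "P4": 6},
}

averages = {}
for algo, ct in completion.items():
    tat = {p: ct[p] - arrival[p] for p in ct}  # turnaround = completion - arrival
    wt = {p: tat[p] - burst[p] for p in ct}    # waiting = turnaround - burst
    averages[algo] = (sum(tat.values()) / 4, sum(wt.values()) / 4)

print(averages["FCFS"])  # (10.0, 5.5)
print(averages["SJF"])   # (7.5, 3.0)
```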

7 (b). Race Condition in Critical Section (Detailed Explanation)


A race condition occurs when two or more threads or processes access shared data
concurrently and try to change it at the same time. The final outcome depends on the order in
which the access occurs.
Example:
Suppose two threads A and B both want to increment a shared counter variable:

If both threads read the counter (say value 0) before either writes it back, both will write
1, even though two increments should produce 2. This lost update is a race condition.
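The lost update can be made deterministic by writing out one bad interleaving by hand (the two "threads" are simulated as plain statements):

```python
counter = 0

a = counter      # thread A reads 0
b = counter      # thread B reads 0, before A has written
counter = a + 1  # A writes back 1
counter = b + 1  # B writes back 1, overwriting A's update
print(counter)   # 1, although two increments should give 2
```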
Why it occurs:
• No proper synchronization.
• Shared variables.
Solution:
• Use synchronization mechanisms like mutex, semaphores, monitors.
• Enforce Mutual Exclusion: Only one process can enter its critical section at a time.
Importance:
Avoiding race conditions is crucial in concurrent programming to ensure data consistency and
correctness.
8. (a) Consider the following segment table:

Segment Base Length

0 219 600

1 2300 14

2 90 100

3 1327 580

4 1952 96

What are the physical address for the following logical address specified by segment number
and displacement within segment? State for which address it will generate segmentation fault.
(a) 0,430, (b) 1,10 (c) 2,500 (d) 3,400 (e) 4,112
(b) Explain segmentation technique in detail.

8(a). Segment Table Address Translation and Segmentation Fault Detection

Using the segment table given in the question, compute the physical address, or report a
segmentation fault, for each logical address:

• (a) 0,430 → Base 219 + 430 = 649 (within limit 600) → Valid

• (b) 1,10 → Base 2300 + 10 = 2310 (within limit 14) → Valid

• (c) 2,500 → Displacement > 100 → Segmentation Fault

• (d) 3,400 → Base 1327 + 400 = 1727 (within limit 580) → Valid

• (e) 4,112 → Displacement > 96 → Segmentation Fault
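The check above is a two-step rule: compare the offset against the segment's limit, then add the base. A sketch (the table is taken from the question; raising MemoryError to model the fault is an illustrative choice):

```python
# segment -> (base, limit), from the question's segment table
SEGMENT_TABLE = {0: (219, 600), 1: (2300, 14), 2: (90, 100),
                 3: (1327, 580), 4: (1952, 96)}

def translate(segment, offset):
    base, limit = SEGMENT_TABLE[segment]
    if offset >= limit:                       # offset outside the segment
        raise MemoryError(f"segmentation fault at ({segment}, {offset})")
    return base + offset                      # physical address

print(translate(0, 430))  # 649
print(translate(1, 10))   # 2310
print(translate(3, 400))  # 1727

try:
    translate(2, 500)
except MemoryError as e:
    fault = str(e)
print(fault)  # segmentation fault at (2, 500)
```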

8 (b). Segmentation Technique in Detail

Segmentation is a memory management technique in which each job is divided into several
segments of different sizes, one for each module that contains pieces that perform related
functions.
Key Concepts:

• Logical address = (segment number, offset)

• Each segment has a base and a limit.

• The base contains the starting physical address of the segment.

• The limit specifies the length of the segment.

Advantages:

• Easy to handle growing data and modular programs.

• Supports user view of memory.

• Provides protection and sharing.

Disadvantages:

• External fragmentation can occur.

• Requires complex memory management hardware.

Segmentation allows more flexibility than paging and aligns with the logical structure of
programs.

9. (a) Consider the following snapshot of a system:

Process   Allocation (A B C D)   Max (A B C D)

P0        0 0 1 2                0 0 1 2

P1        1 0 0 0                1 7 5 0

P2        1 3 5 4                2 3 5 6

P3        0 6 3 2                0 6 5 2

P4        0 0 1 4                0 6 5 6

Available (A B C D): 1 5 2 0
i) What is the content of need matrix?
ii) Is the system in safe state? If yes find the safe sequence.
b) Explain deadlock prevention techniques.

i) Need Matrix = Max - Allocation:

Process Need (A B C D)

P0 0 0 0 0

P1 0 7 5 0

P2 1 0 0 2

P3 0 0 2 0

P4 0 6 4 2

ii) Safe Sequence:

Start with Available = 1 5 2 0

• P0 can run → new Available = 1 5 3 2

• P2 can run → new Available = 2 8 8 6

• P3 can run → new Available = 2 14 11 8

• P1 can run → new Available = 3 14 11 8

• P4 can run → new Available = 3 14 12 12

Safe Sequence: P0 → P2 → P3 → P1 → P4

System is in a safe state.
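The safety check can be automated. Note that safe sequences are not unique, so the sketch below may report a different valid ordering than the one worked out above:

```python
allocation = {"P0": [0, 0, 1, 2], "P1": [1, 0, 0, 0], "P2": [1, 3, 5, 4],
              "P3": [0, 6, 3, 2], "P4": [0, 0, 1, 4]}
maximum    = {"P0": [0, 0, 1, 2], "P1": [1, 7, 5, 0], "P2": [2, 3, 5, 6],
              "P3": [0, 6, 5, 2], "P4": [0, 6, 5, 6]}
available  = [1, 5, 2, 0]

# Need = Max - Allocation
need = {p: [m - a for m, a in zip(maximum[p], allocation[p])] for p in allocation}

def safe_sequence():
    work, seq = available[:], []
    while len(seq) < len(allocation):
        ran = False
        for p in allocation:
            if p not in seq and all(n <= w for n, w in zip(need[p], work)):
                # p can finish: it releases everything it holds
                work = [w + a for w, a in zip(work, allocation[p])]
                seq.append(p)
                ran = True
        if not ran:
            return None  # no process can proceed: unsafe state
    return seq

print(safe_sequence())  # one valid ordering, e.g. ['P0', 'P2', 'P3', 'P4', 'P1']
```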

9 (b). Deadlock Prevention Techniques

• Mutual Exclusion: Eliminate by making resources sharable.

• Hold and Wait: Prevent by requiring processes to request all resources at once.

• No Preemption: Allow preemption of resources.

• Circular Wait: Impose a linear ordering of resource types.

Deadlock prevention works by guaranteeing that at least one of these four necessary conditions can never hold, so the system can never deadlock.


10. (a) What is DMA and how it works ? Explain with diagram.

(b) State the differences between blocking and non blocking IO.

10 (a). DMA – Direct Memory Access

DMA allows devices to transfer data to/from memory without involving the CPU for each byte.

Working:

1. CPU sets up DMA controller with address, count, operation.

2. DMA controller transfers data.

3. CPU continues with other tasks.

4. DMA interrupts CPU when transfer is done.

Diagram (textual):

CPU <-> DMA Controller <-> Memory, with the I/O device attached to the DMA controller;
data moves directly between device and memory over the bus, bypassing the CPU.

DMA improves speed and reduces CPU load.

10 (b). Blocking vs Non-Blocking I/O

Aspect Blocking I/O Non-blocking I/O

CPU Waiting Waits for I/O to finish Continues execution

Efficiency Less efficient More efficient

Complexity Simple Complex programming

Usage Default in most systems Requires extra handling
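On POSIX systems the difference can be demonstrated directly by putting a pipe's read end into non-blocking mode (a sketch; assumes a Unix-like OS):

```python
import os

r, w = os.pipe()
os.set_blocking(r, False)   # set O_NONBLOCK on the read end

blocked = False
try:
    os.read(r, 1)           # no data yet: a blocking read would wait here
except BlockingIOError:
    blocked = True          # non-blocking read returns immediately instead

os.write(w, b"x")
data = os.read(r, 1)        # data is available now, so the read succeeds
os.close(r)
os.close(w)
print(blocked, data)  # True b'x'
```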


11. Find the page faults using the FIFO page replacement algorithm for the following reference
string: 7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1

i) with 3 frames

ii) with 4 frames

iii) with 5 frame

Given Reference String:

7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1

FIFO Algorithm Rules:

1. When a page is referenced, check if it's already in a frame (no page fault).

2. If not, replace the oldest page in the frames (FIFO replacement).

3. Count the total number of page faults.


(i) With 3 Frames

Reference Frame 1 Frame 2 Frame 3 Page Fault? Replace Oldest

7 7 - - Yes -

0 7 0 - Yes -

1 7 0 1 Yes -

2 2 0 1 Yes 7

0 2 0 1 No -

3 2 3 1 Yes 0

0 2 3 0 Yes 1

4 4 3 0 Yes 2

2 4 2 0 Yes 3

3 4 2 3 Yes 0

0 0 2 3 Yes 4

3 0 2 3 No -

2 0 2 3 No -

1 0 1 3 Yes 2

2 0 1 2 Yes 3

0 0 1 2 No -

1 0 1 2 No -

7 7 1 2 Yes 0

0 7 0 2 Yes 1

1 7 0 1 Yes 2

Total Page Faults (3 Frames): 15


(ii) With 4 Frames

Reference Frame 1 Frame 2 Frame 3 Frame 4 Page Fault? Replace Oldest

7 7 - - - Yes -

0 7 0 - - Yes -

1 7 0 1 - Yes -

2 7 0 1 2 Yes -

0 7 0 1 2 No -

3 3 0 1 2 Yes 7

0 3 0 1 2 No -

4 3 4 1 2 Yes 0

2 3 4 1 2 No -

3 3 4 1 2 No -

0 3 4 0 2 Yes 1

3 3 4 0 2 No -

2 3 4 0 2 No -

1 3 4 0 1 Yes 2

2 2 4 0 1 Yes 3

0 2 4 0 1 No -

1 2 4 0 1 No -

7 2 7 0 1 Yes 4

0 2 7 0 1 No -

1 2 7 0 1 No -

Total Page Faults (4 Frames): 10


(iii) With 5 Frames

Reference Frame 1 Frame 2 Frame 3 Frame 4 Frame 5 Page Fault? Replace Oldest

7 7 - - - - Yes -

0 7 0 - - - Yes -

1 7 0 1 - - Yes -

2 7 0 1 2 - Yes -

0 7 0 1 2 - No -

3 7 0 1 2 3 Yes -

0 7 0 1 2 3 No -

4 4 0 1 2 3 Yes 7

2 4 0 1 2 3 No -

3 4 0 1 2 3 No -

0 4 0 1 2 3 No -

3 4 0 1 2 3 No -

2 4 0 1 2 3 No -

1 4 0 1 2 3 No -

2 4 0 1 2 3 No -

0 4 0 1 2 3 No -

1 4 0 1 2 3 No -

7 4 7 1 2 3 Yes 0

0 4 7 0 2 3 Yes 1

1 4 7 0 1 3 Yes 2

Total Page Faults (5 Frames): 9


Final Answer:

Frames Page Faults

3 15

4 10

5 9

Observations:

• As the number of frames increases, the number of page faults decreases (due to fewer
replacements).

• FIFO may suffer from Belady’s Anomaly (where increasing frames can sometimes
increase page faults), but in this case, it behaves normally.
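The fault counts can be verified mechanically. The simulator below always evicts the page that has been resident longest, which is the defining rule of FIFO replacement:

```python
from collections import deque

def fifo_faults(refs, nframes):
    frames = deque()               # front = oldest resident page
    faults = 0
    for page in refs:
        if page not in frames:
            faults += 1            # page fault
            if len(frames) == nframes:
                frames.popleft()   # evict the oldest page
            frames.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
for n in (3, 4, 5):
    print(n, "frames:", fifo_faults(refs, n), "page faults")
```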
