MidSem2
Wednesday, March 6, 2024 9:57 AM
Q: What is an Operating System?
• An Operating System lies in the category of system software.
• It basically manages all the resources of the computer.
• An operating system acts as an interface between the software and the different parts of the computer, i.e. the computer hardware.
• The operating system is designed in such a way that it can manage the overall resources and operations of the computer.
Q1. What is process scheduling? What are the functions of process scheduling?
• The objective of multiprogramming is to keep the CPU busy for the maximum time and achieve maximum CPU utilization.
• In a uni-processor system only one process can have the attention of the CPU at any time.
• In multiprogramming, since only one process may get the attention of the CPU at a time, other processes have to wait. So process scheduling is necessary, and it is done by the O.S.
Process scheduling must perform the following functions:
1. Keep track of the status (running, ready or waiting) of all the processes. This is done by the traffic controller.
2. A process is selected from the ready queue, and the scheduler decides how long it executes.
3. When the running process requires an I/O resource, an interrupt occurs, or it exceeds its time quantum, the processor is de-allocated from the process.
Q2) What is a Context Switch?
Interrupts cause the O.S to change the CPU from its current task to run a kernel routine.
The process of switching the CPU from an old process to a new process is called a context switch.
When an interrupt occurs, the system needs to save the current context of the process running on the CPU so that it can restore that context when its processing is done, essentially suspending the process and then resuming it.
Switching the CPU to another process requires performing a state save of the current process and a state restore of a different process.
When a context switch occurs, the kernel saves the context of the old process in its PCB and loads the saved context of the new process scheduled to run.
Its speed varies from machine to machine, depending on the memory speed, the number of registers that must be copied, and the existence of special instructions.
Typical speeds are a few milliseconds.
Context-switch times are highly dependent on hardware support.
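The save/restore idea can be sketched as a toy Python model (illustrative only; a real context switch runs in kernel code on actual CPU registers, and the PCB fields here are made up):

```python
# Toy model: PCBs as dictionaries, the "CPU" as a register dictionary.
def context_switch(old_pcb, new_pcb, cpu):
    old_pcb["registers"] = dict(cpu)   # state save into the old process's PCB
    old_pcb["state"] = "ready"
    cpu.clear()
    cpu.update(new_pcb["registers"])   # state restore from the new process's PCB
    new_pcb["state"] = "running"

cpu = {"pc": 100, "acc": 7}
p1 = {"pid": 1, "registers": {}, "state": "running"}
p2 = {"pid": 2, "registers": {"pc": 500, "acc": 0}, "state": "ready"}
context_switch(p1, p2, cpu)
print(cpu)  # {'pc': 500, 'acc': 0} -> P2 resumes where it left off
```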
Q3) What are Scheduling Queues?
In the system, processes waiting for the CPU are kept in the ready queue and processes waiting for a device are kept in the device queue.
• A process may have an I/O request while the device requested is busy.
• In such a case the I/O request is kept waiting in the device queue.
• Each device has its own device queue.
• In the case of dedicated devices the device queue will never have more than one process in it.
• In the case of sharable devices several processes may be in the device queue.
• Different types of queues:
• Job queue – set of all processes in the system
• Ready queue – set of all processes residing in main memory, ready and waiting to
execute
• Device queues – set of processes waiting for an I/O device
• Processes migrate among the various queues
• CPU scheduling is represented by a queuing diagram
• Once the process is allocated the CPU and is executing, one of several events could occur:
• The process could issue an I/O request and then be placed in an I/O queue.
• The process could create a new sub-process and wait for the sub-process's termination.
• The process could be removed forcibly from the CPU as a result of an interrupt and be put back in the ready queue.
Q4) What are CPU Schedulers?
- A process migrates among the various scheduling queues throughout its lifetime.
- The O.S must select processes from these queues in some way.
- The selection of a process is carried out by the appropriate scheduler.
There are 3 different types of schedulers:
1. Long-term scheduler (or job scheduler) – selects which processes should be brought into the ready queue.
2. Short-term scheduler (or CPU scheduler) – selects which process should be executed next and allocates the CPU.
3. Medium-term scheduler – determines when processes are to be suspended and resumed.
Q) Explain the CPU-I/O Burst Cycle.
- Maximum CPU utilization is obtained with multiprogramming.
- Process execution consists of a cycle of CPU execution and I/O wait.
- Processes alternate between these two states.
- Process execution begins with a CPU burst, which is followed by an I/O burst, which is followed by another CPU burst, then another I/O burst, and so on.
- Eventually the final CPU burst ends with a system request to terminate execution.
- The durations of CPU bursts vary from process to process and from computer to computer.
If a process is I/O bound it has short CPU bursts; when the process is CPU bound it has long CPU bursts and very short I/O bursts.
This distribution is important in selecting an appropriate CPU scheduling algorithm.
Q6) What is the CPU Scheduler?
- Whenever the CPU becomes idle, the O.S must select one of the processes in the ready queue to be executed.
- The selection process is carried out by the short term scheduler (CPU scheduler)
- The scheduler selects a process from the processes in memory that are ready to
execute and allocates the CPU to that process.
- CPU scheduling is the process of determining which process in the ready queue is
allocated to the CPU.
- Various scheduling algorithms can be used to make this decision, such as First-Come-
First-Served (FCFS), Shortest Job Next (SJN), Priority and Round Robin (RR).
Q7) What are the types of scheduling?
• Non-Pre-emptive:
- Under non-pre-emptive scheduling, once the CPU has been allocated to a process, the process keeps the CPU until it releases the CPU willingly.
• A process will leave the CPU only
1. When a process completes its execution (termination state)
2. When a process wants to perform some I/O operations (blocked state)
• Pre-emptive:
- Under pre-emptive scheduling, once the CPU has been allocated to a process, the process may leave the CPU willingly or it can be forced out.
• So it will leave the CPU
1. When a process completes its execution
2. When a process leaves the CPU voluntarily to perform some I/O operations
3. If a new, higher-priority process enters the ready state
4. When the process switches from running to ready state because its time quantum expires.
Q8) Difference between pre-emptive and non-preemptive scheduling
1. Pre-emptive: A process can be interrupted and moved to the ready queue. Non-preemptive: Once a process starts, it runs to completion or waits for some event.
2. Pre-emptive: A process switches from the running state to the ready state, and also from the waiting state to the ready state. Non-preemptive: A process switches from the running state to the waiting state, or it may get terminated.
3. Pre-emptive: Multiple processes can run; one process can be pre-empted to run another. Non-preemptive: Once the process is assigned the CPU, it keeps the CPU busy until it terminates.
4. Pre-emptive: Typically more efficient, as it can quickly switch tasks. Non-preemptive: May lead to inefficient CPU utilization.
5. Pre-emptive: It needs specific platform (hardware) support. Non-preemptive: It is platform independent (hardware independent).
6. Pre-emptive: More complex, requiring careful handling of shared resources. Non-preemptive: Simpler to implement.
7. Pre-emptive: e.g. Windows 95 and Mac OS. Non-preemptive: e.g. Windows 3.x versions.
Q) What is a Dispatcher?
- The dispatcher module gives control of the CPU to the process selected by the short-term scheduler; this involves:
switching context
switching to user mode
jumping to the proper location in the user program to restart that program.
• Dispatch latency – time it takes for the dispatcher to stop one process and
start another running.
Q) Define CPU utilization, throughput, turnaround time, waiting time, and response time.
1. CPU utilization –
- It gives information about how much of the time the CPU is busy.
- It ranges from 0 to 100%. In a real system it should range from 40% to 90%. The less time the CPU is idle, the higher its utilization.
2. Throughput:
- If the CPU is busy executing processes, then work is being done.
- One measure of work is the number of processes that are completed per time unit, called throughput.
3. Turnaround time:
- From the point of view of a particular process, the important criterion is how long it takes to execute that process.
- The interval from the time of submission of a process to the time of completion is the turnaround time.
Turnaround time = finish time - arrival time
4. Waiting time – It is the amount of time a process has been waiting in the ready queue.
The more the waiting time, the more the turnaround time.
5. Response time – It is the time interval from submission of the job to the time when the first response is produced.
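A quick numeric check of these formulas (hypothetical times; assuming the process does no I/O, so waiting time = turnaround time - burst time):

```python
arrival, burst, finish = 0, 5, 12      # made-up times for one process
turnaround = finish - arrival          # turnaround time = finish - arrival = 12
waiting = turnaround - burst           # time spent in the ready queue = 7
print(turnaround, waiting)
```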
• Scheduling Algorithms and their types:
➢ CPU scheduling deals with the problem of deciding which of the processes in
the ready queue is to be allocated the CPU.
The different scheduling algorithms are:
1. FCFS (first come first serve)
2. SJF (shortest job first) – preemptive and non preemptive
3. Priority scheduling – preemptive and non preemptive
4. Round-robin scheduling
5. Multiple queue scheduling
6. Multilevel feedback queue scheduling
Q10) Explain the FCFS algorithm with 2 advantages and disadvantages.
- This algorithm selects the first job or process to arrive for the CPU. It finds the process whose arrival time is smallest, and that selected process is submitted to the CPU for execution.
- This algorithm simply returns to the dispatcher the process number of whichever process came first.
- It is a non-preemptive scheduling algorithm, because once it starts executing a process it does not stop the execution in between.
- So by default it acts as a non-preemptive scheduling algorithm.
Advantages of FCFS Algorithm
Simple and easy to use
Easy to understand
Easy to implement
Disadvantages of FCFS Algorithm
Suffers from the convoy effect (smaller processes have to wait a long time for a bigger process to release the CPU)
Normally a higher average waiting time
No consideration of priority or burst time
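A minimal FCFS sketch in Python (process data is made up and fcfs is a hypothetical helper, not a standard routine). Note how P2 and P3 wait behind the long P1 burst, i.e. the convoy effect:

```python
def fcfs(processes):
    # Processes are (pid, arrival, burst); serve strictly in arrival order.
    time, results = 0, []
    for pid, arrival, burst in sorted(processes, key=lambda p: p[1]):
        time = max(time, arrival)        # CPU may sit idle until the job arrives
        waiting = time - arrival         # time spent in the ready queue
        time += burst                    # non-preemptive: run to completion
        results.append((pid, waiting, time - arrival))  # (pid, waiting, turnaround)
    return results

print(fcfs([("P1", 0, 24), ("P2", 1, 3), ("P3", 2, 3)]))
# [('P1', 0, 24), ('P2', 23, 26), ('P3', 25, 28)]
```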
Q11) Explain the SJF algorithm with 2 advantages and disadvantages.
- This algorithm selects the job or process with the shortest burst time for the CPU.
- It finds the process whose burst time is smallest, and that selected process is submitted to the CPU for execution.
There are two types of the SJF algorithm:
1) Non-preemptive SJF:
- When the CPU is executing a process and a higher priority (shorter) process enters the ready queue during execution, it will not stop the current execution; it completes the current execution first and then considers priority (the shortest-duration job) at the time of selecting the new process.
If the smallest burst times of two processes are the same, it selects the process with the smallest arrival time.
If the smallest burst time as well as the arrival time of both processes are the same, the selection depends on their process identification numbers.
2) Pre-emptive SJF:
- When the CPU is executing a process and a higher priority (shorter) process enters the ready queue during execution, it stops the current execution, considers priority, and immediately starts the higher priority job.
Advantages of SJF Algorithm
The pre-emptive version guarantees minimal average waiting time.
Provides a standard for other algorithms in terms of average waiting time.
Because short processes run first, their waiting time decreases; although the waiting time of long jobs increases, the average waiting time ultimately decreases.
Disadvantages of SJF Algorithm
The algorithm cannot be implemented exactly, as there is no way to know the burst time of a process in advance.
Processes with longer CPU burst time requirements may go into starvation.
No idea of priority; processes with large burst times have poor response time.
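A minimal non-preemptive SJF sketch (made-up process data; ties on burst time fall back to arrival time, then pid, as described above):

```python
def sjf(processes):
    pending = list(processes)                    # (pid, arrival, burst) tuples
    time, done = 0, []
    while pending:
        ready = [p for p in pending if p[1] <= time]
        if not ready:                            # CPU idle: jump to next arrival
            time = min(p[1] for p in pending)
            continue
        job = min(ready, key=lambda p: (p[2], p[1], p[0]))  # shortest burst first
        pending.remove(job)
        pid, arrival, burst = job
        time += burst                            # non-preemptive: run to completion
        done.append((pid, time - arrival - burst))          # (pid, waiting time)
    return done

print(sjf([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)]))
# [('P1', 0), ('P3', 3), ('P2', 6), ('P4', 7)]
```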
Q12) What is process creation?
- A process may create several new processes through the create-process system call during the course of execution.
- The creating process is called the parent process and the new processes are called the children of that process.
- Each of these new processes may in turn create other processes, forming a tree of processes.
- Generally, a process is identified and managed via a process identifier (pid).
- A process needs certain resources (CPU time, memory, files, I/O devices) to accomplish its task. When a process creates a sub-process, that sub-process may be able to obtain its resources directly from the O.S.
When a process creates a new process, two possibilities exist in terms of execution:
1. The parent continues to execute concurrently with its children.
2. The parent waits until some or all of its children have terminated.
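Process creation and the parent-waits case can be seen with the classic POSIX calls (assumes a Unix-like OS; Python's os module wraps them):

```python
import os, sys

pid = os.fork()                        # create-process system call
if pid == 0:                           # child: fork() returns 0
    print("child pid:", os.getpid())
    sys.exit(0)                        # child terminates via exit()
else:                                  # parent: fork() returns the child's pid
    _, status = os.waitpid(pid, 0)     # parent waits for the child to terminate
    print("parent: child", pid, "exited, status", status)
```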
Q13) What is process termination?
A process terminates when it finishes executing its final statement and asks the O.S to delete it by using the exit() system call.
At that point, the process may return a status value to its parent process (through the wait() system call).
All the resources of the process, including physical and virtual memory, open files and I/O buffers, are deallocated by the O.S.
Another system call is abort; usually this call can be invoked only by the parent of the process that is to be terminated.
• There are many reasons for a parent process to terminate the execution of a child process, such as:
1. The child has exceeded the limit of usage of the resources it has been allocated.
2. The child process is no longer required.
3. If the parent terminates, the O.S does not allow a child to continue execution.
Q) Explain inter-process communication with its two models.
- Processes within a system may be independent or cooperating.
Independent processes cannot affect or be affected by other processes executing in the system.
- They do not share data with any other process.
Cooperating processes can affect or be affected by other processes.
- They can share data with other processes.
Cooperating processes need an inter-process communication (IPC) mechanism that will allow them to exchange data and information.
Two models of IPC
Shared memory
Message passing
Shared memory:
- In this model a region of memory that is shared by cooperating processes is established.
Processes can exchange information by reading and writing data to the shared region.
It is faster than message passing because in shared-memory systems, system calls are required only to establish the shared memory region.
Normally the O.S tries to prevent one process from accessing another process's memory.
The processes are responsible for ensuring that they are not writing to the same location simultaneously.
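A small shared-memory sketch using Python's multiprocessing (the Value object stands in for the established shared region; the Lock shows the processes coordinating their writes):

```python
from multiprocessing import Process, Value, Lock

def worker(counter, lock):
    for _ in range(1000):
        with lock:                 # avoid writing to the same location at once
            counter.value += 1

if __name__ == "__main__":
    counter = Value("i", 0)        # shared integer region
    lock = Lock()
    ps = [Process(target=worker, args=(counter, lock)) for _ in range(2)]
    for p in ps: p.start()
    for p in ps: p.join()
    print(counter.value)           # 2000: both processes wrote the shared region
```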
Message passing:
- In this model communication takes place by means of messages exchanged between the cooperating processes.
It is useful for exchanging smaller amounts of data.
Easier to implement than shared memory for inter-computer communication.
Slower than shared memory because it is implemented using system calls and thus requires the more time-consuming task of kernel intervention.
This model provides a mechanism to allow processes to communicate and to synchronize their actions.
It provides at least two operations: send(message) and receive(message).
If processes P and Q want to communicate they must send messages to and receive messages from each other.
A communication link must exist between them.
This communication link can be implemented in a variety of ways:
1. Direct or indirect communication
2. Synchronous or Asynchronous communication
3. Automatic or explicit buffering
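A small message-passing sketch using a multiprocessing Pipe as the communication link; the two cooperating processes exchange data only through send()/receive():

```python
from multiprocessing import Process, Pipe

def child(conn):
    msg = conn.recv()              # receive(message)
    conn.send("ack: " + msg)       # send(message)
    conn.close()

if __name__ == "__main__":
    parent_end, child_end = Pipe() # the communication link
    p = Process(target=child, args=(child_end,))
    p.start()
    parent_end.send("hello")       # send(message)
    print(parent_end.recv())       # -> "ack: hello"
    p.join()
```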
Q) What is process synchronization and how does it work?
- Process synchronization is the task of coordinating the execution of processes in such a way that no two processes can have access to the same shared data and resources at the same time.
- It is especially needed in a multi-process system when multiple processes are running together and more than one process tries to gain access to the same shared resource or data at the same time.
- This can lead to inconsistency of shared data.
- The change made by one process is not necessarily reflected when other processes access the same shared data.
- To avoid this type of data inconsistency, the processes need to be synchronized with each other.
Q. Explain Multilevel Queue Scheduling
- In MLQ scheduling, processes are divided into multiple queues based on their
priority, with each queue having a different priority level.
- Higher-priority processes are placed in queues with higher priority levels,
while lower-priority processes are placed in queues with lower priority levels.
- Priorities are assigned to processes based on their type, characteristics, and
importance.
- For example, interactive processes like user input/output may have a higher
priority than batch processes like file backups.
- Preemption is allowed in MLQ scheduling, which means a higher priority
process can preempt a lower priority process, and the CPU is allocated to the
higher priority process.
- This helps ensure that high-priority processes are executed in a timely
manner.
A. Explain Multilevel Feedback Queue
- This scheme allows processes to move between queues.
- The idea is to separate processes according to the characteristics of their CPU bursts.
- If a process uses too much CPU time, it will be moved to a lower priority
queue.
- In addition, a process that waits too long in a lower priority queue may be
moved to a higher priority queue
A process entering the ready queue is put in queue 0.
A process in queue 0 is given a time quantum of 8 milliseconds.
If it does not finish within this time, it is moved to the tail of queue 1.
If queue 0 is empty, the process at the head of queue 1 is given a time quantum of 16 milliseconds.
If it does not complete, it is preempted and put into queue 2.
Processes in queue 2 run on an FCFS basis, but only when queues 0 and 1 are empty; a toy simulation of this setup follows.
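This sketch reproduces exactly the three-queue setup above (quanta of 8 and 16 ms, then FCFS). Burst times are made up, and arrivals and cross-queue preemption are ignored for brevity:

```python
from collections import deque

def mlfq(bursts):                      # bursts: {pid: remaining CPU time in ms}
    queues = [deque(bursts), deque(), deque()]
    quanta = [8, 16, None]             # None = FCFS: run to completion
    order = []
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)  # highest non-empty
        pid = queues[level].popleft()
        q = quanta[level]
        run = bursts[pid] if q is None else min(q, bursts[pid])
        bursts[pid] -= run
        order.append((pid, level, run))
        if bursts[pid] > 0:
            queues[min(level + 1, 2)].append(pid)           # demote if unfinished
    return order

print(mlfq({"P1": 30, "P2": 6}))
# [('P1', 0, 8), ('P2', 0, 6), ('P1', 1, 16), ('P1', 2, 6)]
```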
Q. Describe PCB
- A Process Control Block (PCB) is a data structure used by an operating system (OS) to manage and control the execution of processes.
- It contains all the necessary information about a process, including the
process state, program counter, memory allocation, open files, and CPU
scheduling information
How Process Synchronization Works?
- The process synchronization problem arises in the case of cooperating processes.
For example, process A changes the data in a memory location while another process B is trying to read the data from the same memory location. There is a high probability that the data read by the second process will be erroneous.
Saturday, May 4, 2024 6:19 PM
i. Describe the system model.
- Any system consists of a number of resources which are distributed among the processes.
- E.g. if a system has 5 printers, then the resource type is printer and it has 5 instances.
- If a process P requires an instance of resource type printer, then an instance is allocated to process P (if free) to satisfy the request.
- If no instance of printer is free, meaning all printers are allocated to other processes, then P has to wait until an instance of printer is freed or released by another process.
- So a process must request a resource before using it and must always release the resource after using it.
- A process may request any number of resources to carry out its task.
- A process cannot request more resources than the number of resources present in the system.
E.g. if the system has 3 printers and the process requests 4 printers, then the request is not granted.
A resource is utilized by a process in the following manner:
1. Request: A process may request the resource. If it is available it is immediately granted; otherwise the process will have to wait.
2. Use: After a process is granted a resource, it can use the resource.
3. Release: The process releases the resource after it is used.
Here request and release are system calls.
ii. Define deadlock and the necessary conditions for deadlock to occur /
deadlock characterization:
- A deadlock is a situation where a set of processes are blocked because each process is holding a resource and waiting for another resource acquired by some other process.
- E.g. there are two tape drives in a system, and two processes each holding one of them. Now each process needs one more tape drive, so both processes are waiting for each other to release a tape drive. This is a deadlock situation.
• A deadlock situation can arise if the following four conditions
hold simultaneously in the system.
1. Mutual exclusion:
- At least one resource must be held in a non-sharable mode,
that is only one process at a time can use a resource.
- If another process requests that resource, the requesting
process must be delayed until the resource has been released.
2. Hold and wait:
- A process must be holding at least one resource while waiting to acquire additional resources held by other processes.
3. No preemption:
- Resources cannot be pre-empted i.e. a resource can be
released only voluntarily by the process holding it after that
process has completed its task.
4. Circular wait:
- There must be a circular chain of two or more processes, each of which is waiting for a resource held by the next process in the chain. (The lock-based sketch below illustrates such a cycle.)
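The two-tape-drive scenario can be reproduced with two locks and two threads, each holding one resource and requesting the other. Daemon threads and a join timeout keep the demo from hanging forever:

```python
import threading, time

tape1, tape2 = threading.Lock(), threading.Lock()

def task(first, second, name):
    with first:                        # hold one tape drive...
        time.sleep(0.1)
        print(name, "waiting for the other tape drive")
        with second:                   # ...and wait for the other: circular wait
            pass

t1 = threading.Thread(target=task, args=(tape1, tape2, "P1"), daemon=True)
t2 = threading.Thread(target=task, args=(tape2, tape1, "P2"), daemon=True)
t1.start(); t2.start()
t1.join(timeout=1)
print("deadlocked" if t1.is_alive() else "finished")   # prints "deadlocked"
```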
iv. Describe resource allocation graph with its components.
- Deadlock can be described more precisely in terms of a directed graph called a system resource-allocation graph.
- This graph consists of a set of vertices V and a set of edges E.
- The set of vertices V is partitioned into two different types of nodes:
• P = {P1, P2, ..., Pn}, the set consisting of all the active processes in the system.
• R = {R1, R2, ..., Rm}, the set consisting of all resource types in the system.
- A directed edge from a process to a resource (Pi → Rj) is called a request edge.
- A directed edge from a resource to a process (Rj → Pi) is called an allocation or assignment edge.
• If there is no cycle in the resource-allocation graph then there will be no deadlock in the system.
• The presence of a cycle may or may not lead to a deadlock.
• If each resource type has exactly one instance, then a cycle implies that a deadlock has occurred.
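Since a cycle in a single-instance resource-allocation graph implies deadlock, detection reduces to DFS cycle detection over the directed graph (the adjacency lists below are a made-up example):

```python
def has_cycle(graph):
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {node: WHITE for node in graph}

    def dfs(u):
        color[u] = GRAY                        # u is on the current DFS path
        for v in graph.get(u, []):
            if color.get(v, WHITE) == GRAY:    # back edge -> cycle found
                return True
            if color.get(v, WHITE) == WHITE and dfs(v):
                return True
        color[u] = BLACK
        return False

    return any(color[n] == WHITE and dfs(n) for n in list(graph))

# P1 requests R1, R1 is assigned to P2, P2 requests R2, R2 is assigned to P1.
rag = {"P1": ["R1"], "R1": ["P2"], "P2": ["R2"], "R2": ["P1"]}
print(has_cycle(rag))   # True: single-instance resources, so this is a deadlock
```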
v. Which are the methods of handling deadlock?
The three common methods for dealing with a deadlock situation:
1. We can use a protocol to prevent or avoid deadlock, ensuring that the system will never enter a deadlock state.
2. We can allow the system to enter a deadlock state, detect it, and recover.
3. We can ignore the problem altogether and pretend that deadlocks never occur in the system.
• The deadlock handling techniques are:
1. Prevention
- Ensure that the system will never enter a deadlock state.
2. Avoidance
- Ensure that the system will never enter an unsafe state.
3. Detection
- Allow the system to enter a deadlock state, then detect it.
4. Recovery
5. Do Nothing
- Ignore the problem and let the user or system administrator
respond to the problem; used by most operating systems,
including Windows and UNIX
vi. Explain deadlock prevention techniques.
We can prevent a deadlock by eliminating any of the above four conditions.
1) Mutual Exclusion:
- If some resources in the system are non-sharable by multiple processes, then deadlocks are possible.
- If no resource were ever assigned exclusively to a single process, we would never have deadlocks.
E.g. a printer cannot be simultaneously shared by several processes, but if several processes attempt to open a read-only file at the same time, they can be granted simultaneous access to the file.
2) Eliminate Hold and Wait:
- Allocate all required resources to the process before the start of its execution; this way the hold-and-wait condition is eliminated, but it leads to low device utilization.
- For example, if a process requires a printer only at a later time and we have allocated the printer before the start of its execution, the printer will remain blocked till the process has completed its execution.
- Alternatively, the process makes a new request for resources only after releasing the current set of resources.
- This solution may lead to starvation.
3) Eliminate No Preemption :
- Preempt resources from the process when resources are
required by other high-priority processes.
4) Eliminate Circular Wait:
- Each resource is assigned a numerical number.
- A process can request resources only in an increasing order of numbering.
- For example, if process P1 has been allocated resource R5, then a request by P1 for R4 or R3 (lower than R5) will not be granted; only a request for resources numbered higher than R5 will be granted.
vii. Define deadlock avoidance and explain safe state.
- If detailed information about the processes and resources is available, then it is possible to avoid deadlock.
- E.g. which process will require which resources, possibly in what sequence, etc. This information may help to decide the sequence in which the processes can be executed to avoid deadlock.
- Each request can be analyzed on the basis of the number of resources currently available, currently allocated, and future requests which may come from other processes.
- From this information the system can decide whether or not a process should wait.
- A state is safe if the system can allocate resources to each process, in some order, up to its maximum need and still avoid deadlock.
- The deadlock avoidance algorithm dynamically examines the resource-allocation state from the available information to ensure that there can never be a circular wait.
viii. Describe Banker's Algorithm.
- Banker's Algorithm is a resource allocation and deadlock avoidance algorithm which tests every request made by processes for resources.
- It checks for a safe state: if even after granting a request the system remains in a safe state it allows the request, and if there is no safe state it does not allow the request made by the process.
Inputs to Banker's Algorithm:
1) Maximum needs of resources by each process.
2) Currently allocated resources for each process.
3) Maximum free available resources in the system.
A request will only be granted under the conditions below:
1) If the request made by the process is less than or equal to the maximum need of that process.
2) If the request made by the process is less than or equal to the freely available resources in the system.
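A compact sketch of the Banker's safety check behind those conditions (the matrices are illustrative textbook-style values, not data from this course):

```python
def is_safe(available, max_need, allocation):
    n = len(allocation)
    need = [[m - a for m, a in zip(max_need[i], allocation[i])] for i in range(n)]
    work, finish = list(available), [False] * n
    while True:
        progressed = False
        for i in range(n):
            if not finish[i] and all(nd <= w for nd, w in zip(need[i], work)):
                work = [w + a for w, a in zip(work, allocation[i])]  # Pi releases
                finish[i] = True
                progressed = True
        if not progressed:
            return all(finish)       # safe iff every process could finish

allocation = [[0,1,0], [2,0,0], [3,0,2], [2,1,1], [0,0,2]]
max_need   = [[7,5,3], [3,2,2], [9,0,2], [2,2,2], [4,3,3]]
print(is_safe([3,3,2], max_need, allocation))   # True -> the state is safe
```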
ix. Define deadlock detection.
- If a system does not employ either a deadlock-prevention or a
deadlock avoidance algorithm, then a deadlock situation may
occur
- For deadlock detection, the system must provide an algorithm that examines the state of the system to detect whether a deadlock has occurred, and an algorithm to recover from the deadlock.
- The purpose of a deadlock detection algorithm is to identify and
resolve deadlocks in a computer system
- It does so by identifying the occurrence of a deadlock,
determining the processes and resources involved, taking
corrective action to break the deadlock, and restoring normal
system operations.
x. Discuss recovery methods from deadlock.
- When a detection algorithm determines that a deadlock exists, several alternatives are available.
1) One possibility is to inform the operator that a deadlock has occurred and let the operator deal with the deadlock manually.
2) Another possibility is to let the system recover from the deadlock automatically.
There are two options for breaking a deadlock:
1) One is simply to abort one or more processes to break the circular wait.
2) The other is to pre-empt some resources from one or more of the deadlocked processes.
1. Process Termination:
• Abort all deadlocked processes
- It is a very expensive way of breaking the deadlock cycle.
- The deadlocked processes may have computed for a long time, and the results of these partial computations may be discarded and probably will have to be recomputed later.
• Abort one process at a time until the deadlock cycle is
eliminated
- After each process is aborted, a deadlock detection algorithm
must be invoked to determine whether any processes are still
deadlocked.
- Aborting a process may not be easy.
- If the process was in the midst of updating a file, terminating it
will leave that file in an incorrect state.
- Similarly, if the process was in the midst of printing data on a
printer, the system must reset the printer to a correct state
before printing the next job.
2. Resource Preemption:
- To eliminate deadlocks using resource preemption, we preempt
some resources from processes and give those resources to
other processes. This method will raise three issues –
i. Selecting a victim: We must determine which resources and
which processes are to be preempted and also in order to
minimize the cost.
ii. Rollback: We must determine what should be done with the
process from which resources are preempted. One simple idea
is total rollback. That means aborting the process and restarting
it.
iii. Starvation: In a system, it may happen that the same process is
always picked as a victim.
- As a result, that process will never complete its designated task.
- This situation is called Starvation and must be avoided.
- One solution is that a process can be picked as a victim only a finite number of times.
Unit3
Friday, April 19, 2024 10:05 PM
Q) What is Address Binding? And what are the types of address binding?
- Address binding in an operating system (OS) is the process of mapping symbolic addresses to physical memory addresses.
- Symbolic addresses are used by programs to refer
to different parts of memory or code, but these
addresses are not directly usable by the hardware.
- Address binding translates these symbolic
addresses into actual memory addresses that can
be understood and accessed by the hardware.
• Types of Address Binding
Compile Time Address Binding:
- In this address binding method, memory addresses
are assigned to a program when it is being
compiled.
- Since the memory addresses are assigned during
the compile time, that is, before the program is
executed, the addresses are fixed.
- They cannot be changed when the program is being
executed.
Load Time Address Binding:
- In this process, memory addresses are assigned to
the program during its load time.
- So, memory addresses can be changed during the
execution of the program.
Execution Time Address Binding:
- As the name suggests, in this type of address
binding, addresses are assigned to programs while
the program is running.
- This means that the memory addresses can change
while the program is being executed.
- It is also known as run-time binding.
Q) What are Dynamic Loading and Dynamic Linking?
The entire program and all data of a program must be in physical memory for the process to execute.
The size of a process is thus limited to the size of physical memory.
To obtain better memory-space utilization, we can use dynamic loading.
With dynamic loading a routine is not loaded until it is called.
All routines are kept on disk in a relocatable load format.
The main program is loaded into memory and is executed.
When a routine needs to call another routine, the calling routine first checks to see whether the other routine has been loaded.
When a part of the program is needed but not in memory, the system uses a tool called a relocatable linking loader to bring it into memory from storage.
Then control is passed to the newly loaded routine.
The advantage of dynamic loading is that an unused routine is never loaded.
Dynamic Linking:
In dynamic linking a stub is included in the image for each library-routine reference.
The stub is a small piece of code that indicates how to locate the appropriate memory-resident library routine or how to load the library if the routine is not already present.
If the routine is not in memory, the stub loads the routine into memory and replaces itself with the address of the routine.
Q write down difference between static and dynamic
loading.
Q) Write down the difference between logical address and physical address, with a diagram.
The address generated by the CPU is commonly referred to as a logical address / virtual address.
An address seen by the memory unit, i.e. the address where the actual program and data are stored in memory, is called a physical address.
Logical and physical addresses are the same in compile-time and load-time address-binding schemes.
Logical (virtual) and physical addresses differ in the execution-time address-binding scheme.
The logical address space is the set of all logical addresses generated by a program.
The physical address space is the set of all physical addresses corresponding to these logical addresses.
What is dynamic loading, and how does it improve
memory space utilization?
- Dynamic loading is a technique where a program is
not fully loaded into memory when it starts
execution. Instead, only the necessary portions of
the program are loaded as they are required during
execution.
- This improves memory space utilization by allowing
programs to occupy only the memory space they
need at any given time, rather than reserving space
for the entire program upfront.
Q) Describe the concept of swapping in memory management.
- A process must be in memory to be executed.
- A process can be swapped temporarily out of memory to a backing store and then brought back into memory for continued execution.
- E.g. assume a multiprogramming environment with a round-robin CPU scheduling algorithm. When a time quantum expires, the memory manager will start to swap out the process that just finished and swap another process into the memory space that has been freed.
- Swapping uses a special storage area called a
backing store, usually a fast disk.
- The backing store has to be big enough to hold all
the processes' data.
- The system keeps track of the processes that are ready to run in a ready queue.
- When it is time for a process to run, the dispatcher checks whether that process is in memory.
- If not, the system swaps out the current process and brings in the new one.
Q. What is contiguous memory allocation? What are the 2 approaches used for memory management?
- Contiguous memory allocation is a memory
management technique used by operating systems
to allocate a block of contiguous memory to a
process.
- The allocation of contiguous memory to a process
involves dividing the available memory into fixed-
sized partitions or segments
There are 2 approaches used for memory
management.
1. Fixed-size Partition Scheme
2. Variable-size Partition Scheme
Let us look at both of these:
1. Fixed-size Partition Scheme
- In this type of contiguous memory allocation
technique, each process is allotted a fixed-size
continuous block in the main memory.
- That means there will be continuous blocks of fixed
size into which the complete memory will be
divided, and each time a process comes in, it will be
allotted one of the free blocks.
- Irrespective of the size of the process, each is allotted a block of the same-size memory space. This technique is also called static partitioning.
2. Variable-size Partition Scheme
- In this type of contiguous memory allocation
technique, no fixed blocks or partitions are made in
the memory.
- Instead, each process is allotted a variable-sized
block depending upon its requirements.
- That means, that whenever a new process wants
some space in the memory, if available, this
amount of space is allotted to it.
- Hence, the size of each block depends on the size
and requirements of the process which occupies it.
Q) What is Fragmentation
- When computers assign and release memory for
tasks, the memory gets divided up into portions.
- Sometimes, this division isn't ideal, leading to two
main types of fragmentation:
1. Internal Fragmentation:
- This happens when a task doesn't perfectly fit into a
memory portion.
- Even if there's a portion that's a bit bigger than
what the task needs, the extra space within that
portion is wasted.
2. External Fragmentation:
- As tasks come and go, the free memory gets split
into smaller pieces.
- While the total amount of free memory might be
enough to fulfill a task's request, the pieces might
not be contiguous.
- This can leave gaps that are too small to fit any new
tasks.
- In simpler terms, internal fragmentation is like
having a bit of wasted space within a portion, while
external fragmentation is like having scattered
pieces of memory that might not fit together
perfectly to fulfill a task's needs.
Q) Difference Between External & Internal Fragmentation
Q) Explain compaction.
- Compaction refers to shifting the occupied areas of storage together and combining all the empty spaces into one block.
- Compaction helps to solve the problem of fragmentation, but it requires a lot of CPU time.
- It moves all the occupied areas of storage to one end and leaves one large free space for incoming jobs, instead of numerous small ones.
- After compaction, all the occupied space has been moved up and the free space is at the bottom.
- This makes the space contiguous and removes
external fragmentation.
- Processes with large memory requirements can be
now loaded into the main memory.
Q) Explain Paging and Method of Paging?
Paging is a memory management scheme that permits the physical address space of a process to be non-contiguous. It avoids external fragmentation.
In operating systems, paging is a storage mechanism used to retrieve processes from secondary storage into main memory in the form of pages.
The main idea behind paging is to divide each process in the form of pages. The main memory will also be divided in the form of frames.
• Method of Paging
1. It involves breaking physical memory into fixed-sized blocks called frames and logical memory into blocks of the same size called pages.
2. Every address generated by the CPU is divided into two parts: the page number (p) and the page offset (d).
3. The page number (p) is used as an index into a page table.
4. The page table is used to map a page onto a frame.
5. The page table has an entry for each page.
6. The page table tells us where each page starts in the physical memory. When we combine this
starting point with the page offset, we get the exact
physical memory address we need.
7. This address is then sent to the memory unit for
retrieval or storage.
Q. Explain the address translation scheme and the need for paging.
The address generated by the CPU is divided into:
Page number (p) – used as an index into a page table which contains the base address of each page in physical memory.
Page offset (d) – combined with the base address to define the physical memory address that is sent to the memory unit.
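A sketch of the translation itself, with a tiny page table and hypothetical values (4 KB pages; the page-to-frame mapping is made up):

```python
PAGE_SIZE = 4096                       # 4 KB pages

def translate(logical, page_table):
    p, d = divmod(logical, PAGE_SIZE)  # split into page number and offset
    frame = page_table[p]              # page table maps page -> frame
    return frame * PAGE_SIZE + d       # physical address = frame base + offset

page_table = {0: 5, 1: 2, 2: 9}        # made-up page-to-frame mapping
print(translate(8200, page_table))     # page 2, offset 8 -> 9*4096 + 8 = 36872
```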
• Need of Paging:
Let's consider a process P1 of size 2 MB and the
main memory which is
divided into three partitions. Out of the three
partitions, two partitions are
holes of size 1 MB each.
P1 needs 2 MB space in the main memory to be
loaded. We have two holes of 1 MB each but they
are not contiguous.
Although there is 2 MB of space available in the main memory in the form of those holes, it remains useless until it becomes contiguous. This is a serious problem to address.
We need some kind of mechanism which can store one process at different locations of the memory.
The idea behind paging is to divide the process into pages so that we can store them in the memory at different holes.
Q. What Is Segmentation and Need of Segmentation.
- In Operating Systems, Segmentation is a memory
management technique in which the memory is
divided into the variable size parts.
- Each part is known as a segment which can be
allocated to a process.
- The details about each segment are stored in a
table called a segment table.
- Segment table is stored in one (or many) of the
segments.
- Segment table contains mainly two information
about segment:
Base: It is the base address of the segment
Limit: It is the length of the segment
Need of Segmentation
- Paging is closer to the operating system than to the user.
- It divides all processes into pages regardless of the fact that a process may have related parts or functions which need to be loaded in the same page.
- The operating system doesn't care about the user's view of the process.
- It may divide the same function into different pages, and those pages may or may not be loaded into memory at the same time.
- This decreases the efficiency of the system.
- It is better to have segmentation which divides the
process into the segments.
- Each segment contains the same type of functions
such as the main function can be included in one
segment and the library functions can be included
in the other segment
Q) Difference Between Paging and Segmentation
Q) What Is Virtual Memory?
- Computers can access more memory than what's
physically installed through something called virtual
memory.
- Virtual memory is like a section of the hard disk
pretending to be the computer's RAM
- Virtual memory is crucial for how computers
operate efficiently.
- It's a technique in the operating system that makes
your system faster and more efficient by providing
extra space to store and safeguard your data.
- Virtual memory tricks users into thinking they have
a massive main memory.
- It uses part of the secondary memory (like the hard
drive) as if it were the main memory.
- Users can load larger processes than what the main
memory can handle, thanks to this trick
- Virtual memory is often implemented through
demand paging or segmentation.
Q. Write a note on Demanding Paging
- A demand paging system is quite similar to a paging
system with swapping where processes reside in
secondary memory and pages are loaded only on
demand, not in advance.
- When a context switch occurs, the operating system does not copy any of the old program's pages out to the disk or any of the new program's pages into the main memory. Instead, it just begins executing the new program after loading the first page and fetches that program's pages as they are referenced.
- We could bring the entire process into memory at load time,
- or bring a page into memory only when it is needed.
- It is a combination of paging and swapping.
Less I/O needed, no unnecessary I/O, less memory needed.
Faster response, more users.
- A page is needed => reference to it:
invalid reference => abort
not-in-memory => bring to memory
- It is called the lazy swapper method because it swaps a page in only when it is needed.
Q. What is a Page Fault? What are the steps in handling a page fault?
- A page fault behaves much like an error.
- It mainly occurs when a program tries to access data or code that is in the address space of the program, but that data is not currently located in the RAM of the system.
- If there is a reference to a page, the first reference to that page will trap to the operating system: a page fault.
- So basically, when the page referenced by the CPU is not found in the main memory, the situation is termed a page fault.
- Whenever any page fault occurs, then the required
page has to be fetched from the secondary memory
into the main memory.
- If the required page is not loaded into memory, a page fault trap arises.
- The page fault mainly generates an exception,
which is used to notify the operating system that it
must have to retrieve the "pages" from the virtual
memory in order to continue the execution.
- Once all the data is moved into the physical
memory the program continues its execution
normally.
- The Page fault process takes place in the
background and thus goes unnoticed by the
user.
• Steps in Handling a Page Fault:
1. First of all, we check the internal table (usually kept in the process control block) for this process, to determine whether the reference was a valid or an invalid memory access.
2. If the reference is invalid, we terminate the process. If the reference is valid but we have not yet brought in that page, we now page it in.
3. Then we locate the free-frame list in order to find a free frame.
4. Now a disk operation is scheduled to read the desired page into the newly allocated frame.
5. When the disk read is complete, the internal table kept with the process and the page table are modified to indicate that the page is now in memory.
6. Now we restart the instruction that was interrupted by the trap. The process can now access the page as though it had always been in memory.
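The same steps in a toy demand-paging sketch (dictionaries stand in for the backing store, frames, and page table; no replacement policy, and all values are made up):

```python
disk = {0: "page0-data", 1: "page1-data"}   # backing store
memory, free_frames = {}, [0, 1]            # physical frames
page_table = {0: None, 1: None}             # None = page not in memory

def access(page):
    if page_table.get(page) is None:        # invalid entry -> trap: page fault
        frame = free_frames.pop(0)          # step 3: find a free frame
        memory[frame] = disk[page]          # step 4: "read" the page from disk
        page_table[page] = frame            # step 5: update the page table
    return memory[page_table[page]]         # step 6: restart the access

print(access(1))   # faults, loads page 1, returns "page1-data"
print(access(1))   # second reference hits in memory, no fault
```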
File System
Sunday, April 28, 2024 8:10 PM
1. Define File. Explain Attributes of Files
- A file is a named collection of related information that is recorded
on secondary storage such as magnetic disks, magnetic tapes and
optical disks.
- In general, a file is a sequence of bits, bytes, lines or records whose meaning is defined by the file's creator and user.
• Attributes of a File
Following are some of the attributes of a file:
1. Name. It is the only information which is in human-readable form.
2. Identifier. The file is identified by a unique tag (number) within the file system.
3. Type. It is needed for systems that support different types of files.
4. Location. Pointer to file location on device.
5. Size. The current size of the file.
6. Protection. This controls and assigns the power of reading, writing,
executing.
7. Time, date, and user identification. This is the data for protection,
security, and usage monitoring.
Q. Explain different ways to access file
File access mechanism refers to the manner in which the records of a
file may be accessed. There are several ways to access files −
1. Sequential access
2. Direct/Random access
3. Indexed sequential access
1. Sequential Access
- A sequential access is that in which the records are accessed in some
sequence, i.e., the information in the file is processed in order, one
record after the other.
- This access method is the most primitive one.
- The idea of Sequential access is based on the tape model which is a
sequential access device.
- The sequential access method is best when most of the records in a file are to be processed.
- For example, transaction files.
Example: Compilers usually access files in this fashion.
Advantages of sequential access
- It is simple to program and easy to design.
- A sequential file makes the best use of storage space.
Disadvantages of sequential access
- Sequential access is a time-consuming process.
- It has high data redundancy.
- Random searching is not possible.
2. Direct Access
- Sometimes it is not necessary to process every record in a file.
- It is not necessary to process all the records in the order in which
they are present in the memory.
- In all such cases, direct access is used.
- The disk is a direct access device, which gives us the ability to randomly access any file block.
- A file is viewed as a collection of physical blocks and the records in those blocks.
Example: Databases are often of this type since they allow query
processing that involves immediate access to large amounts of
information. All reservation systems fall into this category
Advantages:
- Direct access files help in online transaction processing systems (OLTP) like an online railway reservation system.
- In a direct access file, sorting of the records is not required.
Disadvantages:
- A direct access file does not provide a backup facility.
- It is expensive.
3. Indexed Sequential Access
- The indexed sequential access method is a modification of the direct access method.
- Basically, it is a kind of combination of both sequential access and direct access.
- The main idea of this method is to access the file directly first and then access it sequentially.
- In this access method, it is necessary to maintain an index.
Advantages:
- In an indexed sequential access file, both sequential and random access are possible.
- It accesses the records very fast if the index table is properly organized.
- Records can be inserted in the middle of the file.
Disadvantages:
- An indexed sequential access file requires unique keys and periodic reorganization.
- An indexed sequential access file takes longer to search the index for data access or retrieval.
Q. Explain three ways to allocate disk space to files in OS.
1. Contiguous Allocation
- In this scheme, each file occupies a contiguous set of blocks on the
disk.
- For example, if a file requires n blocks and is given block b as the starting location, then the blocks assigned to the file will be b, b+1, b+2, ..., b+n-1.
- This means that given the starting block address and the length of the file, we can determine the blocks occupied by the file.
The directory entry for a file with contiguous allocation contains
1. Address of starting block
2. Length of the allocated portion.
- The file ‘mail’ in the following figure starts from block 19 with length = 6 blocks. Therefore, it occupies blocks 19, 20, 21, 22, 23, 24.
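The block arithmetic for that example (start b = 19, length n = 6):

```python
b, n = 19, 6
print(list(range(b, b + n)))   # [19, 20, 21, 22, 23, 24] -- the 'mail' blocks
```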
Advantages:
- Both sequential and direct access are supported by this scheme.
- It is extremely fast, since the number of seeks is minimal because of the contiguous allocation of file blocks.
Disadvantages:
- This method suffers from both internal and external fragmentation.
- Increasing file size is difficult because it depends on the availability
of contiguous memory at a particular instance
2. Linked Allocation
- In this scheme, each file is a linked list of disk blocks which need not
be contiguous.
- The disk blocks can be scattered anywhere on the disk.
- The directory entry contains a pointer to the starting and the ending
file block.
- Each block contains a pointer to the next block occupied by the file.
- The file ‘jeep’ in the following image shows how the blocks can be randomly distributed. The last block (25) contains -1, indicating a null pointer that does not point to any other block.
Advantages:
1. File size does not have to be specified.
2. No external fragmentation.
Disadvantages:
1. It supports sequential access efficiently but is not suited for direct access.
2. Each block contains a pointer, wasting space.
3. Indexed Allocation
- In this scheme, a special block known as the Index block contains the
pointers to all the blocks occupied by a file.
- Each file has its own index block.
- Each entry in the index block tells you where to find a specific file
block on the disk.
- So, if you're looking for the ith file block, just check the ith entry in
the index block to get its disk address.
- The directory entry contains the address of the index block as shown
in the image.
Advantages:
- This supports direct access to the blocks occupied by the file and
therefore provides fast access to the file blocks.
- It overcomes the problem of external fragmentation.
Disadvantages:
- The pointer overhead of indexed allocation is greater than that of linked allocation.
- A faulty index block could result in the loss of the entire file.
Unit 2 Questions
Friday, February 9, 2024 8:34 PM
Q1 What is Virtual Machines? And Benefits Of It
- A virtual machine (VM) is a digital version of a physical computer.
- It is a computer file, or image, that behaves like an actual computer.
- A VM can run in a window as a separate computing environment.
- A VM is an operating system (OS) or application environment that
imitates dedicated hardware.
- It can run programs and operating systems, store data, connect to
networks, and do other computing functions.
- The end user's experience when using a VM is equivalent to that of
using dedicated hardware
• Benefits:
- Each OS runs independently of all the others, offering protection and
security benefits.
- Virtual machines are a very useful tool for OS development, as they
allow a user full access to and control over a virtual machine, without
affecting other users operating the real machine.
- This approach can also be useful for product development and testing
of SW that must run on multiple OS platforms.
Q. What is Multiprogramming?
- An operating system that allows multiple programmes to run simultaneously on a single-processor machine is known as a multiprogramming operating system.
- The other programmes are ready to use the CPU while one programme waits for an input/output transfer.
- Imagine that I/O is a part of the currently running process (which, by definition, does not need the CPU to be accomplished). The OS may then suspend that process and hand the CPU to another ready-to-run in-memory programme (i.e., context switching).
- This keeps the system from idly waiting for the I/O work to finish,
wasting CPU time.
- Therefore, the primary goal of multiprogramming is to keep the CPU
active for as long as there are active processes.
- The CPU won't be idle in a multiprogramming OS, so it'll always be
active
- Desktop operating systems, including Windows, macOS, and various Linux distributions, are contemporary operating systems that make use of a variety of multiprogramming concepts.
Q7 List Various Services of OS:
Program execution
Control Input/output devices
Program creation
Error Detection and Response
Accounting
Security and Protection
File Management
Communication
Q: What are User Mode and Kernel Mode?
- The system is in user mode when the operating system is running a
user application such as handling a text editor.
- The transition from user mode to kernel mode occurs when the
application requests the help of operating system or an interrupt or a
system call occurs. The mode bit is set to 1 in the user mode.
- The User mode is normal mode where the process has limited access.
While the Kernel mode is the privileged mode where the process has
unrestricted access to system resources like hardware, memory, etc.
- A process can access I/O hardware registers to program them, can execute OS kernel code and access kernel data in kernel mode.
- Anything related to Process management, IO hardware management,
and Memory management requires process to execute in Kernel
mode.
- The mode bit is set to 0 in the kernel mode.
- When the mode changes from user mode to kernel mode or vice-versa, it is known as a mode switch (often loosely called context switching).
Q What is System Call in OS and what are its features?
- A system call is a way for programs to interact with the operating
system.
- A computer program makes a system call when it makes a request to
the operating system’s kernel.
- System call provides the services of the operating system to the user
programs via Application Program Interface(API).
- It provides an interface between a process and operating system to
allow user-level processes to request services of the operating system.
- System calls are the only entry points into the kernel system.
- All programs needing resources must use system calls.
• Services Provided by System Calls :
Process creation and management
Main memory management
File Access, Directory and File system management
Device handling(I/O)
Protection
Networking, etc
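A small illustration of a user program reaching the kernel through system calls. Python's os module wraps raw calls such as open/write/close (POSIX-style names assumed; the filename is made up):

```python
import os

fd = os.open("demo.txt", os.O_CREAT | os.O_WRONLY)   # open() system call
os.write(fd, b"written via a system call\n")         # write() system call
os.close(fd)                                         # close() system call
print("pid from getpid():", os.getpid())             # process-management call
```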
Q2 What is a Process?
- A process is a program in execution; process execution must progress in sequential fashion.
- A process is the smallest unit of work that is scheduled by the OS.
- In a multiprogramming environment many processes are present in memory.
- To select a process for execution, some management is required.
- Hence process management is done by the O.S.
• A Process Includes:
1) Program counter: indicates the current activity
2) Stack: contains temporary data
3) Data section: contains global variables
4) Text: program code
Q3 What is Process State, with diagram:
- As a process executes, it changes state
- The state of a process is defined in part by the current activity of that
process
Each process may be in one of the following state
• new: The process is being created
• running: Instructions are being executed
• waiting: The process is waiting for some event to occur Such as I/O
completion or reception of a signal.
• ready: The process is waiting to be assigned to a processor
• terminated: The process has finished execution