Jagesh Soni
Roll no : 1805210023
Operating System Lab File
CSE 2nd year
Submitted to : MUDITA MAM
Q1. Study of hardware and software requirements of different
operating systems (UNIX, LINUX, WINDOWS XP, WINDOWS 7/8)
Ans: UNIX:
UNIX was first developed in 1969 by Ken Thompson and Dennis Ritchie of the research group at Bell Laboratories. It incorporated features of other operating systems, especially MULTICS. The third version was written in C, which was developed at Bell Labs specifically to support UNIX. The most influential of the non-Bell Labs and non-AT&T UNIX development groups was the University of California at Berkeley (Berkeley Software Distributions, BSD). 4BSD UNIX resulted from DARPA funding to develop a standard UNIX system for government use. Developed for the VAX, 4.3BSD is one of the most influential versions and has been ported to many other platforms.
LINUX:
Linux is a modern, free operating system based on UNIX standards. It was first developed as a small but self-contained kernel in 1991 by Linus Torvalds, with the major design goal of UNIX compatibility. Its history has been one of collaboration by many users from all around the world, corresponding almost exclusively over the Internet. It has been designed to run efficiently and reliably on common PC hardware, but also runs on a variety of other platforms.
WINDOWS 7:
Extensibility
Portability
Reliability
Compatibility
Performance
WINDOWS 7 subsystems can communicate with one another via high-performance message passing.
Preemption of low-priority threads enables the system to respond quickly to external events.
Designed for symmetrical multiprocessing.
International support: supports different locales via the National Language Support (NLS) API.
Q.2. Execute various UNIX system calls for:
Ans: i. Process management:
Process Management System Calls in UNIX
Let us now look at the UNIX system calls dealing with process management. Fork is a good place to start the discussion. Fork is the only way to create a new process in UNIX systems. It creates an exact duplicate of the original process, including all the file descriptors, registers and everything else. After the fork, the original process and the copy (the parent and the child) go their separate ways. All the variables have identical values at the time of the fork, but since the entire parent core image is copied to create the child, subsequent changes in one of them do not affect the other one. The fork call returns a value, which is zero in the child and equal to the child's PID in the parent. Using the returned PID, the two processes can see which is the parent and which is the child.
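For illustration, a minimal C sketch of fork() follows (not part of the original lab exercise; the exec'd command and the printed messages are assumptions of this sketch):

/* Minimal fork() sketch: the child sees return value 0,
   the parent sees the child's PID. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();          /* create an exact copy of this process */
    if (pid < 0) {               /* fork failed */
        perror("fork");
        exit(1);
    } else if (pid == 0) {       /* child branch */
        printf("child: pid=%d, parent=%d\n", getpid(), getppid());
        execlp("date", "date", (char *)NULL);  /* optionally replace the core image */
        perror("execlp");        /* reached only if exec fails */
        exit(1);
    } else {                     /* parent branch */
        int status;
        waitpid(pid, &status, 0);   /* wait for the child to terminate */
        printf("parent: child %d finished\n", pid);
    }
    return 0;
}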
ii. File management
A system call is just what its name implies -- a request for the
operating system to do something on behalf of the user's
program. The system calls are functions used in the kernel itself.
To the programmer, the system call appears as a normal C function
call. However, since a system call executes code in the kernel, there
must be a mechanism to change the mode of a process from user
mode to kernel mode. The C compiler uses a predefined library of
functions (the C library) that have the names of the system calls.
The library functions typically invoke an instruction that changes
the process execution mode to kernel mode and causes the kernel
to start executing code for system calls. The instruction that causes
the mode change is often referred to as an "operating system trap"
which is a software generated interrupt. The library routines
execute in user mode, but the system call interface is a special case
of an interrupt handler. The library functions pass the kernel a
unique number per system call in a machine dependent way --
either as a parameter to the operating system trap, in a particular
register, or on the stack -- and the kernel thus determines the
specific system call the user is invoking. In handling the operating
system trap, the kernel looks up the system call number in a table
to find the address of the appropriate kernel routine that is the entry
point for the system call and to find the number of parameters the
system call expects. The kernel calculates the (user) address of the
first parameter to the system call by adding (or subtracting,
depending on the direction of stack growth) an offset to the user
stack pointer, corresponding to the number of the parameters to the
system call.
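As a small illustration of this mechanism on Linux with glibc (an assumed environment, not stated in the original text), the same kernel service can be reached either through the C-library wrapper or through the generic syscall() interface:

/* The C library wrapper and the raw trap reach the same kernel service. */
#include <stdio.h>
#include <sys/syscall.h>   /* SYS_getpid */
#include <unistd.h>        /* getpid(), syscall() */

int main(void) {
    pid_t a = getpid();                      /* via the libc wrapper */
    long  b = syscall(SYS_getpid);           /* via the generic trap interface */
    printf("getpid() = %d, syscall(SYS_getpid) = %ld\n", (int)a, b);
    return 0;
}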
iii. Input/output system calls
A file descriptor is an integer that uniquely identifies an open file of the process. File descriptor table: the file descriptor table is an array indexed by file descriptors, in which each element is a pointer to a file table entry. The operating system provides one file descriptor table for each process. File table entry: a file table entry is an in-memory structure representing an open file. It is created when a process requests to open a file, and it maintains the file position.
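A hedged sketch of the file-management and I/O system calls described above (open, read, write, close); the file name "data.txt" is only a placeholder assumption:

/* open() returns a file descriptor indexing the per-process file
   descriptor table; read()/write() use it, close() releases it. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    int fd = open("data.txt", O_RDONLY);          /* placeholder file name */
    if (fd < 0) {
        perror("open");
        return 1;
    }
    char buf[256];
    ssize_t n;
    while ((n = read(fd, buf, sizeof buf)) > 0)   /* read up to 256 bytes at a time */
        write(STDOUT_FILENO, buf, (size_t)n);     /* copy them to standard output */
    close(fd);                                    /* release the descriptor */
    return 0;
}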
Q.3. Implement CPU scheduling policies:
Ans: CPU scheduling is the process of determining which process will own the CPU for execution while another process is on hold. The main task of CPU scheduling is to make sure that whenever the CPU becomes idle, the OS selects one of the processes available in the ready queue for execution.
Types of CPU scheduling algorithms:
First Come First Serve
FCFS stands for First Come First Serve. It is the simplest CPU scheduling algorithm. In this type of algorithm, the process that requests the CPU first gets the CPU first. This scheduling method can be managed with a FIFO queue.
As a process enters the ready queue, its PCB (Process Control Block) is linked to the tail of the queue. So, when the CPU becomes free, it is assigned to the process at the head of the queue.
Characteristics of FCFS method:
It is a non-preemptive scheduling algorithm.
Jobs are always executed on a first-come, first-served basis.
It is easy to implement and use.
However, this method is poor in performance, and the general wait
time is quite high.
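A minimal FCFS sketch follows (the burst times in the array are made-up sample values, and all arrival times are assumed to be 0): each process waits for the total burst time of the processes ahead of it.

/* FCFS: waiting time of process i = sum of burst times of processes 0..i-1
   (assuming all processes arrive at time 0). */
#include <stdio.h>

int main(void) {
    int burst[] = {24, 3, 3};                 /* sample burst times (assumed) */
    int n = sizeof burst / sizeof burst[0];
    int wait = 0, total_wait = 0, total_tat = 0;

    for (int i = 0; i < n; i++) {
        int turnaround = wait + burst[i];     /* completion time for arrival at 0 */
        printf("P%d: waiting=%d turnaround=%d\n", i + 1, wait, turnaround);
        total_wait += wait;
        total_tat  += turnaround;
        wait += burst[i];                     /* next process waits for this one too */
    }
    printf("average waiting=%.2f average turnaround=%.2f\n",
           (double)total_wait / n, (double)total_tat / n);
    return 0;
}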
Shortest Remaining Time
SRT stands for Shortest Remaining Time. It is also known as preemptive SJF scheduling. In this method, the CPU is allocated to the process whose remaining execution time is the shortest; a newly arriving process with a shorter remaining time preempts the currently running process.
Characteristics of the SRT scheduling method:
This method is mostly applied in batch environments where short jobs need to be given preference.
It is not an ideal method for a shared system where the required CPU time is unknown.
Each process is associated with the length of its next CPU burst; the operating system uses these lengths to schedule the process with the shortest remaining time first. A sketch of a preemptive simulation is given below.
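A compact simulation sketch of SRT (the arrival and burst values are made-up sample data): at every time unit the arrived process with the smallest remaining time is run.

/* SRT/SRTF: at each time tick, run the arrived process with the
   smallest remaining burst time (preemptive SJF). */
#include <stdio.h>

#define N 3

int main(void) {
    int arrival[N] = {0, 1, 2};       /* sample arrival times (assumed) */
    int burst[N]   = {8, 4, 2};       /* sample burst times (assumed)   */
    int remaining[N], finish[N];
    int done = 0, t = 0;

    for (int i = 0; i < N; i++) remaining[i] = burst[i];

    while (done < N) {
        int pick = -1;
        for (int i = 0; i < N; i++)            /* choose shortest remaining time */
            if (arrival[i] <= t && remaining[i] > 0 &&
                (pick == -1 || remaining[i] < remaining[pick]))
                pick = i;
        if (pick == -1) { t++; continue; }     /* CPU idle: nothing has arrived */
        remaining[pick]--;                     /* run the chosen process for 1 tick */
        t++;
        if (remaining[pick] == 0) { finish[pick] = t; done++; }
    }
    for (int i = 0; i < N; i++)
        printf("P%d: turnaround=%d waiting=%d\n", i + 1,
               finish[i] - arrival[i], finish[i] - arrival[i] - burst[i]);
    return 0;
}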
II. Priority
Priority scheduling is one of the most common scheduling
algorithms in batch systems. Each process is assigned a priority.
The process with the highest priority is executed first, and so on. Processes with the same priority are executed on a first-come, first-served basis. Priority can be decided based on memory
requirements, time requirements or any other resource
requirement.
Implementation (see the sketch after this list):
1. First, input the processes with their burst times and priorities.
2. Sort the processes (with their burst times) according to priority.
3. Now simply apply the FCFS algorithm.
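A short C sketch of these three steps (the burst times and priorities are sample values, and a lower number is assumed to mean higher priority):

/* Priority scheduling: sort by priority (lower value = higher priority),
   then apply FCFS to the sorted order. */
#include <stdio.h>

struct proc { int id, burst, priority; };

int main(void) {
    struct proc p[] = { {1, 10, 3}, {2, 1, 1}, {3, 2, 4}, {4, 5, 2} };  /* sample data */
    int n = sizeof p / sizeof p[0];

    /* step 2: simple selection sort on priority */
    for (int i = 0; i < n - 1; i++)
        for (int j = i + 1; j < n; j++)
            if (p[j].priority < p[i].priority) {
                struct proc tmp = p[i]; p[i] = p[j]; p[j] = tmp;
            }

    /* step 3: FCFS over the sorted order */
    int wait = 0;
    for (int i = 0; i < n; i++) {
        printf("P%d (priority %d): waiting=%d\n", p[i].id, p[i].priority, wait);
        wait += p[i].burst;
    }
    return 0;
}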
IV. Multilevel queue:
Multilevel Queue (MLQ) CPU Scheduling
It may happen that processes in the ready queue can be divided into different classes, where each class has its own scheduling needs. For example, a common division is foreground (interactive) processes and background (batch) processes. These two classes have different scheduling needs. For this kind of situation, Multilevel Queue Scheduling is used. Now, let us see how it works. The ready queue is divided into separate queues for each class of processes. For example, let us take three different types of processes: system processes, interactive processes and batch processes. All three classes have their own queues.
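A very small sketch of this fixed-priority arrangement (the queue contents are invented for illustration): the dispatcher drains the system queue first, then the interactive queue, then the batch queue, FCFS within each queue.

/* Multilevel queue with fixed priority between queues:
   system > interactive > batch. Each queue is served FCFS here. */
#include <stdio.h>

int main(void) {
    const char *system_q[]      = {"S1", "S2"};          /* sample jobs (assumed) */
    const char *interactive_q[] = {"I1", "I2", "I3"};
    const char *batch_q[]       = {"B1"};
    const char **queues[]  = {system_q, interactive_q, batch_q};
    const char  *names[]   = {"system", "interactive", "batch"};
    int          counts[]  = {2, 3, 1};

    for (int q = 0; q < 3; q++)                 /* higher-priority queue first */
        for (int i = 0; i < counts[q]; i++)     /* FCFS within each queue */
            printf("dispatch %s process %s\n", names[q], queues[q][i]);
    return 0;
}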
Q.4. Implement file storage allocation techniques
Ans: The allocation methods define how the files are stored in the
disk blocks. There are three main disk space or file allocation
methods.
Contiguous Allocation
Linked Allocation
Indexed Allocation
1. Contiguous Allocation
In this scheme, each file occupies a contiguous set of blocks on the
disk. For example, if a file requires n blocks and is given a block b
as the starting location, then the blocks assigned to the file will be:
b, b+1, b+2, ..., b+n-1. This means that, given the starting block
address and the length of the file (in terms of blocks required), we
can determine the blocks occupied by the file.
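A tiny sketch of this block computation (the starting block b and length n are sample values):

/* Contiguous allocation: a file starting at block b with length n
   occupies blocks b, b+1, ..., b+n-1. */
#include <stdio.h>

int main(void) {
    int b = 14, n = 3;                         /* sample directory entry (assumed) */
    printf("file occupies blocks:");
    for (int i = 0; i < n; i++)
        printf(" %d", b + i);
    printf("\n");
    return 0;
}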
2. Linked List Allocation
In this scheme, each file is a linked list of disk blocks which need
not be contiguous. The disk blocks can be scattered anywhere on
the disk. The directory entry contains a pointer to the starting and
the ending file block. Each block contains a pointer to the next
block occupied by the file.
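A small matching sketch of linked allocation (the block numbers are invented): each block stores the number of the next block, and -1 marks the end of the file.

/* Linked allocation: follow the per-block "next" pointers from the
   starting block recorded in the directory entry. */
#include <stdio.h>

#define DISK_BLOCKS 16

int main(void) {
    int next[DISK_BLOCKS];                 /* next-block pointer stored in each block */
    for (int i = 0; i < DISK_BLOCKS; i++) next[i] = -1;

    /* sample file scattered over blocks 9 -> 3 -> 12 (assumed layout) */
    int start = 9;
    next[9] = 3; next[3] = 12; next[12] = -1;

    printf("file blocks:");
    for (int blk = start; blk != -1; blk = next[blk])
        printf(" %d", blk);
    printf("\n");
    return 0;
}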
3. Indexed Allocation
In this scheme, a special block known as the Index block contains
the pointers to all the blocks occupied by a file. Each file has its
own index block. The ith entry in the index block contains the disk
address of the ith file block. The directory entry contains the
address of the index block.
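And a matching sketch for indexed allocation (block numbers again invented): the index block holds the disk addresses of all of the file's blocks.

/* Indexed allocation: the i-th entry of the index block gives the
   disk address of the i-th file block. */
#include <stdio.h>

int main(void) {
    int index_block[] = {9, 16, 1, 10, 25};   /* sample index block contents (assumed) */
    int n = sizeof index_block / sizeof index_block[0];

    for (int i = 0; i < n; i++)
        printf("file block %d is stored in disk block %d\n", i, index_block[i]);
    return 0;
}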
Q.5. Implementation of contiguous allocation techniques:
i. Worst-fit: In this allocation technique the process traverses the whole memory and always searches for the largest hole/partition, and then the process is placed in that hole/partition. It is a slow technique because it has to traverse the entire memory to search for the largest hole.
Advantage of worst-fit:
Since this technique chooses the largest hole/partition, the leftover part of the partition (the internal fragmentation) is quite big, so other small processes can also be placed in that leftover partition.
ii. Best-fit: This method keeps the free/busy list in order by size, smallest to largest. In this method, the operating system first searches the whole of the memory according to the size of the given job and allocates it to the closest-fitting free partition in the memory, making it able to use memory efficiently. Here the jobs are ordered from the smallest job to the largest job.
Advantages of Best-Fit Allocation :
Memory efficient: the operating system allocates the job the minimum possible space in the memory, making memory management very efficient. It is the best method to save memory from getting wasted.
iii. First-fit: This method keeps the free/busy list of jobs organized by memory location, from low-ordered to high-ordered memory. In this method, the first job claims the first available memory block with space greater than or equal to its size. The operating system doesn't search for the most appropriate partition; it just allocates the job to the nearest memory partition available with sufficient size.
Advantages of First-Fit Memory Allocation:
It is fast in processing. As the processor allocates the nearest
available memory partition to the job, it is very fast in execution.
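A combined sketch of the three placement strategies of this question (the partition sizes and the request size are sample values): first-fit takes the first hole that is big enough, best-fit the smallest hole that fits, and worst-fit the largest hole.

/* First-fit, best-fit and worst-fit over a list of free partitions. */
#include <stdio.h>

#define NH 6

int main(void) {
    int hole[NH] = {50, 400, 130, 300, 150, 70};   /* sample free partitions (assumed) */
    int request = 120;                             /* sample process size (assumed)    */
    int first = -1, best = -1, worst = -1;

    for (int i = 0; i < NH; i++) {
        if (hole[i] < request) continue;                     /* too small */
        if (first == -1) first = i;                          /* first hole that fits */
        if (best == -1 || hole[i] < hole[best]) best = i;    /* smallest hole that fits */
        if (worst == -1 || hole[i] > hole[worst]) worst = i; /* largest hole that fits */
    }
    if (first == -1) {
        printf("no hole is large enough for %d KB\n", request);
        return 0;
    }
    printf("first-fit -> hole %d (%d KB)\n", first, hole[first]);
    printf("best-fit  -> hole %d (%d KB)\n", best,  hole[best]);
    printf("worst-fit -> hole %d (%d KB)\n", worst, hole[worst]);
    return 0;
}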
Q.6. Calculation of external and internal fragmentation
i. Free space list of blocks from system
In a computer storage system, as processes are loaded and
removed from memory, the free memory space is broken into
small pieces. In this way memory space is used inefficiently, so the capacity or performance of the system may degrade. The conditions of fragmentation depend on the system of memory allocation. In most cases, memory space is wasted. Sometimes it happens that memory blocks cannot be allocated to processes due to their small size, and the memory blocks remain unused. This problem is known as fragmentation.
ii. List process file from the system
We want to find the total external and internal fragmentation. External fragmentation occurs when processes are loaded and removed from memory, causing memory to be broken into little pieces, while internal fragmentation is the unused memory internal to a partition. As an example, say we have the following memory holes: 50 KB, 400 KB, 130 KB, 300 KB, 150 KB, and 70 KB (in that order), and the following processes that need memory (in order): A = 230 KB, B = 180 KB, C = 130 KB, D = 120 KB, E = 200 KB. A worked first-fit calculation is sketched below.
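A worked sketch for these numbers, under two common textbook assumptions that are not stated in the original text: allocation is first-fit, and each hole is treated as a fixed partition, so the unused remainder of an allocated hole counts as internal fragmentation, while holes left completely free are the scattered free space (external fragmentation) that cannot hold a remaining process.

/* First-fit over fixed partitions: internal fragmentation is the unused
   space inside allocated holes; holes left free are counted as the
   scattered free space (external fragmentation). */
#include <stdio.h>

#define NH 6
#define NP 5

int main(void) {
    int hole[NH] = {50, 400, 130, 300, 150, 70};      /* free holes, in order */
    int proc[NP] = {230, 180, 130, 120, 200};         /* A..E memory requests */
    int used[NH] = {0};
    int internal = 0, external = 0;

    for (int p = 0; p < NP; p++) {
        int placed = 0;
        for (int h = 0; h < NH; h++) {
            if (!used[h] && hole[h] >= proc[p]) {     /* first hole that fits */
                used[h] = 1;
                internal += hole[h] - proc[p];        /* leftover inside the partition */
                printf("process %c -> hole of %d KB (leftover %d KB)\n",
                       'A' + p, hole[h], hole[h] - proc[p]);
                placed = 1;
                break;
            }
        }
        if (!placed)
            printf("process %c (%d KB) cannot be allocated\n", 'A' + p, proc[p]);
    }
    for (int h = 0; h < NH; h++)
        if (!used[h]) external += hole[h];            /* still-free holes */

    printf("total internal fragmentation = %d KB\n", internal);
    printf("total external fragmentation = %d KB (free but too scattered)\n", external);
    return 0;
}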
Q.7. Implementation of compaction for the continually changing memory layout and calculation of the total movement of data.
Ans: Disk Scheduling Algorithms
FCFS: FCFS is the simplest of all the Disk Scheduling
Algorithms. In FCFS, the requests are addressed in the order they
arrive in the disk queue. Let us understand this with the help of an
example.
Advantages:
Every request gets a fair chance
No indefinite postponement
Disadvantages:
Does not try to optimize seek time
May not provide the best possible service
SSTF: In SSTF (Shortest Seek Time First), requests having
shortest seek time are executed first. So, the seek time of every
request is calculated in advance in the queue and then they are
scheduled according to their calculated seek time. As a result, the
request near the disk arm will get executed first. SSTF is certainly
an improvement over FCFS as it decreases the average response
time and increases the throughput of the system.
Advantages:
Average Response Time decreases
Throughput increases
Disadvantages:
Overhead to calculate seek time in advance
Can cause Starvation for a request if it has higher seek time as
compared to incoming requests
High variance of response time as SSTF favours only some
requests
SCAN: In the SCAN algorithm the disk arm moves in a particular direction and services the requests coming in its path, and after reaching the end of the disk, it reverses its direction and again services the requests arriving in its path. So, this algorithm works like an elevator and hence is also known as the elevator algorithm. As a result, the requests in the mid-range are serviced more, and those arriving behind the disk arm have to wait.
Advantages:
High throughput
Low variance of response time
Average response time
Disadvantages: Long waiting time for requests for locations just
visited by disk arm
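A small sketch that computes the total head movement for the FCFS order (the request queue and initial head position are made-up example values; the same loop can be reused after reordering the queue for SSTF or SCAN):

/* FCFS disk scheduling: service requests in arrival order and
   add up the absolute head movements. */
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int queue[] = {176, 79, 34, 60, 92, 11, 41, 114};  /* sample request queue (assumed) */
    int n = sizeof queue / sizeof queue[0];
    int head = 50;                                     /* sample initial head position */
    int total = 0;

    for (int i = 0; i < n; i++) {
        total += abs(queue[i] - head);   /* seek distance for this request */
        head = queue[i];                 /* head is now at the serviced track */
    }
    printf("total head movement (FCFS) = %d cylinders\n", total);
    return 0;
}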
Q.8. Implementation of resource allocation graph (RAG)
Ans: Just as Banker's algorithm uses tables such as allocation, request and available to understand the state of the system, the same information can also be represented as a graph. That graph is called a Resource Allocation Graph (RAG). A resource allocation graph describes the state of the system in terms of processes and resources: how many resources are available, how many are allocated, and what each process is requesting. Everything can be represented in a diagram. One advantage of having a diagram is that sometimes it is possible to see a deadlock directly by looking at the RAG, whereas you might not be able to notice it by looking at the tables. However, tables are better if the system contains many processes and resources, and a graph is better if the system contains few processes and resources.
We know that any graph contains vertices and edges, so a RAG also contains vertices and edges. In a RAG, vertices are of two types:
1. Process vertex – Every process is represented as a process vertex. Generally, a process is drawn as a circle.
2. Resource vertex – Every resource is represented as a resource vertex. Resource vertices are of two types:
Single-instance resource type – It is drawn as a box with one dot inside; the number of dots indicates how many instances of that resource type are present.
Multi-instance resource type – It is also drawn as a box, but with many dots inside.
Now coming to the edges of the RAG, there are two types of edges: request edges (from a process to a resource) and assignment edges (from a resource to a process).
As an example, suppose the total number of processes is three (P1, P2 and P3) and the total number of resources is two (R1 and R2).
Allocation matrix – To construct the allocation matrix, go to each resource and see to which process it is allocated. R1 is allocated to P1, therefore write 1 in the allocation matrix; similarly, R2 is allocated to P2 as well as P3; for the remaining elements just write 0.
Request matrix – To construct the request matrix, go to each process and see its outgoing edges. P1 is requesting resource R2, so write 1 in the matrix; similarly, P2 is requesting R1; for the remaining elements write 0.
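A small sketch that derives the allocation and request matrices from the edges of this example (the edge lists below encode R1 -> P1, R2 -> P2, R2 -> P3 as assignments and P1 -> R2, P2 -> R1 as requests, which is an assumed reading of the example):

/* Build allocation and request matrices from RAG edge lists
   (3 processes P1..P3, 2 resources R1..R2). */
#include <stdio.h>

#define NPROC 3
#define NRES  2

int main(void) {
    /* assignment edges: resource -> process */
    int assign[][2] = { {0, 0},   /* R1 -> P1 */
                        {1, 1},   /* R2 -> P2 */
                        {1, 2} }; /* R2 -> P3 */
    /* request edges: process -> resource */
    int request_e[][2] = { {0, 1},   /* P1 -> R2 */
                           {1, 0} }; /* P2 -> R1 */

    int alloc[NPROC][NRES] = {0}, req[NPROC][NRES] = {0};

    for (int i = 0; i < 3; i++) alloc[assign[i][1]][assign[i][0]] = 1;
    for (int i = 0; i < 2; i++) req[request_e[i][0]][request_e[i][1]] = 1;

    printf("Allocation matrix      Request matrix\n");
    for (int p = 0; p < NPROC; p++) {
        for (int r = 0; r < NRES; r++) printf("%d ", alloc[p][r]);
        printf("            ");
        for (int r = 0; r < NRES; r++) printf("%d ", req[p][r]);
        printf("\n");
    }
    return 0;
}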
Q.9. Implementation of Banker's algorithm.
Ans: Banker’s Algorithm in Operating System
The banker’s algorithm is a resource allocation and deadlock
avoidance algorithm that tests for safety by simulating the
allocation for predetermined maximum possible amounts of all
resources, then makes a "safe-state" check to test for possible activities, before deciding whether the allocation should be allowed to
continue.
Why is Banker's algorithm named so? Banker's algorithm is named so because it is used in the banking system to check whether a loan can be sanctioned to a person or not. Suppose there are n account holders in a bank and the total sum of their money is S. When a person applies for a loan, the bank first subtracts the loan amount from the total money it has, and the loan is sanctioned only if the remaining amount is still enough to satisfy the claims of all the account holders. It is done so that if all the account holders come to withdraw their money, the bank can easily do it. In other words, the bank would never allocate its money in such a way that it can no longer satisfy the needs of all its customers; the bank always tries to stay in a safe state.
The following data structures are used to implement the Banker's algorithm. Let 'n' be the number of processes in the system and 'm' be the number of resource types.
Available: a 1-d array of size 'm' indicating the number of available resources of each type. Available[j] = k means there are 'k' instances of resource type Rj.
Max: a 2-d array of size 'n*m' that defines the maximum demand of each process. Max[i, j] = k means process Pi may request at most 'k' instances of resource type Rj.
Allocation: a 2-d array of size 'n*m' that defines the number of resources of each type currently allocated to each process. Allocation[i, j] = k means process Pi is currently allocated 'k' instances of resource type Rj.
Need: a 2-d array of size 'n*m' that indicates the remaining resource need of each process. Need[i, j] = k means process Pi currently needs 'k' instances of resource type Rj for its execution.
Need[i, j] = Max[i, j] - Allocation[i, j]
The row Allocation[i] specifies the resources currently allocated to process Pi, and Need[i] specifies the additional resources that process Pi may still request to complete its task. Banker's algorithm consists of a Safety algorithm and a Resource-request algorithm.
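A compact sketch of the Safety algorithm (the matrices are the common 5-process, 3-resource textbook example, used here only as assumed sample data):

/* Banker's safety algorithm: repeatedly find a process whose Need can be
   met by the available Work, pretend it finishes, and reclaim its Allocation. */
#include <stdio.h>

#define N 5   /* processes */
#define M 3   /* resource types */

int main(void) {
    int alloc[N][M] = {{0,1,0},{2,0,0},{3,0,2},{2,1,1},{0,0,2}};  /* sample data */
    int max[N][M]   = {{7,5,3},{3,2,2},{9,0,2},{2,2,2},{4,3,3}};
    int avail[M]    = {3,3,2};

    int need[N][M], finish[N] = {0}, seq[N], count = 0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < M; j++)
            need[i][j] = max[i][j] - alloc[i][j];       /* Need = Max - Allocation */

    while (count < N) {
        int found = 0;
        for (int i = 0; i < N; i++) {
            if (finish[i]) continue;
            int ok = 1;
            for (int j = 0; j < M; j++)
                if (need[i][j] > avail[j]) { ok = 0; break; }
            if (ok) {                                   /* Pi can run to completion */
                for (int j = 0; j < M; j++)
                    avail[j] += alloc[i][j];            /* release its resources */
                finish[i] = 1;
                seq[count++] = i;
                found = 1;
            }
        }
        if (!found) { printf("system is NOT in a safe state\n"); return 0; }
    }
    printf("system is in a safe state, safe sequence:");
    for (int i = 0; i < N; i++) printf(" P%d", seq[i]);
    printf("\n");
    return 0;
}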
Q.10. Conversion of resource allocation graph (RAG) to wait-for graph (WFG) for each type of method used for sorting.
Ans: The structure of a Resource Allocation Graph (RAG), its process and resource vertices, its request and assignment edges, and the corresponding allocation and request matrices, is described in the answer to Q.8 above.
To convert a RAG into a wait-for graph (WFG), the resource vertices are removed and the edges are collapsed: if process Pi is requesting a resource instance that is currently held by process Pj, an edge Pi -> Pj is drawn in the WFG, meaning Pi is waiting for Pj. For single-instance resources, a cycle in the resulting wait-for graph indicates a deadlock. A small conversion sketch is given below.
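This sketch collapses a single-instance RAG into a WFG (the "holder" and "request" arrays encode an assumed example, not data from the lab file): for every request edge Pi -> Rk, an edge Pi -> holder(Rk) is added to the wait-for graph.

/* Collapse a single-instance RAG into a wait-for graph:
   Pi -> Rk (request) and Rk -> Pj (assignment) become Pi -> Pj. */
#include <stdio.h>

#define NPROC 3
#define NRES  3

int main(void) {
    /* holder[r] = process currently holding resource r (assumed example) */
    int holder[NRES]   = {0, 1, 2};    /* R0->P0, R1->P1, R2->P2 */
    /* request[p] = resource process p is waiting for, or -1 (assumed example) */
    int request[NPROC] = {1, 2, 0};    /* P0 wants R1, P1 wants R2, P2 wants R0 */

    int wfg[NPROC][NPROC] = {0};
    for (int p = 0; p < NPROC; p++)
        if (request[p] != -1)
            wfg[p][holder[request[p]]] = 1;   /* p waits for the holder of that resource */

    printf("wait-for edges:\n");
    for (int i = 0; i < NPROC; i++)
        for (int j = 0; j < NPROC; j++)
            if (wfg[i][j]) printf("P%d -> P%d\n", i, j);
    /* a cycle in this graph (here P0 -> P1 -> P2 -> P0) means deadlock */
    return 0;
}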
Q.11. Implement the solution for Bounded Buffer (producer-
consumer) problem using inter process communication
techniques - Semaphores.
Ans: The producer-consumer problem is a classical synchronization problem. We can solve it by using semaphores. A semaphore S is an integer variable that can be accessed only through two standard operations: wait() and signal(). The wait() operation reduces the value of the semaphore by 1 and the signal() operation increases its value by 1.
wait(S) {
    while (S <= 0);   // busy waiting
    S--;
}
signal(S) {
    S++;
}
Semaphores are of two types:
Binary semaphore – This is similar to a mutex lock but not the same thing. It can have only two values, 0 and 1. Its value is initialized to 1. It is used to implement the solution of the critical-section problem with multiple processes.
Counting semaphore – Its value can range over an unrestricted domain. It is used to control access to a resource that has multiple instances.
Problem statement – We have a buffer of fixed size. A producer can produce an item and place it in the buffer. A consumer can pick items and consume them. We need to ensure that when the producer is placing an item in the buffer, the consumer is not consuming an item at the same time. In this problem, the buffer is the critical section. To solve this problem, we need two counting semaphores, Full and Empty. "Full" keeps track of the number of items in the buffer at any given time and "Empty" keeps track of the number of unoccupied slots. A sketch of this solution is given below.
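A runnable sketch of this bounded-buffer solution using POSIX semaphores and pthreads (the buffer size, item count and produced values are assumptions of this sketch); compile with the -pthread flag.

/* Bounded buffer with semaphores: "empty_slots" counts free slots,
   "full_slots" counts filled slots, and a mutex protects the buffer indices. */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define BUF_SIZE 5
#define N_ITEMS  10

int buffer[BUF_SIZE];
int in = 0, out = 0;
sem_t empty_slots, full_slots;
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *producer(void *arg) {
    for (int i = 0; i < N_ITEMS; i++) {
        sem_wait(&empty_slots);              /* wait for a free slot */
        pthread_mutex_lock(&lock);
        buffer[in] = i;                      /* place the item */
        in = (in + 1) % BUF_SIZE;
        printf("produced %d\n", i);
        pthread_mutex_unlock(&lock);
        sem_post(&full_slots);               /* signal a filled slot */
    }
    return NULL;
}

void *consumer(void *arg) {
    for (int i = 0; i < N_ITEMS; i++) {
        sem_wait(&full_slots);               /* wait for an item */
        pthread_mutex_lock(&lock);
        int item = buffer[out];
        out = (out + 1) % BUF_SIZE;
        printf("consumed %d\n", item);
        pthread_mutex_unlock(&lock);
        sem_post(&empty_slots);              /* signal a free slot */
    }
    return NULL;
}

int main(void) {
    pthread_t p, c;
    sem_init(&empty_slots, 0, BUF_SIZE);     /* all slots start empty */
    sem_init(&full_slots, 0, 0);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    sem_destroy(&empty_slots);
    sem_destroy(&full_slots);
    return 0;
}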
Q.12. Implement the solution for the reader-writer problem using inter process communication technique - Semaphores.
Ans: Suppose that a database is to be shared among several
concurrent processes. Some of these processes may want only to
read the database, whereas others may want to update (that is, to
read and write) the database. We distinguish between these two
types of processes by referring to the former as readers and to the
latter as writers. Obviously, if two readers access the shared data
simultaneously, no adverse effects will result. However, if a writer
and some other process (either a reader or a writer) access the
database simultaneously, chaos may ensue. To ensure that these
difficulties do not arise, we require that the writers have exclusive
access to the shared database while writing to the database. This
synchronization problem is referred to as the readers-writers
problem.
Reader:
wait(mutex);
readcount++;
if (readcount == 1) {
    wait(wrt);
}
signal(mutex);
// Perform read operation
wait(mutex);
readcount--;
if (readcount == 0) {
    signal(wrt);
}
signal(mutex);
A writer can only write when readcount is zero, that is, when there are no active readers. If the first reader executes the wait(wrt) operation before the writer does, the writer gets blocked. Only when the last reader exits does it call the signal(wrt) operation, signalling the writer to continue. Similarly, when a writer starts writing (readcount = 0), the first reader gets blocked on wait(wrt), and this blocks all subsequent readers. A complete runnable sketch of both sides is given below.
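For completeness, a runnable sketch of both the reader and writer sides using POSIX semaphores (the thread counts and the shared counter are assumptions of this sketch, not part of the original answer); compile with -pthread.

/* Readers-writers (readers preference): "mutex" protects readcount,
   "wrt" gives writers exclusive access to the shared data. */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t mutex, wrt;
int readcount = 0;
int shared_data = 0;

void *reader(void *arg) {
    long id = (long)arg;
    sem_wait(&mutex);
    readcount++;
    if (readcount == 1) sem_wait(&wrt);   /* first reader locks out writers */
    sem_post(&mutex);

    printf("reader %ld reads %d\n", id, shared_data);   /* read operation */

    sem_wait(&mutex);
    readcount--;
    if (readcount == 0) sem_post(&wrt);   /* last reader lets writers in */
    sem_post(&mutex);
    return NULL;
}

void *writer(void *arg) {
    long id = (long)arg;
    sem_wait(&wrt);                       /* exclusive access */
    shared_data++;                        /* write operation */
    printf("writer %ld writes %d\n", id, shared_data);
    sem_post(&wrt);
    return NULL;
}

int main(void) {
    pthread_t r[3], w[2];
    sem_init(&mutex, 0, 1);
    sem_init(&wrt, 0, 1);
    for (long i = 0; i < 3; i++) pthread_create(&r[i], NULL, reader, (void *)i);
    for (long i = 0; i < 2; i++) pthread_create(&w[i], NULL, writer, (void *)i);
    for (int i = 0; i < 3; i++) pthread_join(r[i], NULL);
    for (int i = 0; i < 2; i++) pthread_join(w[i], NULL);
    sem_destroy(&mutex);
    sem_destroy(&wrt);
    return 0;
}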