Patil University,
School of Management
BCA-I Sem I (2022)
Introduction to Operating System (BCAGC 1006)
Notes
Operating System
An operating system is a fully integrated set of specialized programs that handles all the operations of the
computer. It controls and monitors the execution of all other programs that reside in the computer,
including application programs and other system software. Examples of operating systems are
Windows, Linux, Mac OS, etc.
The operating system sits between the application software and the hardware, acting as an intermediary
between the two.
Diagram of the operating system:
The operating system is the first program to be loaded into the computer, and it remains in memory
until the system is shut down.
Objectives of Operating System:
Some of the objectives of the operating system are:
1. Convenient to use: To make the computer system more convenient to use in an efficient manner.
2. User friendly: To make the computer system more interactive, with a more convenient interface
for users.
3. Easy access: To provide users easy access to resources, by acting as an intermediary between
the hardware and its users.
4. Resource management: To manage the resources of the computer.
5. Control and monitoring: To keep track of who is using which resource, grant resource requests,
and mediate conflicting requests from different programs and users.
6. Fair sharing: To provide efficient and fair sharing of resources between users and programs.
Functions of Operating System:
Device Management: The operating system keeps track of all the devices. It is also called the
Input / Output controller because it decides which process gets which device, when, and
for how long.
File Management: It allocates and de-allocates the resources and decides who gets
each resource.
Job Accounting: It keeps the track of time and resources used by various jobs or users.
Error-detecting Aids: It contains methods that include the production of dumps, traces,
error messages, and other debugging and error-detecting methods.
Memory Management: It keeps track of the primary memory, such as which part of it is in
use and by whom, and which part is not in use, and it allocates memory when a
process or program requests it.
Processor Management: It allocates the processor to a process and then de-allocates the
processor when it is no longer required or the job is done.
Control over System Performance: It records the delay between a request for a service
and the response from the system.
Types of Operating System:
1. Batch Operating System –
This type of operating system does not interact with the computer directly. There is an operator
who takes similar jobs having the same requirements and groups them into batches. It is the
responsibility of the operator to sort jobs with similar needs.
Examples of Batch based Operating System: Payroll System, Bank Statements, etc.
2. Time-Sharing Operating System –
Each task is given some time to execute so that all the tasks work smoothly. Each user gets a share
of CPU time, as they all use a single system. These systems are also known as Multitasking
Systems. The tasks can be from a single user or from different users. The time that each task
gets to execute is called a quantum. After this time interval is over, the OS switches to the next
task.
3. Distributed Operating System –
These types of operating systems are a recent advancement in the world of computer
technology and are being widely accepted all over the world, and at a great pace.
Various autonomous, interconnected computers communicate with each other using a shared
communication network. Independent systems possess their own memory unit and CPU, so these
are referred to as loosely coupled systems or distributed systems. The processors in these systems
may differ in size and function. The major benefit of working with these types of operating
systems is that a user can access files or software which are not
actually present on his own system but on some other system connected within this network, i.e.,
remote access is enabled within the devices connected in that network.
4. Network Operating System –
These systems run on a server and provide the capability to manage data, users, groups,
security, applications, and other networking functions. These types of operating systems allow
shared access of files, printers, security, applications, and other networking functions over a
small private network. One more important aspect of Network Operating Systems is that all
the users are well aware of the underlying configuration, of all other users within the network,
their individual connections, etc. and that’s why these computers are popularly known
as tightly coupled systems.
Examples of Network Operating System are: Microsoft Windows Server 2003, Microsoft
Windows Server 2008, UNIX, Linux, Mac OS X, Novell NetWare, and BSD, etc.
5. Real-Time Operating System –
Real-time systems are used when there are very strict time requirements, as in missile
systems, air traffic control systems, robots, etc.
Advantages of RTOS:
Maximum Consumption: Maximum utilization of devices and the system, and thus more output
from all the resources.
Task Shifting: The time taken for shifting between tasks in these systems is very small. For
example, older systems take about 10 microseconds to shift from one task to another,
while the latest systems take about 3 microseconds.
Focus on Application: The focus is on running applications, with less importance given to
applications waiting in the queue.
Real-time operating system in the embedded system: Since the size of the programs is
small, an RTOS can also be used in embedded systems, such as in transport and others.
Error Free: These types of systems are error-free.
Memory Allocation: Memory allocation is best managed in these types of systems.
Disadvantages of RTOS:
Limited Tasks: Very few tasks run at the same time, and concentration is kept on very few
applications to avoid errors.
Use of heavy system resources: Sometimes the system resources are not so good, and they
are expensive as well.
Complex Algorithms: The algorithms are very complex and difficult for the designer to
write.
Device drivers and interrupt signals: It needs specific device drivers and interrupt
signals so that it can respond to interrupts as early as possible.
Thread Priority: It is not good to set thread priority, as these systems are very less prone
to switching tasks.
6. Mobile Operating System –
A mobile operating system runs on smartphones, tablets, and other mobile devices. Some mobile OS
put a heavy drain on a device’s battery, requiring frequent recharging.
Examples of mobile operating systems include Android OS, Apple iOS, and Windows mobile OS.
Components of Operating System
Although Windows, Mac, UNIX, Linux, and other operating systems do not have the same structure, most
operating systems share similar components, such as file, memory, process, and I/O device management.
The components of an operating system play a key role in making the various parts of a computer system work
together. The components of an operating system are:
1. Process Management
2. File Management
3. Network Management
4. Main Memory Management
5. Secondary Storage Management
6. I/O Device Management
7. Security Management
8. Command Interpreter System
Operating system components also help ensure correct computing by detecting CPU and memory
hardware errors.
1 Process Management
The process management component is a procedure for managing the many processes running
simultaneously on the operating system. Every running software application program has one or more
processes associated with it.
For example, when you use a browser like Chrome, there is a process running for that browser
program.
Process management keeps processes running efficiently. It also manages the memory allocated to them
and shuts them down when needed.
The execution of a process must be sequential, so at least one instruction should be executed on behalf
of the process.
Functions of process management
The main functions of process management in the operating system include creating and deleting
processes, suspending and resuming processes, scheduling processes, and providing mechanisms for
process synchronization and communication.
NOTE: OS facilitates an exchange of information between processes executing on the same or different
systems.
2 File Management
A file is a set of related information defined by its creator. It commonly represents programs (both source
and object forms) and data. Data files can be alphabetic, numeric, or alphanumeric.
Function of file management
The operating system has the following important activities in connection with file management:
creating and deleting files, creating and deleting directories, supporting primitives for manipulating
files and directories, mapping files onto secondary storage, and backing up files on stable storage media.
3 Network Management
Network management is the process of administering and managing computer networks. It includes
performance management, provisioning of networks, fault analysis, and maintaining the quality of
service.
A distributed system is a collection of computers or processors that never share their memory and clock.
In this type of system, all the processors have their local memory, and the processors communicate with
each other using different communication cables, such as fibre optics or telephone lines.
The computers in the network are connected through a communication network, which can be configured
in many different ways. The network can be fully or partially connected in network management, which
helps users design routing and connection strategies that overcome connection and security issues.
o Distributed systems let you use various computing resources that differ in size and function. They may
involve minicomputers, microprocessors, and many general-purpose computer systems.
o A distributed system also offers the user access to the various resources the network shares.
o It helps to access shared resources that help computation to speed up or offers data availability
and reliability.
4 Main Memory Management
To execute a program, it must be mapped to absolute addresses and loaded into main memory. The
selection of a memory management method depends on several factors.
However, it is mainly based on the hardware design of the system. Each algorithm requires
corresponding hardware support. Main memory offers fast storage that can be accessed directly by the
CPU. It is costly and hence has a lower storage capacity. However, for a program to be executed, it must
be in the main memory.
Functions of Memory management
An operating system performs the following functions for memory management: it keeps track of
primary memory (which parts are in use and by whom), decides which process gets memory and how
much when memory becomes available, and allocates and de-allocates memory as processes request
and release it.
5 Secondary-Storage Management
The most important task of a computer system is to execute programs. These programs help you to
access the data from the main memory during execution. This memory of the computer is very small to
store all data and programs permanently. The computer system offers secondary storage to back up the
main memory.
Today, modern computers use hard drives/SSDs as the primary storage for both programs and data.
However, secondary storage management also works with storage devices such as USB flash drives
and CD/DVD drives. Programs like assemblers and compilers are stored on the disk until they are loaded
into memory, and the disk is then used as a source and destination for their processing.
Here are some major functions of secondary storage management in the operating system:
o Storage allocation
o Free space management
o Disk scheduling
6 I/O Device Management
The I/O management system offers the following functions:
o It provides drivers for particular hardware devices.
o I/O helps you to know the individualities of a specific device.
NOTE: The user's program can't execute I/O operations directly. The operating system should provide some
medium to perform this.
7 Security Management
The various processes in an operating system need to be secured from one another's activities. Therefore,
various mechanisms ensure that processes that want to operate on files, memory, the CPU, and other
hardware resources have proper authorization from the operating system.
Security refers to a mechanism for controlling the access of programs, processes, or users to the
resources defined by computer controls to be imposed, together with some means of enforcement.
For example, memory-addressing hardware helps confirm that a process executes only within its
own address space. The timer ensures that no process retains control of the CPU without eventually
relinquishing it. Lastly, no process is allowed to do its own I/O directly, which helps protect the
integrity of the various peripheral devices.
Security can improve reliability by detecting latent errors at the interfaces between component
subsystems. Early detection of interface errors can prevent the contamination of a healthy subsystem by a
malfunctioning subsystem. An unprotected resource cannot defend against use (or misuse) by an
unauthorized or incompetent user.
8 Command Interpreter System
Many commands are given to the operating system by control statements. A program that reads and
interprets control statements is automatically executed when a new job is started in a batch system or a
user logs in to a time-shared system. This program is variously called the control-card interpreter, the
command-line interpreter, or the shell.
Its function is quite simple: get the next command statement and execute it. The command statements
deal with process management, I/O handling, secondary storage management, main memory
management, file system access, protection, and networking.
What are System Calls?
A system call is the programmatic way in which a program requests a service from the kernel of the
operating system. The main categories of system calls, with examples from the Windows and Unix
operating system perspectives, are as follows:
Category                    Windows                             Unix
Process Control             CreateProcess()                     fork()
                            ExitProcess()                       exit()
                            WaitForSingleObject()               wait()
File Manipulation           CreateFile()                        open()
                            ReadFile()                          read()
                            WriteFile()                         write()
                            CloseHandle()                       close()
Device Manipulation         SetConsoleMode()                    ioctl()
                            ReadConsole()                       read()
                            WriteConsole()                      write()
Information Maintenance     GetCurrentProcessID()               getpid()
                            SetTimer()                          alarm()
                            Sleep()                             sleep()
Communication               CreatePipe()                        pipe()
                            CreateFileMapping()                 shmget()
                            MapViewOfFile()                     mmap()
Protection                  SetFileSecurity()                   chmod()
                            InitializeSecurityDescriptor()      umask()
                            SetSecurityDescriptorGroup()        chown()
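As a minimal sketch of the Unix side of this table (not part of the original notes, and assuming a POSIX
system where the "ls" command is available), the following C program uses the process-control calls
fork(), wait() and exit(), the information-maintenance call getpid(), and execlp() to run another program
in the child process:

    /* Parent creates a child with fork(); the child runs "ls -l"; the parent waits. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        pid_t pid = fork();                 /* process-control system call          */

        if (pid < 0) {                      /* fork failed                          */
            perror("fork");
            exit(1);
        } else if (pid == 0) {              /* child process                        */
            printf("child pid  = %d\n", getpid());    /* information maintenance   */
            execlp("ls", "ls", "-l", (char *)NULL);   /* replace child's image      */
            perror("execlp");               /* reached only if exec fails           */
            exit(1);
        } else {                            /* parent process                       */
            wait(NULL);                     /* wait for the child to terminate      */
            printf("parent pid = %d, child %d finished\n", getpid(), pid);
        }
        return 0;
    }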
Process
A process is basically a program in execution. The execution of a process must progress in a sequential
fashion.
A process is defined as an entity which represents the basic unit of work to be implemented in the system.
To put it in simple terms, we write our computer programs in a text file and when we execute this program, it
becomes a process which performs all the tasks mentioned in the program.
When a program is loaded into the memory and it becomes a process, it can be divided into four sections ─
stack, heap, text and data. The following image shows a simplified layout of a process inside main memory −
S.N.  Component & Description
1     Stack - The process stack contains temporary data such as method/function parameters,
      return address, and local variables.
2     Heap - This is dynamically allocated memory to a process during its run time.
3     Text - This includes the current activity represented by the value of the Program Counter and
      the contents of the processor's registers.
4     Data - This section contains the global and static variables.
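As an illustration (not part of the original notes; exact placement is compiler- and OS-dependent), the
small, hypothetical C program below is annotated with the section each object typically lives in:

    #include <stdio.h>
    #include <stdlib.h>

    int counter = 0;                     /* data section: global variable            */
    static const char *name = "demo";    /* data section: file-scope static variable */

    int square(int x)                    /* machine code of square() and main()      */
    {                                    /* resides in the text section              */
        int result = x * x;              /* stack: local variable                    */
        return result;
    }

    int main(void)
    {
        int local = 5;                           /* stack: local variable            */
        int *buf = malloc(10 * sizeof *buf);     /* heap: dynamic allocation         */

        counter++;
        printf("%s: square(%d) = %d\n", name, local, square(local));

        free(buf);                               /* release heap memory              */
        return 0;
    }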
Program
A program is a piece of code which may be a single line or millions of lines. A computer program is
usually written by a computer programmer in a programming language.
A computer program is a collection of instructions that performs a specific task when executed by a
computer. When we compare a program with a process, we can conclude that a process is a dynamic
instance of a computer program.
A part of a computer program that performs a well-defined task is known as an algorithm. A
collection of computer programs, libraries and related data is referred to as software.
Process Life Cycle
When a process executes, it passes through different states. In general, a process can be in one of the
following five states at a time.

S.N.  State & Description
1     Start - This is the initial state when a process is first started/created.
2     Ready - The process is waiting to be assigned to a processor. Ready processes are waiting to have
      the processor allocated to them by the operating system so that they can run. A process may
      come into this state after the Start state, or while running, if it is interrupted by the scheduler
      to assign the CPU to some other process.
3     Running - Once the process has been assigned to a processor by the OS scheduler, the process state
      is set to running and the processor executes its instructions.
4     Waiting - The process moves into the waiting state if it needs to wait for a resource, such as waiting
      for user input, or waiting for a file to become available.
5     Terminated or Exit - Once the process finishes its execution, or it is terminated by the operating
      system, it is moved to the terminated state, where it waits to be removed from main memory.
Process Control Block (PCB)
A Process Control Block is a data structure maintained by the operating system for every process. The PCB
keeps all the information needed to keep track of a process, as listed below:

S.N.  Information & Description
1     Process State - The current state of the process, i.e., whether it is ready, running, waiting, or
      whatever.
2     Process privileges - This is required to allow/disallow access to system resources.
3     Process ID - Unique identification for each process in the operating system.
4     Pointer - A pointer to the parent process.
5     Program Counter - The Program Counter is a pointer to the address of the next instruction to be
      executed for this process.
6     CPU registers - The various CPU registers where the process's data needs to be stored while it is
      in the running state.
7     CPU Scheduling Information - Process priority and other scheduling information which is required
      to schedule the process.
8     Memory management information - This includes information such as the page table, memory
      limits, and segment table, depending on the memory scheme used by the operating system.
9     Accounting information - This includes the amount of CPU used for process execution, time
      limits, execution ID, etc.
10    IO status information - This includes a list of I/O devices allocated to the process.
The architecture of a PCB is completely dependent on Operating System and may contain different
information in different operating systems. Here is a simplified diagram of a PCB −
The PCB is maintained for a process throughout its lifetime, and is deleted once the process terminates.
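In code form, a PCB can be pictured as a structure holding these fields. The C struct below is a simplified,
hypothetical sketch only (field names are illustrative; a real operating system's PCB is far larger):

    #include <stdint.h>

    enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

    struct pcb {
        int              pid;             /* process ID                        */
        enum proc_state  state;           /* process state                     */
        int              priority;        /* CPU-scheduling information        */
        struct pcb      *parent;          /* pointer to parent process         */
        uint64_t         program_counter; /* address of next instruction       */
        uint64_t         registers[16];   /* saved CPU registers               */
        uint64_t         page_table_base; /* memory-management information     */
        uint64_t         cpu_time_used;   /* accounting information            */
        int              open_files[16];  /* I/O status information            */
    };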
CPU Scheduling
Whenever the CPU becomes idle, the operating system must select one of the processes in the ready queue to
be executed. The selection is done by the short-term (CPU) scheduler, which chooses among the processes in
memory that are ready to execute and allocates the CPU to one of them.
Preemptive Scheduling is a CPU scheduling technique that works by dividing CPU time into slots that are given
to processes. The time slot given might or might not be enough to complete the whole process. When the burst
time of the process is greater than the CPU time it was given, it is placed back into the ready queue and will
execute in its next turn. This scheduling is used when a process switches to the ready state.
Algorithms based on preemptive scheduling are Round Robin (RR), priority, and SRTF (shortest remaining
time first).
Non-preemptive Scheduling is a CPU scheduling technique in which the process takes the resource (CPU time)
and holds it till the process terminates or is pushed to the waiting state. No process is interrupted until it is
completed, and only after that does the processor switch to another process.
Algorithms based on non-preemptive scheduling are non-preemptive priority and Shortest Job First.
Preemptive vs Non-Preemptive Scheduling / Difference between Preemptive and Non-Preemptive Scheduling

Preemptive Scheduling:
1. Resources are allocated to a process for a limited time (in cycles).
2. The process can be interrupted, even before completion.
3. Starvation may be caused due to the insertion of a higher-priority process into the queue.

Non-Preemptive Scheduling:
1. Resources are used and then held by the process until it gets terminated.
2. The process is not interrupted until its life cycle is complete.
3. Starvation can occur when a process with a large burst time occupies the system.
Round Robin Scheduling
Round Robin (RR) gives each process the CPU for a fixed time quantum, in circular order. (A small worked
example follows the lists below.)

Advantages
1. It is actually implementable in the system because it does not depend on the burst time.
2. It doesn't suffer from the problem of starvation or the convoy effect.
3. All the jobs get an equal opportunity to execute.

Disadvantages
1. The higher the time quantum, the higher the response time in the system.
2. The lower the time quantum, the higher the context-switching overhead in the system.
3. Deciding a perfect time quantum is a really difficult task in the system.
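The sketch below (not part of the original notes) simulates Round Robin for three hypothetical processes with
burst times 5, 3 and 8 and a time quantum of 2. Tracing it by hand gives completion times of 12, 9 and 16,
which the program also prints:

    #include <stdio.h>

    int main(void)
    {
        int burst[]      = {5, 3, 8};       /* remaining CPU time per process  */
        int completion[] = {0, 0, 0};
        const int n = 3, quantum = 2;
        int time = 0, remaining = n;

        while (remaining > 0) {
            for (int i = 0; i < n; i++) {
                if (burst[i] == 0)
                    continue;                       /* already finished        */
                int slice = burst[i] < quantum ? burst[i] : quantum;
                time     += slice;                  /* CPU runs process i      */
                burst[i] -= slice;
                if (burst[i] == 0) {                /* process i completes     */
                    completion[i] = time;
                    remaining--;
                }
            }
        }
        for (int i = 0; i < n; i++)
            printf("P%d finishes at t = %d\n", i + 1, completion[i]);
        return 0;
    }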
Unit-2 Process Management
Producer-Consumer problem
The Producer-Consumer problem is a classical multi-process synchronization problem, that is we are trying
to achieve synchronization between more than one process.
In the producer-consumer problem there is one Producer that produces items and one Consumer that
consumes the items produced by the Producer. Both the producer and the consumer share the same
fixed-size memory buffer.
The task of the Producer is to produce the item, put it into the memory buffer, and again start producing
items. Whereas the task of the Consumer is to consume the item from the memory buffer.
Below are the constraints that define the Producer-Consumer problem (a semaphore-based sketch follows the list):
o The producer should produce data only when the buffer is not full. In case it is found that the buffer
is full, the producer is not allowed to store any data into the memory buffer.
o Data can only be consumed by the consumer if and only if the memory buffer is not empty. In case
it is found that the buffer is empty, the consumer is not allowed to use any data from the memory
buffer.
o The producer and the consumer should not access the memory buffer at the same time.
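A minimal sketch of the classic bounded-buffer solution (not part of the original notes), assuming POSIX
semaphores and a buffer of 5 slots: `empty` counts free slots, `full` counts filled slots, and `mutex` keeps the
producer and consumer out of the buffer at the same time.

    #include <stdio.h>
    #include <pthread.h>
    #include <semaphore.h>

    #define N 5                       /* fixed buffer size                    */
    int buffer[N], in = 0, out = 0;

    sem_t empty, full, mutex;

    void *producer(void *arg)
    {
        for (int item = 1; item <= 10; item++) {
            sem_wait(&empty);         /* block while the buffer is full       */
            sem_wait(&mutex);         /* enter critical section               */
            buffer[in] = item;
            in = (in + 1) % N;
            sem_post(&mutex);
            sem_post(&full);          /* one more filled slot                 */
        }
        return NULL;
    }

    void *consumer(void *arg)
    {
        for (int i = 0; i < 10; i++) {
            sem_wait(&full);          /* block while the buffer is empty      */
            sem_wait(&mutex);
            int item = buffer[out];
            out = (out + 1) % N;
            sem_post(&mutex);
            sem_post(&empty);         /* one more free slot                   */
            printf("consumed %d\n", item);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t p, c;
        sem_init(&empty, 0, N);       /* all slots initially free             */
        sem_init(&full, 0, 0);
        sem_init(&mutex, 0, 1);
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }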
A race condition is a situation in which several processes access and manipulate shared data concurrently,
and the outcome of the execution depends on the particular order in which the accesses take place.
The Critical Section is the part of a program that accesses shared resources. Such a resource may be any
resource in a computer, like a memory location, a data structure, the CPU, or any I/O device.
The critical section cannot be executed by more than one process at the same time; the operating system
faces difficulty in allowing and disallowing processes from entering the critical section.
The critical section problem is used to design a set of protocols which can ensure that the Race condition
among the processes will never arise.
In order to synchronize cooperating processes, our main task is to solve the critical section problem. We
need to provide a solution in such a way that the following conditions are satisfied (a lock-based sketch
follows the list).
Primary
1. Mutual Exclusion
Our solution must provide mutual exclusion. By mutual exclusion, we mean that if one process is
executing inside the critical section, then no other process may enter the critical section.
2. Progress
Progress means that if one process doesn't need to execute in the critical section, then it should not
stop other processes from getting into the critical section.
Secondary
1. Bounded Waiting
We should be able to predict the waiting time for every process to get into the critical section. A
process must not be kept waiting endlessly to get into the critical section.
2. Architectural Neutrality
Our mechanism must be architecturally neutral. It means that if our solution works fine on one
architecture, then it should also run on other architectures.
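One common, practical way to satisfy mutual exclusion (a sketch, not the only solution) is a lock. In the C
example below, a pthread mutex guards the shared counter, so only one thread is ever inside the critical
section at a time and the final count is always correct:

    #include <stdio.h>
    #include <pthread.h>

    long shared_counter = 0;                  /* shared resource              */
    pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    void *worker(void *arg)
    {
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&lock);        /* entry section                */
            shared_counter++;                 /* critical section             */
            pthread_mutex_unlock(&lock);      /* exit section                 */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld\n", shared_counter);   /* always 200000         */
        return 0;
    }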
Semaphores
Semaphores are integer variables that are used to solve the critical section problem by using two atomic
operations, wait and signal that are used for process synchronization.
The definitions of wait and signal are as follows −
Wait
The wait operation decrements the value of its argument S if it is positive. If S is zero or negative,
the process keeps waiting (no decrement is performed) until S becomes positive.
wait(S)
{
    while (S <= 0)
        ;        // busy wait
    S--;
}
Signal
The signal operation increments the value of its argument S.
signal(S)
{
S++;
}
Types of Semaphores
There are two main types of semaphores i.e. counting semaphores and binary semaphores. Details about
these are given as follows −
Counting Semaphores
These are integer value semaphores and have an unrestricted value domain. These semaphores are
used to coordinate the resource access, where the semaphore count is the number of available
resources. If resources are added, the semaphore count is automatically incremented, and if
resources are removed, the count is decremented.
Binary Semaphores
The binary semaphores are like counting semaphores but their value is restricted to 0 and 1. The wait
operation only works when the semaphore is 1 and the signal operation succeeds when semaphore is
0. It is sometimes easier to implement binary semaphores than counting semaphores.
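The sketch below (not part of the original notes, using POSIX sem_t for both kinds) contrasts the two:
`pool` is a counting semaphore initialised to 3, so up to three threads may hold one of three identical
resources at once, while `gate` is initialised to 1 and behaves as a binary semaphore:

    #include <stdio.h>
    #include <pthread.h>
    #include <semaphore.h>

    sem_t pool;   /* counting: up to 3 threads may hold a resource at once    */
    sem_t gate;   /* binary: at most 1 thread may print at a time             */

    void *task(void *arg)
    {
        long id = (long)arg;

        sem_wait(&pool);                  /* acquire one of the 3 resources   */
        sem_wait(&gate);                  /* binary semaphore used as a mutex */
        printf("thread %ld is using a resource\n", id);
        sem_post(&gate);
        sem_post(&pool);                  /* return the resource to the pool  */
        return NULL;
    }

    int main(void)
    {
        pthread_t t[5];
        sem_init(&pool, 0, 3);            /* counting semaphore, value 3      */
        sem_init(&gate, 0, 1);            /* binary semaphore, value 1        */
        for (long i = 0; i < 5; i++)
            pthread_create(&t[i], NULL, task, (void *)i);
        for (int i = 0; i < 5; i++)
            pthread_join(t[i], NULL);
        return 0;
    }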
Advantages of Semaphores
Some of the advantages of semaphores are as follows −
Semaphores allow only one process into the critical section. They follow the mutual exclusion
principle strictly and are much more efficient than some other methods of synchronization.
There is no resource wastage because of busy waiting in semaphores as processor time is not wasted
unnecessarily to check if a condition is fulfilled to allow a process to access the critical section.
Semaphores are implemented in the machine independent code of the microkernel. So they are
machine independent.
Disadvantages of Semaphores
Some of the disadvantages of semaphores are as follows −
Semaphores are complicated so the wait and signal operations must be implemented in the correct
order to prevent deadlocks.
Semaphores are impractical for large scale use as their use leads to loss of modularity. This happens
because the wait and signal operations prevent the creation of a structured layout for the system.
Semaphores may lead to a priority inversion where low priority processes may access the critical
section first and high priority processes later.
Dining Philosophers Problem:
The Dining Philosophers Problem states that K philosophers are seated around a circular table with one
chopstick between each pair of philosophers. A philosopher may eat if he can pick up the two
chopsticks adjacent to him. A chopstick may be picked up by either of its adjacent philosophers, but not
by both. This problem involves the allocation of limited resources to a group of processes in a
deadlock-free and starvation-free manner.
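A minimal sketch (not part of the original notes) uses one mutex per chopstick, with K assumed to be 5
for illustration. To stay deadlock-free, every philosopher picks up the lower-numbered chopstick first,
which is one simple way of breaking the circular wait:

    #include <stdio.h>
    #include <pthread.h>

    #define K 5
    pthread_mutex_t chopstick[K];

    void *philosopher(void *arg)
    {
        long i = (long)arg;
        int first  = i < (i + 1) % K ? i : (i + 1) % K;  /* lower-numbered    */
        int second = i < (i + 1) % K ? (i + 1) % K : i;  /* higher-numbered   */

        pthread_mutex_lock(&chopstick[first]);   /* pick up first chopstick   */
        pthread_mutex_lock(&chopstick[second]);  /* pick up second chopstick  */
        printf("philosopher %ld is eating\n", i);
        pthread_mutex_unlock(&chopstick[second]);
        pthread_mutex_unlock(&chopstick[first]);
        return NULL;
    }

    int main(void)
    {
        pthread_t t[K];
        for (int i = 0; i < K; i++)
            pthread_mutex_init(&chopstick[i], NULL);
        for (long i = 0; i < K; i++)
            pthread_create(&t[i], NULL, philosopher, (void *)i);
        for (int i = 0; i < K; i++)
            pthread_join(t[i], NULL);
        return 0;
    }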
Readers and Writers Problem:
Suppose that a database is to be shared among several concurrent processes. Some of these processes
may want only to read the database, whereas others may want to update (that is, to read and write) the
database. We distinguish between these two types of processes by referring to the former as readers
and to the latter as writers. Precisely in OS we call this situation as the readers-writers problem.
Problem parameters:
Once a writer is ready, it performs its write. Only one writer may write at a time.
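A minimal sketch of the classic "first readers-writers" solution (not part of the original notes, using POSIX
semaphores): `rw_mutex` gives a writer exclusive access, `mutex` protects the shared read_count, and the
first reader locks writers out while the last reader lets them back in:

    #include <stdio.h>
    #include <pthread.h>
    #include <semaphore.h>

    sem_t rw_mutex;          /* exclusive access to the database              */
    sem_t mutex;             /* protects read_count                           */
    int   read_count = 0;    /* number of readers currently reading           */

    void *writer(void *arg)
    {
        sem_wait(&rw_mutex);              /* only one writer at a time        */
        printf("writer is updating the database\n");
        sem_post(&rw_mutex);
        return NULL;
    }

    void *reader(void *arg)
    {
        sem_wait(&mutex);
        if (++read_count == 1)            /* first reader locks out writers   */
            sem_wait(&rw_mutex);
        sem_post(&mutex);

        printf("reader is reading the database\n");

        sem_wait(&mutex);
        if (--read_count == 0)            /* last reader lets writers in      */
            sem_post(&rw_mutex);
        sem_post(&mutex);
        return NULL;
    }

    int main(void)
    {
        pthread_t r1, r2, w;
        sem_init(&rw_mutex, 0, 1);
        sem_init(&mutex, 0, 1);
        pthread_create(&r1, NULL, reader, NULL);
        pthread_create(&w,  NULL, writer, NULL);
        pthread_create(&r2, NULL, reader, NULL);
        pthread_join(r1, NULL);
        pthread_join(w,  NULL);
        pthread_join(r2, NULL);
        return 0;
    }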
Sleeping Barber Problem:
A barber shop has one barber, one barber chair, and N chairs to wait in. When there are no customers, the
barber goes to sleep in the barber chair and must be woken when a customer comes in. When the barber is
cutting hair, new customers take the empty seats to wait, or leave if there is no vacancy.
Deadlock
A process in an operating system uses resources in the following way:
1) Requests a resource
2) Use the resource
3) Releases the resource
Deadlock is a situation where a set of processes are blocked because each process is holding a resource
and waiting for another resource acquired by some other process.
Consider an example when two trains are coming toward each other on the same track and there is only
one track, none of the trains can move once they are in front of each other. A similar situation occurs in
operating systems when there are two or more processes that hold some resources and wait for resources
held by other(s). For example, in the below diagram, Process 1 is holding Resource 1 and waiting for
resource 2 which is acquired by process 2, and process 2 is waiting for resource 1.
Deadlock can arise if the following four conditions hold simultaneously (necessary conditions); a small
two-lock sketch follows the list:
Mutual Exclusion: Two or more resources are non-shareable (Only one process can use at a
time)
Hold and Wait: A process is holding at least one resource and waiting for additional resources.
No Preemption: A resource cannot be taken from a process unless the process releases the
resource.
Circular Wait: A set of processes are waiting for each other in circular form.
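The C sketch below (not part of the original notes) shows how the four conditions line up in practice: two
threads each hold one lock (hold and wait, with mutual exclusion and no preemption on mutexes) and then
request the other lock in the opposite order, creating a circular wait that deadlocks the program:

    #include <stdio.h>
    #include <unistd.h>
    #include <pthread.h>

    pthread_mutex_t res1 = PTHREAD_MUTEX_INITIALIZER;   /* Resource 1         */
    pthread_mutex_t res2 = PTHREAD_MUTEX_INITIALIZER;   /* Resource 2         */

    void *process1(void *arg)
    {
        pthread_mutex_lock(&res1);        /* holds Resource 1                 */
        sleep(1);                         /* give process2 time to grab res2  */
        pthread_mutex_lock(&res2);        /* waits for Resource 2 -> deadlock */
        pthread_mutex_unlock(&res2);
        pthread_mutex_unlock(&res1);
        return NULL;
    }

    void *process2(void *arg)
    {
        pthread_mutex_lock(&res2);        /* holds Resource 2                 */
        sleep(1);
        pthread_mutex_lock(&res1);        /* waits for Resource 1 -> deadlock */
        pthread_mutex_unlock(&res1);
        pthread_mutex_unlock(&res2);
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, process1, NULL);
        pthread_create(&t2, NULL, process2, NULL);
        pthread_join(t1, NULL);           /* the program hangs here           */
        pthread_join(t2, NULL);
        return 0;
    }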
Methods for handling deadlock
There are three ways to handle deadlock
1) Deadlock prevention or avoidance: The idea is to not let the system into a deadlock state.
The deadlock-avoidance algorithm helps you to dynamically assess the resource-allocation state so that
there can never be a circular-wait situation.
2) Deadlock detection and recovery: Let deadlock occur, then detect it and use preemption to handle it
once it has occurred.
3) Ignore the problem altogether: If deadlock is very rare, then let it happen and reboot the system.
This is the approach that both Windows and UNIX take.