
Operating System

Basic Concepts
An Operating System (OS) is software that enables applications to interact with a computer's
hardware. The software that contains the core components of the operating system is called the
kernel. Operating systems can be found in devices ranging from cell phones and automobiles to
personal computers and mainframes. An operating system performs all the basic tasks, such as
file management, memory management, process management, handling input and output, and
controlling peripheral devices such as disk drives and printers.

Some popular operating systems include Linux, Windows, VMS, OS/400, AIX, and z/OS.

An operating system is a program that acts as an interface between the user and the computer
hardware and controls the execution of all kinds of programs.

History
Generations of Operating Systems

1940s-1950s: Early Beginnings


Computers operated without operating systems (OS).
Programs were loaded and run manually, one at a time.
The first operating system, GM-NAA I/O (1956), was a batch processing system that automated
job handling.

1960s: Multiprogramming and Timesharing


Introduction of multiprogramming to utilize the CPU efficiently.
Timesharing systems, like CTSS (1961) and Multics (1969), allowed multiple users to interact with
a single system.
1970s: Unix and Personal Computers
Unix (1971) revolutionized OS design with simplicity, portability, and multitasking.
Personal computers emerged, leading to simpler OSs like CP/M (1974) and PC-DOS (1981).

1980s: GUI and Networking


Graphical User Interfaces (GUIs) gained popularity with systems like Apple Macintosh (1984) and
Microsoft Windows (1985).
Networking features, like TCP/IP in Unix, became essential.

1990s: Linux and Advanced GUIs


Linux (1991) introduced open-source development.
Windows and Mac OS refined GUIs and gained widespread adoption.

2000s-Present: Mobility and Cloud


Mobile OSs like iOS (2007) and Android (2008) dominate.
Cloud-based and virtualization technologies reshape computing, with OSs like Windows Server
and Linux driving innovation.

AI Integration - (Ongoing)
Over time, artificial intelligence entered the picture. Operating systems integrated AI-powered
features such as Siri, Google Assistant, and Alexa, becoming more powerful and efficient in many
ways. These AI features enable entirely new capabilities such as voice commands, predictive text,
and personalized recommendations.

Batch Processing Systems − These systems were popular from the 1940s to the 1950s. The users
of a batch operating system did not interact with the computer directly. Each user prepared a job
on an offline device such as punched cards and submitted it to the computer operator, who then
batched jobs with similar requirements together to speed up processing and ran them as a group.
In such systems, CPU usage was very low and it was difficult to prioritize jobs over one another.

Multiprogramming Systems − These operating systems emerged from the 1950s to the 1960s and
revolutionized the computer arena. A user could now load multiple programs into memory, with
each program allocated its own region. While one program was waiting for an I/O operation, the
CPU was allotted to a second program.

Time-Sharing Systems − These operating systems date from the 1960s to the 1970s. Time-sharing,
or multitasking, is a logical extension of multiprogramming: processor time that is shared among
multiple users simultaneously is termed time-sharing. The operating system used CPU scheduling
and multiprogramming to provide each user with a small portion of time. Computer systems that
were designed primarily as batch systems were modified into time-sharing systems.

GUI-Based Systems − From the 1970s to the 1980s, GUI-based operating systems became
popular. These operating systems were more user friendly: instead of typing commands, a user
could click on graphical icons. Microsoft Windows is one of the earlier popular GUI-based
operating systems and still dominates the personal computer space.

Networked Systems − As time advanced, so did technologies. From the 1980s to the 1990s,
network-based systems gained momentum. A Network Operating System runs on a server and
provides the server the capability to manage data, users, groups, security, applications, and other
networking functions. The primary purpose of the network operating system is to allow shared file
and printer access among multiple computers in a network, typically a local area network (LAN), a
private network, or other networks.

Mobile Operating Systems − From the late 1990s to the early 2000s, Symbian and Java ME-based
operating systems were popular on mobile devices. Over time, with the introduction of
smartphones, the need for more complex operating systems arose, leading to the development of
the Android and iOS mobile operating systems, which continue to become more powerful and
feature rich to this day.

AI-Powered − From the 2010s to the present.

Today, artificial intelligence dominates every aspect of computing, including operating systems.
Siri, Google Assistant, Alexa, and many other AI-based assistants can understand voice
commands and perform the operations a user requests. Middleware consists of software tools that
act as intermediaries between different applications, systems, or services, facilitating their
communication and interaction.

Operating System – Types

Batch Operating System


The users of a batch operating system do not interact with the computer directly. Each user
prepares a job on an offline device such as punched cards and submits it to the computer
operator. To speed up processing, jobs with similar needs are batched together and run as a
group. The programmers leave their programs with the operator, and the operator then sorts the
programs with similar requirements into batches.

Time-sharing Operating Systems


Time-sharing is a technique that enables many people, located at various terminals, to use a
particular computer system at the same time. Time-sharing, or multitasking, is a logical extension
of multiprogramming: processor time that is shared among multiple users simultaneously is
termed time-sharing.

The main difference between multiprogrammed batch systems and time-sharing systems is that in
multiprogrammed batch systems the objective is to maximize processor use, whereas in
time-sharing systems the objective is to minimize response time.

Multiple jobs are executed by the CPU by switching between them, but the switches occur so
frequently that the user receives an immediate response. For example, in transaction processing
the processor executes each user program in a short burst, or quantum, of computation: if n users
are present, each user gets a time quantum in turn. When the user submits a command, the
response time is at most a few seconds.

The operating system uses CPU scheduling and multiprogramming to provide each user with a
small portion of time. Computer systems that were designed primarily as batch systems have
been modified into time-sharing systems.

Distributed Operating System


Distributed systems use multiple central processors to serve multiple real-time applications and
multiple users. Data processing jobs are distributed among the processors accordingly.

The processors communicate with one another through various communication lines (such as
high-speed buses or telephone lines). These are referred to as loosely coupled systems or
distributed systems. Processors in a distributed system may vary in size and function. These
processors are referred to as sites, nodes, computers, and so on.

Network Operating System


A Network Operating System runs on a server and provides the server the capability to manage
data, users, groups, security, applications, and other networking functions. The primary purpose of
the network operating system is to allow shared file and printer access among multiple computers
in a network, typically a local area network (LAN), a private network or to other networks.

Examples of network operating systems include Microsoft Windows Server 2003, Microsoft
Windows Server 2008, UNIX, Linux, Mac OS X, Novell NetWare, and BSD.

Real-Time Operating System


A real-time system is defined as a data processing system in which the time interval required to
process and respond to inputs is so small that it controls the environment. The time taken by the
system to respond to an input and display the required updated information is termed the
response time. In this method, the response time is much smaller than in online processing.

Real-time systems are used when there are rigid time requirements on the operation of a
processor or the flow of data, and real-time systems can be used as a control device in a
dedicated application. A real-time operating system must have well-defined, fixed time constraints,
otherwise the system will fail. Examples include scientific experiments, medical imaging systems,
industrial control systems, weapon systems, robots, air traffic control systems, etc.

There are two types of real-time operating systems.

Hard real-time systems


Hard real-time systems guarantee that critical tasks complete on time. In hard real-time systems,
secondary storage is limited or missing and the data is stored in ROM. In these systems, virtual
memory is almost never found.

Soft real-time systems


Soft real-time systems are less restrictive. A critical real-time task gets priority over other tasks and
retains that priority until it completes. Soft real-time systems have more limited utility than hard
real-time systems. Examples include multimedia, virtual reality, and advanced scientific projects
such as undersea exploration and planetary rovers.

Distributed Computing
Distributed computing refers to a system where processing and data storage are distributed
across multiple devices or systems, rather than being handled by a single central device. In a
distributed system, each device or system has its own processing capabilities and may also
store and manage its own data. These devices or systems work together to perform tasks and
share resources, with no single device serving as the central hub.
One example of a distributed computing system is a cloud computing system, where resources
such as computing power, storage, and networking are delivered over the Internet and accessed
on demand. In this type of system, users can access and use shared resources through a web
browser or other client software.
Parallel Computing

Before diving into parallel computing, let us first look at how computer software was traditionally
executed and why serial computing fell short in the modern era.

Computer software was conventionally written for serial computing. To solve a problem, an
algorithm divides the problem into smaller instructions, and these discrete instructions are
executed on the central processing unit of a computer one by one. Only after one instruction
finishes does the next one start.

Parallel systems are designed to speed up the execution of programs by dividing a program into
multiple fragments and processing these fragments at the same time.

Parallel computing is the use of multiple processing elements simultaneously to solve a problem.
A problem is broken down into instructions that are solved concurrently, with each processing
element applied to the work at the same time.
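
To make this concrete, the following is a minimal sketch in C using POSIX threads (our choice of
API; the text above does not prescribe one). It splits a summation into two fragments that are
processed at the same time:

    #include <pthread.h>
    #include <stdio.h>

    #define N 1000000
    #define NTHREADS 2

    static long data[N];

    struct slice { long start, end, sum; };

    /* Each thread sums its own fragment of the array independently. */
    static void *partial_sum(void *arg) {
        struct slice *s = arg;
        s->sum = 0;
        for (long i = s->start; i < s->end; i++)
            s->sum += data[i];
        return NULL;
    }

    int main(void) {
        pthread_t tid[NTHREADS];
        struct slice slices[NTHREADS];

        for (long i = 0; i < N; i++)
            data[i] = 1;

        /* Divide the problem into fragments and process them at the same time. */
        for (int t = 0; t < NTHREADS; t++) {
            slices[t].start = t * (N / NTHREADS);
            slices[t].end = (t + 1) * (N / NTHREADS);
            pthread_create(&tid[t], NULL, partial_sum, &slices[t]);
        }

        long total = 0;
        for (int t = 0; t < NTHREADS; t++) {
            pthread_join(tid[t], NULL);  /* wait for each fragment to finish */
            total += slices[t].sum;
        }
        printf("total = %ld (expected %d)\n", total, N);
        return 0;
    }

Compiled with cc -pthread, the two partial sums are computed concurrently and combined once
both threads finish.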

Process
A process is an active program, i.e., a program that is under execution. It contains the program
code, program counter, process stack, registers, etc.

Process Management
The process life cycle describes the states a process can be in, starting from the time it is
submitted for execution until the time the system finishes executing it. Each state is explained in
detail below.

Process States
The different states that a process passes through during its execution are as follows −

 New − The process is in the new state when it has just been created.
 Ready − The process is waiting to be assigned the processor by the short-term scheduler.
 Running − The process's instructions are being executed by the processor.
 Waiting − The process is waiting for some event, such as I/O, to occur.
 Terminated − The process has completed its execution.

When a user runs a program, processes are created and inserted into the ready list. A process
moves toward the head of the list as other processes complete their turns using a processor.
When a process reaches the head of the list, and when a processor becomes available, that
process is given a processor and is said to make a state transition from the ready state to the
running state.

The act of assigning a processor to the first process on the ready list is called dispatching and is
performed by a system entity called the dispatcher. Processes that are in the ready or running
states are said to be awake, because they are actively contending for processor time.

The operating system manages state transitions to best serve processes in the system. The
operating system sets a hardware interrupting clock (also called an interval timer) to allow a
process to run for a specific time interval or quantum. If the process does not voluntarily yield the
processor before the time interval expires, the interrupting clock generates an interrupt, causing
the operating system to gain control of the processor.

The operating system then changes the state of the previously running process to ready and
dispatches the first process on the ready list, changing its state from ready to running. If a running
process initiates an input/output operation before its quantum expires, and therefore must wait for
the I/O operation to complete before it can use a processor again, the running process voluntarily
relinquishes the processor. In this case, the process is said to block itself, pending the completion
of the I/O operation. Processes in the blocked state are said to be asleep, because they cannot
execute even if a processor becomes available.

We have defined four possible state transitions. When a process is dispatched, it transitions from
ready to running. When a process's quantum expires, it transitions from running to ready. When a
process blocks, it transitions from running to blocked. Finally, when a process wakes up because
of the completion of some event it is awaiting, it transitions from blocked to ready. Note that the
only state transition initiated by the user process itself is block; the other three transitions are
initiated by the operating system.
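
These transitions can be written down directly. The following C sketch is purely illustrative; the
enum and function names are ours, not from any real kernel:

    #include <stdio.h>

    /* The process states described above. */
    enum proc_state { NEW, READY, RUNNING, BLOCKED, TERMINATED };

    /* The four transitions: only block() is initiated by the process itself;
     * the other three are initiated by the operating system. */
    enum proc_state dispatch(enum proc_state s)     { return s == READY   ? RUNNING : s; }
    enum proc_state timer_runout(enum proc_state s) { return s == RUNNING ? READY   : s; }
    enum proc_state block(enum proc_state s)        { return s == RUNNING ? BLOCKED : s; }
    enum proc_state wakeup(enum proc_state s)       { return s == BLOCKED ? READY   : s; }

    int main(void) {
        enum proc_state s = READY;
        s = dispatch(s);   /* ready -> running */
        s = block(s);      /* running -> blocked (process starts I/O)   */
        s = wakeup(s);     /* blocked -> ready  (awaited I/O completes) */
        printf("final state: %s\n", s == READY ? "ready" : "other");
        return 0;
    }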

Process Operations
Operating systems must be able to perform certain process operations, including:
• create a process
• destroy a process
• suspend a process
• resume a process
• change a process's priority
• block a process
• wake up a process
• dispatch a process
• enable a process to communicate with another process (this is called interprocess
communication).
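
On POSIX systems, several of these operations correspond to well-known system calls. A minimal
sketch of creation and destruction using fork and waitpid:

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();              /* create a process */
        if (pid < 0) {
            perror("fork");
            exit(1);
        }
        if (pid == 0) {
            /* Child process: do some work, then destroy itself. */
            printf("child %d running\n", (int)getpid());
            exit(0);                     /* terminate the process */
        }
        /* Parent process: block until the child terminates. */
        int status;
        waitpid(pid, &status, 0);
        printf("parent reaped child %d\n", (int)pid);
        return 0;
    }

Suspending and resuming a process map roughly onto the SIGSTOP and SIGCONT signals, and
changing its priority onto nice or setpriority.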

Process Control Block


A process control block (PCB) is associated with each process. It contains important details about
that particular process, as follows −

PCB
The Process Control Block is a data structure that contains information about its associated
process. The process control block is also known as a task control block, an entry of the process
table, etc.
It is very important for process management, as the data structures for processes are built in
terms of the PCB. Collectively, the PCBs also describe the current state of the operating system.

Structure of the Process Control Block


A Process Control Block (PCB) is a data structure used by the operating system to
manage information about a process. The PCB keeps track of many data items; the following list
explains some of the key ones.

 Pointer: A stack pointer that must be saved when the process is switched from one state to
another, so that the current position of the process is retained.

 Process state: Stores the current state of the process.

 Process number: Every process is assigned a unique ID, known as the process ID or PID,
which is stored in this field.

 Program counter: Stores the address of the next instruction to be executed for the process.

 Registers: When a process is running and its time slice expires, the current values of the
process-specific registers are stored in the PCB and the process is swapped out. When the
process is next scheduled to run, the register values are read from the PCB and written back
to the CPU registers. This is the main purpose of this field.

 Memory limits: Contains information about the memory management system used by the
operating system. This may include page tables, segment tables, etc.

 List of open files: The list of files opened by the process.
CPU Scheduling Information
The CPU scheduling information contained in the PCB includes the process priority, pointers to
scheduling queues, and any other scheduling parameters.
Memory Management Information
The memory management information includes the page tables or the segment tables depending
on the memory system used. It also contains the value of the base registers, limit registers etc.
I/O Status Information
This information includes the list of I/O devices used by the process, the list of files etc.
Accounting information
The time limits, account numbers, amount of CPU used, process numbers etc. are all a part of the
PCB accounting information.
Location of the Process Control Block
The process control block is kept in a memory area that is protected from normal user access.
This is done because it contains important process information. Some of the operating systems
place the PCB at the beginning of the kernel stack for the process as it is a safe location.
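
To make the layout concrete, here is a simplified PCB written as a C struct. Every field name is
illustrative, not taken from any real kernel; production kernels use far larger structures (Linux's
task_struct, for example):

    /* A simplified, illustrative process control block. */
    enum proc_state { NEW, READY, RUNNING, BLOCKED, TERMINATED };

    struct pcb {
        int             pid;              /* process number (unique ID)      */
        enum proc_state state;            /* current process state           */
        unsigned long   program_counter;  /* address of the next instruction */
        unsigned long   registers[16];    /* saved CPU register values       */
        int             priority;         /* CPU scheduling information      */
        void           *page_table;       /* memory-management information   */
        int             open_files[16];   /* open file descriptors           */
        unsigned long   cpu_time_used;    /* accounting information          */
        struct pcb     *next;             /* link for ready/device queues    */
    };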

Process Scheduling
There are many scheduling queues that are used to handle processes. When the processes enter
the system, they are put into the job queue. The processes that are ready to execute in the main
memory are kept in the ready queue. The processes that are waiting for the I/O device are kept in
the device queue.
The different schedulers that are used for process scheduling are:
Long Term Scheduler
The job scheduler or long term scheduler selects processes from the storage pool and loads them
into memory for execution. The job scheduler must select a careful mixture of I/O bound and CPU
bound processes to yield optimum system throughput. If it selects too many CPU bound
processes then the I/O devices are idle and if it selects too many I/O bound processes then the
processor has nothing to do.
Short Term Scheduler
The short term scheduler selects one of the processes from the ready queue and schedules it for
execution. The short term scheduler executes much more frequently than the long term scheduler,
as a process may execute for only a few milliseconds.
Medium Term Scheduler
The medium term scheduler swaps out a process from main memory. It can again swap in the
process later from the point it stopped executing. This is helpful in reducing the degree of
multiprogramming. Swapping is also useful to improve the mix of I/O bound and CPU bound
processes in the memory.
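
As a toy illustration of the short-term scheduler's job, the following simulation dispatches three
processes round-robin with a fixed quantum. It is a simulation only; no real operating system
schedules this simply:

    #include <stdio.h>

    #define QUANTUM 4

    int main(void) {
        int burst[] = { 7, 3, 9 };  /* remaining CPU burst per process, in ticks */
        int n = 3, done = 0, clock = 0;

        /* Round-robin: give each unfinished process up to one quantum in turn. */
        while (done < n) {
            for (int i = 0; i < n; i++) {
                if (burst[i] == 0)
                    continue;                            /* already terminated */
                int run = burst[i] < QUANTUM ? burst[i] : QUANTUM;
                clock += run;
                burst[i] -= run;
                printf("t=%2d: P%d ran %d tick(s)%s\n", clock, i, run,
                       burst[i] == 0 ? " and terminated" : "");
                if (burst[i] == 0)
                    done++;
            }
        }
        return 0;
    }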

Context Switching
Removing a process from a CPU and scheduling another process requires saving the state of the
old process and loading the state of the new process. This is known as context switching. The
context of a process is stored in the Process Control Block (PCB) and contains the process
register information, process state and memory information.
The operating system performs a context switch to stop executing a running process and begin
executing a previously ready process. To perform a context switch, the kernel must first save the
execution context of the running process to its PCB, then load the ready process's previous
execution context from its PCB.
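
A user-space analogue of this save-then-load sequence can be sketched with the POSIX ucontext
API (obsolescent, but still available on Linux). This is an illustration, not kernel code:

    #include <stdio.h>
    #include <ucontext.h>

    static ucontext_t main_ctx, task_ctx;
    static char task_stack[64 * 1024];

    static void task(void) {
        puts("task: running");
        /* Save this context and restore main's -- a voluntary "context switch". */
        swapcontext(&task_ctx, &main_ctx);
        puts("task: resumed where it left off");
    }

    int main(void) {
        getcontext(&task_ctx);                  /* initialize the context      */
        task_ctx.uc_stack.ss_sp = task_stack;
        task_ctx.uc_stack.ss_size = sizeof task_stack;
        task_ctx.uc_link = &main_ctx;           /* context to resume on return */
        makecontext(&task_ctx, task, 0);

        swapcontext(&main_ctx, &task_ctx);      /* switch to the task          */
        puts("main: running again");
        swapcontext(&main_ctx, &task_ctx);      /* switch back to the task     */
        puts("main: task finished");
        return 0;
    }

Each swapcontext call saves the current register state into one context and loads another, which
mirrors the sequence the kernel performs with PCBs.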

Interrupts
Interrupts enable software to respond to signals from hardware. The operating system may
specify a set of instructions, called an interrupt handler, to be executed in response to each type of
interrupt. This allows the operating system to gain control of the processor to manage system
resources.

A processor may generate an interrupt as a result of executing a process's instructions (in which
case it is often called a trap and is said to be synchronous with the operation of the process). For
example, synchronous interrupts occur when a process attempts to perform an illegal action, such
as referencing a protected memory location.
Interrupts may also be caused by some event that is unrelated to a process's current instruction, in
which case they are said to be asynchronous with process execution. Hardware devices issue
asynchronous interrupts to communicate a status change to the processor.
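
On POSIX systems, synchronous interrupts of this kind surface in user space as signals. In the
following sketch (ours), the process references a protected memory location, the resulting trap is
delivered as SIGSEGV, and a handler reports it:

    #include <signal.h>
    #include <string.h>
    #include <unistd.h>

    /* Invoked synchronously when this process performs an illegal access. */
    static void on_fault(int sig) {
        (void)sig;
        const char msg[] = "trap: illegal memory reference\n";
        write(STDOUT_FILENO, msg, sizeof msg - 1);  /* async-signal-safe */
        _exit(1);                  /* the faulting instruction cannot resume */
    }

    int main(void) {
        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_handler = on_fault;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGSEGV, &sa, NULL);

        volatile int *p = NULL;
        return *p;                 /* illegal reference -> synchronous trap */
    }
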
Interrupt Processing

The interrupt line, an electrical connection between the mainboard and a processor, becomes
active—devices such as timers, peripheral cards and controllers send signals that activate the
interrupt line to inform a processor that an event has occurred.

After the interrupt line becomes active, the processor completes execution of the current
instruction, then pauses the execution of the current process. To pause process execution, the
processor must save enough information so that the process can be resumed at the correct place
and with the correct register information. Such process state is referred to as the task state
segment (TSS). The TSS is typically stored in a process's PCB.

The processor then passes control to the appropriate interrupt handler. Each type of interrupt is
assigned a unique value that the processor uses as an index into the interrupt vector, which is an
array of pointers to interrupt handlers. The interrupt vector is located in memory that processes
cannot access.

The interrupt handler performs appropriate actions based on the type of interrupt. After the
interrupt handler completes, the state of the interrupted process is restored and the process
resumes its execution.

Interrupt Classes

The set of interrupts a computer supports is dependent on the system's architecture. Several types
of interrupts are common to many architectures; in this section we discuss the interrupt structure
supported by the Intel IA-32 specification, which is implemented in Intel® Pentium® processors.

The IA-32 specification distinguishes between two types of signals a processor may receive:
interrupts and exceptions. Interrupts notify the processor that an event has occurred or that an
external device's status has changed. Exceptions indicate that an error has occurred, either in
hardware or as a result of a software instruction. The IA-32 architecture also provides software-
generated interrupts—processes can use these to perform system calls.

Common Interrupt types and description

I/O - These are initiated by the input/output hardware. They notify a processor that the status of a
channel or device has changed. I/O interrupts are caused when an I/O operation completes.

Timer - A system may contain devices that generate interrupts periodically. These interrupts can
be used for tasks such as timekeeping and performance monitoring. Timers also enable the
operating system to determine whether a process's quantum has expired; a user-space sketch of
this idea appears after these descriptions.

Interprocessor interrupts - These interrupts allow one processor to send a message to another in a
multiprocessor system.
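
As mentioned under Timer above, here is a sketch using POSIX setitimer and SIGALRM as a
user-space stand-in for a periodic timer interrupt. Signals are not true hardware interrupts, and the
100 ms period is an arbitrary choice:

    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/time.h>
    #include <unistd.h>

    static volatile sig_atomic_t ticks = 0;

    /* Runs asynchronously each time the timer "interrupt" (SIGALRM) fires. */
    static void on_tick(int sig) {
        (void)sig;
        ticks++;
    }

    int main(void) {
        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_handler = on_tick;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGALRM, &sa, NULL);

        /* Fire every 100 ms -- loosely analogous to a quantum timer. */
        struct itimerval it;
        it.it_interval.tv_sec = 0;  it.it_interval.tv_usec = 100000;
        it.it_value.tv_sec    = 0;  it.it_value.tv_usec    = 100000;
        setitimer(ITIMER_REAL, &it, NULL);

        while (ticks < 10)
            pause();               /* sleep until the next "interrupt" */
        printf("observed %d timer ticks\n", (int)ticks);
        return 0;
    }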

Common Exception types and description

Fault - These are caused by a wide range of problems that may occur as a program's machine-
language instructions are executed. These problems include division by zero, data (being
operated upon) in the wrong format, attempt to execute an invalid operation code, attempt to
reference a memory location beyond the limits of real memory, attempt by a user process to
execute a privileged instruction and attempt to reference a protected resource.

Trap - These are generated by exceptions such as overflow (when the value stored in a register
exceeds the register's capacity) and when program control reaches a breakpoint in code. Traps
are reported after the instruction completes, so the process can usually continue.

Abort - These occur when the processor detects an error from which a process cannot recover.
For example, when an exception-handling routine itself causes an exception, the processor may
not be able to handle both errors sequentially. This is called a double-fault exception, which
terminates the process that initiated it.

Interprocess communication
Interprocess communication is the mechanism provided by the operating system that allows
processes to communicate with each other. This communication could involve a process letting
another process know that some event has occurred or the transferring of data from one process
to another.
Synchronization in Interprocess Communication
Synchronization is a necessary part of interprocess communication. It is either provided by the
interprocess control mechanism or handled by the communicating processes. Some of the
methods to provide synchronization are as follows −

 Semaphore
A semaphore is a variable that controls access to a common resource by multiple
processes. The two types of semaphores are binary semaphores and counting
semaphores. A minimal sketch using a binary semaphore appears after this list.

 Mutual Exclusion
Mutual exclusion requires that only one process thread can enter the critical section at a
time. This is useful for synchronization and also prevents race conditions.

 Barrier
A barrier does not allow individual processes to proceed until all the processes reach it.
Many parallel languages and collective routines impose barriers.

 Spinlock
This is a type of lock. The processes trying to acquire this lock wait in a loop while checking
if the lock is available or not. This is known as busy waiting because the process is not
doing any useful operation even though it is active.
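
As promised under Semaphore above, a minimal binary-semaphore sketch using POSIX unnamed
semaphores and two threads (compile with cc -pthread on Linux). Without the semaphore, the two
increment loops would race:

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    static sem_t sem;       /* binary semaphore guarding the shared counter */
    static int counter = 0;

    static void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            sem_wait(&sem);  /* enter critical section */
            counter++;
            sem_post(&sem);  /* leave critical section */
        }
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        sem_init(&sem, 0, 1);        /* initial value 1 => binary semaphore */
        pthread_create(&a, NULL, worker, NULL);
        pthread_create(&b, NULL, worker, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("counter = %d (expected 200000)\n", counter);
        sem_destroy(&sem);
        return 0;
    }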

Approaches to Interprocess Communication


The different approaches to implement interprocess communication are given as follows −

 Pipe
A pipe is a data channel that is unidirectional. Two pipes can be used to create a two-way
data channel between two processes. This uses standard input and output methods. Pipes
are used in all POSIX systems as well as Windows operating systems. A minimal sketch
using a pipe appears after this list.

 Socket
The socket is the endpoint for sending or receiving data in a network. This is true for data
sent between processes on the same computer or data sent between different computers
on the same network. Most of the operating systems use sockets for interprocess
communication.

 File
A file is a data record that may be stored on a disk or acquired on demand by a file server.
Multiple processes can access a file as required. All operating systems use files for data
storage.

 Signal
Signals are useful in interprocess communication in a limited way. They are system
messages that are sent from one process to another. Normally, signals are not used to
transfer data but are used for remote commands between processes.
 Shared Memory
Shared memory is the memory that can be simultaneously accessed by multiple processes.
This is done so that the processes can communicate with each other. All POSIX systems,
as well as Windows operating systems use shared memory.

 Message Queue
Multiple processes can read and write data to the message queue without being connected
to each other. Messages are stored in the queue until their recipient retrieves them.
Message queues are quite useful for interprocess communication and are used by most
operating systems.
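
As promised under Pipe above, a minimal pipe sketch in C: the parent creates the pipe and forks;
the child writes into one end and the parent reads from the other:

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        int fd[2];                  /* fd[0]: read end, fd[1]: write end */
        char buf[32];

        pipe(fd);
        if (fork() == 0) {
            /* Child: writes a message into the pipe. */
            close(fd[0]);
            const char msg[] = "hello from child";
            write(fd[1], msg, sizeof msg);
            close(fd[1]);
            _exit(0);
        }
        /* Parent: reads the message from the pipe. */
        close(fd[1]);
        if (read(fd[0], buf, sizeof buf) > 0)
            printf("parent received: %s\n", buf);
        close(fd[0]);
        wait(NULL);
        return 0;
    }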
