
LECTURE NOTES

of
Subject: Operating System
Prepared by
Faculty Name: Er. Gurvir Singh

Department of Computer Science & Engineering


Ludhiana College of Engineering & Technology
Katani Kalan
Ludhiana-141006
Introduction to Operating System
DEFINITION

An operating system acts as an interface between the user and the computer hardware. It sits between the
user and the hardware of the computer, providing an operational environment to users and
application programs. For a user, therefore, a computer is nothing but the operating system running on
it. It is an extended machine.
An operating system (or shortly OS) primarily provides services for running applications on a computer
system.

The user does not interact with the hardware of a computer directly but through the services offered by the OS.
This is because the language that users employ is different from that of the hardware: whereas users
prefer to use natural language or near-natural language for interaction, the hardware uses machine
language. The OS takes instructions in the form of commands from the user, translates them into
machine-understandable instructions, gets these instructions executed by the CPU and translates the
result back into a user-understandable form.

OS is a resource allocator
Manages all resources
Decides between conflicting requests for efficient and fair resource use
OS is a control program
Controls execution of programs to prevent errors and improper use of the computer

Need for an OS:

The primary need for the OS arises from the fact that the user needs to be provided with services, and the OS
ought to facilitate the provisioning of these services. The central part of a computer system is a
processing engine called the CPU. A system should make it possible for a user's application to use the
processing unit. A user application would also need to store information; the OS makes memory available
to an application when required. Similarly, user applications need an input facility to communicate with the
application. This is often in the form of a keyboard, a mouse or even a joystick (if the application is a game, for instance).

User and System View:


From the user's point of view the primary consideration is always convenience. It should be easy to
use an application. In launching an application, it helps to have an icon which gives a clue as to which
application it is. We have seen such helpful clues for launching a browser, e-mail or even a document
preparation application. In other words, the human-computer interface, which helps to identify an
application and launch it, is very useful. It hides a lot of the more elementary instructions
involved in selecting the application. Similarly, if we examine the programs that help us in using input
devices like a keyboard, all the complex details of the character-reading program are hidden from the
user. The same is true when we write a program. For instance, when we use a programming language
like C, a printf call helps to generate the desired form of output. The following figure essentially
depicts the basic schema of the use of the OS from a user's standpoint. However, when it comes to the
viewpoint of the system, the OS needs to ensure that all the system users and applications get to use the
facilities that they need.
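As an illustration of the printf point above, here is a minimal C sketch: the program asks for formatted output and the OS hides all device details. The write() call shown for comparison assumes a Unix-like system; other systems expose a different but analogous service.

#include <stdio.h>
#include <string.h>
#include <unistd.h>   /* write() - assumes a Unix-like system */

int main(void)
{
    /* High-level: the C library formats the text and asks the OS
       to send it to the standard output device. */
    printf("Hello from user space\n");

    /* Lower-level: roughly what printf ends up doing on a Unix-like
       system - a write() system call on file descriptor 1 (stdout).
       All device-specific details stay inside the OS. */
    const char *msg = "Hello again, via a direct system call\n";
    write(1, msg, strlen(msg));

    return 0;
}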

Also, the OS needs to ensure that system resources are utilized efficiently. For instance, there may be many
service requests on a Web server. Each user request needs to be serviced. Similarly, there may be many
programs residing in the main memory. The system needs to determine which programs are active and
which need to await some form of input or output. Those that need to wait can be suspended
temporarily from engaging the processor. This strategy alone enhances the processor throughput. In
other words, it is important for an operating system to have a control policy and algorithm to allocate
the system resources.

The Evolution of OS:

It would be worthwhile to trace some developments that have happened in the last four to five decades. In
the 1960s, the common form of computing facility was a mainframe computer system. The mainframe
computer system would be normally housed in a computer center with a controlled environment which was
usually an air conditioned area with a clean room like facility. The users used to bring in a deck of punched
cards which encoded the list of program instructions.

The mode of operation was as follows:


User would prepare a program as a deck of punched cards.
The header cards in the deck were the “job control” cards which would indicate which compiler was
to be used (like Fortran / Cobol compilers).
The deck of cards would be handed in to an operator who would collect such jobs from various users.
The operators would invariably group the submitted jobs as Fortran jobs, Cobol jobs etc. In addition,
these were classified as “long jobs” that required considerable processing time or short jobs which
required a short and limited computational time.
Each set of jobs was considered as a batch and the processing would be done for a batch. For instance,
there may be a batch of short Fortran jobs. The output for each job would be separated and turned over to
users in a collection area. This scenario clearly shows that there was no interactivity. Users had no direct
control. Also, at any one time only one program would engage the processor. This meant that if there was
any input or output in between processing, then the processor would wait idling till such time that the I/O was
completed. This meant that the processor would be idling most of the time, as processor speeds were orders of
magnitude higher than those of the input, output or even memory units. Clearly, this led to poor utilization of the
processor. The systems that utilized the CPU and memory better, and with multiple users connected to the
systems, evolved over a period of time as shown in Table 1.1.

At this point we would like to invoke the von Neumann principle of stored-program operation: for a program
to be executed, it ought to be stored in the memory. In the scheme of things discussed in the previous
paragraph, we notice that at any time only one program was kept in the memory and executed. In the decade
of the 70s this basic mode of operation was altered and system designers contemplated having more than one
program resident in the memory. This clearly meant that when one program was awaiting completion of an
input or output, another program could, in fact, engage the CPU.

Late 60’s and early 70’s


Storing multiple executables (at the same time) in the main memory is called multiprogramming. With
multiple executables residing in the main memory, the immediate consideration is that we now need a policy
to allocate memory and processor time to the resident programs. It is obvious that by utilizing the processor
for another process while one process is engaged in input or output, the processor utilization, and therefore the
throughput, is higher. Overall, multiprogramming leads to higher throughput for this reason.

Fig: Multiprogramming

While multiprogramming did lead to enhanced throughput of a system, the systems still essentially operated
in batch processing mode.

1980’s
In the late 70s and the early part of the decade of the 80s, system designers offered some interactivity, with each
user having the capability to access the system. This is the period when timeshared systems came on the
scene.
Basically, the idea is to give every user the illusion that all the system resources are available to him as his
program executes. To strengthen this illusion, a clever scheme was devised by which each user was allocated
a slice of time to engage the processor. During the allocated time slice a user's program would be executed.
Now, if the next turn for the same program comes quickly enough, the user has the illusion
that the system is continuously available to his task. This is precisely what time-sharing systems attempted:
giving each user a small time slice and returning quickly enough that he never feels a lack of
continuity. In fact, he carries the impression that the system is entirely available to him alone.

Timeshared systems clearly require several design considerations.

These include the following:


How many programs may reside in the main memory to allow, and also sustain, timesharing?
What should be the time slice allocated to process each program?
How would one protect a user's program and data from being overwritten by another user's program?

Basically, the design trends that were clearly evident during the decade of 1970-80 were: achieve as much
overlapping as is feasible between I/O and processing. Bulk storage on disks clearly witnessed
phenomenal growth. This also helped to implement the concept of offering an illusion of extended storage.
The concept of "virtual storage" came into vogue. Virtual storage essentially utilizes these disks to
offer an enhanced addressable space. The fact that only the part of a program that is currently active need be
in the main memory also meant that multiprogramming could support many more programs.

In fact this could be further enhanced as follows:


1. Only required active parts of the programs could be swapped in from disks.
2. Suspended programs could be swapped out.

This means that a large number of users can access the system. This was to satisfy the notion that the
"computing" facility be brought to the user, as opposed to the notion that the "user goes to compute". The fact
that a facility is brought to a user gives the notion of a utility or a service in its true sense. In fact, the PC
truly reflects the notion of "computing utility": it is now regarded as a personal productivity tool.

Fig: Swapping of program parts between main memory and disk, and vice versa

It was in the early 1970s that Bell Laboratories scientists came up with the now well-known OS: Unix. Also, as
microcomputers came on the scene in the 1980s, a forerunner to the current DOS was a system called CP/M. The
decade of the 1980s saw many advances along with the promise of networked systems. One notable project
amongst these was Project Athena at MIT in the USA. The project forms the basis of several modern
developments. The client-server paradigm was indeed a major fallout. The users could have a common
server serving the so-called X-terminals.

The X Window System also provided many widgets to support convenient human-computer interfaces. Using X
windows it is possible to create multiple windows. In fact, each of the windows offers a virtual terminal. In
other words, it is possible to think of each of these windows as a front-end terminal connection. So it is
possible to launch different applications from each of the windows. This is what you experience on a modern-day
PC, which also supports such an operating environment.

CP/M based computer


Networking topologies like star, ring and general graphs, as shown in the figure, were being experimented
with, and protocols for communication amongst computers evolved. On the micro-computer front the
development was aimed at relieving the processor from handling input-output responsibilities. The I/O
processing was primarily handled by two mechanisms: one was the BIOS and the other was the graphics card
driving the display. The processor was now relieved from regulating the I/O. This made it possible to utilize
the processor more effectively for other processing tasks. With the advent of the 1990s, computer
communication was pretty much the order of the day; in particular, the TCP/IP suite of network protocols
was implemented.

The growth in the networking area also resulted in giving users the capability to establish communication
between computers. It was now possible to connect to a remote computer using the telnet protocol. It was
also possible to get a file stored at a remote location using the file transfer protocol (FTP). All such services
are broadly called network services.

Layered Operating System Structure:


Let’s now briefly explore where the OS appears in the context of the software and application.

COMPUTER SYSTEM START UP (BOOTING)


Power On Self Test (POST) is done.
The bootstrap program is loaded at power-up or reboot.
It is typically stored in ROM or EPROM, generally known as firmware.
It initializes all aspects of the system.
It loads the operating system kernel and starts its execution.

COMPUTER SYSTEM ORGANIZATION

One or more CPUs and device controllers connect through a common bus providing access to shared memory.
Concurrent execution of CPUs and devices competing for memory cycles.
I/O devices and the CPU can execute concurrently.
Each device controller is in charge of a particular device type.
Each device controller has a local buffer.
The CPU moves data from/to main memory to/from the local buffers.
I/O is from the device to the local buffer of the controller.
The device controller informs the CPU that it has finished its operation by causing an interrupt.

Fig Computer System Organization

COMPUTER SYSTEM STRUCTURE


A computer system can be divided into four components:
Hardware: provides basic computing resources (CPU, memory, I/O devices).
Operating system: controls and coordinates use of hardware among various applications and users.
Application programs: define the ways in which system resources are used to solve the users' computing problems.
Users: people, machines, and other computers.

Four Components of a Computer System

Fig Extended machine view of operating system

All operating systems contain the same components, whose functionalities are almost the same. For instance,
all operating systems perform the functions of storage management, process management, protection of
users from one another, etc. Operating systems in general perform similar functions but may have
distinguishing features. Therefore, they can be classified into different categories on different bases.

Different types of operating system

Single user- Single Processing System:-


It has a single processor, runs a single program and interacts with a single user at a time. The OS
for this system is very simple to design and implement. Example: MS-DOS
Only one program resides in computer memory, and it remains there till it is executed. It is also
called a uni-programming OS. In this OS the whole memory space is allocated to one program, so
memory management is not a very difficult task. The CPU has to execute only one program at a
time, so CPU management also does not pose any problem.
In a single-user OS, a single user can access the computer at a particular time. The computers which
are based on this OS have a single processor and are able to execute only a single program at a
particular time. This system provides all the resources to the user all the time. The single-user OS
falls into the following categories:

Single user, single tasking:


In a single-user, single-tasking OS, there is a single user who executes one program at a particular time.
Example: MS-DOS

Single user, multitasking OS:


In a single-user, multitasking OS, a single user can execute multiple programs at the same time.
Example: A user can run different programs, such as making calculations in an Excel sheet,
printing a Word document and downloading a file from the Internet, at the same time.

Fig: Layered view - User, Application Programs, Operating System, Hardware

Advantages of single user OS:-


o The CPU has to handle only one application program at a time, so process
management is easy in this environment.
o Due to the limited number of programs, allocation of memory to the process and allocation
of resources to the process becomes an easy task to handle.

Disadvantages of single user OS:-


o As the OS is handling one application at a time, most of the CPU time is wasted, because the CPU
has to sit idle most of the time while executing a single program.
o Resources like memory and CPU are not utilized to the maximum.

e.g: Resident Monitors

- Monitors are the simplest operating systems.


- Single user systems
- Allow user interaction

Batch operating system


The users of a batch operating system do not interact with the computer directly. Each user prepares his job
on an off-line device like punched cards and submits it to the computer operator. To speed up processing, jobs
with similar needs are batched together and run as a group. Thus, the programmers leave their programs with
the operator, and the operator then sorts the programs into batches with similar requirements.

The problems with Batch Systems are the following:

Lack of interaction between the user and the job.

The CPU is often idle, because the speeds of the mechanical I/O devices are slower than that of the CPU.

Difficult to provide the desired priority.

Time-sharing operating systems


Time sharing is a technique which enables many people, located at various terminals, to use a particular
computer system at the same time. Time-sharing or multitasking is a logical extension of
multiprogramming. Processor's time which is shared among multiple users simultaneously is termed as
time-sharing. The main difference between Multiprogrammed Batch Systems and Time-Sharing Systems
is that in case of Multiprogrammed batch systems, objective is to maximize processor use, whereas in Time-
Sharing Systems objective is to minimize response time.
Multiple jobs are executed by the CPU by switching between them, but the switches occur so frequently
that the user receives an immediate response. For example, in transaction processing the processor
executes each user program in a short burst or quantum of computation. That is, if n users are present, each
user gets a time quantum. When the user submits a command, the response time is a few seconds at
most.

The operating system uses CPU scheduling and multiprogramming to provide each user with a small portion
of time. Computer systems that were designed primarily as batch systems have been modified into time-
sharing systems.
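To make the time-quantum idea concrete, the following small C sketch simulates round-robin time slicing over a few jobs; the job names, burst times and the 2-unit quantum are made-up illustrative values, not part of any real scheduler.

#include <stdio.h>

/* Illustrative round-robin simulation: each job gets a fixed time
   quantum in turn until all jobs finish. All values are made up. */
int main(void)
{
    const char *name[] = { "job A", "job B", "job C" };
    int remaining[]    = { 5, 3, 7 };   /* CPU time still needed */
    const int quantum  = 2;             /* time slice per turn   */
    int n = 3, done = 0, clock = 0;

    while (done < n) {
        for (int i = 0; i < n; i++) {
            if (remaining[i] <= 0)
                continue;               /* this job already finished */
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            clock += slice;
            remaining[i] -= slice;
            printf("t=%2d  %s ran for %d unit(s)\n", clock, name[i], slice);
            if (remaining[i] == 0)
                done++;
        }
    }
    return 0;
}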

Advantages of Timesharing operating systems are the following:

1. Provide advantage of quick response.

2. Avoids duplication of software.

3. Reduces CPU idle time.

Disadvantages of Timesharing operating systems are the following:

Problem of reliability.

Question of security and integrity of user programs and data.

Problem of data communication.

Multi-Programming
As we know, in the batch processing system multiple jobs are executed by the system. The
system first prepares a batch and after that it executes all the jobs that are stored in the batch. But
the main problem is that if a process or job requires an input or output operation, then it is not possible,
and secondly there is a wastage of time while the batch is being prepared, with the CPU remaining
idle at that time. With the help of multiprogramming we can execute multiple programs
on the system at a time, and in multiprogramming the CPU never gets idle, because with the help
of multiprogramming we can execute many programs on the system; while we are working with one
program we can also submit a second or another program for running, and the CPU will then
execute the second program after the completion of the first program. In this scheme we can also specify our
input, meaning a user can also interact with the system.

Multiprogramming operating systems never use any cards because the process is entered on the spot
by the user. The operating system also performs allocation and de-allocation of
memory, that is, it provides memory space to all the running and all the waiting processes. There
must be proper management of all the running jobs.

Distributed operating System


Distributed systems use multiple central processors to serve multiple real-time applications and multiple
users. Data processing jobs are distributed among the processors according to which one can perform
each job most efficiently.

The processors communicate with one another through various communication lines (such as high-speed
buses or telephone lines). These are referred to as loosely coupled systems or distributed systems.
Processors in a distributed system may vary in size and function. These processors are referred to as sites,
nodes, computers and so on.

The advantages of distributed systems are the following:

With the resource sharing facility, a user at one site may be able to use the resources available at
another.
Speedup of the exchange of data with one another via electronic mail.
If one site fails in a distributed system, the remaining sites can potentially continue
operating.
Better service to the customers.
Reduction of the load on the host computer.
Reduction of delays in data processing.
Network operating System
A network operating system runs on a server and provides the server with the capability to manage data, users,
groups, security, applications, and other networking functions. The primary purpose of the network
operating system is to allow shared file and printer access among multiple computers in a network, typically
a local area network (LAN), a private network or other networks. Examples of network operating systems
are Microsoft Windows Server 2003, Microsoft Windows Server 2008, UNIX, Linux, Mac OS X, Novell
NetWare, and BSD.

The advantages of network operating systems are the following:

Centralized servers are highly stable.


Security is server managed.
Upgrades to new technologies and hardware can be easily integrated into the system.
Remote access to servers is possible from different locations and types of systems.

The disadvantages of network operating systems are the following:
High cost of buying and running a server.
Dependency on a central location for most operations.
Regular maintenance and updates are required.
Real Time operating System
A real-time system is defined as a data processing system in which the time interval required to process and
respond to inputs is so small that it controls the environment. Real-time processing is always on-line, whereas
an on-line system need not be real time. The time taken by the system to respond to an input and display the
required updated information is termed the response time. So in this method the response time is very small
compared to online processing.

Real-time systems are used when there are rigid time requirements on the operation of a processor or the
flow of data, and real-time systems can be used as a control device in a dedicated application. A real-time
operating system has well-defined, fixed time constraints, otherwise the system will fail. Examples include
scientific experiments, medical imaging systems, industrial control systems, weapon systems, robots,
home-appliance controllers, air traffic control systems, etc.

There are two types of real-time operating systems.

HARD REAL-TIME SYSTEMS

Hard real-time systems guarantee that critical tasks complete on time. In hard real-time systems secondary
storage is limited or missing with data stored in ROM. In these systems virtual memory is almost never
found.
SOFT REAL-TIME SYSTEMS

Soft real-time systems are less restrictive. A critical real-time task gets priority over other tasks and retains
the priority until it completes. Soft real-time systems have more limited utility than hard real-time systems. For
example, multimedia, virtual reality, and advanced scientific projects like undersea exploration and planetary
rovers.

Functions of Operating Systems


Operating systems perform the following important functions:
1. Processor Management: It means assigning the processor to the different tasks which have to be
performed by the computer system.
2. Memory Management: It means allocation of main memory and secondary storage areas
to the system programs, as well as user programs and data.
3. Input and Output Device Management: It means co-ordination and assignment of the
different input and output devices while one or more programs are being executed.
4. File System Management: The operating system is also responsible for maintenance of a file
system, in which the users are allowed to create, delete and move files.
5. Process Management Functions:
Traffic Controller: It constantly checks the processor and the status of processes.
Job Scheduler: Selects jobs from the job queue submitted to the system for execution.
Process Scheduler: Decides when a process is to be executed in case of
multiprogramming.
Dispatcher: Allocates the processor to the particular process which is chosen by the
process scheduler.
I/O Traffic Controller: It constantly keeps track of I/O devices, channels and
control processors.
I/O Scheduler: Decides which device is to be allocated to which process.
File System: It is a collection of functions which are used to find the status, usage and
location of a file. From this, all information stored on a disk in the form of files can
easily be found.

Responsibilities of Operating Systems:

1. Perform basic tasks, such as recognizing input from the keyboard, sending output to the display
screen, keeping track of files and directories on the disk and controlling peripheral devices such as
disk drives and printers.
2. Ensure that different programs and users running at the same time do not interfere with each other.
3. Provide a software platform on top of which other programs (i.e., application software) can run.

The first two responsibilities address the need for managing the computer hardware and the application
programs that use the hardware. The third responsibility focuses on providing an interface between
application software and hardware so that application software can be efficiently developed. Since the
operating system is already responsible for managing the hardware, it should provide a programming
interface for application developers.

Services Provided by Operating Systems:


An Operating System provides services to both the users and to the programs.

1. It provides programs an environment in which to execute.


2. It provides users services to execute programs in a convenient manner.
3. The following are a few common services provided by operating systems:
4. Program execution
5. I/O operations
6. File System manipulation
7. Communication
8. Error Detection
9. Resource Allocation
10. Protection

1. Program execution
The operating system handles many kinds of activities, from user programs to system programs like the
printer spooler, name servers, file servers, etc. Each of these activities is encapsulated as a process.

A process includes the complete execution context (code to execute, data to manipulate,
registers, OS resources in use). Following are the major activities of an operating system with
respect to program management.

Loads a program into memory.


Executes the program.
Handles program's execution.
Provides a mechanism for process synchronization.
Provides a mechanism for process communication.
Provides a mechanism for deadlock handling.
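As a sketch of how a program gets loaded and executed as a process, the following hedged C example uses the POSIX fork()/execlp()/waitpid() calls (assuming a Unix-like system; other operating systems expose different but analogous services):

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();            /* create a new process            */
    if (pid < 0) {
        perror("fork");            /* OS could not create the process */
        return 1;
    }
    if (pid == 0) {
        /* Child: ask the OS to load and run another program. */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");          /* reached only if exec fails      */
        _exit(1);
    }
    /* Parent: wait until the child's execution completes. */
    int status;
    waitpid(pid, &status, 0);
    printf("child finished with status %d\n", WEXITSTATUS(status));
    return 0;
}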
2. I/O Operation
The I/O subsystem is comprised of I/O devices and their corresponding driver software. Drivers
hide the peculiarities of specific hardware devices from the user, as the device driver knows
the peculiarities of the specific device.

Operating System manages the communication between user and device drivers. Following
are the major activities of an operating system with respect to I/O Operation.

I/O operation means read or write operation with any file or any specific I/O
device.
Program may require any I/O device while running.
Operating system provides the access to the required I/O device when required.
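A minimal C sketch of such an I/O request, assuming a POSIX-style interface and a hypothetical file name notes.txt: the program simply asks the OS for access, and the driver details stay hidden.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Ask the OS to open a file (hypothetical name) for reading. */
    int fd = open("notes.txt", O_RDONLY);
    if (fd < 0) {
        perror("open");            /* the OS reports why access failed */
        return 1;
    }

    char buf[128];
    ssize_t n = read(fd, buf, sizeof buf);    /* read via the OS       */
    if (n > 0)
        write(STDOUT_FILENO, buf, (size_t)n); /* write to the terminal */

    close(fd);                     /* release the resource */
    return 0;
}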
3. File system manipulation
A file represents a collection of related information. A computer can store files on the disk
(secondary storage) for long-term storage purposes. A few examples of storage media are magnetic
tape, magnetic disk, and optical disks like CD and DVD. Each of these media has its own
properties like speed, capacity, data transfer rate and data access method.

A file system is normally organized into directories for easy navigation and usage. These
directories may contain files and other directories. Following are the major activities of an
operating system with respect to file management.

Program needs to read a file or write a file.


The operating system gives the permission to the program for operation on file.
Permission varies from read-only, read-write, denied and so on.
Operating System provides an interface to the user to create/delete files.
Operating System provides an interface to the user to create/delete directories.
Operating System provides an interface to create the backup of file system.
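A small hedged C sketch of these interfaces, using standard C and POSIX calls with made-up names (report.txt, archive/):

#include <stdio.h>
#include <sys/stat.h>   /* mkdir() - POSIX */

int main(void)
{
    /* Create a file and write to it (names are illustrative only). */
    FILE *fp = fopen("report.txt", "w");
    if (fp == NULL) {
        perror("fopen");
        return 1;
    }
    fputs("unit 1 notes\n", fp);
    fclose(fp);

    /* Create a directory through the OS interface. */
    if (mkdir("archive", 0755) != 0)
        perror("mkdir");            /* e.g. it may already exist */

    /* Rename (move) and finally delete the file. */
    if (rename("report.txt", "archive/report.txt") != 0)
        perror("rename");
    if (remove("archive/report.txt") != 0)
        perror("remove");

    return 0;
}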
4. Communication
In the case of distributed systems, which are a collection of processors that do not share memory,
peripheral devices, or a clock, the operating system manages communication between processes.
Multiple processes communicate with one another through communication lines in the network.

OS handles routing and connection strategies, and the problems of contention and security.
Following are the major activities of an operating system with respect to communication.

Two processes often require data to be transferred between them.


Both processes may be on the same computer or on different computers connected
through a computer network.
Communication may be implemented by two methods: either by Shared
Memory or by Message Passing.
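As a sketch of the message-passing flavour of communication on a single machine, the following C example (assuming POSIX pipe() and fork()) sends a short message from a child process to its parent; shared memory and network sockets are other common mechanisms:

#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    int fds[2];                     /* fds[0]: read end, fds[1]: write end */
    if (pipe(fds) != 0) {
        perror("pipe");
        return 1;
    }

    pid_t pid = fork();
    if (pid == 0) {
        /* Child: send a message through the pipe. */
        close(fds[0]);
        const char *msg = "hello from the child process";
        write(fds[1], msg, strlen(msg) + 1);
        close(fds[1]);
        _exit(0);
    }

    /* Parent: receive the message. */
    close(fds[1]);
    char buf[64];
    ssize_t n = read(fds[0], buf, sizeof buf);
    if (n > 0)
        printf("parent received: %s\n", buf);
    close(fds[0]);
    return 0;
}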
5. Error handling
Errors can occur anytime and anywhere. An error may occur in the CPU, in I/O devices or in the
memory hardware. Following are the major activities of an operating system with respect to
error handling.

OS constantly remains aware of possible errors.


OS takes the appropriate action to ensure correct and consistent computing.
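A tiny C sketch of how an error reported by the OS surfaces to a program, assuming a POSIX-style errno convention and a deliberately non-existent file name:

#include <errno.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* Deliberately try to open a file that should not exist. */
    FILE *fp = fopen("no-such-file.txt", "r");
    if (fp == NULL) {
        /* The OS reported a failure; errno identifies which one. */
        printf("open failed: errno=%d (%s)\n", errno, strerror(errno));
        return 1;
    }
    fclose(fp);
    return 0;
}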
6. Resource Management
In the case of a multi-user or multi-tasking environment, resources such as main memory, CPU cycles
and file storage are to be allocated to each user or job. Following are the major activities of an
operating system with respect to resource management.

OS manages all kind of resources using schedulers.


CPU scheduling algorithms are used for better utilization of CPU.
7. Protection
Considering a computer system having multiple users and the concurrent execution of multiple
processes, the various processes must be protected from one another's activities.

Protection refers to mechanism or a way to control the access of programs, processes, or users
to the resources defined by computer systems. Following are the major activities of an operating
system with respect to protection.

OS ensures that all access to system resources is controlled.


OS ensures that external I/O devices are protected from invalid access attempts.
OS provides authentication feature for each user by means of a password.

The owners of information stored in a multi-user or networked computer system may want to
control the use of that information, and concurrent processes should not interfere with each other;
protection involves ensuring that all access to system resources is controlled. Security of the
system from outsiders requires user authentication and extends to defending external I/O devices
from invalid access attempts. If a system is to be protected and secure, precautions must be
instituted throughout it. A chain is only as strong as its weakest link.
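As a small sketch of access control at the file level, assuming POSIX permissions and an illustrative file name secret.txt, a program can ask the OS whether the calling user is allowed a given kind of access before attempting it:

#include <stdio.h>
#include <sys/stat.h>   /* chmod()  - POSIX */
#include <unistd.h>     /* access() - POSIX */

int main(void)
{
    const char *path = "secret.txt";   /* illustrative name */

    /* Restrict the file: owner may read/write, others get nothing. */
    if (chmod(path, S_IRUSR | S_IWUSR) != 0)
        perror("chmod");

    /* Ask the OS whether this process may write to the file. */
    if (access(path, W_OK) == 0)
        printf("write access to %s is allowed\n", path);
    else
        perror("access");              /* the OS denied it, or the file is missing */

    return 0;
}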

OPERATING SYSTEM – TASKS:


The following are a few of the very important tasks that an operating system handles:

1. Batch processing
Batch processing is a technique in which the operating system collects programs and data
together in a batch before processing starts. The operating system does the following activities
related to batch processing.

The OS defines a job which has a predefined sequence of commands, programs and data as a single
unit.
The OS keeps a number of jobs in memory and executes them without any manual intervention.
Jobs are processed in the order of submission, i.e., in first-come, first-served fashion.
When a job completes its execution, its memory is released and the output for the job gets
copied into an output spool for later printing or processing.

Advantages
Batch processing takes much of the work of the operator to the computer.
Increased performance, as a new job gets started as soon as the previous job finishes,
without any manual intervention.
Disadvantages
Difficult to debug program.
A job could enter an infinite loop.
Due to lack of protection scheme, one batch job can affect pending jobs.

2. Multitasking
Multitasking refers to the technique where multiple jobs are executed by the CPU simultaneously
by switching between them. Switches occur so frequently that the users may interact with
each program while it is running. The operating system does the following activities related
to multitasking.
The user gives instructions to the operating system or to a program directly, and
receives an immediate response.
Operating System handles multitasking in the way that it can handle multiple
operations / executes multiple programs at a time.
Multitasking Operating Systems are also known as Time-sharing systems.
These Operating Systems were developed to provide interactive use of a computer
system at a reasonable cost.
A time-shared operating system uses concept of CPU scheduling and
multiprogramming to provide each user with a small portion of a time-shared CPU.
Each user has at least one separate program in memory.

A program that is loaded into memory and is executing is commonly referred to as


a process.
When a process executes, it typically executes for only a very short time before it
either finishes or needs to perform I/O.
Since interactive I/O typically runs at people speeds, it may take a long time to
complete. During this time the CPU can be utilized by another process.
Operating system allows the users to share the computer simultaneously. Since each
action or command in a time-shared system tends to be short, only a little CPU time
is needed for each user.
As the system switches CPU rapidly from one user/program to the next, each user is
given the impression that he/she has his/her own CPU, whereas actually one CPU is
being shared among many users.

3. Multiprogramming

When two or more programs are residing in memory at the same time, sharing the
processor is referred to as multiprogramming. Multiprogramming assumes a single
shared processor. Multiprogramming increases CPU utilization by organizing jobs so
that the CPU always has one to execute.
Following figure shows the memory layout for a multiprogramming system.

The operating system does the following activities related to multiprogramming:

The operating system keeps several jobs in memory at a time. This set of jobs is a subset
of the jobs kept in the job pool.
The operating system picks and begins to execute one of the jobs in memory.
The multiprogramming operating system monitors the state of all active programs and
system resources using memory management programs, to ensure that the CPU is never
idle unless there are no jobs to process.
ADVANTAGES
High and efficient CPU utilization.
User feels that many programs are allotted CPU almost simultaneously.
DISADVANTAGES
CPU scheduling is required.
To accommodate many jobs in memory, memory management is required.
Interactivity
Interactivity refers to the ability of a user to interact with the computer system. The operating
system does the following activities related to interactivity:
The OS provides the user an interface to interact with the system.
The OS manages input devices to take input from the user, for example, the keyboard.
The OS manages output devices to show output to the user, for example, the monitor.
The response time needs to be short, since the user submits a request and waits for the result.

4. Real Time System

Real-time systems are usually dedicated, embedded systems. The operating system does
the following activities related to real-time system activity.
In such systems, Operating Systems typically read from and react to sensor data.
The Operating system must guarantee response to events within fixed periods of
time to ensure correct performance.
5. Distributed Environment
Distributed environment refers to multiple independent CPUs or processors in a computer system.
Operating system does the following activities related to distributed environment.
The OS distributes computation logic among several physical processors.
The processors do not share memory or a clock.
Instead, each processor has its own local memory.
OS manages the communications between the processors. They communicate with
each other through various communication lines.
Spooling
Spooling is an acronym for Simultaneous Peripheral Operations On-Line. Spooling refers
to putting the data of various I/O jobs in a buffer. This buffer is a special area in memory or
on the hard disk which is accessible to I/O devices. The operating system does the following
activities related to spooling.
OS handles I/O device data spooling as devices have different data access rates.
OS maintains the spooling buffer which provides a waiting station where data can rest
while the slower device catches up.
The OS supports parallel computation through the spooling process, as a computer can
perform I/O in a parallel fashion. It becomes possible to have the computer read data from
a tape, write data to disk and write out to a printer while it is doing its computing
task.
Advantages
The spooling operation uses a disk as a very large buffer.
Spooling is capable of overlapping I/O operation for one job with processor operations
for another job.
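A toy C sketch of the spooling idea: output produced quickly by a "job" is parked in a buffer (here a small in-memory queue standing in for the disk spool), and a slower "printer" drains it later; all names and sizes are made up for illustration.

#include <stdio.h>
#include <string.h>

#define SPOOL_SLOTS 8
#define LINE_LEN    64

/* A tiny in-memory spool: in a real system this buffer would live on
   disk and the printer would run concurrently with the CPU. */
static char spool[SPOOL_SLOTS][LINE_LEN];
static int  head = 0, tail = 0;

static int spool_put(const char *line)
{
    if ((tail + 1) % SPOOL_SLOTS == head)
        return -1;                      /* spool full */
    strncpy(spool[tail], line, LINE_LEN - 1);
    spool[tail][LINE_LEN - 1] = '\0';
    tail = (tail + 1) % SPOOL_SLOTS;
    return 0;
}

static const char *spool_get(void)
{
    if (head == tail)
        return NULL;                    /* spool empty */
    const char *line = spool[head];
    head = (head + 1) % SPOOL_SLOTS;
    return line;
}

int main(void)
{
    /* The "job" produces output quickly and moves on. */
    spool_put("page 1 of report");
    spool_put("page 2 of report");

    /* The slow "printer" drains the spool later. */
    const char *line;
    while ((line = spool_get()) != NULL)
        printf("printing: %s\n", line);
    return 0;
}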

OPERATING SYSTEM STRUCTURE:

Kernels

The kernel is the core code of an operating system. It includes the lowest-level code, and it provides the
basic abstractions that all other code requires.

Modern processors can execute in various privilege modes. At the very least, a processor must allow
for user mode and supervisor mode (another term used is kernel mode). The modes differ in the
privileges allowed. Essentially, supervisor mode can do anything the processor is capable of, while
user mode, or other lower-privilege modes, are limited to certain address spaces and operations.

As one might expect, the basic idea is that an OS kernel executes in kernel mode, and other code,
including application code, executes in user mode. Different OS designs differ in where this user-
kernel boundary is drawn: what are the responsibilities of the kernel, and what code lies in it.

The portion of an OS that runs in user mode is often called its userland. (This term can also be used to
refer to all code that runs in user mode.)
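To make the user/kernel boundary concrete, the hedged C sketch below (assuming a Unix-like system) runs entirely in user mode; each getpid() and write() call crosses into kernel mode via a system call and then returns:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* This code executes in user mode. Each of the calls below is a
       system call: the processor switches to kernel mode, the kernel
       performs the privileged work, and control returns to user mode. */
    pid_t pid = getpid();                    /* ask the kernel who we are */
    printf("running in user mode as process %d\n", (int)pid);

    const char msg[] = "this text reaches the terminal via a system call\n";
    write(STDOUT_FILENO, msg, sizeof msg - 1);
    return 0;
}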

Monolithic Architecture:

MS-DOS Structure

The first, and, historically, most common, kernel design is that of a monolithic kernel. In this design,
essentially all non-application code resides in the kernel and executes at high privilege: interrupt
handlers, system call implementation, memory management, process scheduling, device drivers. The
kernel is a single large executable, and all of this code is linked together.
Today, the primary example of a monolithic design is the Linux kernel. This kernel is the core of the
various OSs known as either “GNU/Linux” or (somewhat incorrectly) just “Linux”. It is also used in the
Android OS.

Because monolithic kernels have a large amount of code linked together, and executing at high privilege, they
can suffer in terms of maintainability. Another problem is that, since so much code executes at high privilege,
bugs can have severe effects. Thus, security and robustness can also be issues.

Operating systems such as MS-DOS and the original UNIX did not have well-defined structures.
There is no CPU Execution Mode (user and kernel), and so errors in applications can cause
the whole system to crash
Functionality of the OS is invoked with simple function calls within the kernel, which is one large
program.
Device drivers are loaded into the running kernel and become part of the kernel.

A monolithic kernel, such as Linux and other Unix systems.

Layered Approach:

The disadvantages of a monolithic kernel can be mitigated somewhat using a layered design. Such a kernel
is divided into logical layers, each of which provides abstractions for the layers above it. For example, the
lowest layer might handle memory management and process scheduling; it would provide the abstractions
of process and address space to all the layers above it. This approach breaks up the operating system into
different layers.

Some systems have more layers and are more strictly structured. An early layered system was
"THE" operating system by Dijkstra. The layers were:
1. The operator
2. User programs
3. I/O management
4. Operator-process communication
5. Memory and drum management

The layering was done by convention, i.e., there was no enforcement by hardware, and the entire OS is
linked together as one program. This is true of many modern OS systems as well (e.g., Linux).

The Multics system was layered in a more formal manner. The hardware provided several protection
layers and the OS used them. That is, arbitrary code could not jump to or access data in a more protected
layer.

This allows implementers to change the inner workings, and increases modularity.
As long as the external interface of the routines does not change, developers have more freedom to
change the inner workings of the routines.
With the layered approach, the bottom layer is the hardware, while the highest layer is the user
interface.
o The main advantage is simplicity of construction and debugging.
o The main difficulty is defining the various layers.
o The main disadvantage is that the OS tends to be less efficient than other implementations.

Microkernel Architecture:

"Lean kernel" architecture: moves as much service functionality as possible
from the kernel into "user" space.
The kernel retains minimal process and memory management.
It uses Inter-Process Communication (IPC), i.e., a message-passing mechanism, for
communication between client programs and other system processes running in user space.
The basic structure is modularized: all non-essential components are removed from the
kernel and are implemented in system or user-level programs. As a result the kernel becomes
smaller, and hence the name "microkernel".

Example: the Mach kernel, used e.g. in Tru64 Unix or Mac OS X; Symbian OS.

Advantages:
Easier to extend a microkernel
Easier to port the operating system to new architectures
More reliable (less code is running in kernel mode)
More secure
Ability to be used in distributed system

Disadvantages:
Performance overhead of user space to kernel space communication

Virtual machines:

Usually in this approach, system programs are treated as application programs and are at a higher level
than the hardware routines; an application program may view all these layers under it in the hierarchy, even
though the hardware routines are part of the machine itself. This layered approach leads to the concept of a
Virtual Machine.

e.g. VMOS for IBM 370 & IBM mainframe

Components of Virtual Machine:

Control Program (Controls physical machine)


Conversational Monitor System (controls the virtual machine; a single-user interactive OS that
runs on top of the control program and interacts with the user and any application program
running on that virtual machine)
The virtual machine is an exact copy of the hardware, which provides support for kernel/user mode, I/O
instructions, interrupts, etc.

Use a "hypervisor" (beyond supervisor, i.e. beyond a normal OS) to switch between
multiple operating systems.

Each App/CMS runs on a virtual 370.
CMS is a single-user OS.
A system call in an App traps to the corresponding CMS.
CMS believes it is running on the real machine, so it issues I/O instructions, but ...
... I/O instructions in CMS trap to VM/370.

Advantages:

Each VM is completely isolated from the others, so there is no security problem.


Allows system development to be done without disrupting normal system operations.

Disadvantages:

Difficult to implement.
Much work is required to provide an exact duplicate of the underlying machine.
