
Mt Kenya University

SCHOOL OF COMPUTING AND INFORMATICS


DEPARTMENT OF INFORMATION
TECHNOLOGY

LECTURE NOTES
OPERATING SYSTEMS
Course Code: DIT1310

Prepared by Sir David Mbiti D.


MKU MOMBASA CAMPUS

OPERATING SYSTEMS
An operating system is a set of programs that control and supervise the hardware resources of a
computer and provide services to other system software. Examples of operating systems
include Microsoft Windows 95/98/2000/XP/Vista, UNIX, Linux, MS-DOS, and Novell NetWare.
Functions of operating system (OS)
Resource management – the OS allocates computer resources such as CPU time, main
memory, secondary storage and input/output devices for use by application programs.
Job scheduling – the OS prepares, schedules, controls and monitors tasks submitted for
execution to ensure the most efficient processing.
Memory management – the OS ensures that each program and the data it requires are
allocated adequate space in memory.
Error handling – the OS provides error-handling routines to ensure smooth operation of the
CPU and the rest of the system.
Interrupt handling – the OS determines the cause of an interrupt and transfers control to the
most appropriate program.
Input/output handling – the OS governs input/output of data and their location, storage and
retrieval.
Communication control and management – the operating system manages
various communication devices and provides an environment within which communication
protocols operate. (A protocol is a set of rules that governs communication between systems.)
Most operating systems come with network management utilities that provide external
communication by connecting to a communication system through an interface cable or a wireless interface.
History Of Operating Systems.
The history of operating systems (OS) provides insight into the evolution and development of
these critical software systems over time. Here’s a comprehensive overview:
1. Early Days (1950s - 1960s):
• Batch Processing Systems: Early computers used batch processing, where jobs were
executed in batches without user interaction. Examples include IBM's 701 and
650.
• Introduction of Multiprogramming: Systems like IBM’s OS/360 introduced
multiprogramming, allowing multiple jobs to be loaded into memory simultaneously
to improve utilization.
2. Development of UNIX (1960s - 1970s):
• UNIX: Developed at Bell Labs by Ken Thompson, Dennis Ritchie, and others, UNIX
was one of the first operating systems to use a hierarchical file system and support
multitasking and multiuser capabilities. It became highly influential and spawned
many derivatives.
3. Personal Computers Era (1980s - 1990s):
• MS-DOS: Microsoft’s Disk Operating System (MS-DOS) became widely used in
personal computers. It was a command-line-based OS known for its simplicity and
efficiency in managing hardware resources.
• Windows: Microsoft introduced Windows as a graphical user interface (GUI) for MS-
DOS, leading to Windows 95, which was a major milestone in personal computing
with a more advanced GUI and improved multitasking capabilities.
4. Modern Operating Systems (2000s - Present):
• Linux: An open-source UNIX-like OS created by Linus Torvalds. It has become
popular for servers, desktops, and embedded systems due to its flexibility and strong
community support.
• macOS: Apple’s macOS, based on the NeXTSTEP OS and UNIX, is known for its
elegant interface and strong integration with Apple’s hardware and software
ecosystem.
• Android and iOS: Mobile operating systems that have dominated the smartphone and
tablet markets, providing touch-based interfaces and extensive app ecosystems.
5. Emerging Trends:
• Cloud Computing: OSs designed for cloud environments, such as Google’s Chrome
OS and various cloud-based server OSs, are becoming increasingly important as more
computing resources move to the cloud.
• IoT and Edge Computing: Operating systems designed for Internet of Things (IoT)
devices and edge computing are evolving to support a wide range of connected
devices and real-time data processing.

Classification of Operating Systems (OS)


Operating systems can be classified based on various criteria, such as their purpose,
functionality, and architecture. Here’s a comprehensive overview of different classifications:

1. Based on Interaction with Users


1.1 Batch Operating Systems
• Definition:
o In batch systems, similar jobs are batched together and processed sequentially
without user interaction during execution.
• Characteristics:
o No Interaction: Once a job starts, it runs without user input.
o Job Scheduling: Jobs are collected and scheduled for execution in batches.
o Example Systems: IBM OS/360, early versions of UNIX.
• Advantages:
o Efficient for large volumes of data processing.
• Disadvantages:
o No real-time interaction, leading to delays in execution.
1.2 Time-Sharing Operating Systems
• Definition:
o Allows multiple users to interact with the computer simultaneously by sharing
system time.
• Characteristics:
o Multitasking: The CPU switches between different tasks rapidly, giving the
impression of simultaneous execution.
o Interactive: Users can interact with their programs while they are running.
o Example Systems: UNIX, Multics.
• Advantages:
o Maximizes CPU utilization.
o Provides quick response to user inputs.
• Disadvantages:
o Requires sophisticated scheduling algorithms.
o May lead to security concerns with multiple users.
1.3 Real-Time Operating Systems (RTOS)
• Definition:
o Designed to process data and respond to inputs within a guaranteed time
frame, crucial for time-sensitive applications.
• Types:
o Hard Real-Time OS: Strict timing constraints. Missing a deadline can lead to
catastrophic failures (e.g., in medical devices, industrial control systems).
o Soft Real-Time OS: More lenient timing, where delays are acceptable but
undesirable (e.g., multimedia systems).
• Characteristics:
o Deterministic: Predictable behavior in execution.
o Low Latency: Quick context switching and interrupt handling.
o Example Systems: VxWorks, FreeRTOS.

• Advantages:
o Suitable for critical applications where timing is crucial.
• Disadvantages:
o Complex to design and implement.
o Limited multitasking capabilities.
1.4 Distributed Operating Systems
• Definition:
o Manages a group of independent computers and makes them appear to users as
a single coherent system.
• Characteristics:
o Resource Sharing: Resources like files, printers, and processing power are
shared across multiple machines.
o Transparency: The distributed nature is hidden from users.
o Example Systems: Amoeba, Plan 9, Google’s Borg.
• Advantages:
o Improved reliability and availability.
o Scalability and resource efficiency.
• Disadvantages:
o Complex network management.
o Issues with data consistency and security.
2. Based on the Number of Simultaneous Users
2.1 Single-User Operating Systems
• Definition:
o Allows only one user to operate the computer at a time.
• Characteristics:
o Dedicated Resources: All system resources are allocated to one user.
o Simpler Design: Less complex compared to multi-user systems.
o Example Systems: MS-DOS, classic Mac OS.
• Advantages:
o Simpler and less resource-intensive.
• Disadvantages:

o Inefficient use of resources if the user’s tasks do not fully utilize the system.
2.2 Multi-User Operating Systems
• Definition:
o Allows multiple users to use the computer and its resources simultaneously.
• Characteristics:
o Concurrent Access: Multiple users can run programs and access files at the
same time.
o User Management: The OS handles user permissions, authentication, and
resource allocation.
o Example Systems: UNIX, Windows Server, Linux.
• Advantages:
o Efficient resource utilization.
o Suitable for environments like servers and mainframes.
• Disadvantages:
o More complex security and resource management.
o Higher chances of performance degradation under heavy load.
3. Based on the Number of Simultaneous Tasks
3.1 Single-Tasking Operating Systems
• Definition:
o Can run only one task or program at a time.
• Characteristics:
o Dedicated Execution: The CPU executes one task until completion before
moving to the next.
o Simpler Design: Less overhead in managing resources.
o Example Systems: MS-DOS.
• Advantages:
o Simple and fast for basic tasks.
• Disadvantages:
o Inefficient for modern computing needs where multitasking is expected.
3.2 Multitasking Operating Systems
• Definition:

o Can execute multiple tasks or programs simultaneously.
• Types:
o Preemptive Multitasking: The OS decides when to switch tasks based on
priority (e.g., Windows, UNIX).
o Cooperative Multitasking: Each task controls the CPU until it voluntarily
yields control (e.g., classic Mac OS, Windows 3.x).
• Characteristics:
o Task Scheduling: The OS schedules tasks to maximize CPU usage.
o Concurrency: Multiple processes or threads can run concurrently.
o Example Systems: Windows, Linux, macOS.
• Advantages:
o Efficient use of CPU and system resources.
• Disadvantages:
o Increased complexity in managing tasks and resources.
o Potential for resource conflicts (e.g., deadlock).
4. Based on System Architecture
4.1 Monolithic Operating Systems
• Definition:
o The entire operating system works in the kernel space and is tightly integrated.
• Characteristics:
o Single Large Kernel: All OS functions (file management, process scheduling,
etc.) run in a single, large block of code in kernel mode.
o Fast Performance: Minimal context switching and overhead.
o Example Systems: UNIX, Linux.
• Advantages:
o High performance due to less overhead.
• Disadvantages:
o Harder to debug and maintain due to the complexity and integration of
functions.
4.2 Microkernel Operating Systems
• Definition:

o The core functionality (such as IPC, basic scheduling) is provided by a small
kernel, while other services run in user space.
• Characteristics:
o Minimal Kernel: Only essential services run in kernel mode.
o Modular Design: Other services (e.g., device drivers, file systems) run in user
mode.
o Example Systems: MINIX, QNX.
• Advantages:
o Greater security and stability due to isolation of services.
o Easier to extend and modify.
• Disadvantages:
o Potentially slower due to the overhead of user-kernel mode transitions.
4.3 Hybrid Operating Systems
• Definition:
o Combines elements of both monolithic and microkernel architectures.
• Characteristics:
o Modular Kernel: Core functions run in the kernel space, but additional
modules or services can be loaded dynamically.
o Balance: Tries to achieve a balance between performance and modularity.
o Example Systems: Windows NT, macOS.
• Advantages:
o Flexibility and extensibility.
o Better performance than pure microkernel systems.
• Disadvantages:
o More complex design and implementation.
5. Based on Specialization
5.1 Embedded Operating Systems
• Definition:
o Designed to operate within embedded systems, which are specialized
computing devices.
• Characteristics:

o Real-Time Capabilities: Often includes RTOS features to manage time-critical
tasks.
o Resource Constraints: Designed to work with limited memory, processing
power, and energy.
o Example Systems: Embedded Linux, VxWorks, FreeRTOS.
• Advantages:
o Optimized for specific tasks and environments.
o Efficient resource management.
• Disadvantages:
o Limited in scope and functionality.
o Harder to modify or upgrade.
5.2 Network Operating Systems
• Definition:
o Facilitates networked computing by providing services to computers
connected in a network.
• Characteristics:
o Resource Sharing: Enables sharing of files, printers, and other resources over a
network.
o User Management: Handles multiple users across different machines.
o Example Systems: Novell NetWare, Windows Server, UNIX/Linux in server
mode.
• Advantages:
o Centralized control and management.
o Supports collaborative work environments.
• Disadvantages:
o Complex to configure and manage.
o Security vulnerabilities in a networked environment.
5.3 Distributed Operating Systems
• Definition:
o Manages a collection of independent computers and makes them appear as a
single unified system.
• Characteristics:

o Transparency: Users and applications perceive the distributed resources as a
single coherent system.
o Resource Sharing: Files, processing power, and other resources are shared
across multiple systems.
o Example Systems: Amoeba, Plan 9, Google’s Borg.
• Advantages:
o Enhanced reliability and fault tolerance.
o Scalability to handle increasing loads.
• Disadvantages:
o Complex to implement and maintain.
o Issues with data consistency and coordination.

6. Based On User Interface


Operating systems can be classified based on the type of user interface they provide. The user
interface (UI) is the point of interaction between users and computers, and it plays a crucial
role in how users interact with and control the system.

1. Command-Line Interface (CLI)


1.1 Definition:
• A CLI allows users to interact with the computer by typing commands into a text-
based interface. Commands are entered through a command prompt or terminal.
1.2 Characteristics:
• Text-Based: Users type commands to perform operations.
• Powerful: Provides a high level of control and flexibility for advanced users.
• Scriptable: Allows automation of tasks through scripting.
1.3 Examples:
• MS-DOS Command Prompt
• Unix Shell (Bash, Zsh)
• Windows PowerShell

2. Graphical User Interface (GUI)

2.1 Definition:
• A GUI allows users to interact with the computer through graphical elements such as
windows, icons, buttons, and menus. It is designed to be intuitive and user-friendly.
2.2 Characteristics:
• Graphical Elements: Uses visual components like icons, windows, and menus.
• Ease of Use: Provides a more accessible and user-friendly experience compared to
CLI.
• WYSIWYG: What You See Is What You Get, allowing for visual manipulation of
documents and applications.
2.3 Examples:
• Microsoft Windows
• macOS
• GNOME and KDE (Linux Desktop Environments)

3. Touch User Interface


3.1 Definition:
• A touch user interface allows users to interact with the computer using touch gestures
on a touchscreen display.
3.2 Characteristics:
• Touch-Based: Users interact by tapping, swiping, and pinching on a touchscreen.
• Intuitive: Designed for devices like smartphones and tablets, providing a natural and
direct way to interact with applications.
3.3 Examples:
• iOS (iPhone, iPad)
• Android (smartphones, tablets)
• Windows 8 and later (with touch support)

4. Voice User Interface (VUI)


4.1 Definition:
• A VUI allows users to interact with the computer using voice commands and speech
recognition technology.
4.2 Characteristics:

• Voice-Based: Users give commands or provide input through spoken language.
• Hands-Free: Useful for situations where hands-free interaction is preferred or
necessary.
4.3 Examples:
• Amazon Alexa
• Google Assistant
• Apple Siri

5. Natural Language Interface


5.1 Definition:
• A natural language interface allows users to interact with the computer using natural
language queries or commands, aiming to understand and process human language.
5.2 Characteristics:
• Language-Based: Users communicate in natural language, such as English, to perform
tasks or retrieve information.
• Context-Aware: Attempts to understand user intent and provide relevant responses.
5.3 Examples:
• Chatbots (e.g., customer service bots)
• Search Engines (e.g., Google’s conversational search)

Monolithic and Microkernel Architecture


Operating system (OS) architectures define how the OS components are structured and
interact with each other. Two prominent architectures are monolithic and microkernel.

1. Monolithic Architecture
1.1 Definition:
• In a monolithic architecture, the operating system is designed as a single, large kernel
that includes all the essential components and services. All system services, including
device drivers, file systems, and process management, run in kernel mode.
1.2 Characteristics:
• Single Large Kernel: The OS kernel is a monolithic block of code.
• Direct Communication: Components of the OS can communicate directly with each
other.

• Integrated Services: Device drivers, system calls, and core functionalities are tightly
integrated into the kernel.
1.3 Advantages:
• Performance: Direct communication between components can lead to high
performance and efficiency.
• Simplicity: The design is straightforward since all functionalities are within one large
module.
1.4 Disadvantages:
• Complexity: The monolithic kernel can become complex and hard to manage as it
grows.
• Stability: A failure in one part of the kernel can affect the entire system, leading to
potential stability issues.
• Maintenance: Modifying or extending functionality can be challenging due to the
tight coupling of components.
1.5 Examples:
• Linux (traditional monolithic design)
• MS-DOS

2. Microkernel Architecture
2.1 Definition:
• In a microkernel architecture, the kernel is designed to be minimal, containing only
the most fundamental components such as basic scheduling, inter-process
communication (IPC), and low-level hardware management. Other services, including
device drivers, file systems, and network protocols, run in user space as separate
processes.
2.2 Characteristics:
• Minimal Kernel: The microkernel includes only essential functions required for
managing hardware and communication.
• User-Space Services: Non-essential services and components run outside the kernel in
user space.
• Modular Design: Components are modular and can be updated or replaced
independently.
2.3 Advantages:
• Modularity: Easier to maintain and extend due to the separation of services.

• Stability: Faults in user-space services do not crash the entire system; the kernel
remains unaffected.
• Flexibility: Allows for easy adaptation and enhancement of system services without
modifying the kernel.
2.4 Disadvantages:
• Performance Overhead: Communication between user-space services and the
microkernel can introduce performance overhead due to IPC.
• Complex Design: Requires efficient design and management of inter-process
communication and system calls.
2.5 Examples:
• MINIX
• QNX
• Mach (used in macOS and GNU Hurd)

Conclusion
Monolithic and microkernel architectures represent two different approaches to designing
operating systems. Monolithic systems integrate all core functionalities into a single kernel,
leading to high performance but potential complexity and stability issues. Microkernel
systems aim for modularity and flexibility by minimizing the kernel’s responsibilities and
running most services in user space, though they may face performance overhead. Each
architecture has its strengths and trade-offs, influencing the design and functionality of the
operating systems that use them.

Kernel
The kernel is the core component of an operating system (OS) that manages system resources
and facilitates communication between hardware and software. It is responsible for handling
low-level operations and providing essential services to the system.

1. Definition and Role


1.1 Definition:
• The kernel is the central part of the operating system that manages system resources
such as CPU, memory, and I/O devices. It operates in a privileged mode (kernel
mode) with direct access to hardware.
1.2 Role:
• Resource Management: Allocates and manages system resources, including CPU
time, memory, and I/O devices.
• Process Management: Handles process creation, scheduling, and termination.
• Memory Management: Manages memory allocation and deallocation, including
virtual memory.
• Device Management: Provides an interface for device drivers to interact with
hardware devices.
• System Calls: Provides an interface for user applications to request services from the
kernel.

2. Types of Kernels
2.1 Monolithic Kernels
• Definition: A monolithic kernel includes all essential services and device drivers
within a single large kernel module.
• Characteristics:
o All system services run in kernel mode.
o Direct communication between components.
• Advantages:
o High performance due to direct interactions.
o Simplicity in design.
• Disadvantages:
o Complexity in maintenance and extension.
o Stability risks if any part of the kernel fails.
• Examples:
o Linux (traditional design)
o MS-DOS
2.2 Microkernels
• Definition: A microkernel contains only the most basic functionalities, such as process
scheduling and inter-process communication (IPC). Other services run in user space.
• Characteristics:
o Minimal kernel with essential functions.
o Services run as separate user-space processes.
• Advantages:
o Enhanced modularity and flexibility.

o Better stability as user-space services are isolated.
• Disadvantages:
o Potential performance overhead due to IPC.
o More complex design and management.
• Examples:
o MINIX
o QNX
o Mach (used in macOS)
2.3 Hybrid Kernels
• Definition: Hybrid kernels combine elements of both monolithic and microkernel
architectures, incorporating some services within the kernel while running others in
user space.
• Characteristics:
o Mixes features of monolithic and microkernel designs.
o Aims to balance performance and modularity.
• Advantages:
o Combines the benefits of both architectures.
o More stable and flexible than monolithic kernels.
• Disadvantages:
o Increased complexity due to hybrid nature.
o Potential overhead from combining different approaches.
• Examples:
o Windows NT (modern versions)
o macOS (XNU kernel)

3. Functions of the Kernel


3.1 Process Management
• Scheduling: Determines which processes run and when.
• Creation and Termination: Manages the lifecycle of processes.
3.2 Memory Management
• Allocation: Allocates memory to processes.

• Virtual Memory: Manages virtual address space and paging.
3.3 Device Management
• Drivers: Interfaces with hardware devices.
• I/O Operations: Handles input and output operations.
3.4 System Calls
• API: Provides an interface for applications to request kernel services.
3.5 Security and Protection
• Access Control: Manages permissions and access to resources.
• Isolation: Ensures processes are isolated and cannot interfere with each other.

Types of Operating Systems (OS)


Operating Systems are software that manage computer hardware and software resources
while providing common services for computer programs. There are various types of
operating systems, each designed to meet specific needs. Below is a comprehensive overview
of the different types of operating systems:

1. Batch Operating Systems


1.1 Definition:
• Batch operating systems execute batches of jobs without user interaction. The jobs are
collected in a batch and processed sequentially, often in the order they are received.
1.2 Key Features:
• No Interaction: Users do not interact directly with the computer while jobs are
processed.
• Efficient for Repetitive Tasks: Suitable for processing large amounts of data where
tasks are similar and repetitive.
• Job Scheduling: Jobs are scheduled by the OS based on job control language
instructions.
1.3 Examples:
• IBM OS/360
• Early versions of UNIX
1.4 Advantages:
• Efficient Resource Utilization: Maximizes the use of system resources by running
jobs one after another without idle time.

• Simplified Operations: Easy to automate and manage large volumes of similar jobs.
1.5 Disadvantages:
• No User Interaction: Lack of real-time user interaction means users must wait until
the entire batch is processed.
• Complex Debugging: Errors in batch jobs can be difficult to debug as they are
processed in sequence.

2. Time-Sharing Operating Systems


2.1 Definition:
• Time-sharing operating systems allow multiple users to share system resources
simultaneously. The CPU time is divided among users based on a time-sharing
principle, giving the impression that each user has their own dedicated system.
2.2 Key Features:
• Multitasking: Supports multiple tasks or users simultaneously by switching the CPU
among them rapidly.
• Interactive: Users can interact with the system in real-time.
• Time Slices: The CPU's time is divided into small slices and allocated to different
users or processes.
2.3 Examples:
• UNIX
• Multics
2.4 Advantages:
• Increased Efficiency: Better utilization of CPU as it serves multiple users.
• User Interaction: Real-time interaction is possible, enhancing user experience.
2.5 Disadvantages:
• Overhead: Context switching and maintaining multiple processes can introduce
overhead.
• Security Risks: Multiple users sharing the same system may lead to security concerns.

3. Distributed Operating Systems


3.1 Definition:

• Distributed operating systems manage a group of independent computers and make
them appear as a single computer to users. The system coordinates the operation of
multiple machines working on different parts of a task.
3.2 Key Features:
• Resource Sharing: Resources such as files and printers are shared across multiple
systems.
• Transparency: Users interact with the system as if it were a single unit, even though it
consists of multiple computers.
• Scalability: The system can easily be expanded by adding more machines.
3.3 Examples:
• Amoeba
• LOCUS
• Microsoft Azure
3.4 Advantages:
• Resource Sharing: Efficient use of resources across a network.
• Fault Tolerance: System can continue functioning even if one or more machines fail.
3.5 Disadvantages:
• Complexity: Management and coordination of multiple machines can be complex.
• Security and Privacy: Distributed systems are vulnerable to security breaches due to
the networked nature.

4. Real-Time Operating Systems (RTOS)


4.1 Definition:
• Real-time operating systems are designed to process data as it comes in, typically for
systems that require immediate processing and responses, often within strict time
constraints.
4.2 Key Features:
• Deterministic: Guaranteed response times for specific tasks, critical for systems
requiring timely processing.
• Prioritized Scheduling: High-priority tasks are executed first, ensuring critical
operations are performed on time.
• Minimal Latency: Designed to minimize latency and handle high-priority events
promptly.
4.3 Types of RTOS:

• Hard Real-Time: Systems where missing a deadline is catastrophic (e.g., pacemakers,
airbag systems).
• Soft Real-Time: Systems where deadlines are important but not critical (e.g.,
multimedia systems).
4.4 Examples:
• VxWorks
• RTLinux
• QNX
4.5 Advantages:
• Reliability: Predictable and reliable performance for critical applications.
• Efficiency: Optimized for real-time tasks, ensuring timely and efficient processing.
4.6 Disadvantages:
• Complex Development: Developing RTOS applications can be challenging due to
strict timing constraints.
• Limited Multitasking: RTOS may limit multitasking capabilities to ensure critical
tasks meet deadlines.

5. Embedded Operating Systems


5.1 Definition:
• Embedded operating systems are designed to operate on small, specialized hardware
devices like routers, automotive systems, or industrial machines. They are optimized
for specific tasks and usually have minimal resource requirements.
5.2 Key Features:
• Small Footprint: Designed to run on devices with limited memory and processing
power.
• Specific Functionality: Tailored to perform specific tasks related to the device it runs
on.
• Real-Time Capabilities: Often include real-time capabilities to handle time-sensitive
operations.
5.3 Examples:
• FreeRTOS
• Embedded Linux
• Windows CE

5.4 Advantages:
• Efficiency: Optimized for the specific device, making efficient use of limited
resources.
• Reliability: Designed for stability and reliability in the specific environment.
5.5 Disadvantages:
• Limited Flexibility: Generally limited to specific tasks and may not be adaptable to
other uses.
• Dependency on Hardware: Often closely tied to the hardware, making it difficult to
transfer to other platforms.

6. Network Operating Systems (NOS)


6.1 Definition:
• Network operating systems provide services to computers connected to a network,
allowing shared resources like files, printers, and applications.
6.2 Key Features:
• Centralized Control: A server manages network resources and user permissions.
• File Sharing: Facilitates sharing of files and printers across the network.
• User Management: Provides user account management and security across the
network.
6.3 Examples:
• Novell NetWare
• Windows Server
• UNIX/Linux with NFS
6.4 Advantages:
• Centralized Management: Easier to manage resources and users across the network.
• Resource Sharing: Efficient sharing of resources, reducing redundancy and costs.
6.5 Disadvantages:
• Single Point of Failure: The server can become a single point of failure, affecting the
entire network if it goes down.
• Complex Setup: Setting up and managing a NOS can be complex and requires
specialized knowledge.

7. Mobile Operating Systems

7.1 Definition:
• Mobile operating systems are designed specifically for mobile devices such as
smartphones and tablets. They are optimized for touchscreen interfaces and mobile
hardware.
7.2 Key Features:
• Touch Interface: Designed to be used with touch screens.
• App Store Ecosystem: Supports downloading and managing apps via an app store.
• Connectivity: Built-in features for cellular, Wi-Fi, Bluetooth, and other connectivity
options.
7.3 Examples:
• Android
• iOS
• Windows Phone (Discontinued)
7.4 Advantages:
• User-Friendly: Intuitive interfaces designed for easy use on mobile devices.
• Large Ecosystem: Extensive app availability and developer support.
7.5 Disadvantages:
• Hardware Limitations: Mobile devices have less processing power and memory
compared to PCs.
• Security Concerns: Mobile OSs are often targeted by malware due to their widespread
use.

8. Cloud Operating Systems


8.1 Definition:
• Cloud operating systems manage resources and services for cloud computing
environments, enabling virtualization, resource pooling, and remote access to
computing resources.
8.2 Key Features:
• Virtualization: Manages virtual machines and containers in the cloud.
• Scalability: Automatically scales resources up or down based on demand.
• Remote Management: Enables remote management and access to computing
resources.
8.3 Examples:

• OpenStack
• VMware ESXi
• Amazon Web Services (AWS) EC2
8.4 Advantages:
• On-Demand Resources: Provides resources as needed, optimizing costs and
efficiency.
• Flexibility: Easily adapts to changing workloads and business needs.
8.5 Disadvantages:
• Security Risks: Cloud environments are prone to security breaches if not properly
managed.
• Dependency on Internet: Requires a stable internet connection for optimal
performance.
Operating Systems Structure
The structure of an operating system (OS) refers to how its various components are organized
and interact with each other to manage hardware resources and provide services to
applications. The structure can significantly impact the OS's performance, flexibility, and
complexity. Here’s a detailed overview of common OS structures:

1. Monolithic Structure
1.1 Definition:
• In a monolithic OS structure, the entire operating system is built as a single, large
program running in a single address space (kernel mode). All core functionalities such
as process management, memory management, file systems, and device drivers are
integrated into this single unit.
1.2 Key Components:
• Process Management: Handles process creation, scheduling, and termination.
• Memory Management: Manages memory allocation and deallocation.
• File System: Provides file operations and storage management.
• Device Drivers: Interfaces with hardware devices.
1.3 Advantages:
• Performance: Direct communication between components can lead to high efficiency.
• Simplicity: Easier to design and implement as all functions are part of one large
module.
1.4 Disadvantages:

• Lack of Modularity: Modifying or extending functionality can be difficult.
• Stability and Security Risks: A failure in one part can affect the entire system.
• Complex Debugging: Intertwined components can make debugging challenging.
1.5 Examples:
• Linux (traditional monolithic design)
• MS-DOS

2. Layered Structure
2.1 Definition:
• The layered OS structure organizes the operating system into layers, where each layer
performs a specific set of functions and interacts with the layer directly above and
below it. Each layer only needs to interact with adjacent layers.
2.2 Key Layers:
• Hardware Layer: Includes the physical hardware components.
• Kernel Layer: Manages core functionalities like process management and memory
management.
• System Call Interface: Provides a set of system calls for user-space applications.
• Application Layer: Includes user applications and services.
2.3 Advantages:
• Modularity: Easier to manage and update individual layers without affecting others.
• Isolation: Problems in one layer do not necessarily affect others.
2.4 Disadvantages:
• Performance Overhead: Layered architecture can introduce overhead due to the
abstraction layers.
• Complexity: Design and implementation can be more complex due to the multiple
layers.
2.5 Examples:
• Windows NT (modern versions)

3. Microkernel Structure
3.1 Definition:

• In a microkernel structure, only the most fundamental functionalities (such as inter-
process communication and basic scheduling) are included in the kernel. Other
services like device drivers, file systems, and network protocols run in user space as
separate processes.
3.2 Key Components:
• Microkernel: Manages low-level operations and communication between user-space
processes.
• User-Space Services: Includes drivers, file systems, and network protocols.
3.3 Advantages:
• Modularity: Allows easy modification and extension of services.
• Fault Isolation: Crashes or bugs in user-space services do not bring down the entire
system.
• Flexibility: Easier to adapt and expand.
3.4 Disadvantages:
• Performance Overhead: Communication between user-space processes and the
microkernel can introduce latency.
• Complex Design: Requires efficient inter-process communication and careful design.
3.5 Examples:
• MINIX
• QNX
• Mach (used in macOS and GNU Hurd)

4. Hybrid Structure
4.1 Definition:
• A hybrid OS structure combines elements of both monolithic and microkernel
designs. It incorporates some services in the kernel space for performance reasons
while running other services in user space for modularity and stability.
4.2 Key Components:
• Kernel Space: Contains core functionalities that require high performance.
• User Space: Hosts additional services and drivers to ensure modularity and
extensibility.
4.3 Advantages:
• Balanced Performance: Combines the performance benefits of monolithic kernels
with the modularity of microkernels.

• Improved Stability: More stable than monolithic systems due to isolated user-space
services.
4.4 Disadvantages:
• Complexity: More complex than purely monolithic or microkernel designs due to the
mix of both approaches.
• Potential Overhead: Some overhead may still exist due to the combination of kernel
and user-space services.
4.5 Examples:
• Windows NT (modern versions)
• macOS (using XNU, which combines Mach microkernel and elements from
FreeBSD)

5. Exokernel Structure
5.1 Definition:
• Exokernels provide only minimal abstractions and allow applications to manage
hardware resources directly. The OS provides a basic interface for hardware access,
and applications are responsible for implementing higher-level abstractions.
5.2 Key Components:
• Exokernel: Manages low-level hardware operations and provides a basic interface for
resource management.
• Library Operating Systems: Implement higher-level abstractions and functionality in
user space.
5.3 Advantages:
• Fine-Grained Control: Applications have more control over hardware resources.
• Customization: Allows for highly customized operating environments based on
specific application needs.
5.4 Disadvantages:
• Complex Development: Application developers need to handle resource management,
which can be complex.
• Limited Abstractions: The OS provides fewer abstractions, which might make
development harder for some applications.
5.5 Examples:
• MIT Exokernel

6. Nano Kernel Structure
6.1 Definition:
• Nano kernels are an even more minimalistic approach than microkernels, containing
only the most essential components, such as interrupt handling and basic resource
management.
6.2 Key Components:
• Nano Kernel: Provides only the most basic functions needed to manage hardware and
interrupts.
• User-Space Services: All other services, such as device drivers and file systems, run
in user space.
6.3 Advantages:
• Minimalistic Design: Extremely lightweight and focused on core functionalities.
• Modularity: Similar benefits of microkernels with an even smaller kernel.
6.4 Disadvantages:
• Increased Complexity for Developers: Requires developers to implement more
functionalities in user space.
• Limited Functionality: Only suitable for systems where minimal functionality is
acceptable.
6.5 Examples:
• Some specialized embedded systems
Virtual Machines (VMs)
A Virtual Machine (VM) is an emulation of a computer system that provides the functionality
of a physical computer. Virtual machines are used to run multiple operating systems on a
single physical machine, each operating as if it were running on its own separate hardware.

1. Concept and Definition


1.1 Definition:
• A virtual machine is a software-based simulation of a physical computer that runs an
operating system and applications as if it were an actual physical machine. VMs are
created and managed by a hypervisor, which allocates resources to each VM and
manages their operation.
1.2 Purpose:
• Isolation: VMs provide isolation between different operating systems or applications
running on the same physical hardware.

• Resource Utilization: Improve resource utilization by allowing multiple virtual
systems to run on a single physical system.
• Testing and Development: Facilitate testing and development environments without
requiring additional physical hardware.

2. Types of Virtual Machines


2.1 System Virtual Machines
• Definition: System virtual machines emulate an entire physical computer system.
They provide a complete virtual environment with its own operating system (guest
OS) and applications.
• Use Cases: Running different operating systems on a single physical machine, server
consolidation, and development/testing environments.
• Examples: VMware Workstation, VirtualBox, and Hyper-V.
2.2 Process Virtual Machines
• Definition: Process virtual machines provide a virtual environment for running a
specific process or application. They abstract the execution environment of a single
process rather than an entire system.
• Use Cases: Running applications in a controlled environment, enabling platform-
independent execution of programs.
• Examples: Java Virtual Machine (JVM), .NET Common Language Runtime (CLR).

3. Components of Virtual Machines


3.1 Hypervisor (Virtual Machine Monitor)
• Definition: The hypervisor is the software layer responsible for creating, managing,
and running virtual machines. It sits between the hardware and the virtual machines,
allocating resources and ensuring isolation.
• Types:
o Type 1 Hypervisor (Bare-Metal): Runs directly on the hardware and manages
multiple VMs. Examples include VMware ESXi, Microsoft Hyper-V, and
Xen.
o Type 2 Hypervisor (Hosted): Runs on top of a host operating system and
manages virtual machines from within the host OS. Examples include
VMware Workstation, VirtualBox, and Parallels Desktop.
3.2 Virtual Machine Image

• Definition: A virtual machine image is a file or set of files that contain the virtual
machine’s operating system, applications, and data. It is used to create and restore
virtual machines.
• Types:
o Full VM Image: Includes the complete system state, including the OS,
applications, and all data.
o Snapshot: A point-in-time copy of the VM’s state, allowing rollback to a
previous state.
3.3 Guest Operating System
• Definition: The operating system installed and running within a virtual machine. It
functions as if it were running on a physical machine.
• Characteristics: Each VM can run a different guest OS, including different versions or
distributions of operating systems.

4. Benefits of Virtual Machines


4.1 Resource Efficiency
• Consolidation: Multiple virtual machines can run on a single physical machine,
leading to better utilization of hardware resources.
• Cost Savings: Reduces the need for additional physical servers, saving on hardware,
power, and cooling costs.
4.2 Isolation and Security
• Isolation: VMs are isolated from each other, preventing one VM’s issues from
affecting others.
• Security: Provides a controlled environment for running applications, reducing the
risk of security breaches affecting the host system.
4.3 Flexibility and Scalability
• Snapshot and Cloning: Allows for quick snapshots and cloning of VMs for backup,
testing, and deployment.
• Scalability: VMs can be easily scaled up or down based on resource needs.
4.4 Testing and Development
• Environment Replication: Enables the creation of consistent testing environments that
can be easily replicated and modified.
• Cross-Platform Testing: Facilitates testing across different operating systems and
configurations without needing separate physical machines.

5. Challenges and Limitations
5.1 Performance Overhead
• Resource Contention: Multiple VMs sharing the same physical resources can lead to
performance degradation due to resource contention.
• Overhead: Hypervisors introduce some overhead, which can affect the performance of
VMs compared to running directly on physical hardware.
5.2 Complexity
• Management: Managing multiple VMs, including their resource allocation, updates,
and security, can become complex.
• Networking: Networking configurations for VMs can be complex, especially in multi-
VM and multi-network environments.
5.3 Licensing and Cost
• Licensing: Some virtualization platforms and guest operating systems may have
licensing costs associated with them.
• Resource Management: Requires careful planning and management to ensure efficient
use of physical resources.

6. Applications of Virtual Machines


6.1 Server Virtualization
• Definition: The use of VMs to consolidate multiple server roles onto a single physical
server, improving efficiency and reducing hardware costs.
6.2 Desktop Virtualization
• Definition: Running virtual desktops on a central server, allowing users to access their
desktop environment from any device.
6.3 Cloud Computing
• Definition: Virtual machines are fundamental to cloud computing platforms, enabling
scalable and flexible resource allocation in the cloud.
6.4 Disaster Recovery
• Definition: Using VM snapshots and backups to quickly restore systems and
applications in the event of a failure or disaster.
Process Management in Operating Systems
Process management is a crucial function of an operating system (OS) that involves the
creation, scheduling, and termination of processes. It ensures efficient execution of
multiple processes, managing the allocation of CPU time and system resources to
maintain optimal system performance.

1. What is a Process?
1.1 Definition:
• A process is an instance of a program in execution. It includes the program code,
current activity, and the state of the program.
1.2 Components of a Process:
• Program Code (Text Section): The actual code to be executed.
• Program Counter: Indicates the next instruction to be executed.
• Process Stack: Contains temporary data like function parameters, return addresses,
and local variables.
• Data Section: Includes global variables and dynamic memory allocations.
• Process Control Block (PCB): Contains information about the process state, program
counter, CPU registers, memory management information, and I/O status.

2. Process States
A process can be in one of the following states:
2.1 New:
• The process is being created.
2.2 Ready:
• The process is loaded into memory and is waiting to be executed by the CPU.
2.3 Running:
• The process is currently being executed by the CPU.
2.4 Waiting (Blocked):
• The process is waiting for an event or I/O operation to complete.
2.5 Terminated:
• The process has finished execution or has been terminated by the OS.

3. Process Scheduling
Process scheduling is the method by which the OS decides which process runs at any
given time. It is vital for multitasking and efficient CPU utilization.
3.1 Schedulers:

• Long-Term Scheduler (Job Scheduler): Decides which processes are admitted to the
system for processing.
• Short-Term Scheduler (CPU Scheduler): Determines which of the ready processes
should be executed next by the CPU.
• Medium-Term Scheduler: Swaps processes in and out of memory to manage the
degree of multiprogramming.
3.2 Scheduling Algorithms:
• First-Come, First-Served (FCFS): Processes are scheduled in the order they arrive.
• Shortest Job Next (SJN): The process with the smallest execution time is selected
next.
• Priority Scheduling: Processes are scheduled based on priority.
• Round Robin (RR): Each process is assigned a fixed time slice, and processes are
scheduled in a cyclic order.
• Multilevel Queue Scheduling: Processes are divided into multiple queues, each with
different priority levels.
• Multilevel Feedback Queue: Processes can move between queues based on their
behavior and execution history.
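To make the simpler algorithms concrete, the first of them (FCFS) can be simulated in a few lines of Python. The burst times below are hypothetical example values, not from any real workload:

```python
# Minimal FCFS scheduling sketch: processes run strictly in arrival order.
def fcfs(bursts):
    """Return (waiting, turnaround) time lists for FCFS order."""
    waiting, turnaround, clock = [], [], 0
    for burst in bursts:
        waiting.append(clock)       # time spent in the ready queue so far
        clock += burst              # non-preemptive: runs to completion
        turnaround.append(clock)    # submission-to-completion time
    return waiting, turnaround

w, t = fcfs([24, 3, 3])             # a long job arriving first
print(w, t)                         # [0, 24, 27] [24, 27, 30]
```

Note how the two short jobs wait 24 and 27 time units behind the long one — the convoy effect that motivates Shortest Job Next.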

4. Context Switching
4.1 Definition:
• Context switching is the process of saving the state of a currently running process and
loading the state of the next process to be executed.
4.2 Steps Involved:
• Save the state of the current process in its PCB.
• Update the process state to "waiting" or "ready."
• Load the state of the next process from its PCB.
• Update the process state to "running."
4.3 Overhead:
• Context switching introduces overhead, as it requires time to save and load process
states. Minimizing context switches is crucial for system efficiency.
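The save/load steps above can be sketched with plain dictionaries standing in for the CPU registers and the PCBs. The field names here are illustrative only, not a real kernel interface:

```python
# Toy context switch: "cpu" is the live register set, PCBs are dicts.
def context_switch(cpu, current_pcb, next_pcb):
    current_pcb["registers"] = dict(cpu)   # 1. save state of running process
    current_pcb["state"] = "ready"         # 2. update its state
    cpu.clear()
    cpu.update(next_pcb["registers"])      # 3. load state of next process
    next_pcb["state"] = "running"          # 4. mark it as running

cpu = {"pc": 100, "acc": 7}
p1 = {"pid": 1, "state": "running", "registers": {}}
p2 = {"pid": 2, "state": "ready", "registers": {"pc": 200, "acc": 0}}
context_switch(cpu, p1, p2)
print(cpu["pc"], p1["state"], p2["state"])   # 200 ready running
```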

5. Process Synchronization

In a multitasking environment, processes may need to cooperate or share resources, which
can lead to race conditions and data inconsistency. Process synchronization mechanisms
ensure that processes execute in a safe and predictable manner.
5.1 Critical Section:
• A section of code where shared resources are accessed. Proper synchronization is
required to avoid conflicts.
5.2 Synchronization Mechanisms:
• Semaphores: Integer variables used to solve synchronization problems. Their wait (P)
and signal (V) operations enforce mutual exclusion and ordering.
• Mutexes: Locks that ensure only one process can access a critical section at a time.
• Monitors: High-level synchronization constructs that manage access to shared
resources.
• Barriers: Synchronization points where processes must wait until all involved
processes reach the barrier.
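A mutex in action can be shown with Python's standard threading module. Without the lock, the concurrent increments of the shared counter could interleave and lose updates; with it, the critical section is mutually exclusive:

```python
import threading

counter = 0                         # shared resource
lock = threading.Lock()             # mutex guarding the critical section

def worker(n):
    global counter
    for _ in range(n):
        with lock:                  # enter critical section
            counter += 1            # read-modify-write of shared data
                                    # lock released on leaving the block

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                      # 40000 -- no lost updates
```

A `threading.Semaphore(k)` would generalize this to allow up to k threads into the section at once.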

6. Process Communication
Processes may need to communicate with each other for data sharing and coordination.
6.1 Inter-Process Communication (IPC):
• Shared Memory: Processes share a common memory space for communication.
• Message Passing: Processes communicate by sending and receiving messages.
• Pipes: Unidirectional data channels used for communication between processes.
• Sockets: Network-based communication endpoints for processes on different systems.
• Signals: Notifications sent to processes to inform them of events or conditions.
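Of these mechanisms, a pipe is the simplest to demonstrate. For brevity this sketch keeps both ends in one process; in practice the read end would belong to one process and the write end to another (e.g. parent and child after fork()):

```python
import os

# Unidirectional pipe: bytes written to write_fd come out of read_fd.
read_fd, write_fd = os.pipe()
os.write(write_fd, b"hello from writer")
os.close(write_fd)                    # closing signals end-of-data
message = os.read(read_fd, 1024)
os.close(read_fd)
print(message.decode())               # hello from writer
```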

7. Process Creation and Termination


7.1 Process Creation:
• Processes can create child processes through system calls like fork() (in Unix-like
systems). The parent and child processes can execute concurrently.
7.2 Process Termination:
• A process can terminate itself or be terminated by another process. This releases
resources held by the process.
Process Model in Operating Systems

The process model in operating systems is a conceptual framework that describes how
processes are managed, executed, and interact within a system. It defines how the
operating system abstracts the hardware to create an environment where multiple
processes can run concurrently, ensuring efficient resource utilization and isolation.

1. What is a Process Model?


1.1 Definition:
• The process model refers to the abstraction that the operating system uses to manage
running programs. It includes the representation of processes, their life cycle, and how
they interact with each other and the system.
1.2 Key Concepts:
• Process: An instance of a program in execution.
• Concurrency: The ability of the system to handle multiple processes simultaneously.
• Isolation: Ensuring that each process operates in its own environment, protecting it
from others.

2. Components of the Process Model


2.1 Process Control Block (PCB):
• The PCB is a data structure used by the operating system to store information about a
process. It includes:
o Process State: The current state of the process (new, ready, running, waiting,
terminated).
o Program Counter: The address of the next instruction to be executed.
o CPU Registers: Includes register values when the process is not running.
o Memory Management Information: Data on memory allocation for the
process.
o I/O Status Information: Information on the I/O devices allocated to the
process.
o Process ID (PID): A unique identifier for the process.
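The PCB fields listed above can be mirrored in a small data structure. This is an illustrative layout only; a real kernel keeps the PCB in C structs inside kernel memory:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int                                           # unique process ID
    state: str = "new"                                 # new/ready/running/waiting/terminated
    program_counter: int = 0                           # next instruction address
    registers: dict = field(default_factory=dict)      # saved CPU registers
    memory_info: dict = field(default_factory=dict)    # e.g. base/limit, page table
    open_files: list = field(default_factory=list)     # I/O status information

pcb = PCB(pid=42)
pcb.state = "ready"
print(pcb.pid, pcb.state)    # 42 ready
```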
2.2 Process Life Cycle:
• The life cycle of a process includes several stages:
o Creation: The process is created and its PCB is initialized.
o Ready: The process is waiting to be assigned to a CPU for execution.
o Running: The process is currently being executed by the CPU.

o Waiting: The process is waiting for some event (like I/O completion) to occur.
o Termination: The process has finished execution and is being removed from
the system.

3. Process State Diagram


The process state diagram visually represents the possible states of a process and the
transitions between these states.
3.1 States:
• New: Process is being created.
• Ready: Process is waiting for CPU allocation.
• Running: Process is being executed.
• Waiting: Process is waiting for some event.
• Terminated: Process has finished execution.
3.2 State Transitions:
• Dispatch: Moving a process from ready to running.
• Timeout: Moving a process from running to ready due to time slice expiration.
• Event Wait: Moving a process from running to waiting due to an I/O request.
• Event Occur: Moving a process from waiting to ready when the awaited event
occurs.
• Completion: Moving a process from running to terminated when the process
completes execution.
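The transitions above can be encoded as a small table that rejects any move the state diagram does not allow (an extra "admit" transition for new-to-ready is assumed here, since creation must lead to the ready state):

```python
# Legal transitions from the process state diagram.
TRANSITIONS = {
    ("new", "ready"): "admit",
    ("ready", "running"): "dispatch",
    ("running", "ready"): "timeout",
    ("running", "waiting"): "event wait",
    ("waiting", "ready"): "event occur",
    ("running", "terminated"): "completion",
}

def move(state, new_state):
    if (state, new_state) not in TRANSITIONS:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

s = "new"
for nxt in ["ready", "running", "waiting", "ready", "running", "terminated"]:
    s = move(s, nxt)                  # admit, dispatch, event wait, ...
print(s)                              # terminated
```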

4. Types of Process Models


4.1 Single-Threaded Process Model:
• Each process has a single thread of execution.
• Simpler to manage but less efficient in utilizing system resources.
• Switching between processes requires context switching, which can be time-
consuming.
4.2 Multi-Threaded Process Model:
• Processes contain multiple threads, each capable of executing independently.
• More efficient as threads share the same process resources (memory, file handles).
• Allows for better resource utilization and responsiveness.

• Requires synchronization mechanisms to manage access to shared resources.

5. Process Interaction
5.1 Inter-Process Communication (IPC):
• Shared Memory: Processes communicate by sharing a region of memory.
• Message Passing: Processes send and receive messages through the operating system.
• Pipes, Sockets, Signals: Other mechanisms to facilitate communication between
processes.
5.2 Synchronization:
• Ensures that processes interact with shared resources in a safe manner.
• Avoids race conditions and ensures data consistency.
• Tools include semaphores, mutexes, and monitors.

6. Process Scheduling and Management


6.1 Schedulers:
• Long-Term Scheduler: Decides which processes are admitted to the system for
execution.
• Short-Term Scheduler: Selects which process in the ready state should be executed
next.
• Medium-Term Scheduler: Manages swapping of processes in and out of memory.
6.2 Scheduling Algorithms:
• Determine the order in which processes are executed to optimize CPU utilization.
• Common algorithms include First-Come, First-Served (FCFS), Shortest Job Next
(SJN), Round Robin (RR), and Priority Scheduling.

7. Advantages of the Process Model


• Isolation: Each process runs in its own environment, which enhances security and
stability.
• Concurrency: Allows multiple processes to run simultaneously, improving system
throughput.
• Resource Sharing: Enables efficient sharing and management of system resources.

8. Challenges in Process Management
• Deadlock: Situations where processes are unable to proceed because they are waiting
on each other to release resources.
• Starvation: A process may be perpetually denied resources due to the scheduling
policy.
• Race Conditions: Incorrect behavior caused by the unpredictable timing of events in
concurrent processes.
Process Scheduling in Operating Systems
Process scheduling is a fundamental aspect of operating systems (OS) that determines the
order in which processes access the CPU and other system resources. Effective
scheduling is crucial for achieving optimal system performance, maximizing CPU
utilization, and ensuring a responsive user experience.

1. Objectives of Process Scheduling


1.1 Maximizing CPU Utilization:
• Keeping the CPU as busy as possible by ensuring there is always a process ready to
execute.
1.2 Maximizing Throughput:
• Increasing the number of processes completed in a given time frame.
1.3 Minimizing Turnaround Time:
• Reducing the total time taken for a process to complete from the time of submission
to completion.
1.4 Minimizing Waiting Time:
• Decreasing the amount of time a process spends in the ready queue waiting for CPU
allocation.
1.5 Minimizing Response Time:
• Shortening the time it takes from when a request is submitted until the first response
is produced.
1.6 Fairness:
• Ensuring that each process gets a fair share of the CPU time.

2. Types of Schedulers
The OS uses different types of schedulers to manage processes at various stages of their
lifecycle.

2.1 Long-Term Scheduler (Job Scheduler):
• Decides which processes are admitted into the system for execution.
• Controls the degree of multiprogramming (the number of processes in memory).
• Infrequent execution, as it decides the overall job load.
2.2 Short-Term Scheduler (CPU Scheduler):
• Selects which of the ready processes in the ready queue will be executed next by the
CPU.
• Executes frequently (often every few milliseconds).
• Makes decisions quickly to maintain system responsiveness.
2.3 Medium-Term Scheduler:
• Swaps processes in and out of the memory to optimize CPU and memory usage.
• Works in systems with medium-term scheduling needs, such as swapping to manage
multiprogramming.

3. Scheduling Criteria
When choosing a scheduling algorithm, various criteria are considered:
3.1 CPU Utilization:
• The percentage of time the CPU is active and executing processes.
3.2 Throughput:
• The number of processes completed per unit of time.
3.3 Turnaround Time:
• The total time taken from process submission to completion, including waiting,
execution, and I/O time.
3.4 Waiting Time:
• The total time a process spends in the ready queue.
3.5 Response Time:
• The time from process submission until the first output is produced.

4. Scheduling Algorithms
Several scheduling algorithms are used to decide the order of process execution. Each has
its advantages and disadvantages depending on system requirements.
4.1 First-Come, First-Served (FCFS):
• Processes are executed in the order they arrive in the ready queue.
• Simple to implement but can lead to the convoy effect, where short processes get
stuck waiting behind long processes.
• Non-preemptive: Once a process starts executing, it runs to completion.
4.2 Shortest Job Next (SJN) / Shortest Job First (SJF):
• Processes with the shortest burst time are executed first.
• Optimal for minimizing average waiting time but requires accurate prediction of
process burst times.
• Can be non-preemptive or preemptive (Shortest Remaining Time First - SRTF).
4.3 Priority Scheduling:
• Each process is assigned a priority, and the CPU is allocated to the process with the
highest priority.
• Can be preemptive or non-preemptive.
• Risk of starvation: Low-priority processes may never get executed.
• Aging can be used to gradually increase the priority of processes that wait too long.
4.4 Round Robin (RR):
• Each process is assigned a fixed time slice (quantum), and processes are executed in a
cyclic order.
• Suitable for time-sharing systems as it provides a fair distribution of CPU time.
• The choice of time quantum is crucial: too short leads to excessive context switching;
too long behaves like FCFS.
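Round Robin's cyclic behaviour is easy to simulate. The sketch below assumes all processes arrive at time 0 and uses hypothetical burst times; each process runs for at most one quantum before going to the back of the queue:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Return {pid: completion time} for RR with all arrivals at t=0."""
    queue = deque((pid, burst) for pid, burst in enumerate(bursts))
    clock, completion = 0, {}
    while queue:
        pid, remaining = queue.popleft()
        run = min(quantum, remaining)         # run one quantum at most
        clock += run
        if remaining > run:
            queue.append((pid, remaining - run))  # re-queue the leftover work
        else:
            completion[pid] = clock           # finished within this slice
    return completion

print(round_robin([24, 3, 3], quantum=4))     # {1: 7, 2: 10, 0: 30}
```

The short jobs finish at times 7 and 10 instead of waiting behind the long one, illustrating why RR suits time-sharing systems.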
4.5 Multilevel Queue Scheduling:
• The ready queue is divided into several separate queues based on process priority or
type (e.g., system processes, interactive processes).
• Each queue has its own scheduling algorithm, and there’s a scheduling strategy to
choose between queues.
• Processes do not move between queues.
4.6 Multilevel Feedback Queue:
• Allows processes to move between queues based on their behavior and execution
history.
• Provides flexibility and can adapt to different process requirements.
• Helps in mitigating the drawbacks of strict multilevel queue scheduling.
4.7 Earliest Deadline First (EDF):

• Used in real-time systems, where the process with the earliest deadline is selected for
execution.
• Dynamic priority scheduling: process priorities can change over time.

5. Preemptive vs. Non-Preemptive Scheduling


5.1 Preemptive Scheduling:
• The CPU can be taken away from a running process if a higher-priority process
arrives.
• Allows the system to be more responsive but requires more complex handling (e.g.,
context switching).
5.2 Non-Preemptive Scheduling:
• Once a process starts executing, it runs to completion or until it enters a waiting state.
• Simpler to implement but can lead to poor system responsiveness.

6. Context Switching
6.1 Definition:
• The process of storing the state of a currently running process and loading the state of
the next process to be executed.
• Necessary for switching the CPU from one process to another.
6.2 Overhead:
• Context switching is resource-intensive as it involves saving and loading registers,
program counters, and memory mappings.
• Excessive context switches can reduce system performance.

7. Performance Considerations
• Choosing the Right Algorithm: The optimal scheduling algorithm depends on the
system's specific requirements and workload characteristics.
• Balancing Criteria: Trade-offs between CPU utilization, throughput, turnaround
time, waiting time, and response time must be balanced.
• System Type: Different systems (e.g., batch systems, interactive systems, real-time
systems) have varying needs that influence the choice of scheduling algorithms.
Deadlocks in Operating Systems

Deadlocks are a critical issue in operating systems where a set of processes is unable to
proceed because each process is waiting for a resource that another process in the set
holds. This situation leads to a standstill, where none of the processes can continue to
execute. Understanding deadlocks, their causes, and how to handle them is crucial for
system stability and resource management.

1. Conditions for Deadlock


For a deadlock to occur, four necessary conditions, known as Coffman’s conditions, must
be present simultaneously:
1.1 Mutual Exclusion:
• At least one resource must be held in a non-sharable mode; only one process can use
the resource at a time.
• If another process requests the resource, it must wait until the resource is released.
1.2 Hold and Wait:
• A process holding at least one resource is waiting to acquire additional resources that
are currently being held by other processes.
• Processes do not release their held resources while waiting for new ones.
1.3 No Preemption:
• Resources cannot be forcibly removed from a process holding them until the resource
is released voluntarily.
• A process must release its resources only after completing its task.
1.4 Circular Wait:
• A set of processes is in a circular chain where each process holds at least one resource
that the next process in the chain needs.
• This creates a cycle of dependencies that cannot be resolved.

2. Deadlock Prevention
Deadlock prevention involves ensuring that at least one of the necessary conditions for
deadlock cannot hold. Strategies include:
2.1 Mutual Exclusion:
• Avoid mutual exclusion where possible by allowing resources to be shared if they can
be used concurrently.
• For example, read-only files can be shared among processes.
2.2 Hold and Wait:

• Require processes to request all the resources they need at once before execution
begins, ensuring they don't hold any resources while waiting.
• Another approach is to require processes to release all held resources before
requesting new ones.
2.3 No Preemption:
• Allow the system to preempt resources from processes.
• If a process holding resources requests additional resources that cannot be
immediately allocated, preempt the process's current resources and allocate them to
others.
2.4 Circular Wait:
• Impose an ordering on resource acquisition to prevent circular wait.
• Require that each process requests resources in a predefined order, and release
resources before requesting those with a higher order.

3. Deadlock Avoidance
Deadlock avoidance requires the system to have additional information about how
resources are to be requested. The most commonly used algorithm for this is Banker’s
Algorithm:
3.1 Banker’s Algorithm:
• Designed for systems with multiple instances of each resource type.
• Each process must declare the maximum number of instances of each resource type it
may need.
• The system evaluates each resource allocation request to determine whether granting
it would leave the system in a safe state.
• A system is in a safe state if there exists a safe sequence of processes where each
process can finish executing with the available resources.
3.2 Safe State:
• A state is safe if the system can allocate resources to each process in some order and
still avoid a deadlock.
• If a system is in a safe state, there is no possibility of a deadlock.
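The safety check at the heart of Banker's Algorithm can be sketched as follows. The matrices are a classic textbook-style example chosen for illustration, not data from the notes: five processes, three resource types.

```python
# A minimal sketch of the Banker's Algorithm safety check.
def is_safe(available, allocation, need):
    """Return (True, sequence) if a safe sequence of processes exists."""
    work = available[:]                      # resources currently free
    finished = [False] * len(allocation)
    sequence = []
    progressed = True
    while progressed:
        progressed = False
        for i, (alloc, nd) in enumerate(zip(allocation, need)):
            if not finished[i] and all(n <= w for n, w in zip(nd, work)):
                # Process i can run to completion, then return its resources.
                work = [w + a for w, a in zip(work, alloc)]
                finished[i] = True
                sequence.append(i)
                progressed = True
    return all(finished), sequence

available  = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
max_claim  = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
# Need = maximum declared claim minus what is already allocated.
need = [[m - a for m, a in zip(mx, al)] for mx, al in zip(max_claim, allocation)]

safe, seq = is_safe(available, allocation, need)
print(safe, seq)   # → True [1, 3, 4, 0, 2]
```

A request is granted only if the state that would result still passes this check; otherwise the process waits.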

4. Deadlock Detection and Recovery


In systems where deadlocks are allowed to occur, the OS must have mechanisms to detect
and recover from them.
4.1 Deadlock Detection:
• The OS periodically checks the system for the presence of a deadlock.
• It can use algorithms similar to those used in graph theory to detect cycles in a
resource allocation graph.
• Resource Allocation Graph:
o Vertices represent processes and resources.
o Edges indicate resource allocation and requests.
o If a cycle is detected in the graph, a deadlock exists.
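Detection on a resource allocation graph reduces to finding a cycle in a directed graph. A minimal depth-first-search sketch, with invented example graphs: an edge P → R means "P requests R", and R → P means "R is held by P".

```python
# Deadlock detection as cycle detection in a directed graph.
def has_cycle(graph):
    """DFS with colouring: grey = on the current path (a grey
    neighbour means a back edge, i.e. a cycle)."""
    WHITE, GREY, BLACK = 0, 1, 2
    colour = {node: WHITE for node in graph}

    def dfs(node):
        colour[node] = GREY
        for nxt in graph.get(node, []):
            if colour.get(nxt, WHITE) == GREY:
                return True                  # back edge found: cycle
            if colour.get(nxt, WHITE) == WHITE and dfs(nxt):
                return True
        colour[node] = BLACK
        return False

    return any(colour[n] == WHITE and dfs(n) for n in graph)

# P1 holds R1 and requests R2; P2 holds R2 and requests R1: deadlock.
deadlocked = {"P1": ["R2"], "R2": ["P2"], "P2": ["R1"], "R1": ["P1"]}
# P1 holds R1; P2 merely requests R1: no cycle, no deadlock.
ok = {"P2": ["R1"], "R1": ["P1"], "P1": []}

print(has_cycle(deadlocked), has_cycle(ok))   # → True False
```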
4.2 Recovery from Deadlock:
• Process Termination: Terminate one or more processes involved in the deadlock to
break the cycle.
o Abort all deadlocked processes: Guarantees recovery but is costly, since the
partial work of every aborted process is lost.
o Abort one process at a time: Repeatedly abort processes until the deadlock is
resolved.
• Resource Preemption: Selectively preempt resources from processes and reallocate
them to others to break the deadlock.
o Selecting a victim: The OS uses criteria to choose which process to terminate
or preempt.
o Rollback: Roll back the process to some earlier safe state and restart from
there.
o Starvation: Ensure that preempted processes do not starve by implementing
aging or other priority mechanisms.

5. Deadlock Handling Strategies


5.1 Ignoring Deadlocks:
• Also known as the "Ostrich Algorithm."
• The OS assumes that deadlocks are rare and handles them manually if they occur.
• Suitable for systems where the cost of deadlock prevention, avoidance, or detection
outweighs the benefits.
5.2 Combined Approach:
• In practice, operating systems may use a combination of prevention, avoidance,
detection, and recovery strategies.
• For instance, deadlocks involving certain resources may be prevented, while
deadlocks in other scenarios are detected and recovered from.

6. Examples of Deadlocks
• Resource Deadlocks: Multiple processes wait for resources held by others, leading to
a cycle.
o Example: Two processes, A and B, each holding a resource needed by the
other.
• Communication Deadlocks: Processes wait indefinitely for messages from each
other.
• Database Deadlocks: Transactions lock database rows in different orders, resulting in
a cycle of dependencies.
Memory Management in Operating Systems
Memory management is a crucial function of an operating system (OS) that manages
computer memory, including the allocation and deallocation of memory spaces to
processes. Efficient memory management ensures optimal system performance, stability,
and multitasking capabilities.

1. Objectives of Memory Management


• Efficient Utilization of Memory: Maximizing the use of available memory to
accommodate as many processes as possible.
• Protection: Ensuring that processes do not interfere with each other's memory spaces.
• Relocation: Allowing processes to move in memory while ensuring that they still run
correctly.
• Logical and Physical Address Mapping: Converting logical addresses generated by
the CPU into physical addresses in memory.
• Swapping and Virtual Memory Management: Managing situations where the
system's physical memory is insufficient to handle all active processes.

2. Memory Management Techniques


2.1 Single Contiguous Allocation:
• The entire process is loaded into a single contiguous block of memory.
• Simplest memory management scheme, but inefficient as it limits the system to one
process at a time.
2.2 Partitioned Allocation:
• Divides memory into fixed or variable-sized partitions.
• Fixed-Partitioning:
o Memory is divided into fixed-size partitions.
o A process is loaded into a partition large enough to accommodate it.
o Can lead to internal fragmentation: unused memory within a partition.
• Dynamic/Variable-Partitioning:
o Memory is divided into partitions dynamically, based on process requirements.
o Can lead to external fragmentation: small memory blocks become scattered,
making it difficult to allocate large processes.
2.3 Paging:
• Divides the process's logical memory and the system's physical memory into
fixed-size blocks called pages and frames, respectively.
• When a process is executed, its pages are loaded into any available memory frames.
• Eliminates external fragmentation and allows the physical address space of a process
to be non-contiguous.
• Uses a page table to map logical addresses to physical addresses.
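The page-table mapping just described can be sketched in a few lines. The page size and table contents below are illustrative assumptions: a logical address is split into a page number and an offset, and the page table supplies the frame.

```python
# A minimal sketch of logical-to-physical address translation under paging.
PAGE_SIZE = 4096   # 4 KiB pages (illustrative choice)

# page_table[page_number] = frame_number (invented contents)
page_table = {0: 5, 1: 2, 2: 7}

def translate(logical_address):
    page = logical_address // PAGE_SIZE      # high-order bits
    offset = logical_address % PAGE_SIZE     # low-order bits
    if page not in page_table:
        raise RuntimeError(f"page fault: page {page} not resident")
    frame = page_table[page]
    return frame * PAGE_SIZE + offset

# Logical address 4100 = page 1, offset 4 → frame 2, offset 4.
print(translate(4100))   # → 8196
```

In hardware this split is done by the MMU on every memory reference, with the TLB caching recent translations.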
2.4 Segmentation:
• Divides the process's memory into segments based on the logical divisions like
functions, data, stacks, etc.
• Each segment can vary in size.
• The logical address consists of a segment number and an offset within the segment.
• Segments are mapped to physical memory, potentially leading to external
fragmentation.
2.5 Paged Segmentation:
• Combines paging and segmentation to gain the advantages of both.
• Segments are divided into pages, which are then loaded into memory frames.
• Helps manage both segmentation's logical division and paging's physical memory
efficiency.

3. Virtual Memory
Virtual memory allows an OS to run applications that require more memory than is
physically available by using disk space to extend the apparent amount of available
memory.
3.1 Concepts:

• Paging: Divides virtual memory into pages, which are loaded into frames in physical
memory as needed.
• Page Replacement Algorithms: Determine which pages to swap in and out of
physical memory.
o FIFO (First-In-First-Out): Replaces the oldest page in memory.
o LRU (Least Recently Used): Replaces the page that has not been used for the
longest period.
o Optimal Page Replacement: Replaces the page that will not be used for the
longest time in the future; it requires future knowledge, so it serves as a
theoretical benchmark rather than a practical algorithm.
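FIFO and LRU can be compared directly by counting page faults on the same reference string. The string and frame count below are an invented example (a common textbook-style one), not data from the notes.

```python
from collections import OrderedDict, deque

# A minimal sketch of FIFO and LRU page replacement.
def fifo_faults(refs, frames):
    memory, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                memory.discard(queue.popleft())   # evict oldest arrival
            memory.add(page)
            queue.append(page)
    return faults

def lru_faults(refs, frames):
    memory, faults = OrderedDict(), 0
    for page in refs:
        if page in memory:
            memory.move_to_end(page)              # mark as recently used
        else:
            faults += 1
            if len(memory) == frames:
                memory.popitem(last=False)        # evict least recently used
            memory[page] = True
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3), lru_faults(refs, 3))  # → 9 10
```

Neither policy is universally better: on this particular string FIFO happens to fault less, which is why the optimal algorithm is kept as the yardstick.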
3.2 Benefits:
• Enables the execution of large applications that exceed physical memory capacity.
• Provides an abstraction of a large, contiguous memory space for applications.
• Reduces the need for physical memory, thus allowing multiple processes to run
simultaneously.
3.3 Swapping:
• Swapping is a technique where the OS swaps out entire processes from physical
memory to disk to free up space.
• When the swapped-out process is needed again, it is swapped back into physical
memory.

4. Memory Allocation Strategies


4.1 First Fit:
• Allocates the first block of memory that is large enough to satisfy the request.
• Fast and simple but can lead to fragmentation.
4.2 Best Fit:
• Allocates the smallest block of memory that is sufficient to satisfy the request.
• Reduces wasted space but can leave small unusable fragments.
4.3 Worst Fit:
• Allocates the largest available block of memory.
• Leaves large leftover spaces but can also lead to fragmentation.
4.4 Next Fit:
• Similar to First Fit but starts searching from the last allocated block.
• Can help distribute memory usage more evenly.
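The three main placement strategies can be contrasted on one free list. The block sizes and request below are invented for illustration; each function returns the index of the chosen free block.

```python
# A minimal sketch of First Fit, Best Fit and Worst Fit placement.
def first_fit(blocks, request):
    for i, size in enumerate(blocks):
        if size >= request:
            return i                      # first block that is big enough
    return None

def best_fit(blocks, request):
    candidates = [(size, i) for i, size in enumerate(blocks) if size >= request]
    return min(candidates)[1] if candidates else None   # least leftover

def worst_fit(blocks, request):
    candidates = [(size, i) for i, size in enumerate(blocks) if size >= request]
    return max(candidates)[1] if candidates else None   # largest block

free_blocks = [100, 500, 200, 300, 600]   # free-block sizes in KB (invented)
print(first_fit(free_blocks, 212),        # → 1  (first block ≥ 212 is 500)
      best_fit(free_blocks, 212),         # → 3  (300 wastes the least)
      worst_fit(free_blocks, 212))        # → 4  (600 is the largest)
```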

5. Fragmentation
5.1 Internal Fragmentation:
• Occurs when fixed-size memory partitions are allocated to processes that are smaller
than the partition size.
• The unused memory within the partition is wasted.
5.2 External Fragmentation:
• Occurs when free memory is scattered in small blocks throughout the system.
• Can make it difficult to allocate large contiguous blocks to processes.
5.3 Compaction:
• A technique to reduce external fragmentation by relocating processes to consolidate
free memory into a single contiguous block.
• Time-consuming and resource-intensive.

6. Memory Protection and Sharing


6.1 Protection:
• Ensures that processes do not interfere with each other's memory space.
• Implemented using hardware mechanisms like base and limit registers or page
tables.
6.2 Sharing:
• Allows multiple processes to access the same memory region, useful for shared
libraries or inter-process communication.
• Achieved through shared pages in paging or shared segments in segmentation.

7. Hardware Support for Memory Management


7.1 Memory Management Unit (MMU):
• Hardware device that translates logical addresses to physical addresses at runtime.
• Uses page tables and segment tables to perform address translation.
7.2 Translation Lookaside Buffer (TLB):
• A small, fast cache that stores recent address translations to speed up memory access.
• Reduces the time taken to perform address translation in paging systems.
Device I/O Management in Operating Systems
Device I/O management is a crucial component of operating systems, responsible for
managing the interaction between the system and various input/output devices such as
disks, printers, network interfaces, and keyboards. Effective I/O management ensures
smooth communication between hardware devices and applications, maximizing
performance and resource utilization.

1. Objectives of Device I/O Management


• Efficiency: Maximizing throughput and minimizing latency for I/O operations.
• Device Independence: Allowing applications to perform I/O operations without
needing to know the specifics of the hardware.
• Uniform Naming: Providing a consistent naming scheme for devices to ensure easy
identification and access.
• Error Handling: Detecting, reporting, and recovering from I/O errors.
• Buffering and Caching: Reducing the performance gap between slow I/O devices
and faster system components by using buffering and caching techniques.
• Multiplexing: Sharing I/O devices among multiple processes efficiently.

2. Components of I/O System


2.1 Device Controllers:
• Hardware interfaces that manage the operation of a particular type of device.
• Translate high-level commands from the operating system into device-specific
operations.
• Interact with the CPU and memory through the system bus.
2.2 Device Drivers:
• Software modules that provide a uniform interface between the OS and hardware
devices.
• Translate OS I/O requests into device-specific commands understood by the device
controller.
• Handle device-specific tasks like initializing the device, managing data transfer, and
error handling.
2.3 I/O Devices:
• The physical hardware components like hard drives, printers, keyboards, and network
cards.
• Categorized into block devices (e.g., disk drives) and character devices (e.g.,
keyboards).

3. I/O Operations
3.1 Programmed I/O:
• The CPU is actively involved in I/O operations, polling the device to check if it is
ready for data transfer.
• Simple but inefficient as the CPU is busy-waiting, resulting in wasted CPU cycles.
3.2 Interrupt-Driven I/O:
• Devices generate interrupts to signal the CPU when they are ready for data transfer,
freeing the CPU to perform other tasks while waiting.
• More efficient than programmed I/O as it reduces CPU idle time.
• Requires an interrupt handler to manage and prioritize interrupts from multiple
devices.
3.3 Direct Memory Access (DMA):
• Allows devices to transfer data directly to and from memory without involving the
CPU for each byte of data.
• The CPU initializes the DMA controller, specifying the memory address and the
number of bytes to transfer.
• Once the transfer is complete, the DMA controller sends an interrupt to the CPU.
• Greatly improves efficiency, especially for large data transfers.

4. I/O Scheduling
I/O scheduling optimizes the order in which I/O requests are processed to enhance system
performance.
4.1 First-Come, First-Served (FCFS):
• Processes I/O requests in the order they arrive.
• Simple but can lead to suboptimal performance, especially with disk I/O where
requests are scattered across the disk.
4.2 Shortest Seek Time First (SSTF):
• Selects the I/O request that requires the least movement of the disk arm.
• Minimizes seek time, improving disk I/O efficiency.
• Can lead to starvation for requests far from the current disk head position.
4.3 SCAN and LOOK:

• SCAN: The disk arm moves back and forth across the disk, servicing requests in one
direction until it reaches the end, then reversing direction.
• LOOK: Similar to SCAN but only goes as far as the last request in each direction
before reversing.
• Reduces the average seek time by distributing movement more evenly across the disk.
4.4 C-SCAN and C-LOOK:
• C-SCAN: The disk arm moves in one direction only, servicing requests, and jumps
back to the start once it reaches the end.
• C-LOOK: Similar to C-SCAN but only goes as far as the last request, then jumps
back to the start.
• Provides more uniform wait times compared to SCAN and LOOK.
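The difference between FCFS and SSTF can be seen by computing total head movement for one request queue. The cylinder numbers below are a common textbook-style example, assumed here for illustration.

```python
# A minimal sketch comparing FCFS and SSTF disk scheduling by total
# head movement (in cylinders).
def fcfs_movement(start, requests):
    total, head = 0, start
    for r in requests:                     # service strictly in arrival order
        total += abs(r - head)
        head = r
    return total

def sstf_movement(start, requests):
    pending, total, head = list(requests), 0, start
    while pending:
        nearest = min(pending, key=lambda r: abs(r - head))  # least seek next
        total += abs(nearest - head)
        head = nearest
        pending.remove(nearest)
    return total

queue = [98, 183, 37, 122, 14, 124, 65, 67]   # pending cylinder requests
print(fcfs_movement(53, queue), sstf_movement(53, queue))  # → 640 236
```

SSTF cuts head movement by nearly two thirds here, but note that a steady stream of nearby requests could starve the requests at cylinders 183 or 14.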

5. Buffering and Caching


5.1 Buffering:
• Temporarily holds data while it is being transferred between two devices or between a
device and an application.
• Helps to accommodate speed mismatches between devices and the CPU.
• Can use single, double, or circular buffers to optimize data flow.
5.2 Caching:
• Stores frequently accessed data in a faster storage medium to reduce access times.
• Uses techniques like write-through (data is written to both cache and disk) and
write-back (data is only written to cache and later to disk).
5.3 Spooling:
• Stores data for devices that cannot handle interleaved data streams, like printers.
• Allows processes to send data to a spool (buffer), from which the device reads the
data at its own pace.

6. Error Handling
• Detection: Identifying errors during I/O operations, such as disk read/write errors or
device malfunctions.
• Reporting: Notifying the OS or applications about errors to take corrective actions.
• Recovery: Techniques like retrying operations, using error-correcting codes (ECC), or
marking bad sectors on disks to handle errors.

7. Device Independence and Naming
• Device Independence: Allows applications to perform I/O operations without
knowing the details of the hardware. The OS provides a uniform interface for
accessing devices.
• Uniform Naming: Devices are assigned logical names (e.g., /dev/sda1) that are used
to access them, abstracting away the hardware-specific details.

8. Virtual File Systems (VFS)


• VFS provides a common interface for different file systems, allowing the OS to
interact with different storage devices in a uniform manner.
• It abstracts the details of various file systems, making it easier to implement support
for different types of storage.

9. I/O Protection and Security


• Ensures that processes cannot access devices directly, preventing unauthorized or
potentially harmful operations.
• The OS enforces access control, allowing only authorized processes to perform I/O
operations.
File Management in Operating Systems
File management is a fundamental responsibility of an operating system (OS), involving
the organization, storage, retrieval, naming, sharing, and protection of files on storage
devices. Efficient file management ensures data is accessible, secure, and well-organized,
enabling users and applications to interact with stored information effectively.

1. Objectives of File Management


• Data Organization and Storage: Structuring data for easy access and efficient
storage.
• File Naming and Identification: Providing a consistent naming scheme to identify
files.
• File Manipulation: Allowing creation, deletion, reading, writing, and modification of
files.
• Access Control: Managing permissions to ensure security and proper access to files.
• Data Integrity and Recovery: Ensuring data integrity and providing mechanisms for
recovery in case of errors or system failures.

2. File System Concepts
2.1 File:
• A logical unit of storage that contains data or programs.
• Has attributes such as name, type, size, creation date, and permissions.
2.2 File Structure:
• Unstructured: A simple stream of bytes (e.g., text files).
• Structured: Contains a predefined format or structure (e.g., databases).
2.3 File Types:
• Text Files: Contain readable characters (e.g., .txt).
• Binary Files: Contain binary data, typically not human-readable (e.g., executable
files).
• Directory Files: Contain references to other files, organizing them into a hierarchy.
• Special Files: Represent devices, sockets, or other resources.
2.4 File Attributes:
• Name: The identifier for the file.
• Type: The kind of file (e.g., text, binary).
• Location: The storage device and path where the file resides.
• Size: The size of the file in bytes.
• Protection: Permissions indicating who can read, write, or execute the file.
• Timestamps: Dates of creation, modification, and last access.

3. File Operations
The OS provides various file operations to manage files, including:
• Create: Creating a new file in the file system.
• Delete: Removing a file from the file system.
• Open: Accessing a file for reading or writing.
• Close: Terminating access to an open file.
• Read: Reading data from a file.
• Write: Writing data to a file.
• Append: Adding data to the end of a file.
• Rename: Changing the name of a file.
• Seek: Moving the read/write pointer to a specific location in a file.

4. File Access Methods


4.1 Sequential Access:
• Data is accessed in a linear order, from the beginning to the end.
• Suitable for simple applications like reading logs or processing text files.
4.2 Direct/Random Access:
• Data can be read or written at any location in the file without accessing it sequentially.
• Supports operations like reading a specific record from a database.
4.3 Indexed Access:
• Uses an index to locate records or data blocks in a file.
• Facilitates fast searching and retrieval of data.

5. Directory Structure
Directories provide a way to organize files into a hierarchical structure, making it easier
to manage and locate files.
5.1 Single-Level Directory:
• All files are stored in a single directory.
• Simple but can lead to difficulties in file organization and name conflicts.
5.2 Two-Level Directory:
• Each user has their own directory, under which they can create and manage files.
• Reduces name conflicts and improves file organization.
5.3 Tree-Structured Directory:
• Directories are organized in a hierarchical tree structure, where each directory can
contain files and subdirectories.
• Offers flexibility and supports complex file organization.
5.4 Acyclic-Graph Directory:
• Allows directories to share subdirectories and files, creating an acyclic graph
structure.
• Facilitates sharing but requires mechanisms to handle multiple references to files and
directories.
5.5 General Graph Directory:
• Supports arbitrary links between files and directories, creating a general graph
structure.
• Complex and may require cycle-detection algorithms to prevent circular references.

6. File System Mounting


• Mounting: The process of making a file system accessible at a certain point in the
directory hierarchy.
• Allows integration of different storage devices and file systems into a single directory
structure.
• Mount Points: Directories where the file system is attached.

7. File Allocation Methods


7.1 Contiguous Allocation:
• Allocates a single contiguous block of space on the disk for a file.
• Provides fast access but can lead to external fragmentation and makes it
difficult to expand files.
7.2 Linked Allocation:
• Each file is a linked list of disk blocks, with each block containing a pointer to the
next block.
• Eliminates external fragmentation but makes direct access slow, since reaching
a given block requires traversing the pointer chain from the start of the file.
7.3 Indexed Allocation:
• Uses an index block to store pointers to the data blocks of the file.
• Offers direct access and efficient storage but requires additional space for the index.
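Linked allocation's pointer-chasing can be made concrete with a small sketch. The block numbers are invented: each entry records which disk block follows it in the file, with None marking end-of-file.

```python
# A minimal sketch of linked allocation: each disk block stores a
# pointer to the next block of the file (block numbers invented).
next_block = {4: 7, 7: 2, 2: 10, 10: None}

def file_blocks(start):
    """Follow the pointer chain from the file's first block."""
    blocks, b = [], start
    while b is not None:
        blocks.append(b)
        b = next_block[b]       # one disk access per hop in practice
    return blocks

print(file_blocks(4))   # → [4, 7, 2, 10]
```

Reading the fourth block means three pointer hops first, which is exactly the direct-access weakness noted above; indexed allocation avoids this by keeping all pointers together in one index block.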

8. File Protection and Security


• Access Control Lists (ACLs): Specify the permissions for each user or group,
controlling who can read, write, or execute a file.
• File Permissions: Define the level of access (e.g., read, write, execute) that users and
groups have to a file.
• Encryption: Protects file data by encoding it, ensuring that only authorized users can
read the file's contents.
• Backups and Recovery: Creating copies of files and directories to prevent data loss
and facilitate recovery in case of failures.

9. File System Implementation
9.1 File Control Block (FCB):
• A data structure that contains information about a file, such as its name, location, size,
and access permissions.
• The OS maintains an FCB for each file in the system.
9.2 Free Space Management:
• Keeps track of unallocated disk space to efficiently allocate space for new files.
• Methods include bitmaps, linked lists, and free space tables.
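The bitmap method can be sketched briefly. Bit i is 1 if disk block i is allocated and 0 if free; the layout below is invented for illustration, and allocation claims the first run of free blocks long enough for the request.

```python
# A minimal sketch of free-space management with a bitmap.
bitmap = [1, 1, 0, 1, 0, 0, 0, 1]   # invented allocation state

def allocate_contiguous(bm, n):
    """Claim the first run of n free blocks; return its start index,
    or None if no such run exists."""
    run = 0
    for i, bit in enumerate(bm):
        run = run + 1 if bit == 0 else 0   # extend or reset the free run
        if run == n:
            start = i - n + 1
            for j in range(start, i + 1):
                bm[j] = 1                  # mark the blocks as allocated
            return start
    return None

start = allocate_contiguous(bitmap, 3)
print(start, bitmap)   # → 4 [1, 1, 0, 1, 1, 1, 1, 1]
```

The single free block at index 2 is skipped because the request needs three contiguous blocks, showing how scattered free space (external fragmentation) can defeat an allocation even when enough total space is free.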
9.3 Disk Scheduling:
• Algorithms like FCFS, SSTF, SCAN, and C-SCAN are used to optimize the order of
disk I/O operations, improving overall system performance.
