
OPERATING SYSTEM NOTES (REF: READING 1- FROM UNIT 1-5)

UNIT 1: Introduction
An operating system is a program that manages a computer’s hardware. It also provides a basis for
application programs and acts as an intermediary between the computer user and the computer
hardware. An amazing aspect of operating systems is how they vary in accomplishing these tasks.

1.1 What Operating Systems Do


1. Components of a Computer System

 Hardware: CPU, memory, and I/O devices are the fundamental resources.

 Operating System: Controls hardware, coordinating between applications and users.

 Application Programs: Enable specific tasks (e.g., word processors, browsers).

 Users: End users interacting with the system.

2. Role of an Operating System

 Acts as a resource manager for CPU, memory, storage, and I/O devices.

 Functions similarly to a government by providing a functional environment but not performing tasks directly.

 Balances resource allocation, especially important in multi-user systems.

3. Views of Operating Systems

 User View:
o Single-user systems (e.g., PCs): Focus on user experience and ease of use.

o Multi-user systems (e.g., mainframes): Emphasize resource utilization and fairness among users.

o Networked workstations: Balance usability with shared resources.

o Mobile devices: Designed for personal use with a touch interface and limited
hardware.

o Embedded systems: Minimal user interface, designed to function autonomously.

 System View:

o Resource Allocator: Manages CPU time, memory, and I/O to ensure efficient, fair
use.

o Control Program: Manages program execution and controls hardware use to prevent
errors.

4. Defining an Operating System

 Core Purpose: To make hardware usable and facilitate program operation.

 Kernel: The essential part of the OS that always runs, handling core functions.

 System Programs: Provide supporting functions for the OS but are not part of the kernel.

 Application Programs: All non-system-specific programs used by users.

5. Scope of Operating Systems

 Variability: OS features vary by device, from minimal for embedded systems to extensive for
desktops and mobile devices.

 Middleware in Mobile OS: Extends OS functionality by supporting services for developers (e.g., databases, multimedia).

 Complexity and Legalities: As OS features expanded, regulatory issues arose (e.g., Microsoft's inclusion of browsers in Windows), emphasizing the impact of the OS on competition and software ecosystems.

1.4 Operating-System Structure

1. Multiprogramming

 Purpose: Keeps the CPU busy by having multiple jobs (programs) in memory.

 Process: Jobs are kept in a job pool on disk until ready for memory allocation.

 Job Switching: When one job waits (e.g., for I/O), the OS switches to another, ensuring CPU
is never idle.

2. Time-Sharing Systems

 Extension of Multiprogramming: Allows multiple users to interact with their programs.


 Interactive System: Users provide commands (via keyboard, mouse, etc.) and receive
immediate responses.

 User Experience: Rapid switching between jobs creates the impression of a dedicated system
per user.

 Process: Programs running are called processes, and the OS allocates a small CPU time slice
per user.

3. CPU Scheduling and Memory Management

 CPU Scheduling: Decides which job runs first among jobs ready to execute.

 Memory Management: Manages several programs in memory simultaneously and may use
techniques like job scheduling for loading jobs.

 Swapping: Temporarily moves processes in and out of main memory to manage space and
ensure quick response.

4. Virtual Memory

 Definition: Enables processes to run without being fully loaded in physical memory.

 Advantages: Supports larger programs than physical memory allows and simplifies memory
management for developers.

5. File System and Disk Management

 File System: Organized storage for data on disks.

 Disk Management: Essential for managing file systems in a time-sharing environment.

6. Resource Protection and Synchronization

 Resource Protection: Ensures that resources are used appropriately, preventing misuse.

 Synchronization: Coordinates jobs to prevent conflicts, supports communication, and avoids deadlock (processes waiting indefinitely for each other).

1.5 Operating-System Operations

1. Interrupt-Driven Operation

 Nature: Modern OSs are interrupt-driven; they wait for interrupts or traps to occur.

 Traps: Software-generated interrupts due to errors (e.g., division by zero) or service requests.

 Service Routine: Each interrupt type has a specific service routine in the OS that determines
the response.

2. Error Protection

 Goal: Prevent errors in one user program from affecting others or the OS.

 Challenges: Infinite loops or errors in multiprogramming could disrupt other processes.

 Solution: The OS must isolate errors, allowing it to manage each process independently.

3. Dual-Mode and Multi-Mode Operation


 Modes: User Mode (1) for user programs and Kernel Mode (0) for OS execution.

 Mode Bit: A hardware bit distinguishes between user and kernel modes.

 Transition: Occurs on system calls or interrupts to switch control to the OS in Kernel Mode.

 Privileged Instructions: Certain critical instructions (I/O control, timer management) are
restricted to Kernel Mode for system protection.

4. Extended Modes and Virtualization

 Multimode Operation: CPUs may support multiple privilege levels (e.g., Intel 64 with four
levels) or separate modes for virtualization.

 Virtual Machine Manager (VMM): Requires a unique privilege level to manage virtual
machines without full kernel-level access.

5. System Calls

 Purpose: Allow user programs to request OS services.

 Mechanism: A trap to the OS service routine sets the mode to Kernel Mode.

 Process: The OS validates the request, executes it, and returns control to the user program.
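
For example, on Linux a user program can cross into kernel mode explicitly through the syscall() wrapper; a minimal sketch (SYS_getpid is just a convenient, side-effect-free example):

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <sys/syscall.h>   /* SYS_getpid */
    #include <unistd.h>        /* syscall()  */

    int main(void)
    {
        /* syscall() issues a trap: the mode bit switches to kernel mode,
           the kernel runs the service routine for SYS_getpid, and control
           returns here in user mode with the result. */
        long pid = syscall(SYS_getpid);
        printf("my pid is %ld\n", pid);
        return 0;
    }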

6. Hardware-Dependent OS Protection

 Example: Lack of dual-mode (e.g., in MS-DOS) makes the OS vulnerable to user errors, risking
the system’s stability.

 Modern CPUs: Provide dual-mode, enhancing protection in contemporary OSs like Windows
and Linux.

7. Error Handling

 Handling Violations: Errors such as illegal instructions or invalid memory access trigger a trap
to the OS.

 Abnormal Termination: The OS terminates faulty programs, may generate an error message,
and optionally dumps memory for debugging.

8. Timer for CPU Control


 Purpose: Ensures the OS regains control, preventing infinite loops in user programs.

 Variable Timer: Often implemented with a clock and decrementing counter.

 Example: A 10-bit counter with a 1 ms clock enables time limits from 1 ms to 1,024 ms (2^10 = 1,024 ticks of 1 ms each).

 Time Limit Enforcement: Counters initialize based on allowed runtime; if the counter reaches
zero, the OS can terminate the program.

1.6 Process Management

1. Definition of Process

o Process: An active instance of a program in execution (e.g., compiler, word processor, system tasks like printing).

o Program vs. Process: A program is passive (stored data), while a process is active (in
execution).

o Single-threaded Process: One program counter; instructions execute sequentially.


o Multithreaded Process: Multiple program counters; each thread executes its own
instruction sequence.

2. Resources Required by a Process

o Essential Resources: CPU time, memory, files, and I/O devices.

o Resource Allocation: Given at process creation or allocated dynamically during execution.

o Initialization Data: Processes may receive input (e.g., filename) to perform specific
tasks (e.g., displaying file status).

3. Process as a System Work Unit

o Collection of Processes: System includes OS processes (system code) and user processes (user code).

o Concurrent Execution: Multiple processes can run concurrently through CPU multiplexing.

4. OS Responsibilities in Process Management

o Scheduling: Determines the execution order of processes and threads on CPUs.

o Process Lifecycle Management:

 Creation and Deletion: OS can create/delete user and system processes.

 Suspension and Resumption: Temporarily halt and resume processes as needed.

o Synchronization: Provides tools for processes to work in coordination without interference.
o Communication: Enables inter-process communication (IPC) for data exchange
between processes.

1.7 Memory Management

Memory Management Detailed Notes

1. Main Memory Basics

o Structure: Main memory consists of an addressable array of bytes, ranging in size from thousands to billions of bytes.

o Access: CPU and I/O devices use main memory for rapid data access, where
instructions are fetched and data is read/written during respective cycles
(instruction-fetch and data-fetch).

o CPU Dependency: Main memory is often the only large storage directly accessible by
the CPU; data from storage (e.g., disk) must first be transferred to memory.

2. Program Execution in Memory

o Mapping and Loading: A program must be mapped to absolute addresses and loaded into memory before it can execute.

o Memory Access: Programs generate absolute addresses to access instructions and data during execution.

o Memory Release: Upon program termination, its allocated memory space is released for reuse.

3. Need for Memory Management

o Multi-Program Support: Keeping multiple programs in memory improves CPU utilization and response speed.

o Varied Approaches: Different memory management schemes suit different systems based on specific hardware designs and situational needs.

4. OS Responsibilities in Memory Management

o Tracking Memory Usage: Monitors which memory sections are occupied and by
whom.

o Managing Memory Allocation: Decides which processes/data should be moved in/out of memory.

o Allocating/Deallocating Space: Provides or reclaims memory as processes demand.

1.8 Storage Management

Overview: The operating system provides a logical, user-friendly view of storage by abstracting
physical storage properties into logical units called files, facilitating organized file access through
storage devices.

1. File-System Management:

 Role: Manages data storage across various media (e.g., magnetic disks, optical disks).
 Key Media Characteristics: Differ in access speed, capacity, data-transfer rate, and access
method (sequential vs. random).

 File Organization: Files represent collections of related information, including program code
and data in forms like numeric or text. Files may have rigid (fixed fields) or flexible (text)
formats.

 Directory Structure: Files are organized into directories to enhance usability.

 Access Control: OS may regulate file access (e.g., read, write) based on user permissions.

 Core Activities:

o Creation/deletion of files and directories.

o File and directory manipulation support.

o Mapping files to secondary storage.

o File backup on stable, non-volatile media.

2. Mass-Storage Management:
 Purpose: Backs up volatile main memory, typically with disk-based storage for persistent
data.

 Secondary Storage: Essential for most applications and data storage; efficient disk
management is crucial.

 OS Responsibilities:

o Free-Space Management: Tracking available storage.

o Storage Allocation: Assigning storage for files.

o Disk Scheduling: Optimizing access to improve system performance.

 Tertiary Storage: Lower-cost storage (e.g., tapes, DVDs) for backups and archives. OS may
handle mounting/unmounting and data migration.

3. Caching:

 Principle: Temporarily stores frequently accessed data in a faster storage level (cache) to
speed up access.

 Hierarchy: Data may reside across multiple levels (registers, cache, main memory, secondary
storage).

 Cache Management: Efficient cache size and replacement policies are critical.

 Data Consistency: Ensuring updated data consistency across all levels is crucial, especially in
multi-processing and distributed systems (addressing cache coherency).
4. I/O Systems:

 Purpose: Hides hardware-specific details from users.

 Components:

o Memory Management: Buffering, caching, spooling.

o Device Drivers: Interface managing hardware-device-specific functions.

 Interrupts & Handlers: Manage efficient communication and operation between OS and I/O
devices.

Unit 2: OS Structures

An operating system provides the environment within which programs are executed. We can view an
operating system from several vantage points. One view focuses on the services that the system
provides; another, on the interface that it makes available to users and programmers; a third, on its
components and their interconnections.

2.1 Operating-System Services

1. User-Oriented Services:

 User Interface (UI): Provides access to the OS via:

o Command-Line Interface (CLI): Uses text-based commands.


o Batch Interface: Commands are pre-written in files and executed together.

o Graphical User Interface (GUI): Interactive, window-based system for easy navigation
and selections.

 Program Execution: Loads, executes, and terminates programs (both normal and error-based
exits).

 I/O Operations: Manages data exchanges with I/O devices, preventing direct user control for
efficiency and security.

 File-System Manipulation: Allows creation, deletion, searching, and access control for files
and directories, supporting multiple file system types.

 Communication: Facilitates data exchange between processes through:

o Shared Memory: Processes read/write in shared space.

o Message Passing: OS manages data packets sent between processes, either on the
same or different systems.

 Error Detection: Identifies and addresses errors across CPU, memory, I/O devices, and
programs. Actions include:

o Halting the system.

o Terminating faulty processes.

o Providing error codes for program handling.

2. System-Oriented Services:

 Resource Allocation: Manages resources (CPU, memory, file storage, I/O devices) among
users and tasks via:

o CPU Scheduling: Allocates CPU based on job requirements, speed, registers, etc.

o I/O Allocation: Assigns peripheral devices (e.g., printers, USB drives) as needed.

 Accounting: Tracks user activity and resource usage for billing or performance analysis. This
data aids in system optimization.

 Protection and Security:

o Protection: Controls process interactions and resource access to prevent interference and unauthorized access.

o Security: Includes user authentication, safeguards for network and I/O devices, and
logging access attempts for threat detection.

This structured approach allows the OS to provide a secure, efficient, and user-friendly environment
for program execution and resource management.
2.3 System Calls

1. Purpose of System Calls

 System calls provide an interface for programs to access OS services.

 Typically written in C/C++; some low-level tasks require assembly language.


 Example: Copying data from one file to another uses multiple system calls for actions like
reading/writing files, opening/closing files, handling errors, and displaying messages.

2. Example Flow of System Calls in a File Copy Program

 Obtain File Names: Use system calls to display prompts and read user input.

 Open/Create Files: Requires system calls for both files; errors may lead to further calls for
error handling or program termination.
 Error Handling: System calls manage cases like missing files or write-protected files.

 File Reading and Writing: Loop with system calls for each read and write operation, handling
errors (e.g., end of file, disk full).

 Close Files and Terminate: System calls finalize file operations and terminate the program.
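
A minimal sketch of this flow using POSIX system calls (the names in.txt and out.txt are hypothetical, and error handling is abbreviated):

    #include <fcntl.h>    /* open()                   */
    #include <stdio.h>    /* perror()                 */
    #include <stdlib.h>   /* exit()                   */
    #include <unistd.h>   /* read(), write(), close() */

    int main(void)
    {
        char buf[4096];
        ssize_t n;

        int in  = open("in.txt", O_RDONLY);                            /* open input file    */
        int out = open("out.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644); /* create output file */
        if (in < 0 || out < 0) { perror("open"); exit(1); }            /* e.g., missing file */

        while ((n = read(in, buf, sizeof buf)) > 0)                    /* loop until EOF     */
            if (write(out, buf, n) != n) { perror("write"); exit(1); } /* e.g., disk full    */

        close(in);   /* finalize file operations */
        close(out);
        return 0;    /* terminate normally       */
    }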

3. Application Programming Interface (API)

 APIs (e.g., Windows API, POSIX API, Java API) provide high-level access to system calls.

 Benefits of Using APIs:

o Portability: Programs using a standard API can run on systems with the same API.
o Abstraction: APIs simplify system calls, making them easier to use than direct system
calls.

 APIs map to system calls (e.g., CreateProcess() in Windows API calls NTCreateProcess() in the
kernel).

4. System Call Interface

 Role: Acts as a mediator between the API and OS, handling function calls and invoking
system calls.

 Process:

o System calls are assigned unique numbers stored in a table.

o The interface uses these numbers to identify and execute the appropriate OS kernel
function.

 Transparency: Programmers interact with the API, without needing knowledge of the system
call implementation.

5. Parameter Passing in System Calls

 Methods:

o Registers: Simple and fast, but limited by register count.


o Memory Block/Table: Used when parameters exceed register space (e.g., Linux,
Solaris).

o Stack: Parameters are pushed onto the stack, allowing flexible parameter handling.

 Memory block or stack methods are preferred due to no limit on parameter count or size.

These system calls form the backbone of OS operations, managing interactions between user
programs and system resources efficiently and securely.

2.4 Types of System Calls:

• Process control

◦ end, abort

◦ load, execute
◦ create process, terminate process

◦ get process attributes, set process attributes

◦ wait for time

◦ wait event, signal event

◦ allocate and free memory

• File management

◦ create file, delete file

◦ open, close

◦ read, write, reposition

◦ get file attributes, set file attributes

• Device management

◦ request device, release device

◦ read, write, reposition

◦ get device attributes, set device attributes

◦ logically attach or detach devices

• Information maintenance

◦ get time or date, set time or date

◦ get system data, set system data

◦ get process, file, or device attributes

◦ set process, file, or device attributes

• Communications

◦ create, delete communication connection

◦ send, receive messages

◦ transfer status information

◦ attach or detach remote devices


System calls are essential functions provided by an operating system, allowing user-level processes to
request services from the OS. These system calls are generally categorized into six types: Process
Control, File Management, Device Management, Information Maintenance, Communication, and
Protection. Here’s a breakdown of each type:

1. Process Control

System calls under Process Control manage the execution of processes, including starting, halting,
and handling process behavior.

 Ending a Process:

o end(): Terminates a process normally.

o abort(): Terminates a process abnormally, often triggering a memory dump and error
message.

o Error Codes: Processes may assign error levels to signal the severity of issues (e.g., 0
for normal termination, higher levels for errors).

 Starting and Running Programs:

o Loading and Executing: A process can use system calls like load() and execute() to
start another program.

o Process Creation: Calls such as create process() or submit job() allow concurrent
program execution, essential for multitasking.

 Process Control Attributes:

o get process attributes(): Retrieves process attributes, such as priority and max
allowable time.

o set process attributes(): Sets process characteristics like priority levels.

 Waiting and Signaling:

o wait time() and wait event(): Processes may need to wait for a certain duration or
specific event.

o signal event(): A process can signal the occurrence of an event for other processes to
continue execution.

 Locking and Synchronization:

o Acquire/Release Locks: Calls like acquire lock() and release lock() manage access to
shared resources, ensuring data integrity in concurrent processing.
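
On UNIX-like systems these generic calls correspond roughly to fork(), exec(), wait(), and exit(); a minimal POSIX sketch:

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>   /* wait()           */
    #include <unistd.h>     /* fork(), execlp() */

    int main(void)
    {
        pid_t pid = fork();                   /* "create process"           */
        if (pid == 0) {
            execlp("ls", "ls", (char *)NULL); /* "load and execute"         */
            _exit(1);                         /* reached only if exec fails */
        } else if (pid > 0) {
            int status;
            wait(&status);                    /* wait for the child to end  */
            printf("child exited with %d\n", WEXITSTATUS(status));
        }
        return 0;                             /* "end" (normal termination) */
    }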

2. File Management

System calls in File Management handle file operations and management tasks, such as creating,
deleting, and managing file data.
 File Operations:

o create(), delete(): Creating and deleting files by specifying file names and attributes.

o open(), close(): Opening files for use and closing them when done.

o read(), write(), reposition(): Reading, writing, and repositioning within files (e.g.,
rewinding or jumping to file ends).

 Directory and File Attributes:

o get file attributes(): Retrieves attributes like name, type, protection codes, etc.

o set file attributes(): Allows modifications of file attributes.

 Additional File Operations:

o move() and copy(): Some OS provide direct calls or use APIs to handle these tasks.

3. Device Management

These calls are responsible for managing hardware resources, both physical (e.g., disk drives) and
virtual (e.g., files treated as devices).

 Device Request and Release:

o request(), release(): Processes request access to a device before usage and release it
afterward.

o Managed vs. Unmanaged Devices: Unmanaged access can lead to issues like device
contention or deadlocks, discussed in detail in Chapter 7.

 Device Operations:

o read(), write(), reposition(): These actions are similar to file operations but applied to
device interactions.

 Unified File-Device Structure: Some systems, like UNIX, combine files and devices under one
system, allowing similar calls to handle both types of resources.

4. Information Maintenance
These system calls provide and manage various forms of system and process information, assisting in
debugging and performance profiling.

 Time and Date:

o time(), date(): System calls return the current date and time.

 System and Process Information:

o Get/Set Attributes: Calls like get process attributes() and set process attributes()
retrieve or set process-related data.

o Debugging:
 dump(): Records memory contents for debugging.

 Tracing and Profiling: Tracing lists each system call as it occurs; profiling
collects data on execution time spent in different code sections.

5. Communication

Interprocess communication (IPC) allows processes to exchange information, either through message
passing or shared memory.

 Message-Passing Model:

o Establishing Connection: Calls like open connection() and close connection() handle
communication setup and teardown.

o Client-Server Communication: The server uses accept connection() to establish a connection requested by a client process.

o read message(), write message(): For message exchange between client and server
processes.

 Shared Memory Model:

o shared memory create(): Establishes a shared memory region accessible by multiple processes.

o shared memory attach(): Allows a process to attach to an existing shared memory.

o Protection and Synchronization: Processes using shared memory must coordinate access to prevent conflicts.
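
A minimal POSIX sketch of these shared-memory calls (the region name /demo and its size are hypothetical; error checks are omitted, and some systems require linking with -lrt):

    #include <fcntl.h>     /* O_* flags          */
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>  /* shm_open(), mmap() */
    #include <unistd.h>    /* ftruncate()        */

    int main(void)
    {
        /* "shared memory create": name a region and size it */
        int fd = shm_open("/demo", O_CREAT | O_RDWR, 0644);
        ftruncate(fd, 4096);

        /* "shared memory attach": map the region into this process */
        char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, 0);

        strcpy(p, "hello");   /* any other process mapping /demo sees this */
        printf("%s\n", p);

        munmap(p, 4096);
        shm_unlink("/demo");  /* remove the name when done */
        return 0;
    }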

6. Protection

Protection system calls manage access control to resources, ensuring secure multi-user or networked
environments.

 Setting Permissions:

o set permission(), get permission(): System calls for assigning and retrieving resource
access permissions.

 User Access Control:

o allow user(), deny user(): Specify which users can access certain resources.
2.5 System Programs

System Programs: Overview

System programs, also known as system utilities, are essential components of a modern computing
system. These programs offer a user-friendly environment for development and execution, providing
interfaces to system calls and often adding further functionality. System programs operate at a level
above the operating system and include tools for file management, status information, file
modification, programming-language support, program loading and execution, communications, and
background services. These utilities simplify user interactions with the system by streamlining
complex tasks, providing direct access to core system functions, and ensuring efficient management
of resources.

Categories of System Programs

1. File Management

o Handle file operations such as creating, deleting, copying, renaming, printing, dumping, and listing files and directories.

o Make file and directory manipulation straightforward for users through commands
and GUIs.

2. Status Information

o Provide system status details, including date, time, available memory, disk space, and
number of users.

o More advanced tools may display detailed performance data, logging information,
and debugging details.

o Output can be directed to terminals, files, or windows in the GUI.


o Some systems include a registry for configuration storage and retrieval.

3. File Modification

o Text editors and commands allow for creating and editing files.

o Specialized commands may assist with searching file contents and applying text
transformations.

4. Programming-Language Support

o Include compilers, assemblers, debuggers, and interpreters for popular programming languages (e.g., C, C++, Java, Perl).

o These tools may be provided within the OS or as separate downloads, facilitating program development.

5. Program Loading and Execution

o Once programs are compiled or assembled, they need to be loaded into memory for
execution.

o Various loaders (absolute, relocatable, linkage editors, and overlay loaders) help in
loading programs.

o Debugging tools are also necessary for high-level and machine-level languages.

6. Communications

o Enable virtual connections between processes, users, and systems.

o Support communication functions like sending messages, browsing, email, remote login, and file transfer.

7. Background Services

o Launch essential system processes at startup.

o Processes may terminate upon completion or continue running until the system is
shut down, known as services, subsystems, or daemons.

o Examples include network daemons, process schedulers, error monitors, and print
servers.

o Operating systems may use daemons to run user context activities that would
otherwise run in kernel context.

Application Programs in Operating Systems

Most OSs include additional programs to assist users with common tasks. These application
programs typically include:

 Web browsers, word processors, and spreadsheets

 Database systems, compilers, plotting, and statistical analysis packages

 Games and other utility software


These applications are key to how users interact with the system. They abstract system calls,
providing interfaces that vary depending on the operating system in use. For example:

 A user might interact with a GUI (e.g., Mac OS X with a mouse-and-window interface) or a
command-line shell (UNIX), both utilizing the same system calls but presenting them
differently.

 Users may experience different interfaces and applications on the same hardware, such as
switching from Mac OS X to Windows, which alters the interface but uses the same physical
resources.

In conclusion, system programs form the bridge between the operating system and the user, offering
necessary functionalities to manage files, display system information, edit content, support
programming, handle program execution, facilitate communication, and provide background
services, making them an integral part of the user’s experience with the computer.

2.7 Operating-System Structure

Operating System Design and Implementation Overview


Modern operating systems are complex systems requiring careful engineering to function properly
and facilitate easy modifications. A typical approach to building an OS is to partition it into smaller,
manageable components or modules. Each module is well-defined, with clear inputs, outputs, and
functions, making the system easier to understand, debug, and maintain.

2.7.1 Simple Structure

1. Lack of Modular Design:

o Some operating systems, especially older ones, lacked structured modularity.

o MS-DOS:

 Initially designed as a small system without modular boundaries, where application programs could directly access hardware (e.g., disk drives, display).

 Vulnerable to system crashes due to lack of separation between user programs and OS functions.

 Hardware limitations (Intel 8088 chip) provided no protection mechanisms, forcing developers to make hardware directly accessible.

2. UNIX System:

o Originally constrained by hardware limits, similar to MS-DOS.

o Kernel and System Programs:

 The OS consists mainly of a monolithic kernel and system programs.

 Kernel handles crucial functions like file systems, CPU scheduling, and
memory management.

o Monolithic Design Trade-offs:

 Minimal overhead for system calls, benefiting performance.

 Limited modularity, making the system harder to implement and maintain.

o This monolithic design approach is still seen in systems like UNIX, Linux, and
Windows.

2.7.2 Layered Approach

1. Layered Structure:

o Layers separate OS functionality, simplifying construction and debugging.

o Layer 0: Hardware.

o Layer N: User interface.

o Each layer only interacts with the layers immediately below, creating a structured
hierarchy.

2. Advantages:

o Simplicity: Layers allow independent debugging, as lower layers are verified before
moving up.

o Data Encapsulation: Each layer hides its internal data and functions, ensuring
controlled access.
o Information Hiding: Lower-level details are hidden from higher layers, enabling
flexibility in low-level implementation.

3. Challenges:

o Layer Planning: Layers must be carefully defined (e.g., memory management may
depend on disk I/O).

o Performance: Each layer adds overhead, potentially impacting system-call execution time.

o Modern Adaptations: Systems now use fewer layers with more functionality to
balance modularity and performance.
2.7.3 Microkernels

1. Concept:

o Microkernel Approach: Removes all nonessential components from the kernel, implementing them as user-level programs.

o Kernel is minimized, handling only basic functions like minimal process and memory
management and communication.

2. Communication:

o Message Passing: Facilitates communication between client programs and services.

o Example: A client program accesses a file by sending a message to the file server, not
by direct interaction.

3. Advantages:

o Modularity: Easier to extend the OS by adding new user-space services without modifying the kernel.

o Portability: Simplified OS adaptation across hardware platforms.

o Security and Reliability: User-space services reduce the impact of service failures on
the kernel.

4. Examples:

o Mach: Used in Tru64 UNIX, Mac OS X (Darwin).

o QNX: Real-time OS for embedded systems, where the microkernel handles messaging, scheduling, and hardware interrupts.

o Windows NT: Originally used a microkernel structure, which was later modified to
improve performance.

5. Performance Challenges:
o Increased system-function overhead due to message passing, as seen with Windows
NT’s performance issues compared to Windows 95.

2.7.4 Modules

1. Modular Design Using Loadable Kernel Modules:

o OS components are linked dynamically at runtime, allowing easy feature additions without kernel recompilation.

o Core services are built into the kernel; additional services are loaded as modules
when needed.

2. Structure and Flexibility:


o Kernel modules resemble a layered system with protected interfaces, while
remaining flexible as any module can communicate with others.

o Benefits include modularity and efficiency by reducing message-passing overhead compared to microkernels.

3. Examples in Practice:

o Solaris OS: Utilizes a core kernel with seven types of loadable modules:

 Scheduling classes, file systems, loadable system calls, executable formats, STREAMS modules, miscellaneous modules, and device and bus drivers.

o Linux: Primarily uses loadable modules for device drivers and file systems, providing
flexibility and customization.
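
For illustration, a minimal sketch of a Linux loadable kernel module (the name hello is hypothetical; this is built against kernel headers, not compiled as a user program):

    #include <linux/init.h>
    #include <linux/module.h>

    static int __init hello_init(void)
    {
        printk(KERN_INFO "hello: loaded\n");   /* runs when the module is inserted */
        return 0;
    }

    static void __exit hello_exit(void)
    {
        printk(KERN_INFO "hello: unloaded\n"); /* runs when the module is removed */
    }

    module_init(hello_init);  /* register the entry point */
    module_exit(hello_exit);  /* register the exit point  */
    MODULE_LICENSE("GPL");

Inserting the module with insmod and removing it with rmmod links and unlinks this code at runtime, with no kernel recompilation.
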
Summary

Operating system design varies from monolithic structures, which offer performance advantages but
lack modularity, to more modular designs like the layered approach, microkernels, and loadable
kernel modules, each with trade-offs in terms of performance, security, and extensibility. Modern OS
design often blends these approaches, aiming for a balance between efficiency and modularity.

Unit 3: Process Management

3.1 Process Concept

1. Definition of Processes:

o A "process" is any active program or CPU activity in a system, including user tasks
and internal operating system tasks.

o Terms like “job” and “process” are often used interchangeably; originally "job"
referred to batch processes in early computing, while "process" is now more
common.

o In modern systems, processes encompass applications (e.g., word processors, web browsers), tasks on single-user systems, and even OS-managed tasks like memory management.

2. Process vs. Program:

o Program: Passive code stored as a file (often called an executable).

o Process: An active entity with a program counter and associated resources.

o A program becomes a process when loaded into memory and ready for execution.

3. Multiple Instances:

o The same program can run as multiple processes, each independent of the others.

o Each process has its own memory sections (data, heap, stack), making them isolated,
even if they share the same code section.

4. Execution Environments:
o Example: Java programs run within the Java Virtual Machine (JVM), which is itself a
process. The JVM interprets Java code and maps actions to machine-level
instructions.

3.1.1 The Process

1. Components of a Process in Memory:

o Text Section: The program code itself.

o Program Counter and Registers: Track execution state.

o Process Stack: Stores temporary data like function parameters, return addresses,
and local variables.

o Data Section: Contains global variables.

o Heap: Memory dynamically allocated during execution.

2. Process Lifecycle:

o A process is initiated when a program is loaded into memory.


o Common initiation methods include double-clicking an executable file or entering a
command in a terminal.

3.1.2 Process State

1. States of a Process:

o New: Process is being created.

o Running: Process is actively executing instructions.

o Waiting: Process is waiting for a specific event (e.g., I/O completion).

o Ready: Process is prepared to run and waiting to be assigned to a processor.

o Terminated: Process has completed execution.

2. State Diagram:
o The state transitions form a cycle, as shown in Figure 3.2, where a process can shift
between states depending on events and system requirements.

3.1.3 Process Control Block (PCB)

1. Definition:

o A PCB is a data structure in the OS that stores information about each process,
essentially a record of the process’s attributes and status.

2. Components of a PCB:

o Process State: Current state (new, ready, running, etc.).

o Program Counter: Tracks the next instruction to be executed.

o CPU Registers: Contains registers like accumulators, index registers, stack pointers,
and general-purpose registers, essential for resuming the process post-interruption.

o CPU Scheduling Information: Contains priority level, pointers to scheduling queues, and scheduling parameters.

o Memory-Management Information: Contains base and limit registers, page tables, segment tables, etc.

o Accounting Information: CPU time used, real-time usage, time limits, process
numbers, etc.

o I/O Status Information: Lists allocated I/O devices, open files, etc.

3. Purpose of PCB:

o Ensures that the process can be paused and resumed without loss of progress or
state, enabling multi-tasking in an OS.
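
A simplified, hypothetical C struct suggests what a PCB might contain (real PCBs, such as Linux's task_struct, hold far more):

    enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

    struct pcb {
        int             pid;              /* process number         */
        enum proc_state state;            /* current process state  */
        unsigned long   program_counter;  /* next instruction       */
        unsigned long   registers[16];    /* saved CPU registers    */
        int             priority;         /* CPU-scheduling info    */
        void           *page_table;       /* memory-management info */
        unsigned long   cpu_time_used;    /* accounting info        */
        int             open_files[16];   /* I/O status info        */
    };
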
3.1.4 Threads

1. Single vs. Multithreaded Processes:

o A traditional process is single-threaded, performing one task at a time.

o Multithreading: Allows a process to execute multiple sequences (threads) of instructions concurrently, enhancing functionality and efficiency, especially in multi-core systems.

2. Implications of Multithreading:

o PCB expands to accommodate information for each thread.

o Multithreading allows parallel task execution, like typing and spell-checking in a word
processor simultaneously.

o Requires additional OS support and infrastructure for managing threads.

3.2 Process Scheduling

Objectives

 Multiprogramming: Aim to have some process running at all times to maximize CPU
utilization.

 Time-sharing: Switch the CPU frequently among processes, allowing user interaction with
running programs.

 Process Scheduler: Selects an available process for CPU execution. In single-processor systems, only one process runs at a time, while others wait for CPU access.

3.2.1 Scheduling Queues


 Job Queue: All processes in the system are initially put here as they enter the system.

 Ready Queue: Contains processes residing in main memory, ready, and waiting to execute.
Implemented as a linked list, with each process’s PCB (Process Control Block) linking to the
next.

 Device Queues: Separate queues for processes waiting for specific I/O devices, like a disk.
Each device has its own queue.

Queue Flow:

 Processes are initially in the ready queue and get dispatched when CPU is available.

 Events affecting processes:

o I/O request: Process moves to an I/O queue.

o Child creation: Process waits for child termination.

o CPU preemption: Process returns to ready queue due to an interrupt.

 Processes repeat this cycle until they terminate, at which point they’re removed from all
queues, and their PCB and resources are deallocated.
3.2.2 Schedulers

 Long-term Scheduler (Job Scheduler):

o Selects processes from job queue and loads them into memory for execution.

o Controls degree of multiprogramming (number of processes in memory).

o Executes infrequently (minutes between invocations).

o Process Selection: Maintains a balance between I/O-bound and CPU-bound processes for optimal system performance.

 Short-term Scheduler (CPU Scheduler):

o Frequently selects from ready processes for CPU allocation.

o Must be fast due to frequent executions (e.g., every 100 milliseconds).

o Efficiency impacts CPU usage as time spent scheduling is time CPU is unavailable for
processes.

 Medium-term Scheduler:

o Optional scheduler in some systems, responsible for swapping processes in and out
of memory.

o Swapping: Temporarily removes processes from memory to manage the degree of multiprogramming.

o Useful for adjusting process mix or when memory is overcommitted.


3.2.3 Context Switch

 Interrupts and Context Switching:

o Interrupts temporarily stop a process, enabling the OS to perform kernel operations or switch to another process.

o Context of a process (CPU state, registers, memory info) is saved in its PCB during an
interrupt, allowing resumption afterward.

 Context Switch:

o Definition: Saving the state of the current process and restoring the state of the next
scheduled process.
o Overhead: Context-switching is non-productive as it doesn’t contribute to process
execution.

o Speed Factors: Dependent on hardware; a few milliseconds is typical.

o Hardware Support: Some CPUs have multiple register sets (e.g., Sun UltraSPARC),
reducing switching time by changing register set pointers instead of copying all data.

Complexity: Advanced OS features like memory management can increase context-switching complexity, as address spaces and additional data may need to be managed during switches.

4.1 Overview of Threads

Definition

 A thread is the fundamental unit of CPU utilization.

 Components:

o Thread ID

o Program Counter

o Register Set

o Stack

 Threads within the same process share:

o Code section

o Data section

o Operating-system resources (e.g., open files, signals).

 Traditional (Heavyweight) Process: Contains a single thread.

 Multithreaded Process: Contains multiple threads, allowing concurrent execution of tasks.
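
A minimal Pthreads sketch of one process containing two threads that share a variable in the data section (compile with -lpthread):

    #include <pthread.h>
    #include <stdio.h>

    int shared = 0;  /* data section: visible to every thread in the process */
    pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg)
    {
        pthread_mutex_lock(&lock);    /* shared data, so coordinate access */
        shared++;
        pthread_mutex_unlock(&lock);
        return NULL;                  /* each thread has its own stack     */
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);  /* two threads, one process */
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("shared = %d\n", shared);          /* prints 2 */
        return 0;
    }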

4.1.1 Motivation for Multithreading

 Common in Modern Applications: Most applications are multithreaded for better efficiency
and performance.
 Example Applications:

o Web Browsers: Separate threads for displaying content and retrieving data from the
network.

o Word Processors: Threads for UI interactions, background spelling/grammar checks, and graphic display.

o Multicore Systems: Multithreading enables simultaneous processing of CPU-intensive tasks across multiple cores.

 Web Servers:

o Serve multiple clients by creating a thread for each client request, avoiding the
inefficiency of creating a new process for each request.

o Traditional single-threaded servers could only handle one client at a time, resulting
in delays.

 Remote Procedure Calls (RPCs):

o Multithreading enables servers to handle multiple RPC requests simultaneously, enhancing efficiency.

 Operating System Kernels:

o Kernels are often multithreaded, with threads assigned to specific tasks like memory
management, device management, and interrupt handling (e.g., Solaris for interrupt
handling, Linux for memory management).
4.1.2 Benefits of Multithreaded Programming

1. Responsiveness:

o Improves user experience by keeping applications responsive even when performing lengthy operations.

o In a single-threaded program, a time-consuming operation would block other tasks, making the program unresponsive.

o A separate thread can handle long tasks, allowing the main interface to remain
responsive.

2. Resource Sharing:
o Threads within the same process share memory and resources automatically, unlike
processes, which require explicit techniques like shared memory or message passing.

o This default sharing within a common address space simplifies resource management for the application.

3. Economy:
o Threads are less costly to create and manage than processes because they share
resources of their parent process.

o Creating threads is significantly faster than creating processes. For example, in Solaris, creating a process is about 30 times slower than creating a thread, and context switching between threads is around five times faster.

4. Scalability:

o Threads enhance performance on multiprocessor systems by enabling concurrent execution across multiple cores.

o While a single-threaded process can only utilize one CPU core, multithreaded
processes can scale to use all available cores, maximizing resource utilization.
5.1 Process Synchronization Background

1. Concurrent and Parallel Execution Recap

 Concurrent Execution: Processes run in a way that gives the impression of simultaneous
execution by the CPU quickly switching between them.

 Parallel Execution: True simultaneous execution of multiple processes on separate cores.

 Process Scheduling: The CPU scheduler manages which processes execute and switches
rapidly among them, allowing partial execution before moving to another process.

2. Data Integrity and Shared Data Issues

 Data Sharing in Asynchronous Processes: Processes or threads that operate concurrently may share data.

 Example - Producer-Consumer Problem:

o Bounded Buffer: Used to control data flow between producer and consumer
processes by limiting the number of items.

o Modification for Improvement: Adding a counter variable to track the number of items in the buffer (incremented by the producer and decremented by the consumer).

 Concurrent Execution Problem:

o If both producer and consumer modify counter simultaneously, its value may
become incorrect due to race conditions.

3. Illustration of Race Condition

 Race Condition: A situation where the result of execution depends on the specific sequence
of operations by concurrent processes.

 Example Execution of counter++ and counter--:

o Steps:

1. Producer reads counter (5) and increments it.

2. Consumer reads counter (5) and decrements it.

3. Producer writes updated value (6).

4. Consumer writes its updated value (4).

o Incorrect Result: Due to the interleaving of operations, counter ends up as 4 instead of the correct value (5).

The statement counter++ may be implemented in machine language (on a typical machine) as follows:

    register1 = counter
    register1 = register1 + 1
    counter = register1

and counter-- is implemented similarly:

    register2 = counter
    register2 = register2 - 1
    counter = register2
 Explanation:

o The CPU uses registers (e.g., register1 and register2) to load and update values.

o Interleaving of producer and consumer instructions leads to incorrect updates due to concurrent manipulation of counter.
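
The race is easy to reproduce; a sketch using two POSIX threads (the iteration count is arbitrary, and the final value is usually nonzero precisely because counter++ and counter-- are not atomic):

    #include <pthread.h>
    #include <stdio.h>

    volatile int counter = 0;   /* shared between producer and consumer */

    static void *producer(void *arg)
    {
        for (int i = 0; i < 1000000; i++)
            counter++;          /* load, increment, store: not atomic */
        return NULL;
    }

    static void *consumer(void *arg)
    {
        for (int i = 0; i < 1000000; i++)
            counter--;          /* interleaves with the producer */
        return NULL;
    }

    int main(void)
    {
        pthread_t p, c;
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        printf("counter = %d (expected 0)\n", counter);
        return 0;
    }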

4. Need for Synchronization

 Synchronization Requirement: To avoid race conditions, ensure only one process can modify
shared data (like counter) at any time.

 Importance in Operating Systems:

o Shared resources must be carefully managed as different parts of the system (e.g.,
threads and processes) may try to access them concurrently.

 Multithreaded Applications on Multicore Systems:

o As multicore systems increase, so does the need for effective synchronization in applications, as multiple threads often share data and run in parallel on different cores.

5.2 The Critical-Section Problem

1. Overview of the Critical-Section Problem

 In a system with multiple processes {P0, P1, ..., Pn−1}, each process has a segment of code,
termed the critical section, where it may be accessing or modifying shared resources.
 The critical-section problem arises because no two processes should execute their critical
sections simultaneously to avoid conflicts or data inconsistencies.

 Goal: To design a protocol ensuring that when one process is in its critical section, no other
process can enter theirs.

2. Structure of Critical-Section Solution

 Entry Section: The code segment where a process requests permission to enter its critical
section.

 Exit Section: The code following the critical section, where the process indicates that it has
finished its work in the critical section.

 Remainder Section: All other code outside the entry and exit sections.

3. Requirements for a Solution to the Critical-Section Problem

 Mutual Exclusion: If a process Pi is in its critical section, no other processes can be in theirs.
This prevents simultaneous access to shared resources.

 Progress: If no process is in the critical section and multiple processes want to enter, only
those processes not in their remainder section should participate in selecting the next
process to enter. This ensures no indefinite postponement.

 Bounded Waiting: There should be a limit on how many times other processes can enter
their critical sections before a process is allowed to enter its own critical section after making
a request. This prevents indefinite waiting for access.

4. Race Conditions in Kernel Mode

 Kernel Race Conditions: Operating system kernel code can also be susceptible to race
conditions due to concurrent access by kernel-mode processes.
 Example - File List: For instance, a shared kernel data structure maintaining open files might
face race conditions if two processes try to open or close files at the same time.
 Other kernel structures, such as those for memory allocation, process management, and
interrupt handling, are also prone to race conditions.

5. Approaches to Handling Critical Sections in Operating Systems

 Preemptive Kernel:

o Allows a process to be preempted while in kernel mode.

o More challenging to design, especially in SMP (Symmetric Multiprocessing) environments, as it requires careful handling to avoid race conditions with kernel data structures.

 Nonpreemptive Kernel:

o Does not allow preemption of a process in kernel mode; the process continues until
it exits, blocks, or yields control.

o Generally safer from race conditions on shared kernel data because only one kernel-
mode process runs at a time.
6. Advantages and Disadvantages of Preemptive and Nonpreemptive Kernels

 Nonpreemptive Kernel:

o Advantage: Minimizes race conditions on kernel data as only one process is active in
the kernel at any time.

 Preemptive Kernel:

o Advantage: More responsive and suitable for real-time programming since it allows
real-time processes to preempt ongoing processes in kernel mode.

o Although challenging to design, preemptive kernels reduce the risk of processes monopolizing CPU time and enhance responsiveness in systems requiring timely response (like real-time systems).

5.3 Peterson’s Solution

Overview

 Peterson’s Solution is a classic software-based solution for the critical-section problem that
addresses mutual exclusion, progress, and bounded waiting.

 Applicability: Peterson's solution works for only two processes, labeled P0 and P1, which alternate between their critical sections and remainder sections.

 Limitation: Modern computer architectures might not guarantee its correctness due to how
machine instructions (like load and store) operate. However, Peterson’s solution is important
because it demonstrates a logical approach to solving the critical-section problem.

Shared Variables

Peterson’s solution uses two shared data items:

1. int turn: Indicates whose turn it is to enter the critical section. If turn == i, process Pi is allowed to enter its critical section.

2. boolean flag[2]: Indicates if a process is ready to enter its critical section. If flag[i] == true, it signifies that Pi is ready to enter.

Algorithm (Refer to Figure 5.2)

1. Entry Section: Before entering its critical section, Pi sets flag[i] = true (indicating it wants to enter) and sets turn = j (allowing the other process a chance to enter).

2. Waiting Mechanism: The process then waits in a while loop (while (flag[j] && turn == j);)
until it can enter its critical section.

3. Critical Section: The process enters the critical section once the conditions of the while loop
are false.
4. Exit Section: After exiting the critical section, Pi sets flag[i] = false, indicating it no longer needs access to the critical section.
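
A C sketch of the structure in Figure 5.2 for process Pi (volatile marks the variables as shared; as noted above, modern architectures may additionally need memory barriers for this to be correct):

    volatile int turn;              /* whose turn it is to enter     */
    volatile int flag[2] = {0, 0};  /* flag[i]: Pi wants to enter    */

    void enter_region(int i)        /* entry section for process Pi  */
    {
        int j = 1 - i;              /* the other process             */
        flag[i] = 1;                /* I am ready                    */
        turn = j;                   /* but give the other the turn   */
        while (flag[j] && turn == j)
            ;                       /* busy-wait until safe to enter */
        /* critical section follows */
    }

    void leave_region(int i)        /* exit section                  */
    {
        flag[i] = 0;                /* I no longer need access       */
    }
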
Explanation of Behavior

 Race Condition Avoidance: If both processes try to enter simultaneously, they set turn to
each other’s IDs (i and j), with one value eventually being retained due to overwriting. This
remaining value in turn determines which process enters first.

Proof of Correctness

The solution satisfies the three essential properties of a critical-section solution:

1. Mutual Exclusion
o For both processes to be in the critical section simultaneously, both flag[0] and
flag[1] would need to be true.
o However, if both processes attempt to enter, only one value will persist in turn
(either 0 or 1), meaning only one process can proceed to its critical section while the
other waits.

o Therefore, mutual exclusion is ensured.

2. Progress

o A process can be blocked from entering only if it encounters the while loop condition
(flag[j] == true && turn == j).

o If the other process Pj is not ready, flag[j] == false, allowing Pi to enter.

o If both processes are waiting, only one can proceed based on the turn value,
satisfying the progress requirement.

3. Bounded Waiting

o A waiting process Pi waits at most one pass through the other process's critical section: when Pj exits, it sets flag[j] to false, letting Pi proceed.

o Even if Pj immediately reattempts entry, it must set turn = i, so Pi enters next, guaranteeing that no process waits indefinitely.
Unit 4: Memory Management

8.1 Background

Memory is a crucial component in modern computer systems. The CPU relies on memory to fetch
and execute instructions, which often involve loading and storing data at specific memory addresses.

1. CPU and Memory Interaction

o A CPU executes instructions by first fetching them from memory, decoding them,
and then operating on the operands. After execution, results may be stored back in
memory.

o The memory unit only sees a stream of addresses; it doesn't interpret the addresses
or the purpose (instruction vs. data).

o Our focus is on managing memory and ensuring efficient access and protection for
concurrent processes.

2. Memory Management Issues

o Basic Hardware: CPUs have registers (fast, one-cycle access) and main memory
(slower, accessed via a memory bus).

o Cache: To reduce delays due to slow memory access, modern CPUs include cache
memory for faster access.

o Memory Protection: Protection is essential for OS stability and to prevent processes from accessing each other's memory.

3. Memory Protection Implementation

o Separate Memory Spaces: Each process must have its own memory space for
protection.

o Base and Limit Registers: These registers determine the legal address range a
process can access.

o Trap on Violations: Any attempt to access memory outside the allowed range
triggers a trap, preventing unintended access.

o Kernel Mode Access: Only the OS, executing in kernel mode, may modify the base and limit registers, and it retains unrestricted access to both system memory and user memory.
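
Conceptually, the hardware validates every user-mode memory reference against these two registers; a sketch of the check (illustrative C, not actual hardware logic):

    /* Legal addresses lie in [base, base + limit). */
    int address_is_legal(unsigned long addr,
                         unsigned long base, unsigned long limit)
    {
        if (addr >= base && addr < base + limit)
            return 1;   /* access proceeds                  */
        return 0;       /* trap to the OS: addressing error */
    }
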
8.1.2 Address Binding

Address binding is the process of mapping logical addresses used in programs to physical addresses
in memory. There are three stages of address binding:

1. Compile Time: If the memory location of the process is known at compile time, absolute
addresses can be assigned. This is static binding.

2. Load Time: When memory location is determined at load time, relocatable code is
generated. The program can be reloaded if memory addresses change.

3. Execution Time: For systems where processes move during execution, binding occurs at
runtime. This requires special hardware support and is known as dynamic binding.

8.1.3 Logical vs. Physical Address Space

 Logical Address (Virtual Address): Generated by the CPU; used by programs.

 Physical Address: The actual address in memory seen by the memory unit.

 Logical and physical addresses are the same in compile-time and load-time binding but differ
in execution-time binding.

 Memory Management Unit (MMU): A hardware device that maps logical addresses to
physical addresses dynamically using a relocation register. This allows processes to be located
flexibly within physical memory.
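
For example, if the relocation register holds 14000, a reference to logical address 346 is mapped by the MMU to physical address 14000 + 346 = 14346.
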
8.1.4 Dynamic Loading

Dynamic loading loads routines into memory only when they are called. This conserves memory by
loading only necessary parts of a program.

 Process: Main program loads first, and additional routines load on demand.

 Advantages: Memory space utilization is improved, especially when rarely used routines are
not loaded unless necessary.
 Implementation: No special OS support is required; it is up to the programmer to implement
dynamic loading in programs.

8.1.5 Dynamic Linking and Shared Libraries

 Static Linking: Libraries are combined with the executable during load time, resulting in
duplication across programs.

 Dynamic Linking: Linking is postponed until execution. This approach is commonly used with
system libraries.
 Stub Mechanism: A placeholder in the program points to the library. When called, the stub
either loads the library or links to it directly if already loaded.

 Advantages:

o Reduces memory usage as multiple programs share one copy of the library.

o Simplifies library updates: programs automatically link to the new library version without recompilation.

o Enables version control, allowing different programs to use different versions of the
same library if required.

 OS Support: Dynamic linking requires OS support, especially in protected memory systems, to manage memory access across processes.
8.2 Swapping

Overview

 Swapping is a technique that allows processes to be temporarily moved out of the main
memory to a backing store (typically a fast disk) and later brought back for continued
execution.

 Enables the total address space of all processes to exceed the system's physical memory.

 Increases the degree of multiprogramming, allowing more processes to run simultaneously.

8.2.1 Standard Swapping

Process of Standard Swapping

1. Backing Store:

o A fast disk that stores memory images of processes.

o Must be large enough for all users' memory images and provide direct access.

2. Ready Queue:

o Contains processes ready to execute, whether they're in memory or on the backing store.

o If a ready process is not in memory and no memory is free, the dispatcher swaps out another process to bring in the desired one.

3. Dispatcher Role:

o Checks if the process in the queue is in memory.

o If not, performs the swap-in/swap-out process, loads registers, and transfers control
to the process.

Context-Switch Time in Swapping Systems

 Context-Switch Time is high in systems with standard swapping.

o Example:
 For a 100 MB process on a hard disk with a transfer rate of 50 MB/s:

 Swap time = 100 MB / 50 MB per second = 2 seconds to transfer out + 2 seconds to transfer in = 4 seconds total.

o Transfer time is directly proportional to memory size being swapped.
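
The same arithmetic applies to any process size and transfer rate, as in this small C sketch (values from the example above):

```c
#include <stdio.h>

/* Swap-time arithmetic: one transfer out plus one transfer in. */
double one_way_seconds(double process_mb, double rate_mb_per_s) {
    return process_mb / rate_mb_per_s;
}

int main(void) {
    double t = one_way_seconds(100.0, 50.0);           /* 2 s per transfer */
    printf("swap out + swap in = %.0f s\n", 2.0 * t);  /* 4 s total */
    return 0;
}
```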

Memory Considerations

 Example:

o In a 4 GB system with a 1 GB OS:

 Maximum user process size = 3 GB.

o Smaller processes (e.g., 100 MB) are faster to swap (2 seconds) compared to larger
ones (e.g., 3 GB, which takes 60 seconds).

 Efficient swapping requires only moving the actively used memory.

o Processes with dynamic memory needs must inform the OS of changes via system calls (e.g., request_memory() and release_memory()).

Constraints and Challenges

 Swapping requires a process to be idle, especially concerning pending I/O:

o Processes waiting for I/O cannot be swapped out if the I/O is asynchronously
accessing user memory.

o Solutions:

1. Avoid swapping processes with pending I/O.

2. Use double buffering: Conduct I/O in OS buffers, then transfer data to/from
process memory during swaps (adds overhead).

Modern Systems and Swapping

 Standard Swapping is generally obsolete in modern systems due to:

o Swap time is too high relative to the execution time it buys.

o Modified swapping variants are used instead, often combined with virtual memory.

Modified Swapping Variants

1. Threshold-based Swapping:

o Swapping only activates when free memory falls below a certain level and halts
when it rises again.

2. Partial Swapping:

o Swaps only parts of processes to reduce swap time.

8.2.2 Swapping on Mobile Systems

Swapping Limitations on Mobile Devices


 Most mobile systems (e.g., iOS and Android) do not support swapping due to:

o Flash memory storage, which has limited space and write durability.

o Lower throughput between main memory and flash memory.

Memory Management on iOS

 When memory is low:

o Read-only data (e.g., code) is removed and reloaded if needed.

o Modified data (e.g., stack) is retained.

 If an app fails to release enough memory, iOS may terminate it.

Memory Management on Android

 No swapping support.

 If memory is insufficient, Android may terminate a process but saves its state to flash
memory for quick restarts.

Developer Considerations

 Developers need to carefully allocate and release memory on mobile systems to avoid
excessive memory usage and leaks.

 Both iOS and Android support paging, providing some level of memory management despite
the lack of swapping.

8.3 Contiguous Memory Allocation

Overview:

 Main memory is divided to accommodate both the operating system (OS) and user
processes.

 Efficient memory allocation is crucial for performance.

 Contiguous memory allocation is an early memory management technique.

Key Points:

1. Memory Partitioning:

o Memory is split into two main partitions:

 Operating System Partition: Typically placed in low memory due to the location of the interrupt vector.

 User Processes Partition: The rest of the memory is for user processes.

o In contiguous memory allocation, each process occupies a continuous block of memory.

2. Memory Protection:
o Prevents a process from accessing unauthorized memory areas.

o Utilizes relocation and limit registers:

 Relocation Register: Holds the smallest physical address for the process.

 Limit Register: Sets the range of logical addresses.

o The Memory Management Unit (MMU) dynamically maps logical addresses, protecting the OS and other user processes.

3. Dynamic Operating System Size:


o The OS size can change as needed, particularly with transient OS code (e.g., device
drivers) that is loaded only when necessary.

o This flexibility is beneficial for optimizing memory usage.

8.3.2 Memory Allocation

Fixed-Partition Scheme:

 Memory is divided into fixed-size partitions.

 Each partition can hold only one process.

 Degree of multiprogramming depends on the number of partitions.

 Originally used by systems like IBM OS/360 (MFT).

Variable-Partition Scheme:

 Memory is treated as a single large block initially, known as a hole.

 The OS keeps track of available and used memory.

 New processes are allocated memory based on their requirements.

 As processes terminate, their memory is freed, possibly creating new holes.

 Adjacent holes are merged to form larger blocks, potentially accommodating new processes.
Memory Allocation Strategies

1. First Fit:

o Allocates the first available hole that is large enough.

o Faster search; can stop as soon as a large enough hole is found.

2. Best Fit:

o Allocates the smallest hole that meets the process's requirements.

o May lead to smaller leftover holes, but requires searching the entire list.

3. Worst Fit:

o Allocates the largest hole available.

o Leaves a larger leftover hole, which may be more usable than small fragments.
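
As an illustration of these strategies, below is a minimal C sketch of first fit over a singly linked list of holes; the hole_t structure and the behavior (carving the allocation from the front of the hole) are assumptions for the example. Best fit would instead scan the whole list for the smallest adequate hole, and worst fit for the largest.

```c
#include <stddef.h>
#include <stdio.h>

/* Hypothetical free-list node: a hole with a base address and a size. */
typedef struct hole {
    size_t base, size;
    struct hole *next;
} hole_t;

/* First fit: take the first hole large enough, carving the request from
 * its front; returns the allocated base, or (size_t)-1 if no hole fits
 * (external fragmentation despite enough total free space). */
size_t first_fit(hole_t *list, size_t request) {
    for (hole_t *h = list; h != NULL; h = h->next) {
        if (h->size >= request) {
            size_t base = h->base;
            h->base += request;
            h->size -= request;
            return base;
        }
    }
    return (size_t)-1;
}

int main(void) {
    hole_t h2 = { 9000, 300, NULL };
    hole_t h1 = { 1000, 200, &h2 };
    printf("allocated at %zu\n", first_fit(&h1, 250));  /* 9000 */
    return 0;
}
```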

8.3.3 Fragmentation

Types of Fragmentation:

1. External Fragmentation:

o Occurs when free memory is scattered in small blocks.

o Even with enough total free memory, it may be unusable if not contiguous.

o Example: Free memory scattered between processes, leading to small unusable fragments.

o 50-Percent Rule: Statistical analysis of first fit shows that, given N allocated blocks, another 0.5 N blocks are lost to fragmentation—that is, roughly one-third of memory may be unusable.

2. Internal Fragmentation:

o Occurs within allocated blocks due to fixed block sizes.

o Example: Allocating slightly more memory than requested leaves unused memory inside the allocated block.

Solutions to Fragmentation:

1. Compaction:

o Shuffles processes to consolidate free memory into a single large block.

o Requires dynamic relocation (at execution time) to adjust addresses.

o Compaction can be costly and is not always feasible.

2. Non-Contiguous Allocation (Segmentation and Paging):

o Processes are allocated memory in non-contiguous segments, reducing fragmentation.

o Techniques like segmentation and paging (discussed in later sections) allow more
flexible memory management.
Summary

 Contiguous Memory Allocation: Early method where each process occupies a continuous
block.

 Memory Protection: Achieved through relocation and limit registers to prevent unauthorized
access.

 Fragmentation: A major drawback, with both internal and external types.

 Solutions: Include compaction and non-contiguous allocation methods like segmentation and paging.

These principles and challenges in contiguous memory allocation highlight the need for more
advanced memory management techniques, as explored in further sections (e.g., segmentation and
paging).

8.4 Segmentation

Introduction

 The user’s and programmer’s view of memory differs from actual physical memory.

 Physical memory management can be inconvenient for both the OS and programmers.

 Segmentation offers a mechanism to map the programmer’s view to actual physical memory,
creating a more natural programming environment.

8.4.1 Basic Method

 Programmer’s View of Memory:

o Programmers do not view memory as a simple linear array of bytes.

o Instead, they view it as a collection of variable-sized segments representing distinct parts of the program (e.g., stack, code, data structures).

o Segments vary in length based on their function within the program.

 Segments:

o Programmers think of memory as a series of segments named for their function, e.g.,
"the stack" or "main program."

o The programmer does not need to know the exact memory address of each
segment.

 Logical Address Space:

o Defined as a collection of segments, where each segment has a name and a length.

o Logical addresses specify both the segment and the offset within the segment.

o The logical address format is <segment-number, offset>.

 Automatic Segmentation:

o Compilers create segments during compilation.


o Example segments a C compiler might create:

1. Code segment

2. Global variables segment

3. Heap (for dynamically allocated memory)

4. Stack (used by each thread)

5. Standard C library

 Loader’s Role:

o The loader assigns each segment a unique segment number.

8.4.2 Segmentation Hardware

 Mapping Logical to Physical Addresses:

o Physical memory is a one-dimensional array, while segmented memory addresses are two-dimensional.

o The Segment Table maps the two-dimensional logical addresses to one-dimensional physical addresses.

o Each entry in the segment table has:

 Segment Base: Starting physical address of the segment.

 Segment Limit: Length of the segment.

 Addressing in Segmentation:

o A logical address comprises:

 Segment number (s): Index in the segment table.

 Offset (d): Position within the segment.


o To convert a logical address to a physical address:

 The segment number (s) is used to find the segment table entry.

 The offset (d) is checked to ensure it is within the segment limit.

 If d > segment limit, it triggers an error (e.g., an attempt to access memory beyond the segment).

 If d ≤ segment limit, it is added to the segment base to produce the physical memory address.

 Example:

o Segment Table Entry for Segment 2:

 Base = 4300, Limit = 400 bytes.

 Reference to byte 53 of segment 2:

 Physical address = 4300 + 53 = 4353.

 Reference to byte 852 of segment 3:

 Base = 3200; Physical address = 3200 + 852 = 4052.

 Reference to byte 1222 of segment 0:

 If segment 0 has a limit of 1000 bytes, this request triggers an error as it exceeds the segment’s size.
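
The lookups above can be expressed as a short C sketch. The table values follow the example (segment 2 at base 4300 with limit 400, segment 3 at base 3200); limits not stated in the text, such as 1100 for segment 3, are assumed, and the hardware trap is simulated with an error message.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Segment-table entry: starting physical address and segment length. */
typedef struct { uint32_t base, limit; } seg_entry;

uint32_t translate(const seg_entry *table, size_t nsegs,
                   uint32_t s, uint32_t d) {
    if (s >= nsegs || d >= table[s].limit) {
        fprintf(stderr, "trap: addressing error (s=%u, d=%u)\n", s, d);
        exit(1);   /* the hardware would trap to the OS here */
    }
    return table[s].base + d;
}

int main(void) {
    seg_entry table[] = { {1400, 1000}, {6300, 400},
                          {4300, 400},  {3200, 1100} };
    printf("%u\n", translate(table, 4, 2, 53));  /* 4300 + 53 = 4353 */
    /* translate(table, 4, 0, 1222) would trap: 1222 > limit 1000 */
    return 0;
}
```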

Key Concepts

 Segmentation:

o Allows programmers to view memory as logically grouped segments rather than a


continuous array.

o Provides memory management that aligns with how programmers structure their
code and data.
 Segment Table:

o Core to segmentation, enabling conversion of two-dimensional logical addresses to


one-dimensional physical addresses.

o Each segment has a base address and limit to ensure protection and prevent out-of-
bounds memory access.

 Error Handling:

o The system traps to the OS if an offset is beyond the segment limit, enhancing
security by preventing unauthorized memory access.

Segmentation, by aligning memory management with the logical structure of programs, enhances
both efficiency in memory use and programmer convenience.

8.5 Paging

Overview of Paging

 Paging is a memory-management scheme allowing a process's physical address space to be non-contiguous.

 It addresses limitations of segmentation, such as external fragmentation and the need for compaction.

 Paging is widely used in modern operating systems because it prevents fragmentation issues
common in earlier memory-management techniques.

 Paging is a collaboration between the operating system and computer hardware.

8.5.1 Basic Method

 Frames and Pages:

o Physical memory is divided into fixed-sized blocks called frames.

o Logical memory is divided into blocks of the same size called pages.

 Process Execution:

o When executing a process, its pages are loaded into available frames from storage
(e.g., file system or backing store).

o Backing storage is also divided into blocks matching frame sizes, avoiding the
complications of varying memory chunk sizes.

 Logical and Physical Address Space Separation:

o The logical address space is separated from the physical address space.

o A process can have a large logical address space, even if physical memory is limited.

Hardware Support for Paging

 Address Division:
o Every address generated by the CPU is divided into:

 A page number (p)

 A page offset (d)

 Page Table:

o The page number serves as an index in the page table.

o Each entry in the page table provides the base address of a page in physical memory.

o The base address is combined with the page offset to create the physical address.

 Page and Frame Size:

o Page size is determined by hardware, typically a power of 2, such as 4 KB or 8 KB.


o High-order bits of the logical address specify the page number, and low-order bits
specify the offset.
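
For a 4 KB page size (2^12 bytes), the split can be computed with a shift and a mask, as in this small sketch; the sample address is arbitrary.

```c
#include <stdint.h>
#include <stdio.h>

/* Splitting a logical address for 4 KB pages: the low 12 bits are the
 * offset, the remaining high-order bits are the page number. */
#define PAGE_SHIFT 12                       /* 4 KB = 2^12 */
#define PAGE_MASK  ((1u << PAGE_SHIFT) - 1)

int main(void) {
    uint32_t logical = 0x00403ABC;
    uint32_t page    = logical >> PAGE_SHIFT;   /* p = 0x403 */
    uint32_t offset  = logical & PAGE_MASK;     /* d = 0xABC */
    /* physical = (frame_from_page_table(page) << PAGE_SHIFT) | offset */
    printf("p = 0x%X, d = 0x%X\n", page, offset);
    return 0;
}
```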

Example Illustrations (Figures 8.11 and 8.12)

 Figure 8.11: Shows a paging model where pages in logical memory are mapped to frames in
physical memory using a page table.

 Figure 8.12: Demonstrates a 32-byte memory with 4-byte pages, showing how logical
addresses translate to physical addresses.
Dynamic Relocation

 Paging dynamically relocates logical addresses to physical addresses.


 Logical addresses in paging are bound to physical addresses through the page table, similar
to using base registers.

Advantages and Limitations

 No External Fragmentation:

o Any free frame can be allocated to any page, preventing external fragmentation.

 Internal Fragmentation:

o The last frame of a process may have unused space if the process size doesn't align
perfectly with page boundaries.

o Internal fragmentation can be significant if page sizes are large.

o Average internal fragmentation is about half a page per process.

 Page Size Considerations:

o Smaller pages reduce internal fragmentation but increase overhead for page table
entries.

o Larger pages improve disk I/O efficiency but increase internal fragmentation.

o Typical page sizes range from 4 KB to 8 KB, but larger pages are also supported.
System-Level Considerations

 Page Table Entry Size:

o Often, each entry in a 32-bit system's page table is 4 bytes, sufficient to address large
physical memory sizes (up to 16 TB with 4 KB frames).

 Physical Memory vs. Logical Address Space:

o The physical memory size is often different from the logical address space size.

 Page Table Management:

o The OS maintains a page table for each process to handle logical-to-physical address
mapping.

o During context switches, the CPU dispatcher updates the hardware page table based
on the process’s page table, increasing context-switch time.

8.5.2 Hardware Support

Operating System Methods for Page Tables

 Process-specific Page Tables:

o Many operating systems allocate a separate page table for each process.

o A pointer to each process's page table is stored in the process control block (PCB)
along with other register values (e.g., instruction counter).
o During a context switch, the dispatcher reloads user registers and updates the
hardware page-table values from the process's page table.

 Shared or Limited Page Tables:

o Some OSs limit the number of page tables (one or only a few), reducing overhead
during context switching.

Page Table Hardware Implementations


1. Page Table Registers:

o Simplest hardware implementation uses dedicated, high-speed registers for the page
table.

o These registers ensure efficient paging-address translation, as every memory access passes through the paging map.

o Example: DEC PDP-11 architecture uses this method with an address space of 16 bits
and 8 KB pages, resulting in 8 fast register entries.

o This setup is feasible for small page tables (e.g., 256 entries), but not for larger page
tables in modern systems.

2. Page Table in Main Memory:

o For larger page tables (up to a million entries), storing the page table in high-speed
registers is impractical.

o The page table is kept in main memory, with a Page Table Base Register (PTBR)
pointing to it.

o Context switching becomes faster as only the PTBR needs to be updated.

o Memory Access Delay:

 Accessing memory requires two steps:

 First, access the page table to get the frame number (one memory
access).

 Then, access the actual memory location (second memory access).

 This doubles memory access time, a delay that would be intolerable without
additional solutions.

Translation Lookaside Buffer (TLB)

 Purpose and Functionality:

o TLB is a small, fast cache used to store recently accessed page-table entries,
addressing the delay caused by accessing main memory for the page table.
o It’s an associative, high-speed memory where each entry has a key (tag) and a value
(frame number).

o When the CPU generates a logical address, it checks the TLB for the page number:

 If found (a TLB hit), the frame number is available immediately.

 If not (a TLB miss), the page table in main memory is accessed, and the
result is stored in the TLB.

o Modern TLBs support pipeline integration, adding no penalty for memory translation.

 Size and Management:


o TLB size is typically between 32 and 1,024 entries.

o Some CPUs have separate TLBs for instruction and data addresses, effectively
increasing the TLB's capacity.

o The TLB uses replacement policies (e.g., Least Recently Used (LRU), round-robin, or
random) when it’s full.

 TLB Miss Handling:

o If a page number is not found in the TLB, the system fetches the frame number from
the page table.

o This may be done automatically in hardware or by an interrupt to the OS.

o After the frame number is retrieved, it is loaded into the TLB for future reference.

 Wired-down Entries:

o Some TLBs allow entries to be "wired down," meaning they cannot be evicted.

o These entries are usually for essential kernel code.

Address-Space Identifiers (ASIDs)

 Process-Specific Entries:

o TLBs may store Address-Space Identifiers (ASIDs) to uniquely identify each process.

o This provides address-space protection, ensuring that TLB entries belong to the
current process.

o If the ASID in the TLB does not match the current process, it results in a TLB miss.

 TLB Flushing:

o In TLBs without ASIDs, a context switch requires TLB flushing to clear old entries and
prevent incorrect translations.
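
A TLB lookup with ASID matching might look like the following sketch. The tlb_entry layout is an illustrative software model of what the hardware does associatively (comparing all entries at once), not a real CPU interface.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical TLB entry: an entry matches only if both the page
 * number and the ASID of the running process agree. */
typedef struct {
    uint32_t page;    /* key (tag) */
    uint32_t frame;   /* value */
    uint16_t asid;    /* address-space identifier */
    bool     valid;
} tlb_entry;

/* Returns true on a TLB hit; on a miss the page table must be walked
 * and the result installed in the TLB. */
bool tlb_lookup(const tlb_entry *tlb, size_t n,
                uint32_t page, uint16_t asid, uint32_t *frame) {
    for (size_t i = 0; i < n; i++) {
        if (tlb[i].valid && tlb[i].page == page && tlb[i].asid == asid) {
            *frame = tlb[i].frame;
            return true;   /* hit */
        }
    }
    return false;          /* miss: consult the page table */
}

int main(void) {
    tlb_entry tlb[2] = {
        { .page = 5, .frame = 19, .asid = 1, .valid = true },
        { .page = 5, .frame = 77, .asid = 2, .valid = true },
    };
    uint32_t f;
    if (tlb_lookup(tlb, 2, 5, 2, &f))
        printf("hit: frame %u\n", f);   /* frame 77: the ASID must match */
    return 0;
}
```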

Hit Ratio and Effective Memory Access Time

 Hit Ratio:

o The hit ratio is the percentage of times the desired page number is found in the TLB.

o Higher hit ratios reduce the average memory access time.

 Effective Access Time Calculation:

o Example:

 If TLB hit ratio is 80% and each memory access takes 100 nanoseconds:

 TLB hit (80%) takes 100 ns.

 TLB miss (20%) requires two memory accesses (200 ns total).

 Effective memory access time:


Effective access time = 0.80 × 100 + 0.20 × 200 = 120 nanoseconds

 With a 99% hit ratio, the effective access time improves to 101 nanoseconds.
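
The same weighted-average formula in a tiny C sketch, assuming (as above) a negligible TLB search time and one extra memory access per miss:

```c
#include <stdio.h>

/* Effective access time: a hit costs one memory access, a miss costs
 * two (one for the page table, one for the data itself). */
double effective_access_ns(double hit_ratio, double mem_ns) {
    return hit_ratio * mem_ns + (1.0 - hit_ratio) * 2.0 * mem_ns;
}

int main(void) {
    printf("%.0f ns\n", effective_access_ns(0.80, 100.0));  /* 120 ns */
    printf("%.0f ns\n", effective_access_ns(0.99, 100.0));  /* 101 ns */
    return 0;
}
```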

Advanced TLB Hierarchies

 Modern CPUs may have multi-level TLBs:

o Example: Intel Core i7 has:

 L1 TLB for instructions (128 entries) and data (64 entries).

 L2 TLB with 512 entries.

o If a miss occurs in both L1 and L2 TLBs, the CPU must access the page table in
memory, which can take hundreds of cycles.

Operating System Considerations

 Although TLBs are hardware, OS designers must understand TLB architecture for optimal
paging.

 OS paging strategies may need to adapt to TLB changes (e.g., between CPU generations), as
efficient TLB use directly affects memory performance.
9.2 Demand Paging

Introduction to Demand Paging

 Basic Concept: When an executable program is loaded from disk into memory, a
straightforward approach is to load the entire program at once. However, this can be
inefficient since not all parts of the program may be needed immediately.

 User Interaction Example: For instance, if a program begins with a menu of options, loading
the entire program means all executable code for every option is loaded into memory, which
may not be necessary.

 Demand Paging: Instead of loading everything at once, demand paging loads pages of the
program only when they are required during execution. This means pages that are never
accessed remain off the disk, optimizing memory usage.

How Demand Paging Works

 Lazy Swapper: A lazy swapper only loads pages into memory when they are needed, rather
than swapping in the entire process. This mechanism helps in reducing both swap time and
the physical memory footprint.

 Paging vs. Swapping: The term "pager" is used instead of "swapper" since it deals with
individual pages of a process rather than entire processes.

9.2.1 Basic Concepts

 Page Selection: When a process is swapped in, the pager anticipates which pages will be
needed and loads only those, thereby optimizing memory usage.

 Valid-Invalid Bit Scheme: This hardware support distinguishes between pages in memory
and those on disk.

o Valid Bit: Indicates that a page is both legal and in memory.

o Invalid Bit: Indicates that a page is either not valid or is valid but currently resides on
the disk.
 Page Table Entries: The page table entries for pages in memory are set as usual, while those
not in memory are either marked invalid or contain the address of the page on disk.

Page Fault Handling

 Page Fault: Occurs when a process tries to access a page not currently loaded in memory,
resulting in a trap to the operating system.

 Handling Steps:

1. Check an internal table for the process to confirm if the access was valid or invalid.

2. If invalid, terminate the process; if valid but not in memory, load the page.
3. Find a free frame in memory.

4. Schedule a disk operation to read the page into the free frame.

5. Update internal tables and page tables to reflect the new state.

6. Restart the interrupted instruction, allowing access to the page as if it were always
present in memory.

 Process Execution Without Initial Pages: A process can start with no pages in memory and
will fault as needed, loading each required page dynamically. This represents pure demand
paging.
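
The six handling steps can be summarized in a compilable C sketch; every helper here (find_free_frame, read_page_from_disk) is a stub standing in for real kernel machinery, and the structures are illustrative only.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define NPAGES 8

/* Illustrative page-table entry for the valid-invalid bit scheme. */
typedef struct { uint32_t frame; bool valid; } pte_t;

static pte_t page_table[NPAGES];   /* all invalid: pure demand paging */

static uint32_t find_free_frame(void) { return 42; }   /* stub */
static void read_page_from_disk(uint32_t page, uint32_t frame) {
    printf("disk I/O: page %u -> frame %u\n", page, frame);  /* stub */
}

/* The page-fault path, following the six steps above. */
void handle_page_fault(uint32_t page) {
    if (page >= NPAGES) {                /* steps 1-2: illegal reference */
        fprintf(stderr, "invalid access: terminate process\n");
        exit(1);
    }
    uint32_t frame = find_free_frame();  /* step 3: find a free frame */
    read_page_from_disk(page, frame);    /* step 4: schedule disk read */
    page_table[page].frame = frame;      /* step 5: update tables */
    page_table[page].valid = true;
    /* step 6: the interrupted instruction is restarted by the CPU */
}

int main(void) { handle_page_fault(3); return 0; }
```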

Performance Considerations
 Locality of Reference: Programs typically exhibit locality of reference, meaning that they
access a relatively small set of pages frequently. This behavior improves demand paging
performance.

 Required Hardware Support:

o Page Table: Must include the capability to mark entries invalid.

o Secondary Memory: Holds pages not currently in physical memory, typically on high-
speed disks.

Restarting Instructions After Page Faults

 State Preservation: When a page fault occurs, the CPU saves the process state (registers,
instruction pointer, etc.) to allow restarting the instruction once the page is loaded.

 Re-execution After Fault:

o If the fault happens on an instruction fetch, the instruction is fetched again.

o If it happens while fetching an operand, the entire instruction must be re-fetched and decoded.

 Complex Instructions: Issues arise with instructions that modify multiple locations (e.g.,
moves of large blocks of data). Solutions include:

o Microcode Computation: Attempting to access all involved pages beforehand to catch faults early.

o Temporary Registers: Using registers to hold values before modification, allowing restoration if a fault occurs.

9.2.2 Performance of Demand Paging

 Effective Access Time Calculation: The effective access time for demand-paged memory
considers both the normal memory access time and the potential delay introduced by page
faults.

 Page Fault Service Time: Typically includes steps like saving state, checking legality, reading
the page from disk, and resuming the process. Average service time can be around 8
milliseconds.

 Performance Degradation: The effective access time can increase significantly with page
faults, making the system slower. For instance, if one access out of 1,000 results in a page
fault, the effective access time can increase to 8.2 microseconds, leading to a 40 times
slowdown.
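
The degradation figure comes from the same weighted-average formula, sketched below with the stated numbers (200 ns memory access, 8 ms page-fault service time):

```c
#include <stdio.h>

/* Demand-paging effective access time: the memory access plus the
 * page-fault service time, weighted by the fault probability p. */
double eat_ns(double p, double mem_ns, double fault_service_ns) {
    return (1.0 - p) * mem_ns + p * fault_service_ns;
}

int main(void) {
    /* one fault per 1,000 accesses, 200 ns memory, 8 ms service time */
    double t = eat_ns(1.0 / 1000.0, 200.0, 8000000.0);
    /* prints ~8200 ns, i.e., roughly a 40x slowdown */
    printf("%.1f ns (~%.0fx slowdown)\n", t, t / 200.0);
    return 0;
}
```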

Swap Space Considerations


 Efficient Disk Usage: Swap space is typically faster for disk I/O than regular file systems
because it uses larger allocation blocks and avoids complex file system operations.

 Demand Paging from Swap Space: Systems can load entire images into swap space at
startup and perform demand paging from there, optimizing performance.

 Handling of Binary Files: Demand paging for binary files can be directly from the file system,
but modified pages are managed in swap space.

Conclusion
 Demand paging optimizes memory usage and performance by loading only necessary pages
into memory as needed. Managing the page-fault rate is crucial to maintaining system
performance. Efficient handling of swap space and the architecture's ability to manage page
faults effectively are essential for a well-functioning demand-paging system.

Unit 5: File System

10.1 Overview of Mass-Storage Structure

10.1.1 Magnetic Disks

 General Structure:

o Magnetic disks are the primary medium for secondary storage in modern computing.

o Each disk platter is flat and circular, similar to a CD, with common diameters ranging
from 1.8 to 3.5 inches.

o Both surfaces of the platter are coated with magnetic material, allowing information
to be recorded magnetically.

 Data Storage:

o Data is stored by using a read-write head that "flies" just above the platter surfaces.
o The heads are connected to a disk arm that moves them collectively.

o The platter’s surface is divided into circular tracks, further subdivided into sectors.

o A set of tracks aligned at one arm position forms a cylinder, with many concentric
cylinders present in a disk drive.

o Disk drives typically offer storage capacities measured in gigabytes.

 Disk Operation:

o A drive motor spins the disk at high speeds, with typical RPM (rotations per minute)
ranging from 5,400 to 15,000 RPM.

o Disk speed comprises two components:

 Transfer Rate: The speed at which data is transmitted between the drive and
the computer.

 Positioning Time (Random-Access Time): This includes:

 Seek Time: The time taken to position the disk arm over the desired
cylinder.

 Rotational Latency: The time taken for the desired sector to rotate
under the disk head.

o Typical data transfer rates can reach several megabytes per second, with seek times
and rotational latencies measured in milliseconds.
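
These components can be combined into a rough access-time estimate, as in the sketch below; the sample figures (5 ms seek, 7,200 RPM, 100 MB/s) are illustrative, not taken from the text.

```c
#include <stdio.h>

/* Rough disk access-time estimate: seek + average rotational latency
 * (half a rotation) + transfer time for the requested amount of data. */
double access_ms(double seek_ms, double rpm, double kb, double mb_per_s) {
    double rot_latency_ms = 0.5 * (60000.0 / rpm);      /* half a rotation */
    double transfer_ms    = (kb / 1024.0) / mb_per_s * 1000.0;
    return seek_ms + rot_latency_ms + transfer_ms;
}

int main(void) {
    /* 5 ms seek, 7200 RPM (~4.17 ms latency), 4 KB at 100 MB/s */
    printf("%.2f ms\n", access_ms(5.0, 7200.0, 4.0, 100.0));  /* ~9.21 ms */
    return 0;
}
```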

 Risks:

o The disk head flies on a very thin cushion of air (measured in microns), posing a risk
of contact with the disk surface, potentially leading to a head crash—a situation
where the head damages the magnetic surface, typically irreparable and
necessitating disk replacement.

 Removable Disks:

o Disks can be removable, allowing different disks to be mounted as required.

o Removable magnetic disks usually consist of a single platter encased in plastic for
protection.

o Other forms of removable storage include CDs, DVDs, Blu-ray discs, and flash-
memory devices (flash drives).

 Connection to Computer:

o A disk drive connects to a computer via an I/O bus, with various types available (e.g.,
ATA, SATA, USB, Fibre Channel).

o Data transfers on the bus are managed by electronic processors called controllers.

 Host Controller: Located at the computer end of the bus.

 Disk Controller: Built into each disk drive.


o To perform I/O operations, the computer sends commands to the host controller,
which relays them to the disk controller for execution.

o Disk controllers often feature built-in caches for faster data transfer between the disk
surface and the host controller.

10.2 Disk Structure

Logical Block Addressing

 Concept:
o Modern magnetic disk drives are structured as large one-dimensional arrays of
logical blocks.
o A logical block is the smallest unit of data transfer, typically sized at 512 bytes. Some
disks may be formatted to utilize a different logical block size, such as 1,024 bytes
(detailed in Section 10.5.1).

 Mapping:

o The one-dimensional array of logical blocks is mapped sequentially onto the disk’s
physical sectors.

o Sector 0 corresponds to the first sector of the first track on the outermost cylinder.

o The mapping process progresses:

 Through all sectors of the first track.

 Then continues through the remaining tracks of that cylinder.

 Finally, it moves to the tracks of the next cylinders, proceeding from the
outermost to the innermost.

 Address Translation:
o This mapping allows for the theoretical conversion of a logical block number into an
old-style disk address format, which includes:

 Cylinder Number

 Track Number (within that cylinder)

 Sector Number (within that track)
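
Under the idealized assumptions of this mapping (a fixed number of sectors per track and no spare sectors), the conversion is simple integer arithmetic, as this sketch shows; the 16-head, 63-sector geometry is just an example.

```c
#include <stdio.h>

/* Logical-block to cylinder/track/sector conversion, assuming (for
 * illustration only) a fixed geometry with no spare sectors. */
void lba_to_chs(unsigned lba, unsigned heads, unsigned sectors_per_track,
                unsigned *cyl, unsigned *track, unsigned *sector) {
    *cyl    = lba / (heads * sectors_per_track);
    *track  = (lba / sectors_per_track) % heads;  /* surface in cylinder */
    *sector = lba % sectors_per_track;
}

int main(void) {
    unsigned c, t, s;
    lba_to_chs(10000, 16, 63, &c, &t, &s);
    printf("cylinder %u, track %u, sector %u\n", c, t, s);  /* 9, 14, 46 */
    return 0;
}
```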

Challenges in Address Translation

 Defective Sectors:

o Most disk drives contain some defective sectors. The logical mapping accounts for
these by substituting spare sectors located elsewhere on the disk, complicating the
address translation process.

 Variable Number of Sectors per Track:

o The number of sectors per track can vary on some drives, which is another challenge
to translation:

 Constant Linear Velocity (CLV): In this approach, the density of bits remains
uniform across the track.

 Tracks farther from the center of the disk are longer and can thus
hold more sectors.

 As a result, the number of sectors per track decreases from outer to inner zones. For example, outermost tracks may hold about 40% more sectors than innermost tracks.

 The drive compensates by increasing the rotation speed as the head moves inward, maintaining a consistent data flow under the head.

 This method is commonly employed in CD-ROM and DVD-ROM drives.

 Constant Angular Velocity (CAV):

 Here, the disk rotation speed remains constant.

 The density of bits decreases from inner to outer tracks to maintain a consistent data rate.

 This method is primarily used in hard disks.

Advances in Disk Technology

 Increasing Sector Density:

o The number of sectors per track has been growing as disk technology advances.

o The outer zone of modern disks typically contains several hundred sectors per track.

 Increasing Cylinder Count:


o The number of cylinders per disk has also been on the rise, with large disks featuring
tens of thousands of cylinders.

10.3 Disk Attachment

10.3.1 Host-Attached Storage

 Definition:

o Host-attached storage refers to storage devices accessed through local I/O ports
directly connected to the computer system.

 I/O Port Technologies:

o IDE/ATA:

 Commonly used in typical desktop PCs.

 Supports a maximum of two drives per I/O bus.

o SATA:

 A newer protocol that simplifies cabling and improves connectivity over IDE/ATA.

 High-End Solutions:

o Fibre Channel (FC):

 A high-speed serial architecture capable of operating over optical fiber or a four-conductor copper cable.

 Two main variants:

 Switched Fabric:

 Supports a 24-bit address space.

 Allows multiple hosts and storage devices to connect, enhancing flexibility in I/O communications.

 Forms the basis of Storage-Area Networks (SANs), as discussed in Section 10.3.3.

 Arbitrated Loop (FC-AL):

 Can address up to 126 devices (drives and controllers).

 Types of Storage Devices:

o Suitable for use as host-attached storage include:

 Hard disk drives (HDDs)

 RAID arrays

 Optical drives (CDs, DVDs)

 Tape drives
 I/O Commands:

o Commands that initiate data transfers are typically reads and writes of logical data
blocks.

o These commands are directed to specifically identified storage units, which can be
accessed by bus ID or target logical unit.

10.3.2 Network-Attached Storage

 Definition:

o Network-attached storage (NAS) is a dedicated storage system that is accessed remotely over a data network.

 Access Method:

o Clients access NAS via a remote procedure call (RPC) interface:

 NFS (Network File System) for UNIX systems

 CIFS (Common Internet File System) for Windows systems


o RPCs are carried over TCP or UDP on an IP network, typically the same local area
network (LAN) used for data traffic.

 Implementation:

o NAS systems are generally implemented as RAID arrays combined with software that
provides the RPC interface.

o Facilitates shared storage among all computers on a LAN, mimicking the access ease
found with local host-attached storage.

 Performance:

o Although NAS allows easy sharing of storage, it tends to be less efficient and exhibit
lower performance compared to some direct-attached storage options.

 iSCSI Protocol:

o iSCSI (Internet Small Computer Systems Interface):

 The latest NAS protocol that uses IP networks to carry the SCSI protocol.

 Allows storage to be treated as if directly attached to the host, even if it is located at a distance.
10.3.3 Storage-Area Network

 Challenges with NAS:

o One drawback of NAS systems is that storage I/O operations consume bandwidth on
the data network, which can increase latency in network communication.

o This issue is especially problematic in large client-server environments, where communication among servers, clients, and storage devices competes for bandwidth.

 Definition:

o A storage-area network (SAN) is a private network designed to connect servers and storage units using storage protocols instead of standard networking protocols.

 Advantages of SAN:

o Flexibility:

 Allows multiple hosts and storage arrays to connect to the same SAN.

 Storage can be dynamically allocated to hosts, enabling efficient resource management.

 A SAN switch controls access between hosts and storage, facilitating the
allocation of storage based on demand.

 Example: If a host is running low on disk space, additional storage can be allocated dynamically.

 Cluster Sharing:

o SANs enable clusters of servers to share the same storage, supporting multiple direct
host connections to storage arrays.

 Interconnect Protocols:

o Common interconnects for SANs include:

 Fibre Channel (FC): The most prevalent interconnect method.

 iSCSI: Gaining traction due to its simplicity.

 InfiniBand: A specialized bus architecture that offers high-speed connections for servers and storage units.

12.1 File-System Structure

Characteristics of Disks for File Systems

1. In-Place Rewriting:

o Disks allow data blocks to be read, modified, and rewritten at the same location.

2. Direct Access:
o Disks can access any block of information directly, enabling both sequential and
random access.

o Switching between files involves moving the read-write heads and waiting for disk
rotation, making access efficient.

I/O Efficiency

 Block Transfers:

o I/O transfers between memory and disk are performed in units of blocks, which can
consist of one or more sectors.

o Sector sizes range from 32 bytes to 4,096 bytes, with 512 bytes being the most
common size.

Design Problems of File Systems

1. User Interface:

o Designing how the file system appears to users, which includes defining:

 File attributes

 Allowed operations on files

 Directory structure for organizing files

2. Mapping Logical to Physical Storage:

o Creating algorithms and data structures to translate logical file system operations to
the underlying physical storage.

Layered Structure of File Systems

 Composition:

o File systems are organized into multiple levels, each utilizing lower-level features to
provide new functionalities for higher levels.

1. I/O Control Level:

o Composed of device drivers and interrupt handlers that facilitate data transfer
between main memory and disk.

o Device Drivers:

 Function as translators, converting high-level commands (e.g., “retrieve block 123”) into low-level instructions specific to the hardware controller.

 Device drivers write specific bit patterns to the I/O controller’s memory,
instructing it on which actions to perform.

2. Basic File System:

o Issues generic commands to the appropriate device driver for reading and writing
physical blocks on the disk.
o Manages memory buffers and caches for various file-system, directory, and data
blocks.

o Ensures that a block in the buffer is allocated before a disk block transfer occurs.

o When buffers are full, the buffer manager must either find more memory or free up
space for I/O completion.

o Caches store frequently used file-system metadata to enhance performance.

3. File-Organization Module:

o Understands both logical and physical blocks of files.

o Translates logical block addresses to physical block addresses, allowing the basic file
system to transfer data.

o Maintains the free-space manager, which tracks unallocated blocks and allocates
them as needed.

4. Logical File System:

o Manages metadata, which includes all file-system structure information except for
the actual file contents.

o Maintains the directory structure, providing necessary information to the file-organization module using symbolic file names.

o Utilizes File-Control Blocks (FCBs) (similar to inodes in UNIX) to store information about files, including ownership, permissions, and data location.

o Responsible for file protection mechanisms.
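
A file-control block can be pictured as a C structure along these lines; the fields are loosely modeled on a UNIX inode, and the names and layout are assumptions for illustration, not a real on-disk format.

```c
#include <sys/types.h>
#include <time.h>

#define NDIRECT 12

/* Illustrative file-control block (FCB), loosely inode-like. */
typedef struct {
    uid_t  owner;                         /* file ownership */
    mode_t permissions;                   /* access permissions */
    off_t  size;                          /* file size in bytes */
    time_t created, accessed, modified;   /* timestamps */
    unsigned direct[NDIRECT];             /* data-block locations */
    unsigned indirect;                    /* block of further pointers */
} fcb_t;
```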

Advantages and Challenges of Layering


 Code Duplication Minimization:

o Layering helps reduce code duplication, allowing the I/O control and basic file-
system code to be reused across different file systems.

o Each file system can have its own logical file system and file-organization modules.

 Performance Overhead:

o While layering simplifies development and maintenance, it may introduce overhead, potentially reducing performance.

Common File Systems

 Many operating systems support multiple file systems:

o CD-ROMs typically use the ISO 9660 format.

o UNIX utilizes the UNIX File System (UFS), derived from the Berkeley Fast File System
(FFS).

o Windows supports FAT, FAT32, and NTFS.


o Linux has over forty supported file systems, with the standard being the extended
file system (ext), specifically ext3 and ext4.

 Distributed File Systems:

o Allow file systems on a server to be accessed by client computers across a network.

Current Research and Innovations

 File-System Research:

o Continues to be a vital area in operating system design and implementation.

o Example: Google developed its own file system tailored for high-performance access
across a large number of disks.

 FUSE (Filesystem in Userspace):

o A project that allows users to implement and run file systems as user-level code
rather than kernel-level, enhancing flexibility and ease of use.
