OS Endsem Notes
Deadlocks | PPT
Memory
Relocation
Protection
Sharing
Logical organization
Physical organization
Memory partitioning
Contiguous, non-contiguous
2) Fixed-size partitioning (static partitioning): memory is divided into a fixed number of partitions in advance.
3) Here, the number of processes that can be executed at a time (degree of multiprogramming) is limited to the number of partitions of memory
Allocation Policies (a first-fit/best-fit sketch follows this list):
a. Best fit: choose the smallest free block that satisfies the request; slower, and leaves many tiny fragments
b. Worst fit: choose the largest free block; causes more fragmentation
c. First fit: choose the first available block satisfying the memory requirement; fast
d. Next fit: like first fit, but start searching from where the previous allocation ended; avoids clustering near the start of memory
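A minimal sketch (not from the notes) of first fit and best fit over a free-block list; the block sizes and the request size are made-up illustrative values:

```python
# Hypothetical free-list allocator sketch: each free block is (start, size).
def first_fit(free_blocks, request):
    """Return the start of the first block large enough, else None."""
    for start, size in free_blocks:
        if size >= request:
            return start
    return None

def best_fit(free_blocks, request):
    """Return the start of the smallest block that still fits, else None."""
    candidates = [(size, start) for start, size in free_blocks if size >= request]
    if not candidates:
        return None
    _, start = min(candidates)
    return start

free_blocks = [(0, 100), (200, 30), (300, 60)]   # made-up free memory blocks
print(first_fit(free_blocks, 50))   # 0   (first block that fits)
print(best_fit(free_blocks, 50))    # 300 (smallest block that fits)
```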
4) Variable sized partitioning (Dynamic partitioning)
a. Here memory is allocated to the process at run time, exactly according to its size.
b. Causes external fragmentation
c. External fragmentation: enough total free space is available, but it is not available contiguously
d. Solution: compaction (defragmentation). This method consolidates all free memory into one large block, which can then be allocated to new processes.
1) Paging
a. Process divided into equal chunks called PAGES
b. The memory is also divided into equal chunks (same size as a PAGE) called FRAMES
c. Pages are scattered across frames
d. To know which page is where, a page table is used, in which each page number is mapped to a frame number; each process has its own page table, and it is stored in main memory
e. Each page-table entry is a frame number + extra bits
f. The page table is stored in main memory, and the register pointing to the page table of the current process is called the page-table base register (PTBR)
g. The PTBR stores the starting address of the page table in main memory; it holds a physical address, not a virtual address
h. Address translation: the page number indexes the page table to obtain the frame number; the frame number combined with the page offset gives the physical address (see the sketch at the end of this paging section)
i. Demand paging:
   i. Not all pages are loaded at once; only some pages are loaded, and new pages are brought in from secondary memory as needed. This is called demand paging.
   ii. In pure demand paging, NO pages are loaded initially; a page is loaded only when it is demanded.
   iii. When a page is demanded and it is not present in memory, a page fault occurs. On a page fault, the page is loaded from secondary memory.
   iv. If a new page must be loaded from secondary memory and all frames are full, an existing page is replaced using a page replacement algorithm:
1. FIFO
2. LRU
3. Optimal
j. Allocation of frames
   i. How many frames should we give to one process?
   ii. If the system is single-user, single-tasking, then all frames go to that one process
   iii. There are two ways to allocate frames, and two scopes for page replacement:
      1. Equal allocation: each process is given an equal number of frames
      2. Proportional allocation: the number of frames given is proportional to the size of the process
      3. Local replacement: we replace a page of a process only with its own new page
         a. The number of frames allocated to the process will not change
         b. Thus the process can control its own page-fault rate
      4. Global replacement: a page can be replaced by a page of any process in memory
         a. High-priority processes can replace the pages of low-priority processes
         b. Increases memory utilization and throughput
      5. Thrashing: excessively high paging activity; the system spends more time servicing page faults than doing useful work
Solutions to thrashing:
1) Working-set model
2) Page-fault frequency
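To tie the paging pieces above together, here is a minimal sketch (not from the notes) of page-table lookup and address translation, with an invalid entry standing in for a page fault; the 4 KB page size and table contents are assumed values:

```python
PAGE_SIZE = 4096  # assumed 4 KB pages

# Hypothetical per-process page table: page number -> (frame number, valid bit)
page_table = {0: (5, 1), 1: (9, 1), 2: (None, 0)}  # page 2 is not in memory

def translate(virtual_address):
    """Split the virtual address, look up the frame, and build the physical address."""
    page_number = virtual_address // PAGE_SIZE
    offset = virtual_address % PAGE_SIZE
    frame, valid = page_table.get(page_number, (None, 0))
    if not valid:
        raise RuntimeError(f"page fault on page {page_number}")  # OS would load the page here
    return frame * PAGE_SIZE + offset

print(hex(translate(0x1ABC)))   # page 1, offset 0xabc -> frame 9 -> 0x9abc
```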
2) Segmentation
a. Unlike paging, the process is not divided into equal-size chunks.
b. The process is divided into logical parts of unequal size called segments.
c. For example, a process may be divided into its main function, stack, and other functions.
d. Here physical memory is not divided into frames.
e. A segment table is maintained
f. The segment table contains the base address of the segment (a physical address) and its limit (size)
g. Now, how many bits should be given for the displacement (offset)? It was clear in the paging case, where pages are of fixed size; here a segment can be of any size, so the OS imposes a maximum limit on the size of a segment (see the sketch below).
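A minimal sketch of the base/limit translation described above; the segment-table contents are made-up values:

```python
# Hypothetical segment table: segment number -> (base physical address, limit/size)
segment_table = {0: (1000, 400),   # e.g. code
                 1: (5000, 200)}   # e.g. stack

def translate(segment, offset):
    """Check the offset against the segment limit, then add it to the base."""
    base, limit = segment_table[segment]
    if offset >= limit:
        raise RuntimeError("segmentation fault: offset beyond segment limit")
    return base + offset

print(translate(1, 100))   # 5100
# translate(1, 250) would raise: offset 250 >= limit 200
```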
3) Virtual memory
a. Used when physical memory capacity is smaller than what the running processes need
b. It is not physical; it is an illusion of a larger memory
c. When a process is executing, it is not necessary to load all its pages, or the entire process, at once
d. Secondary memory is used as an extension of physical memory
e. The OS behaves as if all pages are available, but they are scattered across secondary and physical memory
f. When a page is demanded that is not in physical memory, a page fault occurs; it is an internal interrupt (trap) in the CPU, and the corresponding ISR is called
g. DMA is required to transfer the replacement page
h. The page is brought from secondary memory into main memory; this is called demand paging
i. In the page table, some entries are empty, i.e. they hold garbage values; how would the system know whether an entry is garbage or an actual mapping?
j. For that, along with the frame number a valid/invalid (protection) bit is stored; if it is 0 the entry is invalid, otherwise it is valid
k. Dirty or modified bit
   i. Along with the frame number and the protection bit, one more bit called the dirty bit is stored.
   ii. It indicates whether the page was modified while it was in main memory.
   iii. If the page was not modified while in main memory, then on a page fault we can directly overwrite it, because the copy in secondary memory is still up to date.
   iv. If the modified bit is 1, i.e. the page was modified after being brought into main memory, we cannot simply overwrite it; the page must first be written back to secondary memory and only then replaced by the new one.
Page replacement algorithms
1) FIFO
   a. Replace the page that has been in memory the longest (first in, first out).
   b. In the worked example with 3 frames, the page faults come to 9. Can the page faults reduce if the number of frames increases? Let's see.
   c. Let's try with 4 frames.
   d. Surprisingly, the page faults increased! This is called Belady's anomaly.
   e. It does not always occur; it happens only for some page-reference sequences.
   f. It occurs only in the case of FIFO.
   g. In short: for some page-reference sequences in FIFO, increasing the number of frames increases the number of page faults (a simulation sketch follows).
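A minimal FIFO page-fault counter; the reference string below is the classic Belady example and is assumed (not confirmed by the notes) to match the missing worked example, giving 9 faults with 3 frames and 10 with 4:

```python
from collections import deque

def fifo_faults(reference_string, num_frames):
    """Count page faults under FIFO replacement."""
    frames = deque()        # oldest page at the left
    faults = 0
    for page in reference_string:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                frames.popleft()        # evict the page that came in first
            frames.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]   # classic Belady example
print(fifo_faults(refs, 3))   # 9 faults
print(fifo_faults(refs, 4))   # 10 faults -> more frames, more faults (Belady's anomaly)
```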
2) Optimal policy
   a. Replace the page that will not be referenced again, or whose next reference is farthest in the future.
   b. If there is a tie, apply FIFO.
   c. Not practically implementable, since it requires knowledge of future references.
   d. It is used as a benchmark to measure the performance of other algorithms.
   e. It gives the minimum possible number of page faults.
3) LRU
   a. Replace the page that has not been referenced for the longest time (a sketch follows).
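For comparison, a minimal LRU counter on the same assumed reference string; an OrderedDict keeps pages ordered from least to most recently used:

```python
from collections import OrderedDict

def lru_faults(reference_string, num_frames):
    """Count page faults under LRU replacement."""
    frames = OrderedDict()              # least recently used page at the front
    faults = 0
    for page in reference_string:
        if page in frames:
            frames.move_to_end(page)    # mark as most recently used
        else:
            faults += 1
            if len(frames) == num_frames:
                frames.popitem(last=False)   # evict the least recently used page
            frames[page] = True
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(lru_faults(refs, 3))   # 10
print(lru_faults(refs, 4))   # 8
```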
Buddy system
The system uses a binary tree structure, with nodes representing blocks and leaves representing the smallest allocatable units. It keeps a record of free blocks and handles memory requests by splitting and merging buddy blocks (a minimal allocator sketch appears at the end of this subsection).
Types of Buddy System
Features
Advantages
Easy to implement.
Allocates blocks close to the requested size (rounded up to the nearest power of two).
Quickly merges adjacent free blocks.
Suitable for dynamic systems like embedded or real-time systems.
Handles frequent small memory allocations efficiently.
Helps prevent memory leaks, improving system reliability.
Disadvantages
Internal fragmentation, since requests are rounded up to the next power-of-two block size.
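A minimal binary-buddy allocation sketch under simplifying assumptions: the total size is a power of two, requests are rounded up to a power of two, and merging of freed buddies is omitted for brevity:

```python
# Minimal binary-buddy allocation sketch (splitting only; merging of freed
# buddies is omitted to keep the example short).
def next_power_of_two(n):
    p = 1
    while p < n:
        p *= 2
    return p

class BuddyAllocator:
    def __init__(self, total_size):
        # free_lists[size] = list of start addresses of free blocks of that size
        self.free_lists = {total_size: [0]}

    def allocate(self, request):
        size = next_power_of_two(request)
        # find the smallest free block that is at least `size`
        candidates = sorted(s for s, blocks in self.free_lists.items() if blocks and s >= size)
        if not candidates:
            return None                         # out of memory
        block_size = candidates[0]
        start = self.free_lists[block_size].pop()
        while block_size > size:                # split into two buddies until it fits
            block_size //= 2
            buddy = start + block_size
            self.free_lists.setdefault(block_size, []).append(buddy)
        return start, size

alloc = BuddyAllocator(1024)
print(alloc.allocate(100))   # (0, 128)  -> 100 rounded up to 128, carved out of 1024
print(alloc.allocate(240))   # (256, 256)
```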
Relocation (remaining)
Unit 5
File: a named collection of related information kept in secondary storage
File directory: a collection of files
File system
The module of the OS that manages, organizes and controls files and the related structures
Disk
Disk scheduling algorithms are used to minimize seek time:
FCFS, SSTF, SCAN, C-SCAN, LOOK, C-LOOK
SCAN
Go to the extreme end (even if the extreme point is not in the request queue), servicing requests on the way, then reverse direction.
LOOK
Don't go to the extreme; go only as far as the last request in the current direction, then reverse.
C-SCAN
Like SCAN, but requests are serviced in one direction only; after reaching the end, the head jumps back to the other extreme without servicing requests on the return.
C-LOOK
Like LOOK, but one direction only; after the last request, the head jumps back to the first pending request on the other side (a sketch follows).
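A minimal sketch of the LOOK service order (SCAN differs only in continuing to the disk edge); the request queue and head position are made-up values:

```python
def look_order(requests, head, direction="right"):
    """Return the order in which LOOK services requests: go to the last request
    in the current direction, then reverse (SCAN would go to the disk edge instead)."""
    right = sorted(r for r in requests if r >= head)
    left = sorted((r for r in requests if r < head), reverse=True)
    return right + left if direction == "right" else left + right

requests = [98, 183, 37, 122, 14, 124, 65, 67]   # made-up cylinder requests
print(look_order(requests, head=53))
# [65, 67, 98, 122, 124, 183, 37, 14]
```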
I/O devices:
Human Readable: Devices intended for interaction with the user, such as
printers and terminals, which may include a video display, keyboard, and
additional devices like a mouse.
Machine Readable: Devices used for interaction with electronic
equipment, including disk drives, USB keys, sensors, controllers, and
actuators.
Communication Devices: Devices that facilitate communication with
remote systems, such as digital line drivers and modems.
Data Rate:
o Data transfer rates vary significantly between devices, often
spanning several orders of magnitude.
o Examples:
Gigabit Ethernet: ~10⁹ bps
Mouse, Keyboard: ~10³ bps
Application:
o The device's purpose impacts OS policies and software support.
o Example:
Disks used for file storage require file management software.
Disks used for virtual memory depend on specialized
hardware/software.
o Terminal use varies based on user role (regular user vs. system
administrator), affecting privileges and priorities.
Complexity of Control:
o Devices like printers require simple control interfaces.
o Devices like disks involve complex controls, often managed by
advanced I/O modules, which influence how the OS handles them.
Unit of Transfer:
o Data may be transferred as:
Streams of bytes or characters (e.g., terminal I/O).
Blocks of data (e.g., disk I/O).
Data Representation:
o Devices use different encoding schemes, including variations in
character codes and parity conventions.
o These differences must be managed to ensure compatibility.
Error Conditions:
o Error nature, reporting methods, impacts, and response strategies
vary widely across devices.
o Example: Printers may report paper jams, while disks might signal
read/write failures.
Basic Scenario
Solution: Buffering
Types of Buffering
1. Single Buffering
What happens?
o The OS assigns a single buffer in system memory for the I/O
operation.
o For input:
Data is first read into the system buffer.
Once the input is complete, the data is moved to the
process’s memory.
During this move, the process can request another block of
data (called read-ahead).
o For output:
Data is first copied from the process’s memory to the system
buffer and then written to the device.
Advantages:
o The process can perform computations on one block while the
system reads the next block.
o The OS can swap out the process since I/O occurs in system
memory.
Disadvantages:
o Overhead in moving data between the system buffer and process
memory.
o Limited performance gain if the computation time (C) is much
shorter than the I/O time (T).
2. Double Buffering
What happens?
o Two buffers are used:
While one buffer is being filled (input) or emptied (output) by
the OS, the process can work on the other.
o For input:
The process can read data from one buffer while the system
reads new data into the other.
o For output:
The process writes to one buffer while the system sends data
from the other to the device.
Advantages:
o I/O and processing can happen in parallel.
o Better performance than single buffering when computation and
I/O times overlap.
Disadvantages:
o Increased complexity in managing two buffers.
o Higher memory overhead.
3. Circular Buffering
What happens?
o More than two buffers are used, forming a circular queue.
o For input:
Data is read into the buffers in sequence.
The process reads data from the buffers as needed.
o For output:
Data is written to the buffers by the process and sent to the
device sequentially.
Advantages:
o Handles bursty workloads better than double buffering.
o Minimizes idle time for both the process and I/O device.
Disadvantages:
o Even more complexity in buffer management.
o Inefficient if the I/O demand consistently exceeds device capacity.
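A minimal circular (bounded) buffer sketch, kept single-threaded for clarity; the slot count and block names are made-up:

```python
class CircularBuffer:
    """Fixed number of slots reused in a circle; producer writes at `tail`,
    consumer reads at `head`."""
    def __init__(self, slots):
        self.buf = [None] * slots
        self.head = 0      # next slot to read
        self.tail = 0      # next slot to write
        self.count = 0

    def put(self, item):
        if self.count == len(self.buf):
            raise BufferError("buffer full: producer must wait")
        self.buf[self.tail] = item
        self.tail = (self.tail + 1) % len(self.buf)
        self.count += 1

    def get(self):
        if self.count == 0:
            raise BufferError("buffer empty: consumer must wait")
        item = self.buf[self.head]
        self.head = (self.head + 1) % len(self.buf)
        self.count -= 1
        return item

cb = CircularBuffer(3)
for block in ["blk0", "blk1", "blk2"]:
    cb.put(block)              # device fills buffers ahead of the process
print(cb.get(), cb.get())      # process consumes: blk0 blk1
cb.put("blk3")                 # freed slots are reused in a circle
```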
Block-Oriented vs. Stream-Oriented Devices
Block-Oriented Devices:
o Data is stored and transferred in fixed-size blocks.
o Examples: Disks, USB drives.
o Buffering improves performance significantly because data is
usually accessed sequentially.
Stream-Oriented Devices:
o Data is a continuous stream (e.g., bytes or lines).
o Examples: Printers, terminals, communication ports.
o Buffering strategies differ based on whether data is handled line-
by-line or byte-by-byte.
Utility of Buffering
Performance Metrics
File systems
1. Long-term existence
o Files are stored persistently on storage devices and don’t disappear
when you log off.
2. Sharing between processes
o Files can be accessed by multiple users or programs with controlled
permissions for reading, writing, or editing.
3. Structured organization
o Files can be organized hierarchically (like folders and subfolders) or
in complex relationships to reflect their purpose or use.
1. Create
o Define a new file and place it in the directory structure.
2. Delete
o Remove a file from the system permanently.
3. Open
o Load a file into memory for a process to perform operations on it.
4. Close
o Free up resources associated with the file after operations are
complete.
5. Read
o Access data from a file.
6. Write
o Add or modify data within a file.
File Attributes
1. Field
o The smallest unit of data, like a single name or number.
o Can be of fixed or variable length.
o Examples: Name (ASCII string), Age (decimal).
2. Record
o A collection of related fields grouped into one logical unit.
o Example: An employee record with fields like name, ID, and hire
date.
3. File
o A collection of similar records.
o Example: A company’s employee database file.
4. Database
o A collection of related files designed for multiple applications.
o Relationships between data elements are explicitly defined.
1. Retrieve_All
o Fetch all records in a file sequentially.
o Example: Generating a summary report.
2. Retrieve_One
o Fetch a single record based on specific criteria.
o Example: Searching for a customer’s record in a banking system.
3. Retrieve_Next/Previous
o Access the next or previous record in a logical sequence.
o Example: Navigating through search results or form entries.
4. Insert_One
o Add a new record, potentially preserving file order.
o Example: Adding a new employee to a company database.
5. Delete_One
o Remove a record while maintaining the sequence or structure of the
file.
6. Update_One
o Modify an existing record's fields and save changes.
o Example: Updating a customer's address.
7. Retrieve_Few
o Fetch multiple records based on specific criteria.
o Example: Retrieving records of students scoring above a certain
grade.
Two-pass assemblers
OPTAB stores information about mnemonic opcodes. SYMTAB stores information about the symbols used in the program (a toy two-pass sketch follows).
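A toy sketch of the two-pass idea, with invented opcodes and instruction format (assumptions, not any real assembler): pass 1 builds SYMTAB (label -> address), pass 2 uses OPTAB and SYMTAB to emit code:

```python
# Toy two-pass assembler: 1 word per instruction, made-up opcodes.
OPTAB = {"LOAD": 0x01, "ADD": 0x02, "STORE": 0x03, "JMP": 0x04}

source = [
    ("START", "LOAD", "X"),
    (None,    "ADD",  "X"),
    (None,    "JMP",  "START"),
    ("X",     "WORD", "5"),
]

# Pass 1: assign an address to every statement and record labels in SYMTAB.
SYMTAB, addr = {}, 0
for label, op, operand in source:
    if label:
        SYMTAB[label] = addr
    addr += 1

# Pass 2: translate using OPTAB for mnemonics and SYMTAB for symbols.
code = []
for label, op, operand in source:
    if op == "WORD":
        code.append(int(operand))
    else:
        code.append((OPTAB[op] << 8) | SYMTAB[operand])

print(SYMTAB)   # {'START': 0, 'X': 3}
print(code)     # [0x0103, 0x0203, 0x0400, 5] printed as decimals
```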
A block-oriented device
stores information in blocks of fixed size
transfers are made one block at a time
it is possible to reference data by its block number; disks and USB keys are examples
A stream-oriented device
transfers data in and out as a stream of bytes; there is no block structure.
Terminals, printers, communications ports, mouse and other pointing
devices, and most other devices that are not secondary storage are
stream oriented.
File Access Methods: Techniques to read/write file data. Types: Sequential,
Direct, Indexed.
Sequential Access:
Direct Access:
Indexed Access:
Comparison:
Key Tip: Match the access method to application needs (e.g., logs →
Sequential, databases → Direct/Indexed).
File access refers to the methods used to read or write data from files. The choice of
access method depends on the application and data usage. The three main file access methods
are sequential, direct, and indexed, each having its specific use cases, advantages, and
disadvantages.
Sequential access is the simplest method where data is read or written one record at a
time, starting from the beginning and proceeding to the end in order. This method is best
suited for applications that process data linearly, such as log file processing or batch
operations.
The main advantage of sequential access is its simplicity, making it easy to implement
and use. However, its disadvantage is inefficiency when performing random access
operations or dealing with large files, as it requires traversing all preceding records to reach
the desired one.
Direct access, also called random access, allows reading or writing data directly at any
location in the file without going through previous records. It achieves this by using the
physical address or position of the data within the file. This makes it highly efficient for
applications like databases, where quick retrieval of specific records, such as accessing a
customer record by ID, is required.
The advantage of direct access is its speed and efficiency for random operations,
enabling direct retrieval or modification of data. However, its disadvantage is its complexity,
as implementing and maintaining a direct access system is more difficult than sequential
access.
Indexed access combines the benefits of both sequential and direct access. In this method,
an index file is created, which maps logical keys (like a record name or ID) to their
corresponding physical addresses in the main data file. The index allows quick location of
records for direct access while still supporting sequential operations.
Indexed access is ideal for large datasets or applications that require frequent access
to specific data, such as file systems where files are accessed by name or location. For
example, the index can quickly point to a file’s location, enabling efficient access.
The advantage of indexed access is its flexibility and efficiency in handling both random
and sequential access operations. However, the disadvantage is that it requires additional
storage space to maintain the index file, which can increase the system’s cost and complexity.
In summary, sequential access is simple and linear but slow for random operations.
Direct access is fast for random access but complex to implement. Indexed access provides
a balance, combining the benefits of both methods while requiring extra storage for the index.
The choice of file access method should depend on the application’s needs, considering
factors such as the frequency of random access, the size of the data, and the implementation
complexity. This ensures an efficient and appropriate solution for handling data in various
scenarios.
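A minimal sketch of the indexed-access idea: an in-memory index maps a key to a byte offset, so a record can be read directly with seek(); the file name, record format and data are made-up:

```python
import os, tempfile

# Build a tiny fixed-length record file and an index of key -> byte offset.
records = [("A01", "Alice"), ("B02", "Bob"), ("C03", "Carol")]   # made-up data
RECLEN = 16

path = os.path.join(tempfile.gettempdir(), "toy_records.dat")
index = {}
with open(path, "w") as f:
    for key, name in records:
        index[key] = f.tell()                      # remember where this record starts
        f.write(f"{key},{name}".ljust(RECLEN))     # pad to a fixed record length

# Direct (indexed) access: seek straight to the record instead of scanning.
with open(path, "r") as f:
    f.seek(index["B02"])
    print(f.read(RECLEN).strip())    # B02,Bob
```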
Record Blocking
Fixed-Length Blocks:
o Blocks are of a constant size across the system.
o Simplifies I/O operations and buffer allocation.
o May result in internal fragmentation (unused space in a block if
records don't fully fill it).
Variable-Length Blocks:
o Blocks can have different sizes depending on the records.
o More flexible but increases the complexity of I/O operations and
buffer management.
o Can be spanned or unspanned depending on the organization.
Larger Blocks:
o Transfer more records in one I/O operation, reducing the number of
I/O operations.
o Beneficial for sequential access, speeding up processing.
o Can result in unnecessary data transfer when accessing records
randomly, leading to wasted bandwidth.
Smaller Blocks:
o Efficient for random access as less unused data is transferred.
o Increases the number of I/O operations, slowing down sequential
processing.
Fixed Blocking:
o Fixed-length records are stored in fixed-size blocks.
o Simple to implement but may leave unused space, causing internal
fragmentation.
o Commonly used for sequential files with fixed-length records.
Variable-Length Spanned Blocking:
o Records can span across multiple blocks if they don’t fit in one.
o Ensures no space is wasted within blocks.
o Involves multiple I/O operations if a record spans multiple blocks.
o More complex to implement, especially for updates.
Variable-Length Unspanned Blocking:
o Records that don’t fit in one block cannot span to another block.
o This leads to unused space in blocks.
o Simpler than spanned blocking but results in wasted space.
o Limits record size to the size of a block.
Short notes
Types of Blocks
Fixed-Length Blocks:
o Same size across the system.
o Simple but may cause internal fragmentation (unused space in
blocks).
Variable-Length Blocks:
o Sizes vary depending on records.
o More flexible but complex for I/O and buffer management.
Larger Blocks:
o More records transferred in one I/O.
o Good for sequential access (fewer I/O operations).
o Bad for random access (unused records transferred).
Smaller Blocks:
o Efficient for random access (less unused data).
o Slower for sequential access (more I/O operations).
Blocking Methods
Fixed Blocking:
o Fixed-length records in fixed-size blocks.
o Simple but causes internal fragmentation.
Variable-Length Spanned Blocking:
o Records span multiple blocks if needed.
o Efficient in space but requires multiple I/Os for spanning records.
Variable-Length Unspanned Blocking:
o Records can't span blocks.
o Wastes space if record doesn’t fit but simpler to implement.
In virtual memory:
o Pages are used to transfer data between memory and storage.
o Multiple pages may be combined into a larger block for I/O (e.g., IBM VSAM files).
File Allocation Overview
1. Contiguous Allocation:
o A single contiguous set of blocks allocated to the file.
o Simple allocation, fast I/O performance for sequential access.
o Issues: External fragmentation, difficult to find large contiguous
spaces as files grow.
o File Allocation Table (FAT) stores the starting block and the file
length.
o Problem: Need to declare file size upfront.
Pros: simple allocation; fast sequential access.
Cons: external fragmentation; requires compacting or reorganizing free space over time.
2. Chained Allocation:
o Each block contains a pointer to the next block in the chain.
o No need to preallocate the size, blocks allocated as needed.
o Pros: No external fragmentation.
o Cons: Random access is slower as you have to traverse the chain.
3. Indexed Allocation:
o Each file has its own index (a separate block contains addresses for
allocated blocks).
o Pros: No external fragmentation, flexible space allocation.
o Cons: Larger allocation tables, slower random access (due to
indirection).
Two types: the index may consist of fixed-size blocks or variable-size portions.
Method     | Preallocation? | Portion Size      | File Allocation Table    | Performance          | Compaction Needed
Contiguous | Yes            | Variable          | One entry per file       | Fast sequential      | Yes (external fragmentation)
Chained    | No             | Fixed blocks      | One entry per file       | Slower random access | No (but consolidation possible)
Indexed    | Yes/No         | Fixed or variable | Separate index per file  | Flexible access      | No
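A minimal sketch contrasting chained and indexed block lookup; the block numbers and "disk" structures are made-up:

```python
# Chained allocation: each block stores a pointer to the next block of the file.
# next_block[b] is the block that follows b (None = end of file).
next_block = {4: 7, 7: 2, 2: 10, 10: None}

def chained_lookup(start, n):
    """Reach the n-th block of a file by walking the chain (O(n))."""
    block = start
    for _ in range(n):
        block = next_block[block]
    return block

# Indexed allocation: a per-file index block lists all allocated blocks.
index_block = [4, 7, 2, 10]

def indexed_lookup(index, n):
    """Reach the n-th block with one index lookup (O(1))."""
    return index[n]

print(chained_lookup(4, 3))             # 10 -> had to follow 3 pointers
print(indexed_lookup(index_block, 3))   # 10 -> direct
```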
Classes of Users
1. Locking Methods:
o Entire File Lock: One user locks the entire file to prevent others
from accessing it during updates.
o Record Locking: Locks individual records in the file for concurrent
access by different users.
2. Concurrency Issues:
o Mutual Exclusion: Prevents conflicting access (e.g., two users
editing the same record).
o Deadlock: Avoids situations where two users wait for each other
indefinitely (e.g., User A waits for User B and vice versa).
3. Readers/Writers Problem: Ensures that multiple readers can access the
file at the same time, but writers (updaters) have exclusive access.
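A minimal readers-preference readers/writers sketch using Python threading primitives; it illustrates the idea rather than any specific scheme from the notes:

```python
import threading

class ReadersWriterLock:
    """Many readers may hold the lock together; a writer needs exclusive access."""
    def __init__(self):
        self.readers = 0
        self.reader_mutex = threading.Lock()   # protects the reader count
        self.resource = threading.Lock()       # held by the writer, or by the first reader

    def acquire_read(self):
        with self.reader_mutex:
            self.readers += 1
            if self.readers == 1:
                self.resource.acquire()        # first reader locks out writers

    def release_read(self):
        with self.reader_mutex:
            self.readers -= 1
            if self.readers == 0:
                self.resource.release()        # last reader lets writers in

    def acquire_write(self):
        self.resource.acquire()

    def release_write(self):
        self.resource.release()

rw = ReadersWriterLock()
rw.acquire_read(); rw.acquire_read()    # two readers at once: fine
rw.release_read(); rw.release_read()
rw.acquire_write()                      # writer now has exclusive access
rw.release_write()
```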
1. Basic Information
o File Name: Unique within a directory.
o File Type: e.g., text, binary.
o File Organization: Specifies file system organization type.
2. Address Information
o Volume: The device where the file is stored.
o Starting Address: Physical address on storage (e.g., disk block).
o Size Used: Current file size.
o Size Allocated: Max file size.
3. Access Control Information
o Owner: File's controlling user (can grant or deny access).
o Permitted Actions: Read, write, execute permissions, and network
access.
4. Usage Information
o Date Created: When the file was added.
o Identity of Creator: Typically the owner.
o Date Last Read: Last time the file was accessed.
o Date Last Modified: Last update made to the file.
o Date of Last Backup: Last backup date.
o Current Usage: Info about ongoing file activities.
1. Simple List:
o Sequential list of file entries.
o Efficient for single users but inefficient for multiple users or large
file systems.
2. Two-Level Directory:
o One directory for each user with a master directory.
o Easier to enforce access control and organize files, but lacks deeper
organization.
3. Tree-Structured Directory (Most Common):
o Hierarchical structure: a master directory with subdirectories and
files.
o Pathname: Series of directory names leading to the file. Example:
User_B/Word/Unit_A/ABC.
4. Hashed Structure:
o For large directories, hashing improves search efficiency by
reducing search time.
Directory Operations
Refers to the logical structuring of records, which affects how they are
accessed.
Criteria for choosing file organization:
o Short access time: How fast the file can be read/written.
o Ease of update: How easy it is to modify records.
o Economy of storage: How efficiently space is used.
o Simple maintenance: How easy it is to manage the file.
o Reliability: How fault-tolerant the file organization is.
These criteria may conflict depending on the application. For example, for
batch processing, fast access for individual records is less critical.
Pile:
o Records are stored in the order they arrive.
o No structure; data is accumulated in an unorganized manner.
o Pros: Simple, easy to update.
o Cons: Requires exhaustive search for retrieval (inefficient for most
applications).
Sequential File:
o Records have a fixed format and are stored in a specific order,
usually based on a key field.
o Pros: Good for batch processing, easy to store on tape or disk.
o Cons: Slow for random access, difficult to update.
Indexed Sequential File:
o Records are organized sequentially, but an index is added for faster
access.
o Pros: Faster access compared to sequential files, still maintains
sequence order.
o Cons: Complex to maintain with overflow and indexing.
Indexed File:
o No sequential structure; records are accessed through multiple
indexes based on different search attributes.
o Pros: Highly flexible for multiple search attributes.
o Cons: Can become complex and difficult to maintain.
Hashed File:
o Records are placed into "buckets" using a hash function, which
allows direct access to records.
o Pros: Fast access for specific records.
o Cons: Not suitable for range queries (records cannot be retrieved in
order).
Requirements for Mutual Exclusion
Any facility or capability that is to provide support for mutual exclusion should meet the following requirements:
1. Mutual exclusion must be enforced: only one process at a time is allowed into its critical section, among all processes that have critical sections for the same resource or shared object
2. A process that halts in its noncritical section must do so without interfering with other processes
3. A process requiring access to a critical section must not be delayed indefinitely: no deadlock or starvation
4. When no process is in a critical section, any process that requests entry must be permitted to enter without delay
5. No assumptions are made about relative process speeds or the number of processors
6. A process remains inside its critical section for a finite time only
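A minimal illustration of enforcing mutual exclusion with a lock around a critical section; the shared counter and thread count are made-up:

```python
import threading

counter = 0
lock = threading.Lock()          # enforces mutual exclusion on the shared counter

def worker():
    global counter
    for _ in range(100_000):
        with lock:               # critical section: only one thread at a time
            counter += 1         # finite work inside, so others are not starved

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)                   # 400000, with no lost updates
```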