
Device Management in Operating System

Definition:

Device Management in an Operating System (OS) refers to the process of controlling and coordinating Input/Output (I/O) devices such as disks, keyboards, microphones, printers, magnetic tapes, USB ports, camcorders, scanners, and other peripheral or support units (like control channels). It ensures that hardware resources are used efficiently and are shared correctly among processes.

Role in Process Execution:

During execution, a process may require resources such as:

 Main memory

 File access

 Disk drive access, etc.

If the required resource is available, it is allocated, and control returns to the CPU. If it is not available, the process is delayed until the resource becomes available.

To manage these devices (physical or virtual), the OS relies on a component known as the Device Controller, the hardware interface to each device that the OS drives through its device drivers. This controller:

 Manages communication with devices

 Checks the availability of requested devices

Categories of I/O Devices:

Device management handles three fundamental types of I/O devices:

1. Block Devices

o Store data in fixed-size blocks, each with a unique address

o Example: Hard Disks

2. Character Devices

o Transmit or receive a stream of characters

o These characters cannot be addressed individually

o Example: Keyboard, Printer


3. Network Devices

o Used to transmit data packets

o Example: Network Interface Cards (NICs)

Functions of Device Management:

1. Tracking Information:

o Keeps track of data, device status, location, and usage

o Managed through a component called the file system

2. Enforcing Policies:

o Decides which process gets access, when, and for how long, based on pre-defined policies

3. Performance Optimization:

o Improves the performance of specific devices using various optimization techniques

4. Monitoring Device Status:

o Continuously monitors devices such as printers, storage devices, and others to track usage and detect issues

5. Allocation and Deallocation:

o Allocates devices to processes when required

o Deallocates devices in two stages:

 Temporary release after an I/O command is issued

 Permanent release when the job is completed

Types of Devices in Device Management:

1. Dedicated Devices

Definition:
Dedicated devices are devices assigned exclusively to a single process
at a time. While one process is using the device, no other process can
access it until it's released.

Examples:

 Printers
 Plotters

 Tape drives

 CD/DVD burners (in some cases)

How They Work:

 A process requests the device.

 The OS checks availability.

 If free, it allocates the device.

 Other processes must wait until the device is released.
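These steps can be sketched in Python (an illustrative model, not a real OS interface):

```python
# Minimal sketch of exclusive allocation of a dedicated device.
class DedicatedDevice:
    def __init__(self, name):
        self.name = name
        self.owner = None          # process currently holding the device

    def request(self, pid):
        """Allocate the device if free; otherwise the caller must wait."""
        if self.owner is None:
            self.owner = pid
            return True            # allocated
        return False               # busy: requesting process must wait

    def release(self, pid):
        if self.owner == pid:
            self.owner = None

printer = DedicatedDevice("printer0")
assert printer.request(1) is True    # process 1 gets the printer
assert printer.request(2) is False   # process 2 must wait
printer.release(1)
assert printer.request(2) is True    # now process 2 can use it
```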

Advantages:

 Eliminates resource conflict or interference.

 Predictable device behavior for the process.

Disadvantages:

 Low efficiency: If the process uses the device only occasionally, the rest of the time is wasted.

 Can become a bottleneck in multitasking environments.

🔄 2. Shared Devices

Definition:
Shared devices are those that can be accessed by multiple processes
concurrently, usually through controlled interleaving of operations.

Examples:

 Hard disks (used by multiple processes/filesystems)

 SSDs

 Network file systems

 Scanners (with proper driver support)

How They Work:

 OS uses scheduling algorithms (e.g., FCFS, SCAN) to manage access.

 Access is controlled to avoid race conditions or data inconsistency.

 May involve buffering, locking, or queueing mechanisms.
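To illustrate the scheduling point, here is a minimal Python sketch of SCAN (the "elevator" algorithm); FCFS would simply service requests in arrival order:

```python
def scan(requests, head, direction="up"):
    """SCAN ('elevator') disk scheduling sketch: service all pending
    requests in one direction of head movement, then reverse.
    Returns the order in which cylinder requests are serviced."""
    up = sorted(r for r in requests if r >= head)
    down = sorted((r for r in requests if r < head), reverse=True)
    return up + down if direction == "up" else down + up

# Head at cylinder 50, pending requests at various cylinders:
order = scan([95, 180, 34, 119, 11, 123, 62, 64], head=50)
print(order)  # [62, 64, 95, 119, 123, 180, 34, 11]
```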


Advantages:

 Better utilization of resources.

 Supports multitasking and multiprocessing.

 Reduced waiting time if managed properly.

Disadvantages:

 Requires complex OS logic to manage concurrent access.

 Can lead to data corruption or conflicts if synchronization is not properly handled.

💡 3. Virtual Devices

Definition:
Virtual devices are dedicated devices managed by the OS to behave
like shared devices through software techniques like spooling.

Examples:

 Printers using print spoolers

 Virtual printers (like PDF writers)

 Emulated hardware (e.g., virtual serial ports)

How They Work:

 The OS creates a software queue (usually on disk).

 When a process sends a request (like a print job), it's stored in the
queue.

 The dedicated device (e.g., printer) processes jobs one at a time.

 Users get the illusion of parallelism or sharing.
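The spooling flow above can be sketched as a simple in-memory queue (in a real OS the queue would live on disk):

```python
from collections import deque

# Spooling sketch: many processes enqueue jobs instantly; the single
# dedicated printer later drains the queue one job at a time.
class PrintSpooler:
    def __init__(self):
        self.queue = deque()

    def submit(self, pid, document):
        """Called by a process; returns immediately (job is queued)."""
        self.queue.append((pid, document))

    def run_printer(self):
        """The printer services queued jobs in strict FIFO order."""
        printed = []
        while self.queue:
            pid, doc = self.queue.popleft()
            printed.append(f"pid {pid}: {doc}")
        return printed

spool = PrintSpooler()
spool.submit(1, "report.docx")
spool.submit(2, "photo.png")
print(spool.run_printer())  # ['pid 1: report.docx', 'pid 2: photo.png']
```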

Advantages:

 Enables efficient use of a non-shareable device.

 Increases system throughput.

 Provides better user experience in multitasking environments.

Disadvantages:

 Introduces latency (delays) as jobs wait in the queue.

 Requires extra storage space (for spooling).


 May fail if the queue gets corrupted or overflows.

Features of Device Management in OS:

1. Interaction via Device Drivers:

o The OS communicates with device controllers through device drivers

2. Role of Device Drivers:

o Act as system software that bridges the gap between processes and device controllers

3. API Implementation:

o One of the key roles is to implement the Application Programming Interface (API) for device handling

4. Control of Device Operations:

o Drivers allow the OS to effectively control multiple devices

5. Registers in Device Controllers:

o Each device controller contains three important registers:

 Command Register – for issuing instructions

 Status Register – for checking device state

 Data Register – for data transfer
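A toy Python simulation (the register names follow the list above; the instant completion is an assumption, not real hardware behavior) shows how a driver uses the three registers:

```python
# Toy model of a device controller's command, status, and data registers.
class DeviceController:
    IDLE, BUSY, DONE = 0, 1, 2     # possible status-register values

    def __init__(self):
        self.command = None        # command register: instruction to run
        self.status = self.IDLE    # status register: device state
        self.data = None           # data register: transferred data

    def issue(self, command, data=None):
        self.command, self.data = command, data
        self.status = self.BUSY
        # Real hardware would work asynchronously; we complete instantly.
        if command == "READ":
            self.data = 0x42       # pretend the device produced a byte
        self.status = self.DONE

ctrl = DeviceController()
ctrl.issue("READ")
while ctrl.status != DeviceController.DONE:   # driver polls the status register
    pass
print(hex(ctrl.data))  # 0x42
```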

Conclusion:

Device Management in Operating Systems is a core function that ensures efficient, conflict-free, and policy-driven usage of all hardware devices. It bridges the gap between hardware and software, ensuring proper resource allocation, optimization, and coordination across processes.
SWAP SPACE MANAGEMENT
Swap space is a portion of the hard disk used as virtual memory when
the physical (RAM) memory is full.

🧠 How it is used:

 In swapping systems, the entire process (code, data, and image) may be moved to swap space.

 In paging systems, only the pages removed from RAM are stored in swap space.

💾 Size of Swap Space:

 Swap space size can range from megabytes to gigabytes.

 It depends on:

o Amount of physical RAM

o Amount of virtual memory in use

o How the OS manages memory

⚠️ Why it's important:

 If the system runs out of swap space, it may:

o Abort processes

o Or even crash

 It’s better to overestimate swap space than underestimate:

o Overestimating only wastes some disk space

o Underestimating can crash the system

SWAP SPACE EXAMPLE

Swap space is used to extend a system’s memory by using disk space when RAM is full. Different operating systems manage it in different ways.

 In Solaris, swap space is allocated only for the virtual memory that exceeds the pageable physical (RAM) memory.

 Earlier versions of Linux recommended setting swap space to twice the size of physical memory (2× RAM).

 Today, that rule is outdated. Most modern Linux systems use much less swap space, depending on actual needs and available RAM.

🔁 Multiple Swap Spaces

 Operating systems like Linux allow multiple swap spaces:

o These can be dedicated swap partitions

o Or simply swap files

 Using multiple swap spaces helps distribute I/O load during paging and swapping.

o This spreads the memory workload across the system, improving performance and bandwidth usage.

Types of Swap Space

Swap space can be located in two places:

1. Normal File System

2. Separate Disk Partition

1. Swap in a Normal File System:

 Swap space is just a large file in the file system.

 You can create and allocate it like any other file.

 Easy to set up, but slower and less efficient.

 Why slower?

o The system must go through directories and file system structures.

o Causes extra disk access.

o External fragmentation can make swapping slow by needing multiple disk seeks.
2. Swap in a Separate Disk Partition (Raw Partition):

 A raw partition is a part of the disk with no file system.

 Special swap-space manager handles block allocation directly.

 Much faster, because:

o No need to go through file system structures.

o Uses fast algorithms focused on speed, not space efficiency.

 Downside: Internal fragmentation increases (some unused space), but it’s okay because:

o Swap space is temporary and used only when needed.

 Fixed size because it’s pre-allocated during disk partitioning.

💡 Example:

Linux supports both types of swap — in file system and raw partition.

RAID (Redundant Array of Independent Disks)

It is a technique that uses multiple hard disks together instead of a single disk.

RAID is used to:

 Increase performance (faster access)

 Provide data redundancy (backup and safety)

❓ Why is Data Redundancy Important?

Data redundancy means storing the same data in more than one
place (like backups).

 ✅ If one disk fails, the data can still be recovered from another disk.

 ❌ Without RAID, if a single disk is lost, all data may be lost.

Even though redundancy uses more space, it keeps your data safe and
makes the system more reliable.

Key Evaluation Points of a RAID System:


1. Reliability

o Refers to how tolerant the system is to disk failures.

o Example: Some RAID levels can keep working even if one or more disks fail.

2. Availability

o Refers to how often the system is up and running (not under maintenance or failure).

o A highly available system ensures users can access data almost all the time.

3. Performance

o Measures how fast the system works.

o Includes:

 Response time (how quickly it reacts to a request)

 Throughput (how much data it can handle in a given time)

o RAID can improve performance by distributing work across multiple disks.

4. Capacity

o Measures the actual usable space available to the user.

o Example: If you have N disks, each with B blocks, the useful capacity depends on the RAID level used (some disks may be used for redundancy).

Transparency of RAID:

 RAID is transparent to the operating system.

 The OS sees RAID as one large disk made up of a linear series of blocks.

 This means older systems or software can use RAID without needing major changes.

Advantages of RAID

1. Data Redundancy

o Keeps multiple copies of data on different disks.

o Protects against disk failures; data remains safe.

2. Performance Enhancement

o Spreads data across disks.

o Allows faster read/write by using multiple disks at once.

3. Scalability

o Easy to increase storage by adding more disks.

4. Versatility

o Can be used in servers, desktops, workstations, etc.

❌ Disadvantages of RAID

1. Cost

o Expensive, especially for high-capacity RAID levels.

2. Complexity

o Setting up and managing RAID can be complicated.

3. Decreased Performance in Some Types

o RAID 5 and RAID 6 may be slower on writes due to parity calculations.

4. Single Point of Failure

o If the RAID controller fails, all data can be lost; RAID is not a full backup.

Stable Storage Implementation


Stable Storage (Definition)

Stable storage is a type of reliable storage where data is never lost, even during disk or system failures.

🔧 How Stable Storage Is Achieved:

To make storage stable:

1. Data is stored on multiple devices (replication).

2. Devices must fail independently (not all at once).

3. Two physical blocks are used for each logical block (backup).

4. Updates are written safely and carefully to both copies.

💾 Disk Write Scenarios:

When writing to a disk, 3 things can happen:

1. ✅ Successful Write – Data is written correctly.

2. ⚠️ Partial Failure – Only part of the data is written; may cause corruption.

3. ❌ Total Failure – Write didn’t happen; old data is still there.

Stable Write Process:

1. Write to first physical block.

2. If successful, write to second block.

3. If both succeed → Operation is complete.

🔁 Recovery After Failure:

After a crash, check both physical blocks:

 If both are the same and correct → Nothing to do.

 If one is corrupted → Copy the good block over the bad one.

 If both are okay but different → Copy the second block to the first.

➡️ This ensures:

“Either data is saved correctly or not changed at all.”
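The write and recovery rules above can be sketched in Python (illustrative only: dictionaries stand in for physical blocks, and a CRC stands in for the disk's own error detection):

```python
import zlib

def write_block(blk, data):
    blk["data"], blk["crc"] = data, zlib.crc32(data)

def is_good(blk):
    return "crc" in blk and blk["crc"] == zlib.crc32(blk["data"])

def stable_write(b1, b2, data):
    write_block(b1, data)        # 1) write the first physical block
    write_block(b2, data)        # 2) only if that succeeds, write the second

def recover(b1, b2):
    g1, g2 = is_good(b1), is_good(b2)
    if g1 and g2 and b1["data"] == b2["data"]:
        return                   # both same and correct: nothing to do
    if not g1:
        b1.update(b2)            # copy the good block over the bad one
    elif not g2:
        b2.update(b1)
    else:
        b1.update(b2)            # both okay but different: copy second to first

b1, b2 = {}, {}
stable_write(b1, b2, b"old")
write_block(b1, b"new")          # simulate a crash before step 2 completes
recover(b1, b2)
print(b1["data"] == b2["data"])  # True: the blocks agree again
```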

💡 More Copies = More Safety

 You can use more than 2 copies for higher fault tolerance.

 Usually, 2 copies are enough for practical use.

🚀 Using NVRAM (Non-Volatile RAM):


 NVRAM is used as a fast cache.

 It stores data temporarily before writing to disk.

 Since it doesn’t lose data on power off, it's considered part of stable storage.

 Makes writing faster and safer.

✅ Summary:

Stable storage protects data using:

 Multiple copies,

 Safe writing steps, and

 Recovery checks.
It ensures data is either written correctly or stays as it was.

1. Buffering

🟢 What it is: A temporary storage area in memory used to hold data while it's being transferred between two devices or between a device and an application.

🟢 Why it’s used: To match the speed difference between a fast and a
slow device (e.g., CPU is fast, printer is slow).

🟢 Example: When you’re printing a document, the data is first stored in a buffer, so the CPU can move on while the printer prints it slowly from the buffer.
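This producer/consumer relationship can be sketched with a bounded in-memory queue (a simplified model; real I/O buffering happens inside the kernel):

```python
import queue
import threading

# Buffering sketch: a fast producer (the CPU) hands data to a slow
# consumer (the printer) through a bounded buffer, so the producer
# does not have to wait for each page to finish printing.
buf = queue.Queue(maxsize=4)
printed = []

def printer():
    while True:
        item = buf.get()
        if item is None:           # sentinel: no more pages
            break
        printed.append(item)       # the slow device drains the buffer

t = threading.Thread(target=printer)
t.start()
for page in ["p1", "p2", "p3"]:
    buf.put(page)                  # returns quickly once queued
buf.put(None)
t.join()
print(printed)  # ['p1', 'p2', 'p3']
```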

2. Caching

🔵 What it is: A memory area that stores frequently accessed data so it can be retrieved quickly.

🔵 Why it’s used: To speed up future accesses by avoiding repeated reads from a slower device (like a hard disk).

🔵 Example: If you open a file and read it multiple times, the OS might
cache it in RAM so it doesn’t need to fetch it from the disk again and
again.

3. Spooling
🟠 What it is: A process where output data is temporarily stored
(usually on disk) and then sent to the device (like a printer) in order.

🟠 Why it’s used: To queue multiple jobs and avoid making the device
wait; it makes a dedicated device appear shared.

🟠 Example: When many users send print commands, they are spooled to
a disk queue, and the printer prints them one by one.

🔁 Quick Comparison Table:

Feature    | Buffering                         | Caching                    | Spooling
-----------|-----------------------------------|----------------------------|----------------------------------------
Purpose    | Match speed of producer/consumer  | Speed up repeated access   | Manage device access for multiple jobs
Location   | Usually RAM                       | RAM                        | Disk
Scope      | Temporary holding during transfer | Reuse of data              | Queuing jobs
Common Use | Printers, keyboards               | File system, browser cache | Printers, batch processing

Disk Structure (with Performance Terms)

Disk structure refers to the way data is organized and accessed on a hard disk drive (HDD). It includes the physical layout of the disk and the time factors involved in reading/writing data.

🔹 Physical Structure of a Disk:

1. Platters – Circular disks coated with magnetic material. Each platter stores data on both surfaces.

2. Tracks – Concentric circles on a disk surface where data is recorded.

3. Sectors – Each track is divided into small arcs called sectors (smallest addressable unit on the disk).

4. Cylinders – A group of tracks on different platters but at the same head position.

5. Read/Write Head – A head on each platter surface reads and writes data.
6. Disk Arm & Actuator – Moves the read/write head to the correct
track.

🔹 Logical View:

 Logical Block Addressing (LBA): Instead of referring to track/sector numbers, data is accessed using block numbers, simplifying management.

🔹 Performance Terms in Disk Access:

1. Seek Time:
Time taken for the disk arm to move the read/write head to the
desired track.
👉 Example: Moving from track 10 to track 200.

2. Rotational Latency:
Time the platter takes to rotate and bring the correct sector under
the read/write head.
👉 Depends on disk speed (measured in RPM).

3. Transfer Time (Transmission Time):
Time required to transfer data once the head is at the correct sector.
👉 Affected by data size and disk transfer rate.

4. Access Time:
Total time to read/write =
Seek Time + Rotational Latency + Transfer Time
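The formula can be checked with a quick calculation (the drive parameters below are made-up example values):

```python
def access_time_ms(seek_ms, rpm, kb, transfer_mb_per_s):
    """Access time = seek time + avg rotational latency + transfer time."""
    rotational = (60_000 / rpm) / 2                  # half a revolution, in ms
    transfer = kb / 1024 / transfer_mb_per_s * 1000  # transfer time in ms
    return seek_ms + rotational + transfer

# Hypothetical drive: 9 ms seek, 7200 RPM, 4 KB request, 100 MB/s
print(round(access_time_ms(9, 7200, 4, 100), 2))  # 13.21
```

Note that rotational latency dominates the transfer time for small requests, which is why disk scheduling focuses on reducing head movement.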
