Device Management in Operating Systems

The operating system manages hardware devices through four main functions: 1. Monitoring device status 2. Enforcing policies to determine which process gets device access 3. Allocating devices to processes 4. Deallocating devices from processes once I/O is complete Devices can be dedicated to a single process, shared among multiple processes, or virtualized to improve sharing. The OS accesses devices through direct I/O, memory mapping, interrupts, or direct memory access.

Operating System
Operating Systems Course Teaching Team (Tim Pengampu MK Sistem Operasi)
Universitas Dian Nuswantoro
Device (I/O) Management
Device Management: the OS component that manages hardware
devices, with four basic functions
• Monitoring the status of each device, such as storage drives,
printers, and other peripheral devices.
• Enforcing preset policies to determine which process will get a
device and for how long.
• Allocating the devices.
• Deallocating them at two levels :
• At the process (or task) level when an I/O command has
been executed and the device is temporarily released.
• At the job level when the job is finished and the device is
permanently released.
Device (I/O) Management
• Characteristics (figure: comparison of device characteristics)
Device (I/O) Management
• Device Categories
• Dedicated
• Are assigned to only one job at a time.
• They serve that job for the entire time the job is active or until it releases them.
• Some devices demand this kind of allocation scheme, because it would be awkward to let several
users share them.
• Example: tape drives, printers, and plotters
• Disadvantages
• They must be allocated to a single user for the duration of a job’s execution, which can be
quite inefficient because the device is typically not in use 100% of that time.
• Shared
• Can be assigned to several processes.
• For example – a disk (DASD – Direct Access Storage Device) can be shared by several processes at the
same time by interleaving their requests.
• This interleaving must be carefully controlled by the Device Manager
• All conflicts must be resolved based on predetermined policies.
• Virtual
• A combination of the first two types.
• They’re dedicated devices that have been transformed into shared devices.
• Example : printer
• Converted into a shareable device through a spooling program that reroutes all print
requests to a disk.
• Only when all of a job’s output is complete, and the printer is ready to print out the entire
document, is the output sent to the printer for printing.
• Because disks are shareable devices, this technique can convert one printer into several virtual
printers, thus improving both its performance and use.
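The spooling idea can be sketched as a toy simulation (Python; the queue, function names, and job format here are illustrative assumptions, not a real OS interface):

```python
from collections import deque

spool = deque()  # stands in for the disk area that buffers print jobs

def submit_print_job(name, pages):
    """Reroute a print request to the spool instead of the printer."""
    spool.append((name, pages))

def printer_daemon():
    """Send complete, spooled jobs to the physical printer, one at a time."""
    printed = []
    while spool:
        printed.append(spool.popleft())  # only finished jobs reach the printer
    return printed

submit_print_job("report", 3)
submit_print_job("invoice", 1)
printed = printer_daemon()  # jobs come out in submission order
```

Because the spool absorbs requests immediately, many processes can "print" concurrently even though the physical printer handles one job at a time.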
Device Addressing
• How does the CPU address the device registers?
• Two approaches
• Dedicated range of device addresses in the physical
memory
• Requires special hardware instructions associated
with individual devices
• Memory-mapped I/O: makes no distinction
between device addresses and memory addresses
• Devices can be accessed the same way as normal
memory, with the same set of hardware
instructions
Device Addressing Illustrated
(figure: with separate device addresses, primary memory and the device registers occupy distinct address ranges; with memory-mapped I/O, the devices occupy a portion of the single memory address space)
Device (I/O) Management
• Approaches (Way to Access a Device)
• Direct I/O – CPU software explicitly transfer data to and from the controller’s data registers
• Direct I/O with polling – the device management software polls the device controller
status register to detect completion of the operation; device management is
implemented wholly in the device driver, if interrupts are not used

• Memory mapped I/O – device addressing simplifies the interface (device seen as a range of
memory locations)
• Memory mapped I/O with polling – the device management software polls the device
controller status register to detect completion of the operation; device management is
implemented wholly in the device driver.

• Interrupt driven direct I/O – interrupts simplify the software’s responsibility for detecting
operation completion; device management is implemented through the interaction of a
device driver and interrupt routine

• Direct memory access – dedicated hardware transfers information between the device
(controller’s data registers) and memory, so the CPU does not have to perform the transfer
Device (I/O) Management
• Direct I/O versus Memory Mapped I/O

Primary Primary copy_in R3, 0x012, 4


memory memory
will cause the machine to
Memory addresses

copy the contents of data


register 4 in the controller
Memory addresses with address 0x012 into
CPU register R3
Store R3, 0xFFFF0124
Device 1 Device 1
The device is associated
Device addresses

with logical memory


Device 2 Device 2 locations; each component
of the device that is
referenced by software is
Device n Device n
located at a normal
Separate device Memory - mapped I/O memory address
addresses
Device (I/O) Management
• I/O with polling
• Each I/O operation requires that the software and hardware
coordinate their operations to accomplish desired effect
• In direct I/O with polling, this coordination is done in the device
driver
• While managing the I/O, the device manager will poll the
busy/done flags to detect the operation’s completion; thus, the
CPU starts the device, then polls the CSR to determine when
the operation has completed
• With this approach it is difficult to achieve high CPU utilization,
since the CPU must constantly check the controller status
Device (I/O) Management
• I/O with polling (read)
1. The application process requests a read operation
2. The device driver queries the CSR to determine whether the device is idle;
if the device is busy, the driver waits for it to become idle
3. The driver stores an input command into the controller’s command register,
thus starting the device
4. The driver repeatedly reads the CSR to detect the completion of the read operation
5. The driver copies the content of the controller’s data register(s) into the
user process’s space in main memory
(figure: the request flows from the application through the system interface’s read function to the device driver, and across the hardware interface to the controller’s command, status, and data registers)
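The polling steps above can be sketched as follows (Python; the Controller class and its busy/data fields are invented stand-ins for a real CSR and data register):

```python
class Controller:
    """Toy device controller with a busy flag (CSR) and a data register."""
    def __init__(self):
        self.busy = False
        self.data = None
        self._pending = None

    def start_read(self, value):
        self.busy = True        # step 3: command register written, device starts
        self._pending = value

    def tick(self):
        if self.busy:           # the hardware eventually finishes the transfer
            self.data = self._pending
            self.busy = False

ctrl = Controller()
ctrl.start_read("sector-7")     # steps 2-3: device is idle, so start the read
while ctrl.busy:                # step 4: the driver polls the busy flag
    ctrl.tick()                 # stands in for real device progress
user_buffer = ctrl.data         # step 5: copy the data register to user space
```

The `while ctrl.busy` loop is exactly the busy-waiting the slides criticize: the CPU does nothing useful while the device works.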
Device (I/O) Management
• I/O with polling (write)
1. The application process requests a write operation
2. The device driver queries the CSR to determine if the device is idle;
if busy, it waits for the device to become idle
3. The device driver copies data from user space memory to the
controller’s data register(s)
4. The driver stores an output command into the command register,
thus starting the device
5. The driver repeatedly reads the CSR to determine when the device
has completed its operation
(figure: the request flows from the application through the system interface’s write function to the device driver, and across the hardware interface to the controller’s command, status, and data registers)
Ways to Access a Device
• How to input and output data to and from an
I/O device?

• Polling: a CPU repeatedly checks the status of a


device for exchanging data
+ Simple
- Busy-waiting
Ways to Access a Device
• Interrupt driven I/O
• In a multiprogramming system, the CPU time wasted in polled I/O could be
used by another process; but because the CPU is then shared among other
processes in addition to the one waiting for the I/O operation to complete,
polling may detect I/O completion only sporadically. This can be remedied
by the use of interrupts
• The reason for incorporating interrupts into computer hardware is to
eliminate the need for a device driver to constantly poll the CSR
• Instead of polling, the device controller “automatically” notifies
the device driver when the operation has completed
• Interrupt-driven I/O (read)
1. The application process requests a read operation
2. The device driver queries the CSR to find out if the device is idle;
if busy, it waits until the device becomes idle
3. The driver stores an input command into the controller’s command register,
thus starting the device
4. When this part of the driver completes its work, it saves information
regarding the operation it began in the device status table; this table
contains an entry for each device in the system
5. The device completes the operation and interrupts the CPU, thereby
causing the interrupt handler to run
6. The interrupt handler determines which device caused the interrupt;
it then branches to the device handler for that device
7. The device handler retrieves the pending I/O status information from
the device status table
8a. The device handler copies the content of the controller’s data
register(s) into the user process’s space
8b. The device handler returns control (knowing the return address from
the device status table)
9. The driver completes the read operation and the application process continues
(figure: the application, system interface, device driver, device status table, interrupt handler, device handler, and the controller’s command, status, and data registers)
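The interrupt-driven flow can be mimicked in software (Python; `threading.Event` stands in for the hardware interrupt line, and the names are illustrative assumptions):

```python
import threading

interrupt = threading.Event()     # raised by the "controller" on completion
data_register = []                # stands in for the controller's data register

def controller_finishes_io():
    """Simulates steps 5-6: the device completes and interrupts the CPU."""
    data_register.append("sector-42")
    interrupt.set()               # notify the driver instead of being polled

# step 3: start the device; it completes asynchronously a little later
threading.Timer(0.01, controller_finishes_io).start()

# the driver blocks here (the CPU could run another process meanwhile)
interrupt.wait()
user_buffer = data_register[0]    # step 8a: copy into the user's space
```

Unlike the polling loop, `interrupt.wait()` consumes no CPU cycles while the device works, which is the whole point of interrupt-driven I/O.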
Ways to Access a Device
• Interrupt-driven I/Os: A device controller
notifies the corresponding device driver
when the device is available
+ More efficient use of CPU cycles
- Data copying and movements
- Slow for character devices (i.e., one
interrupt per keyboard input)
Ways to Access a Device
• Direct Memory Access (DMA)
• Traditional I/O
• Polling approach: the CPU transfers data between the controller’s
data registers and primary memory
• Output operations – the device driver copies data from the application
process’s data area to the controller; vice versa for input operations
• Interrupt driven I/O approach – the interrupt handler is responsible
for the transfer task
(figure: the CPU moves data between primary memory and the device controller)
Ways to Access a Device
• DMA controllers are able to read and write information directly
from/to primary memory, with no software intervention
• The I/O operation still has to be initiated by the driver
• DMA hardware enables the data transfer to be accomplished without
using the CPU at all
• The DMA controller must include an address register (and address
generation hardware) loaded by the driver with a pointer to the relevant
memory block; this pointer is used by the DMA hardware to locate the
target block in primary memory
(figure: the DMA controller transfers data directly between the device and primary memory, bypassing the CPU)
Ways to Access a Device
• Direct memory access (DMA): uses an
additional controller to perform data
movements
+ CPU is not involved in copying data
- I/O device is much more complicated
(need to have the ability to access
memory).
- A process cannot access in-transit data
Ways to Access a Device
• Double buffering: uses two buffers.
While one is being used, the other is
being filled.
• Analogy: pipelining
• Extensively used for graphics and
animation
• So a viewer does not see the line-by-line scanning
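A minimal double-buffering sketch (Python; the buffer contents and function names are illustrative):

```python
buffers = [["frame-1"], []]       # two framebuffers
front = 0                          # index of the buffer being displayed

def draw(back_buffer, frame):
    """Render the next frame into the hidden (back) buffer."""
    back_buffer.clear()
    back_buffer.append(frame)

def swap():
    """Switch buffer roles; the viewer never sees partial drawing."""
    global front
    front = 1 - front

draw(buffers[1 - front], "frame-2")   # fill the back buffer...
shown_before = buffers[front][0]       # ...while frame-1 is still displayed
swap()
shown_after = buffers[front][0]        # frame-2 appears all at once
```

The swap is the pipelining analogy from the slide: drawing the next frame overlaps with displaying the current one.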
Device (I/O) Management
• Device Driver
• An OS component that is responsible for hiding the complexity of an I/O device
• So that the OS can access various devices in a uniform manner

• Two categories
• A block device stores information in fixed-size blocks, each one with its own
address
• e.g., disks
• A character device delivers or accepts a stream of characters, and individual
characters are not addressable
• e.g., keyboards, printers, network cards

• Device driver provides interface for these two types of devices


• Other OS components see block devices and character devices, but not the
details of the devices.
• How to effectively utilize the device is the responsibility of the device driver
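The uniform driver interface can be sketched as follows (Python; the class and method names are assumptions for illustration, not an actual OS API):

```python
from abc import ABC, abstractmethod

class BlockDevice(ABC):
    """Uniform interface for block devices: fixed-size, addressable blocks."""
    BLOCK_SIZE = 512

    @abstractmethod
    def read_block(self, block_number):
        ...

class CharDevice(ABC):
    """Uniform interface for character devices: a stream, not addressable."""
    @abstractmethod
    def read_char(self):
        ...

class RamDisk(BlockDevice):
    """Toy block driver: other OS components see only read_block(),
    never the device's internal details."""
    def __init__(self, blocks):
        self._blocks = blocks

    def read_block(self, block_number):
        return self._blocks[block_number]

disk = RamDisk({0: b"\x00" * 512})
block = disk.read_block(0)
```

Any component that accepts a `BlockDevice` works with every driver that implements the interface, which is the uniformity the slide describes.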
Device (I/O) Management
• Device Driver Illustration
I/O Scheduler
• The I/O scheduler performs the same job as the Process Scheduler: it
allocates the devices, control units, and channels.
• Under heavy loads, when the number of requests exceeds the number of
available paths, the I/O scheduler must decide which request is satisfied first.
• I/O requests are not preempted: once a channel program has started, it is
allowed to continue to completion even though I/O requests with higher
priorities may have entered the queue.
• This is feasible because channel programs are relatively short (50 to 100 ms).
I/O Scheduler
• Some systems allow I/O scheduler to give preferential
treatment to I/O requests from “high-priority”
programs.
• If a process has high priority, then its I/O requests
also have high priority and are satisfied before other I/O
requests with lower priorities.

• The I/O scheduler must synchronize its work with the traffic
controller to make sure that a path is available to satisfy the
selected I/O requests.
I/O Device Handler
• I/O device handler processes the I/O interrupts, handles error
conditions, and provides detailed scheduling algorithms, which are
extremely device dependent.
• Each type of I/O device has its own device handler algorithm.
• first come first served (FCFS)
• shortest seek time first (SSTF)
• SCAN (including LOOK, N-Step SCAN, C-SCAN, and C-LOOK)
• Every scheduling algorithm should :
• Minimize arm movement
• Minimize mean response time
• Minimize variance in response time
First Come First Served (FCFS) Device
Scheduling Algorithm
• Simplest device-scheduling algorithm :

• Easy to program and essentially fair to users.

• On average, it doesn’t meet any of the three goals of a seek strategy.

• Remember, seek time is the most time-consuming of the functions
performed here, so any algorithm that can minimize it is preferable
to FCFS.
First Come, First Serve
• Request queue: 3, 6, 1, 0, 7
• Head start position: 2
• Total seek distance: 1 + 3 + 5 + 1 + 7 = 17

(figure: head movement over tracks 0–7 versus time)
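The FCFS example can be checked with a short sketch (Python; the function name is illustrative):

```python
def fcfs_seek(requests, start):
    """Total head movement when requests are serviced strictly in arrival order."""
    total, pos = 0, start
    for track in requests:
        total += abs(track - pos)  # move the arm to the next request
        pos = track
    return total

total = fcfs_seek([3, 6, 1, 0, 7], 2)  # the example above: 17 tracks
```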
Shortest Seek Time First (SSTF) Device
Scheduling Algorithm
• Uses same underlying philosophy as shortest job next where
shortest jobs are processed first & longer jobs wait.

• Request with track closest to one being served (that is, one with
shortest distance to travel) is next to be satisfied.

• Minimizes overall seek time.

• Favors easy-to-reach requests and postpones traveling to those
that are out of the way.
Shortest Seek Distance First
• Request queue: 3, 6, 1, 0, 7
• Head start position: 2
• Total seek distance: 1 + 2 + 1 + 6 + 1 = 11

(figure: head movement over tracks 0–7 versus time)
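A corresponding SSTF sketch (Python; note the tie at the start: tracks 1 and 3 are both one track from the head at 2, and this sketch breaks the tie by picking the first pending request, track 3, so the path is 2→3→1→0→6→7 and the per-step distances 1 + 2 + 1 + 6 + 1 sum to 11):

```python
def sstf_seek(requests, start):
    """Always service the pending request nearest the current head position."""
    pending, pos, total = list(requests), start, 0
    while pending:
        nearest = min(pending, key=lambda t: abs(t - pos))  # shortest seek next
        total += abs(nearest - pos)
        pos = nearest
        pending.remove(nearest)
    return total

total = sstf_seek([3, 6, 1, 0, 7], 2)  # path 2→3→1→0→6→7: 11 tracks
```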
SCAN Device Scheduling Algorithm
• SCAN uses a directional bit to indicate whether the arm
is moving toward the center of the disk or away from it.

• Algorithm moves arm methodically from outer to inner
track, servicing every request in its path.

• When it reaches the innermost track it reverses direction
and moves toward the outer tracks, again servicing every
request in its path.
SCAN
• Request queue: 3, 6, 1, 0, 7
• Head start position: 2
• Total seek distance: 1 + 1 + 3 + 3 + 1 = 9

(figure: head movement over tracks 0–7 versus time)
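A SCAN sketch (Python; this version only travels as far as the requested tracks, i.e., LOOK-style, which matches the example here because track 0 itself is requested):

```python
def scan_seek(requests, start):
    """Sweep toward track 0 servicing requests, then reverse and sweep upward."""
    down = sorted((t for t in requests if t <= start), reverse=True)
    up = sorted(t for t in requests if t > start)
    total, pos = 0, start
    for track in down + up:  # 2→1→0, then reverse: 0→3→6→7
        total += abs(track - pos)
        pos = track
    return total

total = scan_seek([3, 6, 1, 0, 7], 2)  # 9 tracks, as in the example
```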
Other Variations of SCAN
• N-Step SCAN -- holds all requests until arm starts on way back.
New requests are grouped together for next sweep.
• C-SCAN (Circular SCAN) -- arm picks up requests on its path during
inward sweep.
• When innermost track has been reached returns to outermost
track and starts servicing requests that arrived during last
inward sweep.
• Provides a more uniform wait time.
• C-LOOK (an optimization of C-SCAN) – the inward sweep stops at the last
high-numbered track requested, so the arm doesn’t move all the way to
the last track unless it’s required to do so.
• The arm doesn’t necessarily return to the lowest-numbered track;
it returns only to the lowest-numbered track that’s requested.
LOOK (Elevator Algorithm) : A Variation of SCAN

• Arm doesn’t necessarily go all the way to either edge
unless there are requests there.
• “Looks” ahead for a request before going to service it.
• Eliminates the possibility of indefinite postponement of
requests in out-of-the-way places, at either edge of the disk.
• As requests arrive, each is incorporated in its proper
place in the queue and serviced when the arm reaches that
track.
Disk Arm Scheduling Policies
• Circular SCAN (C-SCAN): disk arm always
serves requests by scanning in one
direction.
• Once the arm finishes scanning for one
direction
• Returns to the 0th track for the next
round of scanning
C-SCAN
• Request queue: 3, 6, 1, 0, 7
• Head start position: 2
• Total seek distance: 1 + 3 + 1 + 7 + 1 = 13

(figure: head movement over tracks 0–7 versus time)
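A C-SCAN sketch (Python; the return sweep to track 0 is counted as head movement, matching the example’s 7-track jump from 7 back to 0):

```python
def cscan_seek(requests, start):
    """Service requests in one direction only; then return to track 0 and continue."""
    up = sorted(t for t in requests if t >= start)
    wrapped = sorted(t for t in requests if t < start)
    total, pos = 0, start
    for track in up:          # upward sweep: 2→3→6→7
        total += abs(track - pos)
        pos = track
    if wrapped:
        total += pos          # return sweep back to track 0
        pos = 0
        for track in wrapped: # continue upward again: 0→1
            total += abs(track - pos)
            pos = track
    return total

total = cscan_seek([3, 6, 1, 0, 7], 2)  # 13 tracks, as in the example
```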
Look and C-Look

• Similar to SCAN and C-SCAN
• The arm goes only as far as the final request in each
direction, then turns around
• Looks for a request before continuing to move in a
given direction.
Which Device Scheduling Algorithm ?

• FCFS works well with light loads, but as soon as the load
grows, service time becomes unacceptably long.
• SSTF is quite popular and intuitively appealing. It works
well with moderate loads but has problem of
localization under heavy loads.
• SCAN works well with light to moderate loads and
eliminates problem of indefinite postponement. SCAN is
similar to SSTF in throughput and mean service times.
• C-SCAN works well with moderate to heavy loads and
has a very small variance in service times.
Disk as An Example Device
• Decades-old storage technology
• Incredibly complicated
• A modern drive can contain on the order of 250,000 lines of microcode
Disk Characteristics
Disk platters: coated with magnetic materials for
recording

(figure: disk platters)
Disk Characteristics
• Disk arm: moves a comb of disk heads
• Only one disk head is active for reading/writing

(figure: disk heads, platters, and arm)
Hard Disk Trivia…
• Aerodynamically designed to fly
• As close to the surface as possible
• There is almost no clearance between head and surface
• Therefore, some sealed drives are filled with inert gas (e.g., helium)
• If head touches the surface
• Head crash
• Scrapes off magnetic information
Disk Characteristics
Each disk platter is divided into concentric tracks

(figure: concentric tracks on a platter)
Disk Characteristics
A track is further divided into sectors. A sector is the smallest unit of disk
storage

(figure: sectors within a track)
Disk Characteristics
A cylinder consists of all tracks with a given disk arm position

(figure: a cylinder formed by the tracks at one arm position)
Disk Characteristics
Cylinders are further divided into zones

(figure: zones grouping adjacent cylinders)
Disk Characteristics
Zone-bit recording: zones near the edge of a disk store more information
(higher bandwidth)

(figure: zone-bit recording, with larger zones near the edge)
More About Hard Drives Than You Ever Want
to Know

• Track skew: the starting position of each track is slightly skewed
• Minimizes rotational delay when sequentially transferring bytes
across tracks
• Thermo-calibrations: periodically performed to account
for changes of disk radius due to temperature changes
• Typically 100 to 1,000 bits are inserted between sectors
to account for minor inaccuracies
Disk Access Time
• Seek time: the time to position disk heads (~8
msec on average)
• Rotational latency: the time to rotate the
target sector to underneath the head
• Assume 7,200 rotations per minute (RPM)
• 7,200 / 60 = 120 rotations per second
• 1/120 = ~8 msec per rotation
• Average rotational delay is ~4 msec
Disk Access Time
• Transfer time: the time to transfer bytes
• Assumptions:
• 58 Mbytes/sec
• 4-Kbyte disk blocks
• Time to transfer a block takes 0.07 msec

• Disk access time


• Seek time + rotational delay + transfer time
Disk Performance Metrics

• Latency
• Seek time + rotational delay

• Bandwidth
• Bytes transferred / disk access time
Examples of Disk Access Times
• If disk blocks are randomly accessed
• Average disk access time = ~12 msec
• Assume 4-Kbyte blocks
• 4 Kbyte / 12 msec = ~340 Kbyte/sec

• If disk blocks of the same cylinder are randomly
accessed without disk seeks
• Average disk access time = ~4 msec
• 4 Kbyte / 4 msec = ~ 1 Mbyte/sec
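The figures above can be reproduced with a short calculation (Python; the 8 msec seek, 7,200 RPM, 58 Mbytes/sec, and 4-Kbyte block values come from the preceding slides):

```python
rpm = 7200
avg_seek_ms = 8.0                          # average seek time from the slides
ms_per_rotation = 60_000 / rpm             # ~8.33 msec per rotation
avg_rotational_ms = ms_per_rotation / 2    # ~4.17 msec on average
transfer_ms = 4 / (58 * 1024) * 1000       # 4-Kbyte block at 58 Mbytes/sec, ~0.07 msec

access_ms = avg_seek_ms + avg_rotational_ms + transfer_ms  # ~12 msec total
bandwidth_kb_s = 4 / (access_ms / 1000)    # ~330 Kbyte/sec for random 4-Kbyte blocks
```

Seek plus rotation dominates: the transfer itself is under 1% of the random-access time, which is why minimizing seeks and rotational latency matters so much.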
Examples of Disk Access Times

• If disk blocks are accessed sequentially


• Without seeks and rotational delays
• Bandwidth: 58 Mbytes/sec

• Key to good disk performance


• Minimize seek time and rotational latency
Disk Tradeoffs

Sector size | Space utilization             | Transfer rate
1 byte      | 8 bits / 1008 bits (0.8%)     | 80 bytes/sec (1 byte / 12 msec)
4 Kbytes    | 4096 bytes / 4221 bytes (97%) | 340 Kbytes/sec (4 Kbytes / 12 msec)
1 Mbyte     | ~100%                         | 58 Mbytes/sec (peak bandwidth)

• Larger sector size → better bandwidth
• Wasteful if only 1 byte out of 1 Mbyte is needed
Disk Controller
• Few popular standards
• IDE (integrated device electronics)
• (P)/(S)ATA (AT attachment interface)
• (SA)SCSI (small computer systems interface)
• SSD (Solid-State Disk)
• NVMe

• Differences
• Performance (speed measured in IOPS – I/O Operations Per Second)
• Parallelism
Redundant Array of Inexpensive / Independent
Disks (RAID)
• RAID is a set of physical disk drives that is viewed as a single logical
unit by the OS.
• RAID assumes that several smaller-capacity disk drives are preferable to
a few large-capacity disk drives because, by distributing data among
several smaller disks, the system can simultaneously access requested
data from multiple drives.
• The system shows improved I/O performance and improved data
recovery in the event of disk failure.
• RAID introduces the much-needed concept of redundancy to help
systems recover from hardware failure.
• It also requires more disk drives, which increases hardware costs.
Six standard levels of RAID fall into 4
categories. Each offers a unique
combination of advantages.

Level | Category           | Description                          | I/O Request Rate | Data Transfer Rate
0     | Data Striping      | Nonredundant                         | Excellent        | Excellent
1     | Mirroring          | Mirrored                             | Good/Fair        | Fair/Fair
2     | Parallel Access    | Redundant                            | Poor             | Excellent
3     | Parallel Access    | Bit-interleaved parity               | Poor             | Excellent
4     | Independent Access | Block-interleaved parity             | Excellent/Fair   | Fair/Poor
5     | Independent Access | Block-interleaved distributed parity | Excellent/Fair   | Fair/Poor
Any Question ?

Anything to discuss ?
