Final Exam

The goal of the operating system and how it works


ADEN

2024
Exercise (1)
Consider the following set of processes, with the length of the CPU burst time given in milliseconds:

Process Burst Time Priority


P1 2 2
P2 1 1
P3 8 4
P4 4 2
P5 5 3

The processes are assumed to have arrived in the order P1, P2, P3, P4, P5, all at time 0.

a. Draw four Gantt charts that illustrate the execution of these processes using the following scheduling
algorithms: FCFS, SJF, non-preemptive priority (a larger priority number implies a higher priority), and RR
(quantum = 2).

b. What is the waiting time of each process for each of these scheduling algorithms?
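As a sketch of part (b), the waiting times for the three non-preemptive algorithms can be computed by ordering the processes and accumulating burst times, since all processes arrive at time 0. The function and variable names here are illustrative; RR would additionally need a ready-queue simulation with the 2 ms quantum.

```python
# Sketch: waiting times for non-preemptive schedulers when all
# processes arrive at time 0. Names are illustrative.

def waiting_times(processes, order_key):
    """Order the processes; each one waits for all bursts scheduled before it."""
    ordered = sorted(processes, key=order_key)   # Python's sort is stable
    wait, elapsed = {}, 0
    for name, burst, _priority in ordered:
        wait[name] = elapsed
        elapsed += burst
    return wait

procs = [("P1", 2, 2), ("P2", 1, 1), ("P3", 8, 4), ("P4", 4, 2), ("P5", 5, 3)]

fcfs = waiting_times(procs, order_key=lambda p: 0)      # keep arrival order
sjf  = waiting_times(procs, order_key=lambda p: p[1])   # shortest burst first
prio = waiting_times(procs, order_key=lambda p: -p[2])  # larger number = higher priority

print(fcfs)  # {'P1': 0, 'P2': 2, 'P3': 3, 'P4': 11, 'P5': 15}
print(sjf)   # {'P2': 0, 'P1': 1, 'P4': 3, 'P5': 7, 'P3': 12}
print(prio)  # {'P3': 0, 'P5': 8, 'P1': 13, 'P4': 15, 'P2': 19}
```

Ties (P1 and P4 share priority 2) are broken by arrival order because the sort is stable.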
Chapter 5
Background
 Processes can execute concurrently
• May be interrupted at any time, partially completing execution
 Concurrent access to shared data may result in data inconsistency
 Maintaining data consistency requires mechanisms to ensure the orderly
execution of cooperating processes
 Processes within a system may be independent or cooperating
 Cooperating process can affect or be affected by other processes, including
sharing data
Race Condition
 Race condition: The situation where several processes access and manipulate shared
data concurrently. The final value of the shared data depends upon which process
finishes last.
 To prevent race conditions, concurrent processes must be synchronized.
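A minimal sketch of how synchronization prevents a race condition, using Python threads and a lock around the read-modify-write on shared data (names are illustrative):

```python
import threading

counter = 0
lock = threading.Lock()

def deposit(times):
    global counter
    for _ in range(times):
        with lock:         # synchronize access to the critical section
            counter += 1   # read-modify-write on shared data

threads = [threading.Thread(target=deposit, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000
```

Removing the `with lock:` line reintroduces the race: concurrent increments can interleave, lose updates, and leave the counter below 40000.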
Critical Section Problem

 Consider system of n processes {p0, p1, … pn-1}


 Each process has critical section segment of code
• Process may be changing common variables, updating table, writing file,
etc.
• When one process in critical section, no other may be in its critical section
Critical Section Problem

 The critical-section problem is to design a protocol to solve this


 Each process must ask permission to enter critical section in entry section,
may follow critical section with exit section, then remainder section
Critical-Section Problem (Cont.)
Requirements for solution to critical-section problem

1. Mutual Exclusion - If process Pi is executing in its critical section, then no


other processes can be executing in their critical sections
2. Progress - If no process is executing in its critical section and there exist some
processes that wish to enter their critical section, then the selection of the
process that will enter the critical section next cannot be postponed
indefinitely
Critical-Section Problem (Cont.)
3. Bounded Waiting - A bound must exist on the number of times that other
processes are allowed to enter their critical sections after a process has made a
request to enter its critical section and before that request is granted
• Assume that each process executes at a nonzero speed
• No assumption concerning relative speed of the n processes
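One classic software solution that satisfies all three requirements for two processes is Peterson's algorithm. The sketch below assumes sequentially consistent memory, which CPython's global interpreter lock effectively provides; on real hardware the same code would need memory barriers.

```python
import threading

# Peterson's algorithm for two processes.
flag = [False, False]   # flag[i]: process i wants to enter its critical section
turn = 0                # whose turn it is to defer
shared = 0

def process(i):
    global turn, shared
    other = 1 - i
    for _ in range(1000):
        # entry section: announce intent, then politely yield the turn
        flag[i] = True
        turn = other
        while flag[other] and turn == other:
            pass                       # busy-wait until it is safe to enter
        shared += 1                    # critical section
        flag[i] = False                # exit section

t0 = threading.Thread(target=process, args=(0,))
t1 = threading.Thread(target=process, args=(1,))
t0.start(); t1.start()
t0.join(); t1.join()
print(shared)  # 2000
```

Mutual exclusion holds because a process spins whenever the other both wants in and holds the turn; progress and bounded waiting follow from the turn alternation.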
Chapter 6
The Deadlock Problem
 A set of blocked processes each holding a resource and waiting to acquire a
resource held by another process in the set.
Deadlock Characterization
Deadlock can arise if four conditions hold simultaneously.

 Mutual exclusion: only one process at a time can use a resource


 Hold and wait: a process holding at least one resource is waiting to acquire additional
resources held by other processes
 No preemption: a resource can be released only voluntarily by the process holding it, after
that process has completed its task
 Circular wait: there exists a set {P0, P1, …, Pn} of waiting processes such that P0 is waiting
for a resource that is held by P1, P1 is waiting for a resource that is held by P2, …, Pn–1 is
waiting for a resource that is held by Pn, and Pn is waiting for a resource that is held by P0.
System Model
 System consists of resources
 Resource types R1, R2, . . ., Rm
• CPU cycles, memory space, I/O devices
 Each resource type Ri has Wi instances.
 Each process utilizes a resource as follows:
• request
• use
• release
Methods for Handling Deadlocks
 Ensure that the system will never enter a deadlock state:
• Deadlock prevention
• Deadlock avoidance
 Allow the system to enter a deadlock state and then recover
 Ignore the problem and pretend that deadlocks never occur in the system; used by most
operating systems, including UNIX
Deadlock Prevention
Restrain the ways requests can be made
 Mutual Exclusion – not required for sharable resources (e.g., read-only files); must hold
for non-sharable resources
 Hold and Wait – must guarantee that whenever a process requests a resource, it does not
hold any other resources
• Require process to request and be allocated all its resources before it begins execution,
or allow process to request resources only when the process has none allocated to it.
• Low resource utilization; starvation possible
Deadlock Prevention (Cont.)
 No Preemption –
• If a process that is holding some resources requests another resource that cannot be
immediately allocated to it, then all resources currently being held are released
• Preempted resources are added to the list of resources for which the process is waiting
• Process will be restarted only when it can regain its old resources, as well as the new
ones that it is requesting
 Circular Wait – impose a total ordering of all resource types, and require that each
process requests resources in an increasing order of enumeration
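A minimal sketch of circular-wait prevention: assign every resource a number and always acquire resources in increasing order, so a cycle can never form. The `ORDER` enumeration and helper names are illustrative:

```python
import threading

# Circular-wait prevention: a hypothetical total ordering over resources.
lock_a = threading.Lock()
lock_b = threading.Lock()
ORDER = {id(lock_a): 1, id(lock_b): 2}

def acquire_in_order(*locks):
    # Always lock in increasing enumeration order, regardless of argument order.
    for lk in sorted(locks, key=lambda l: ORDER[id(l)]):
        lk.acquire()

def release_all(*locks):
    for lk in locks:
        lk.release()

result = []

def worker(name, first, second):
    acquire_in_order(first, second)
    result.append(name)        # work while holding both resources
    release_all(first, second)

t1 = threading.Thread(target=worker, args=("t1", lock_a, lock_b))
t2 = threading.Thread(target=worker, args=("t2", lock_b, lock_a))  # reversed args
t1.start(); t2.start()
t1.join(); t2.join()
print(sorted(result))  # ['t1', 't2']
```

Without the ordering, t1 taking lock_a while t2 takes lock_b could deadlock; with it, both threads contend for lock_a first and one always completes.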
Deadlock Avoidance
Requires that the system has some additional a priori information available

 Simplest and most useful model requires that each process declare the maximum number
of resources of each type that it may need
 The deadlock-avoidance algorithm dynamically examines the resource-allocation state to
ensure that there can never be a circular-wait condition
 Resource-allocation state is defined by the number of available and allocated resources,
and the maximum demands of the processes
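The core of the avoidance approach is a safety check: a state is safe if some ordering lets every process obtain its maximum need, finish, and release its allocation. A sketch of that check, in the style of the Banker's algorithm (the matrices are a classic textbook-style example, not taken from these slides):

```python
# Safety check at the heart of deadlock avoidance: repeatedly find a
# process whose remaining need fits in the available resources, let it
# finish, and reclaim its allocation.

def is_safe(available, allocation, maximum):
    need = [[m - a for m, a in zip(mrow, arow)]
            for mrow, arow in zip(maximum, allocation)]
    work = list(available)
    finished = [False] * len(allocation)
    progressed = True
    while progressed:
        progressed = False
        for i, done in enumerate(finished):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                # process i can run to completion and release its resources
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progressed = True
    return all(finished)

allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
maximum    = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
print(is_safe([3, 3, 2], allocation, maximum))  # True  -- a safe state
```

With available resources [3, 3, 2] the safe sequence P1, P3, P4, P0, P2 exists; with nothing available the same state would be unsafe.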
Deadlock Detection
 Allow system to enter deadlock state

 Detection algorithm

 Recovery scheme
Recovery from Deadlock: Resource Preemption
We successively preempt some resources from processes and give these resources to other
processes until the deadlock cycle is broken. Three issues need to be addressed:

 Selecting a victim – minimize cost

 Rollback – return to some safe state, restart process for that state

 Starvation – same process may always be picked as victim; include the number of rollbacks in the
cost factor
Questions
1- What is the difference between synchronization and preventing deadlock?

Synchronization is an agreement between two or more concurrent processes to ensure that
they do not simultaneously execute the part of the program called the critical section.
Deadlock is a situation where two or more processes are each waiting for another to release
a resource, so the processes wait in a circular chain.
Wherever there is process synchronization, there is a chance of deadlock.

2- What is the difference between deadlock prevention and deadlock avoidance?

A deadlock-prevention method ensures that at least one of the four deadlock conditions never
holds. In contrast, a deadlock-avoidance mechanism keeps the system from ever entering an
unsafe state.
Chapter 7
Background
 Program must be brought into memory (from disk) and placed within a
process for it to be run.
 Input queue – collection of processes on the disk that are waiting to be
brought into memory to run the program.
 User programs go through several steps before being run.
 Main memory and registers are only storage CPU can access directly
 Memory unit only sees a stream of:
 addresses + read requests, or
 address + data and write requests
Binding of Instructions and Data to Memory
 Address binding of instructions and data to memory addresses can happen at
three different stages
• Compile time: If memory location known a priori, absolute code can be
generated; must recompile code if starting location changes
• Load time: Must generate relocatable code if memory location is not
known at compile time
• Execution time: Binding delayed until run time if the process can be moved
during its execution from one memory segment to another
 Need hardware support for address maps (e.g., base and limit registers)
Logical vs. Physical Address Space
 The concept of a logical address space that is bound to a separate physical
address space is central to proper memory management
• Logical address – generated by the CPU; also referred to as virtual
address
• Physical address – address seen by the memory unit

Name two differences between logical and physical addresses.


A logical address does not refer to an actual existing address; rather, it refers to an abstract
address in an abstract address space. Contrast this with a physical address that refers to an
actual physical address in memory. A logical address is generated by the CPU and is translated
into a physical address by the memory management unit (MMU). Therefore, physical addresses
are generated by the MMU.
Protection
 Need to ensure that a process can access
only those addresses in its address space.

 We can provide this protection by using


a pair of base and limit registers that define
the logical address space of a process.
Memory-Management Unit (MMU)
 Hardware device that at run time maps virtual
to physical address
 Consider a simple scheme, which is a
generalization of the base-register scheme.
 The base register now called relocation
register
 The value in the relocation register is added to
every address generated by a user process at the
time it is sent to memory
 The user program deals with logical addresses;
it never sees the real physical addresses
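A sketch of the relocation-register scheme with hypothetical base and limit values: the logical address is range-checked against the limit, then the relocation register is added to form the physical address.

```python
# Hypothetical values for one process; in hardware these live in the
# relocation (base) and limit registers, reloaded on each context switch.
RELOCATION = 14000   # where the process starts in physical memory
LIMIT = 3000         # size of its logical address space

def translate(logical):
    if not 0 <= logical < LIMIT:
        # out-of-range access traps to the OS instead of touching memory
        raise MemoryError("trap: address outside process address space")
    return logical + RELOCATION

print(translate(346))  # 14346
```

The user program only ever sees logical addresses like 346; the physical address 14346 is produced by the MMU on the way to memory.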
Dynamic Loading
 The entire program does not need to be in memory to execute
 Routine is not loaded until it is called
 Better memory-space utilization; unused routine is never loaded
 All routines kept on disk in relocatable load format
 Useful when large amounts of code are needed to handle infrequently occurring
cases
 No special support from the operating system is required
Dynamic Linking
 Static linking – system libraries and program code combined by the loader
into the binary program image
 Dynamic linking – linking postponed until execution time
 Operating system checks if routine is in the process's memory address space
 If not in address space, add to address space
 Dynamic linking is particularly useful for libraries
What are the similarities and differences between dynamic loading and
dynamic linking?

Dynamic loading and dynamic linking are both techniques used in operating
systems to manage and execute programs, but they serve different purposes.
Dynamic loading is about loading a program into memory at runtime, while
dynamic linking is about linking a program with external libraries or
modules at runtime rather than at compile time.
Both techniques contribute to efficient memory usage and flexible program
execution in an operating system.
Swapping
 A process can be swapped temporarily out of memory to a backing store, and
then brought back into memory for continued execution.
 Backing store – fast disk large enough to accommodate copies of all memory
images for all users; must provide direct access to these memory images.
 Roll out, roll in – swapping variant used for priority-based scheduling
algorithms; lower-priority process is swapped out so higher-priority process
can be loaded and executed.
 Major part of swap time is transfer time; total transfer time is directly
proportional to the amount of memory swapped.
 Modified versions of swapping are found on many systems, i.e., UNIX,
Linux, and Windows.
What is the difference between overlays and swapping?

1. With overlays, the whole program does not need to reside in main memory. If a program
has two functions, say f1 (50 KB) and f2 (70 KB), that are mutually exclusive, then only
70 KB of memory is allocated for both functions, and whichever one is needed at a given
moment resides in main memory. This saves memory. With swapping, by contrast, the
whole program must reside in main memory.
2. Overlays require careful and time-consuming planning, while swapping does not.
3. Swapping must issue system calls to request and release memory, which overlays do not.
Contiguous Allocation
 Main memory must support both OS and user processes
 Limited resource, must allocate efficiently
 Contiguous allocation is one early method
 Main memory is usually divided into two partitions:
• Resident operating system, usually held in low memory with interrupt
vector
• User processes then held in high memory
• Each process contained in single contiguous section of memory
Contiguous Allocation (Cont.)
 Relocation registers used to protect user processes from each other, and from
changing operating-system code and data
• Base register contains value of smallest physical address
• Limit register contains range of logical addresses – each logical address
must be less than the limit register
• MMU maps logical address dynamically
• Can then allow actions such as kernel code being transient and kernel
changing size
Explain the difference between
a. Contiguous memory allocation b. Non-Contiguous memory allocation
1. Contiguous memory allocation assigns consecutive blocks of memory to a file/process;
non-contiguous allocation assigns separate blocks.
2. Contiguous allocation is faster in execution; non-contiguous allocation is slower.
3. Contiguous allocation is easier for the OS to control; non-contiguous allocation is more
difficult.
4. Contiguous allocation suffers both internal and external fragmentation; non-contiguous
allocation (paging) avoids external fragmentation, though internal fragmentation can
still occur.
5. Contiguous allocation includes single-partition and multi-partition allocation;
non-contiguous allocation includes paging and segmentation.
6. Contiguous allocation wastes memory; non-contiguous allocation wastes little or none.
7. Contiguous allocation is of two types, fixed (static) partitioning and dynamic
partitioning; non-contiguous allocation is of two types, paging and segmentation.
Variable Partition
 Multiple-partition allocation
• Degree of multiprogramming limited by number of partitions
• Variable-partition sizes for efficiency (sized to a given process’ needs)
• Hole – block of available memory; holes of various size are scattered throughout memory
• When a process arrives, it is allocated memory from a hole large enough to accommodate it
• Process exiting frees its partition, adjacent free partitions combined
• Operating system maintains information about:
a) allocated partitions
b) free partitions (holes)
Dynamic Storage-Allocation Problem
How to satisfy a request of size n from a list of free holes?

 First-fit: Allocate the first hole that is big enough


 Best-fit: Allocate the smallest hole that is big enough; must search entire list,
unless ordered by size
 Produces the smallest leftover hole
 Worst-fit: Allocate the largest hole; must also search entire list
 Produces the largest leftover hole

First-fit and best-fit better than worst-fit in terms of speed and storage utilization
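The three policies can be sketched as different ways of choosing a hole. The hole sizes below are taken from the first question that follows, and each hole shrinks in place as a request is carved out of it; names are illustrative.

```python
# Sketch of first-fit, best-fit, and worst-fit hole selection.
# Returns the index of the hole chosen for each request, or None if
# no hole is large enough.

def allocate(holes, requests, choose):
    holes = list(holes)                # work on a copy; holes shrink in place
    placements = []
    for size in requests:
        candidates = [i for i, h in enumerate(holes) if h >= size]
        if not candidates:
            placements.append(None)    # request cannot be satisfied
            continue
        i = choose(candidates, holes)
        holes[i] -= size
        placements.append(i)
    return placements

first_fit = lambda c, h: c[0]                         # first hole big enough
best_fit  = lambda c, h: min(c, key=lambda i: h[i])   # smallest adequate hole
worst_fit = lambda c, h: max(c, key=lambda i: h[i])   # largest hole

holes = [300, 600, 350, 200, 750, 125]   # KB, from question 1 below
reqs  = [115, 500, 358, 200, 375]

print(allocate(holes, reqs, first_fit))  # [0, 1, 4, 2, 4]
print(allocate(holes, reqs, best_fit))   # [5, 1, 4, 3, 4]
print(allocate(holes, reqs, worst_fit))  # [4, 4, 1, 2, None]
```

Note that worst-fit fails to place the 375 KB process: by always splitting the largest hole it fritters away the big holes early.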
Questions
1- Given six memory partitions of 300 KB, 600 KB, 350 KB, 200 KB, 750 KB,
and 125 KB (in order), how would the first-fit, best-fit, and worst-fit
algorithms place processes of size 115 KB, 500 KB, 358 KB, 200 KB, and 375
KB (in order)?

2- Given six memory partitions of 100 MB, 170 MB, 40MB, 205 MB, 300
MB, and 185 MB (in order), how would the first-fit, best-fit, and worst-fit
algorithms place processes of size 200 MB, 15MB, 185 MB, 75MB, 175 MB,
and 80MB (in order)? Indicate which—if any—requests cannot be satisfied.
Fragmentation
 External Fragmentation – total memory space exists to satisfy a request, but
it is not contiguous
 Internal Fragmentation – allocated memory may be slightly larger than
requested memory; this size difference is memory internal to a partition, but
not being used
 Reduce external fragmentation by Compaction
Paging
 Physical address space of a process can be noncontiguous; process is allocated physical
memory whenever the latter is available
• Avoids external fragmentation
• Avoids problem of varying sized memory chunks
 Divide physical memory into fixed-sized blocks called frames
• Size is power of 2, between 512 bytes and 16 Mbytes
 Divide logical memory into blocks of same size called pages
 Keep track of all free frames
 To run a program of size N pages, need to find N free frames and load program
 Set up a page table to translate logical to physical addresses
 Still have Internal fragmentation
Address Translation Scheme
 Address generated by CPU is divided into:
• Page number (p) – used as an index into a page table which contains base address of
each page in physical memory
• Page offset (d) – combined with base address to define the physical memory address that
is sent to the memory unit

page number | page offset
p (m − n bits) | d (n bits)

• For a given logical address space of size 2^m and page size 2^n


Why are page sizes always powers of 2?

Recall that paging is implemented by breaking up an address into a page and


offset number. It is most efficient to break the address into X page bits and Y
offset bits, rather than perform arithmetic on the address to calculate the page
number and offset. Because each bit position represents a power of 2, splitting an
address between bits results in a page size that is a power of 2.
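Because the page size is a power of 2, the split is just a shift and a mask. A sketch with a hypothetical 12-bit offset (4 KB pages):

```python
PAGE_BITS = 12   # n = 12 offset bits -> 4 KB pages

def split(address):
    page = address >> PAGE_BITS              # top m - n bits: page number
    offset = address & ((1 << PAGE_BITS) - 1)  # low n bits: page offset
    return page, offset

print(split(0x12345))  # (18, 837): page 0x12, offset 0x345
```

No division or modulo arithmetic is needed, which is exactly why page sizes are powers of 2.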
Implementation of Page Table
 Page table is kept in main memory
• Page-table base register (PTBR) points to the page table
• Page-table length register (PTLR) indicates size of the page table
 In this scheme every data/instruction access requires two memory accesses
• One for the page table and one for the data / instruction
 The two-memory access problem can be solved by the use of a special fast-
lookup hardware cache called translation look-aside buffers (TLBs) (also
called associative memory).
Translation Look-Aside Buffer
 Some TLBs store address-space identifiers (ASIDs) in each TLB entry –
uniquely identifies each process to provide address-space protection for that
process
• Otherwise need to flush at every context switch
 TLBs typically small (64 to 1,024 entries)
 On a TLB miss, value is loaded into the TLB for faster access next time
• Replacement policies must be considered
• Some entries can be wired down for permanent fast access
Structure of the Page Table
 Memory structures for paging can get huge using straight-forward methods
• Consider a 32-bit logical address space as on modern computers
• Page size of 4 KB (212)
• Page table would have 1 million entries (2^32 / 2^12 = 2^20)
• If each entry is 4 bytes → each process needs 4 MB of physical memory for the page
table alone
 Don’t want to allocate that contiguously in main memory
• One simple solution is to divide the page table into smaller units
 Hierarchical Paging
 Hashed Page Tables
 Inverted Page Tables
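The arithmetic above can be checked directly:

```python
# 32-bit logical address space with 4 KB pages.
logical_space = 2 ** 32
page_size = 2 ** 12                      # 4 KB
entries = logical_space // page_size     # 2^20 = 1,048,576 entries ("1 million")
table_bytes = entries * 4                # 4 bytes per entry
print(entries, table_bytes // 2 ** 20)   # 1048576 4  -> 4 MB page table per process
```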
Segmentation
 Memory-management scheme that supports user view of memory.
 A program is a collection of segments. A segment is a logical unit such as:
main program,
procedure,
function,
method,
object,
local variables, global variables,
common block,
stack,
symbol table, arrays
Segmentation Architecture
 Logical address consists of a two-tuple:
<segment-number, offset>,
 Segment table – maps two-dimensional logical addresses to one-dimensional physical addresses; each table entry has:
• base – contains the starting physical address where the segments reside in memory.
• limit – specifies the length of the segment.
 Segment-table base register (STBR) points to the segment table’s location in memory.
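A sketch of the translation just described, using an illustrative segment table (the base/limit pairs are hypothetical): the offset is checked against the segment's limit, then added to its base.

```python
# Hypothetical segment table: segment number -> (base, limit).
segment_table = {
    0: (1400, 1000),
    1: (6300, 400),
    2: (4300, 1100),
}

def translate(segment, offset):
    base, limit = segment_table[segment]
    if not 0 <= offset < limit:
        # offset beyond the segment's length traps to the OS
        raise MemoryError("trap: offset beyond segment limit")
    return base + offset

print(translate(2, 53))  # 4353
```

Logical address <2, 53> maps to physical address 4300 + 53 = 4353, while <1, 500> would trap because segment 1 is only 400 bytes long.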
Segmentation Architecture (Cont.)
 Relocation.
• dynamic
• by segment table

 Sharing.
• shared segments
• same segment number

 Allocation.
• first fit/best fit
• external
fragmentation
Segmentation Architecture (Cont.)
 Protection. With each entry in segment table associate:
• validation bit = 0 → illegal segment
• read/write/execute privileges
 Protection bits associated with segments; code sharing occurs at segment level.
 Since segments vary in length, memory allocation is a dynamic storage-allocation
problem.
 A segmentation example is shown in the following diagram
What are the similarities and differences between the paging and
segmentation?

Similarities:
1. Paging and segmentation are both memory-management schemes.
2. Both deal with the non-contiguous allocation of memory.

Differences:
1. In paging, each process is divided into fixed-size pages; in segmentation, a process is
divided into variable-sized segments.
2. Paging is managed by hardware through a page table; segmentation is managed by
software through a segment table.
3. Paging results in internal fragmentation but no external fragmentation; segmentation
results in external fragmentation but no internal fragmentation.
4. Paging has better performance due to hardware-based management and simpler address
translation; segmentation has lower performance due to software-based management
and more complex address translation.
5. In paging, the memory address is split into a page number and an offset; in
segmentation, into a segment identifier and an offset.
6. The page table maps virtual page numbers to physical frames; the segment table maps
logical segments to physical memory.
Chapter 8
Virtual memory
 Virtual memory – separation of user logical memory from physical
memory
• Only part of the program needs to be in memory for execution
• Logical address space can therefore be much larger than physical address
space
• Allows address spaces to be shared by several processes
• Allows for more efficient process creation
• More programs running concurrently
Virtual memory (Cont.)
 Virtual address space – logical view of how process is stored in memory
• Usually start at address 0, contiguous addresses until end of space
• Meanwhile, physical memory organized in page / frames
• MMU must map logical to physical
 Virtual memory can be implemented via:
• Demand paging
• Demand segmentation
What are the difference between physical and virtual memory?

1. Physical memory is the actual RAM; virtual memory is a memory-management
technique (logical).
2. Physical memory uses swapping; virtual memory uses paging.
3. Physical memory is limited by the size of the RAM; virtual memory is limited by the
size of the disk.
4. The CPU can access physical memory directly; it cannot access virtual memory
directly.
5. Physical memory is faster; virtual memory is slower.
Demand Paging
 Could bring entire process into memory at load time
 Or bring a page into memory only when it is needed
• Less I/O needed, no unnecessary I/O
• Less memory needed
• Faster response
• More users
 Similar to paging system with swapping
 Page is needed → reference to it
• invalid reference → abort
• not in memory → bring to memory
 Lazy swapper – never swaps a page into memory unless page will be needed
• Swapper that deals with pages is a pager
Page and Frame Replacement Algorithms
 Frame-allocation algorithm determines
• How many frames to give each process
• Which frames to replace
 Page-replacement algorithm
• Want lowest page-fault rate on both first access and re-access
 Evaluate algorithm by running it on a particular string of memory references (reference
string) and computing the number of page faults on that string
• String is just page numbers, not full addresses
• Repeated access to the same page does not cause a page fault
• Results depend on number of frames available
 In all our examples, the reference string of referenced page numbers is
7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1
First-In-First-Out (FIFO) Algorithm
 Reference string: 7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1
 3 frames (3 pages can be in memory at a time per process)

15 page faults

 Can vary by reference string: consider 1,2,3,4,1,2,5,1,2,3,4,5


• Adding more frames can cause more page faults!
 Belady’s Anomaly

 How to track ages of pages?


• Just use a FIFO queue
Optimal Algorithm
 Replace page that will not be used for longest period of time
• 9 page faults is optimal for this example
 How do you know this?
• Can’t read the future
 Used for measuring how well your algorithm performs
Least Recently Used (LRU) Algorithm
 Use past knowledge rather than future
 Replace the page that has not been used for the longest period of time
 Associate time of last use with each page

 12 faults – better than FIFO but worse than OPT


 Generally good algorithm and frequently used
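The three replacement policies and the fault counts quoted above (15 for FIFO, 12 for LRU, 9 for OPT, on this reference string with 3 frames) can be sketched and checked as follows; function names are illustrative.

```python
# Count page faults for a replacement policy, given a reference string
# and a fixed number of frames. The victim function picks which resident
# page to evict.

def page_faults(refs, frames, victim):
    memory, faults = [], 0
    for i, page in enumerate(refs):
        if page in memory:
            if victim is lru_victim:   # LRU refreshes recency on a hit
                memory.remove(page)
                memory.append(page)
            continue
        faults += 1
        if len(memory) == frames:
            memory.remove(victim(memory, refs, i))
        memory.append(page)
    return faults

def fifo_victim(memory, refs, i):
    return memory[0]   # oldest arrival: FIFO never reorders on a hit

def lru_victim(memory, refs, i):
    return memory[0]   # least recently used: hits move pages to the back

def opt_victim(memory, refs, i):
    # Evict the page whose next use is farthest away (or never).
    future = refs[i + 1:]
    return max(memory,
               key=lambda p: future.index(p) if p in future else len(future) + 1)

refs = [7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1]
print(page_faults(refs, 3, fifo_victim))  # 15
print(page_faults(refs, 3, lru_victim))   # 12
print(page_faults(refs, 3, opt_victim))   # 9
```

The same driver answers the practice questions below: swap in the other reference strings and keep 3 frames.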
Thrashing
 A process is thrashing if it is spending more time paging than executing
Questions
1 - Apply the (1) FIFO, (2) LRU, and (3) optimal (OPT) replacement algorithms for the
following page-reference strings:
• 2,6,9,2,4,2,1,7,3,0,5,2,1,2,9,5,7,3,8,5 .
• 0,6,3,0,2,6,3,5,2,4,1,3,0,6,1,4,2,3,5,7 .
Indicate the number of page faults for each algorithm assuming demand paging with three
frames.
2 - Apply the (1) FIFO, (2) LRU, and (3) optimal (OPT) replacement algorithms for the
following page-reference strings:

• 1,2,3,4,5,3,4,1,6,7,8,7,8,9,7,8,9,5,4,5,4,2 .
Indicate the number of page faults for each algorithm assuming demand paging with three
frames.
Chapter 9
Storage Hierarchy
 Storage systems
organized in hierarchy

• Speed
• Cost
• Volatility
Overview of Mass Storage Structure
 Bulk of secondary storage for modern computers is hard disk drives (HDDs) and
nonvolatile memory (NVM) devices
 HDDs spin platters of magnetically-coated material under moving read-write heads
• Drives rotate at 60 to 250 times per second
• Transfer rate is rate at which data flow between drive and computer
• Positioning time (random-access time) is time to move disk arm to desired cylinder
(seek time) and time for desired sector to rotate under the disk head (rotational
latency)
• Head crash results from disk head making contact with the disk surface -- That’s bad
 Disks can be removable
HDD Scheduling
 The operating system is responsible for using hardware efficiently — for the disk drives,
this means having a fast access time and disk bandwidth
 Minimize seek time
 Seek time is roughly proportional to seek distance
 Disk bandwidth is the total number of bytes transferred, divided by the total time
between the first request for service and the completion of the last transfer
 There are many sources of disk I/O request
• OS
• System processes
• User processes
RAID Structure
 RAID – redundant array of inexpensive disks
• multiple disk drives provides reliability via redundancy
 Disk striping uses a group of disks as one storage unit
 RAID is arranged into six different levels
• RAID schemes improve performance and improve the reliability of the storage system
by storing redundant data
• Mirroring or shadowing (RAID 1) keeps a duplicate of each disk
• Striped mirrors (RAID 1+0) or mirrored stripes (RAID 0+1) provides high
performance and high reliability
• Block interleaved parity (RAID 4, 5, 6) uses much less redundancy
RAID (0 + 1) and (1 + 0)
Other Features
 Regardless of where RAID implemented, other useful features can be added
 Snapshot is a view of file system before a set of changes take place (i.e. at a
point in time)
 Replication is automatic duplication of writes between separate sites
• For redundancy and disaster recovery
Chapter 10
Overview
 I/O management is a major component of operating system design and
operation
• Important aspect of computer operation
• I/O devices vary greatly
• Various methods to control them
• Performance management
• New types of devices frequent
 Ports, busses, device controllers connect to various devices
 Device drivers encapsulate device details
I/O Hardware
 Incredible variety of I/O devices
• Storage
• Transmission
• Human-interface
 Common concepts – signals from I/O devices interface with computer
• Port – connection point for device
• Bus - daisy chain or shared direct access
 PCI bus common in PCs and servers, PCI Express (PCIe)
 expansion bus connects relatively slow devices
 Serial-attached SCSI (SAS) common disk interface
• Controller (host adapter) – electronics that operate port, bus, device
 Sometimes integrated
 Sometimes separate circuit board (host adapter)
 Contains processor, microcode, private memory, bus controller, etc.
 Fiber channel (FC) is complex controller, usually separate circuit board (host-bus
adapter, HBA) plugging into bus
 I/O instructions control devices
 Devices usually have registers where device driver places commands, addresses, and data
to write, or read data from registers after command execution
 Devices have addresses, used by
• Direct I/O instructions
• Memory-mapped I/O
 Device data and command registers mapped to processor address space
 Especially for large address spaces (graphics)
Direct Memory Access
 Used to avoid programmed I/O (one byte at a time) for large data movement
 Requires DMA controller
 Bypasses CPU to transfer data directly between I/O device and memory
 OS writes DMA command block into memory
• Source and destination addresses
• Read or write mode
• Count of bytes
• Writes location of command block to DMA controller
• Bus mastering of DMA controller – grabs bus from CPU
 Cycle stealing from CPU but still much more efficient
• When done, interrupts to signal completion
 Version that is aware of virtual addresses can be even more efficient - DVMA
Chapter 11
File Concept
 Contiguous logical address space
 Types:
• Data
 Numeric

 Character

 Binary

• Program
 Contents defined by file’s creator
• Many types
 text file,
 source file,
 executable file
File Attributes
 Name – only information kept in human-readable form
 Identifier – unique tag (number) identifies file within file system
 Type – needed for systems that support different types
 Location – pointer to file location on device
 Size – current file size
 Protection – controls who can do reading, writing, executing
 Time, date, and user identification – data for protection, security, and
usage monitoring
 Information about files is kept in the directory structure, which is
maintained on the disk
File Operations
 Create
 Write – at write pointer location
 Read – at read pointer location
 Reposition within file - seek
 Delete
 Truncate
 Open (Fi) – search the directory structure on disk for entry Fi, and move the
content of entry to memory
 Close (Fi) – move the content of entry Fi in memory to directory structure
on disk
Access Methods
 A file is a sequence of fixed-length logical records
 Sequential Access
 Direct Access
 Other Access Methods
Operations Performed on Directory
 Search for a file

 Create a file

 Delete a file

 List a directory

 Rename a file

 Traverse the file system


Directory Organization
The directory is organized logically to obtain
 Efficiency – locating a file quickly
 Naming – convenient to users
• Two users can have same name for different files
• The same file can have several different names
 Grouping – logical grouping of files by properties, (e.g., all Java programs,
all games, …)
 Single-Level Directory
 Two-Level Directory
 Tree-Structured Directories
What are the differences between files and directory?
1. A file is a collection of information and data; a folder is a collection of files.
2. A file has an extension, e.g., .txt, .xls, .doc, .pdf, .pptx; a folder does not have any
extension.
3. A file cannot contain another file or folder; a folder can contain multiple files and
folders.
4. A file consumes a certain amount of storage; a folder has no specific size of its own,
taking up only the size of the files and folders it contains.
5. Files cannot be shared over the network on their own; folders can be shared over the
network.
Protection
 File owner/creator should be able to control:
• What can be done
• By whom
 Types of access
• Read
• Write
• Execute
• Append
• Delete
• List
Allocation Method
 An allocation method refers to how disk blocks are allocated for files:
• Contiguous
• Linked
• File Allocation Table (FAT)
Recovery
 Consistency checking – compares data in directory structure with data
blocks on disk, and tries to fix inconsistencies
• Can be slow and sometimes fails
 Use system programs to back up data from disk to another storage device
(magnetic tape, other magnetic disk, optical)
 Recover lost file or disk by restoring data from backup
