
Operating System: Memory Management Concepts

Basic Bare Machine

A Basic Bare Machine refers to a computing system that has no operating system loaded or running. It typically comprises only the hardware components, such as the CPU, memory, input/output devices, and storage. This type of machine operates with direct interaction between the user or programmer and the hardware itself. In a bare machine setup, programs must be loaded and executed manually using low-level programming, such as machine code or assembly language. There are no abstractions, interfaces, or safety mechanisms to manage hardware resources, making it extremely difficult and error-prone to use. Bare machines are generally used in embedded systems or during the initial stages of system development, where complete control over hardware is necessary.

Resident Monitor

A Resident Monitor is one of the earliest forms of operating system programs. It stays resident in memory to manage and oversee job execution. The monitor automates job sequencing and control by reading job control language (JCL) instructions and initiating programs accordingly. It serves as a basic form of job scheduler and memory manager. In early computing, when batch processing was the norm, the Resident Monitor played a critical role in managing the execution of jobs without human intervention. It could handle tasks like loading programs into memory, managing input/output, and handling errors. Although primitive compared to modern OSes, Resident Monitors laid the foundation for the development of more complex operating systems by introducing the concept of automated resource management.

Multiprogramming with Fixed Partitions

Multiprogramming with Fixed Partitions is a memory management technique where the main memory is divided into fixed-size partitions at system boot time. Each partition can contain exactly one process, and once a partition is assigned to a process, no other process can use that space until the current one finishes. This method allows multiple processes to reside in memory simultaneously, enabling the CPU to switch between them for better utilization. However, it suffers from internal fragmentation, where a partition may be larger than the process it holds, resulting in wasted memory space. The number of processes that can reside in memory is limited by the number of fixed partitions. Fixed partitioning is simple to implement but not memory-efficient, especially when process sizes vary greatly.
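
A minimal C sketch of how fixed partitioning behaves; the partition sizes and incoming process sizes below are made-up values for illustration, not from any particular system. Each process is placed in the first free partition large enough to hold it, and the leftover space inside that partition is exactly the internal fragmentation described above.

```c
#include <stdio.h>
#include <stdbool.h>

#define NPART 4

typedef struct {
    int size_kb;   /* fixed capacity, set once at "boot" */
    int used_kb;   /* size of the resident process, 0 if free */
} Partition;

int main(void) {
    Partition part[NPART] = { {100, 0}, {200, 0}, {300, 0}, {400, 0} };
    int requests[] = { 90, 150, 350, 120 };   /* incoming process sizes (KB) */

    for (int r = 0; r < 4; r++) {
        bool placed = false;
        for (int i = 0; i < NPART && !placed; i++) {
            /* a process fits only in a free partition at least as large */
            if (part[i].used_kb == 0 && part[i].size_kb >= requests[r]) {
                part[i].used_kb = requests[r];
                printf("process of %3d KB -> partition %d (%d KB), "
                       "internal fragmentation %d KB\n",
                       requests[r], i, part[i].size_kb,
                       part[i].size_kb - requests[r]);
                placed = true;
            }
        }
        if (!placed)
            printf("process of %3d KB could not be placed\n", requests[r]);
    }
    return 0;
}
```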



Multiprogramming with Variable Partitions

In Multiprogramming with Variable Partitions, memory is not divided into fixed-size blocks. Instead, memory is allocated dynamically based on the size of the process. This eliminates internal fragmentation as each process gets only as much memory as it requires. However, it introduces external fragmentation, where free memory exists but is broken into small non-contiguous blocks, making it difficult to allocate memory for larger processes. To manage external fragmentation, techniques like compaction are used to rearrange memory. Variable partitioning allows better utilization of memory and can adapt to changing process sizes, offering more flexibility than fixed partitioning. Memory allocation strategies such as first-fit, best-fit, and worst-fit are commonly used with this technique.
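
The allocation strategies differ only in which free hole they pick. Below is a small C sketch comparing first-fit and best-fit over an illustrative list of free holes; worst-fit would simply pick the largest adequate hole instead.

```c
#include <stdio.h>

#define NHOLES 5

/* return the index of the first hole big enough, or -1 */
int first_fit(const int holes[], int n, int req) {
    for (int i = 0; i < n; i++)
        if (holes[i] >= req) return i;
    return -1;
}

/* return the index of the smallest adequate hole, or -1 */
int best_fit(const int holes[], int n, int req) {
    int best = -1;
    for (int i = 0; i < n; i++)
        if (holes[i] >= req && (best == -1 || holes[i] < holes[best]))
            best = i;
    return best;
}

int main(void) {
    int holes[NHOLES] = { 100, 500, 200, 300, 600 };  /* free blocks, KB */
    int req = 212;

    printf("first-fit places %d KB in hole %d\n", req,
           first_fit(holes, NHOLES, req));   /* hole 1 (500 KB) */
    printf("best-fit  places %d KB in hole %d\n", req,
           best_fit(holes, NHOLES, req));    /* hole 3 (300 KB) */
    return 0;
}
```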

Protection Schemes

Protection schemes in operating systems are mechanisms that prevent unauthorized access to memory and other system resources by processes. They ensure that a process can only access its own allocated memory and not interfere with other processes or the OS itself. Common protection techniques include the use of base and limit registers, which define the valid address range for a process. Segmentation and paging also provide logical separation and controlled access. Advanced protection includes access control lists, hardware-enforced memory protection, and privileged instruction sets. Protection schemes are critical in multiprogramming environments to maintain system stability, security, and data integrity.
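
As an illustration of the base-and-limit scheme, here is a small C sketch with hypothetical register values: every logical address must be less than the limit register before the base register relocates it, otherwise the hardware would trap to the OS.

```c
#include <stdio.h>
#include <stdbool.h>

typedef struct {
    unsigned base;    /* start of the process's region in physical memory */
    unsigned limit;   /* size of the region */
} Registers;

/* check the address against the limit, then relocate it by the base */
bool translate(Registers r, unsigned logical, unsigned *physical) {
    if (logical >= r.limit)
        return false;              /* trap: address outside the region */
    *physical = r.base + logical;  /* relocation */
    return true;
}

int main(void) {
    Registers r = { .base = 30000, .limit = 12000 };
    unsigned phys;

    if (translate(r, 500, &phys))
        printf("logical 500 -> physical %u\n", phys);    /* 30500 */
    if (!translate(r, 15000, &phys))
        printf("logical 15000 -> protection trap\n");
    return 0;
}
```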

Paging

Paging is a memory management technique that eliminates external fragmentation by dividing physical memory and logical memory into fixed-size blocks called frames and pages, respectively. When a process is loaded, its pages can be placed into any available memory frames. A page table keeps track of the mapping between the process's pages and the memory frames. Paging allows non-contiguous memory allocation, making memory use more efficient. However, it can introduce overhead due to the need for translation via the page table. Modern CPUs often include a Translation Lookaside Buffer (TLB) to speed up page table lookups. Paging is fundamental in implementing virtual memory systems.
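
A small C sketch of the translation step, assuming 4 KB pages and a made-up page table: the logical address is split into a page number and an offset, the page table supplies the frame, and the offset is reattached.

```c
#include <stdio.h>

#define PAGE_SIZE 4096u   /* 2^12 bytes, so the offset is the low 12 bits */

int main(void) {
    /* page_table[p] = frame holding page p (hypothetical mapping) */
    unsigned page_table[] = { 5, 2, 7, 0 };

    unsigned logical = 2 * PAGE_SIZE + 123;   /* page 2, offset 123 */
    unsigned page    = logical / PAGE_SIZE;   /* which page */
    unsigned offset  = logical % PAGE_SIZE;   /* position within the page */
    unsigned frame   = page_table[page];      /* page-table lookup */
    unsigned physical = frame * PAGE_SIZE + offset;

    printf("logical %u = page %u, offset %u -> frame %u -> physical %u\n",
           logical, page, offset, frame, physical);
    return 0;
}
```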

Segmentation

Segmentation is a memory management technique that divides the memory of a process into segments based on logical divisions such as code, data, and stack. Each segment can be of variable size and is identified by a segment number and offset. Unlike paging, which uses fixed-size blocks, segmentation matches the program's logical structure. This provides easier protection and sharing of code modules. Segmentation simplifies handling of growing data structures like stacks. However, it can suffer from external fragmentation and may require complex memory allocation algorithms. The system uses a segment table to map logical segments to physical addresses in memory.
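
A minimal C sketch of segment-table translation with illustrative base and limit values: the offset is checked against the segment's limit before being added to its base.

```c
#include <stdio.h>

typedef struct { unsigned base, limit; } Segment;

int main(void) {
    Segment table[] = {
        { 1400, 1000 },   /* segment 0: code  */
        { 6300,  400 },   /* segment 1: data  */
        { 4300,  600 },   /* segment 2: stack */
    };

    unsigned seg = 2, offset = 53;          /* logical address (2, 53) */
    if (offset < table[seg].limit)          /* limit check first */
        printf("(%u, %u) -> physical %u\n",
               seg, offset, table[seg].base + offset);   /* 4353 */
    else
        printf("(%u, %u) -> trap: offset beyond segment limit\n",
               seg, offset);
    return 0;
}
```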

Paged Segmentation

Paged Segmentation is a hybrid memory management technique that combines the benefits of both paging and segmentation. In this approach, each segment is divided into pages, and these pages are loaded into memory frames. This allows for the logical division of programs via segmentation and the efficient memory utilization of paging. A segment table is used to locate the page tables, and each page table maps the segment's pages to physical memory frames. Paged segmentation is complex to implement but offers a balance between protection, flexibility, and efficient memory usage. It is commonly used in modern computer architectures to implement virtual memory.
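
A sketch of the two-level lookup, with made-up tables: the segment number selects a page table, and the in-segment offset is then split into page number and page offset exactly as in plain paging.

```c
#include <stdio.h>

#define PAGE_SIZE 1024u

int main(void) {
    unsigned seg0_pages[] = { 9, 4, 7 };   /* frames for segment 0's pages */
    unsigned seg1_pages[] = { 2, 6 };      /* frames for segment 1's pages */
    unsigned *seg_table[] = { seg0_pages, seg1_pages };

    unsigned seg = 1;
    unsigned offset = 1 * PAGE_SIZE + 200;     /* page 1, offset 200 */
    unsigned page     = offset / PAGE_SIZE;    /* page within the segment */
    unsigned page_off = offset % PAGE_SIZE;
    unsigned frame    = seg_table[seg][page];  /* segment table -> page table */

    printf("(seg %u, offset %u) -> frame %u -> physical %u\n",
           seg, offset, frame, frame * PAGE_SIZE + page_off);
    return 0;
}
```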

Virtual Memory Concepts

Virtual Memory is a memory management technique that gives an application the illusion of having a large contiguous block of memory, even if the physical memory is smaller. It allows processes to execute even if they are not entirely in physical memory. This is achieved by storing parts of the program on disk and loading them into memory only when required. Virtual memory uses address translation, typically involving paging or segmentation, to map virtual addresses to physical addresses. It enables larger and more complex applications to run on limited hardware, improves multitasking, and increases system responsiveness. Key components include the page table, TLB, and backing store (usually disk).

Demand Paging

Demand Paging is a virtual memory technique where pages of a program are not loaded into memory until they are needed. When a program accesses a page not currently in memory, a page fault occurs and the OS loads the required page from disk. This reduces memory usage and improves efficiency by only loading the necessary parts of a program. However, frequent page faults can degrade performance, especially if disk access is slow. Demand paging works best with the principle of locality, as programs tend to use a limited set of pages over a period. Efficient page replacement algorithms and sufficient memory are crucial to maintaining good performance.
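
A simplified C sketch of the access path described above; load_from_disk() is a hypothetical stand-in for real backing-store I/O, and frames are handed out sequentially for brevity. A page not marked present triggers a (simulated) page fault before the access completes.

```c
#include <stdio.h>
#include <stdbool.h>

#define NPAGES 8

typedef struct { bool present; unsigned frame; } PTE;

static PTE page_table[NPAGES];
static unsigned next_free_frame = 0;
static unsigned page_faults = 0;

/* placeholder: a real OS would read the page from the backing store */
static unsigned load_from_disk(unsigned page) {
    (void)page;
    return next_free_frame++;
}

unsigned access(unsigned page) {
    if (!page_table[page].present) {          /* page fault */
        page_faults++;
        page_table[page].frame = load_from_disk(page);
        page_table[page].present = true;
    }
    return page_table[page].frame;
}

int main(void) {
    unsigned trace[] = { 0, 1, 0, 3, 1, 0 };  /* hypothetical reference string */
    for (int i = 0; i < 6; i++)
        printf("page %u -> frame %u\n", trace[i], access(trace[i]));
    printf("%u page faults (first touch of each page)\n", page_faults);
    return 0;
}
```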

Performance of Demand Paging

The performance of demand paging is highly dependent on the page fault rate. The effective access time (EAT) combines the ordinary memory access time with the much larger page fault service time, weighted by the probability p of a page fault: EAT = (1 - p) × memory access time + p × page fault service time. A low page fault rate leads to better performance, while a high rate can cause the system to slow down dramatically. Optimizations such as prefetching, using fast storage, and efficient replacement algorithms can improve performance. Demand paging allows systems to handle larger workloads and provides flexibility in memory usage, but it requires careful tuning to avoid performance degradation due to frequent page faults.
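
A worked example in C with illustrative textbook-style numbers (200 ns memory access, 8 ms fault service, both assumed here): even a fault probability of one in a thousand inflates the effective access time by a factor of about forty.

```c
#include <stdio.h>

int main(void) {
    double mem_ns   = 200.0;        /* main-memory access time, ns */
    double fault_ns = 8000000.0;    /* 8 ms page fault service time, in ns */

    for (int k = 0; k <= 2; k++) {
        double p   = k / 1000.0;    /* page fault probability */
        double eat = (1.0 - p) * mem_ns + p * fault_ns;
        printf("p = %.3f  ->  EAT = %9.1f ns\n", p, eat);
        /* p = 0.001 gives 199.8 + 8000 = 8199.8 ns, roughly 40x slower */
    }
    return 0;
}
```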

Page Replacement Algorithms

Page Replacement Algorithms decide which memory page to evict when a new page must be loaded but no free frame is available. Common algorithms include FIFO (First-In, First-Out), which replaces the oldest page; LRU (Least Recently Used), which replaces the page that has not been used for the longest time; and Optimal, which replaces the page that will not be used for the longest time in the future (a theoretical benchmark). Other algorithms include Second-Chance, Clock, and LFU (Least Frequently Used). The choice of algorithm affects system performance, particularly under heavy load. A good algorithm minimizes page faults and ensures efficient memory utilization.
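
A small FIFO simulation in C over a short, illustrative reference string with three frames; LRU would differ only in evicting the least recently used resident page instead of the oldest-loaded one.

```c
#include <stdio.h>
#include <stdbool.h>

#define NFRAMES 3

int main(void) {
    int ref[] = { 7, 0, 1, 2, 0, 3, 0, 4 };   /* reference string */
    int n = sizeof ref / sizeof ref[0];
    int frames[NFRAMES] = { -1, -1, -1 };     /* -1 = empty frame */
    int oldest = 0, faults = 0;

    for (int i = 0; i < n; i++) {
        bool hit = false;
        for (int f = 0; f < NFRAMES; f++)
            if (frames[f] == ref[i]) hit = true;
        if (!hit) {
            frames[oldest] = ref[i];            /* evict the oldest resident */
            oldest = (oldest + 1) % NFRAMES;    /* advance the FIFO pointer */
            faults++;
        }
        printf("ref %d: [%2d %2d %2d] %s\n", ref[i],
               frames[0], frames[1], frames[2], hit ? "hit" : "fault");
    }
    printf("%d faults in %d references\n", faults, n);
    return 0;
}
```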

Thrashing

Thrashing is a condition in virtual memory systems where the CPU spends more time swapping pages in and out of memory than executing processes. It occurs when there is insufficient memory and too many processes are competing for it, leading to a high page fault rate. As the system tries to load the required pages, it keeps replacing pages that will soon be needed again, causing a vicious cycle of faults. This drastically reduces performance and can make the system nearly unusable. Solutions include reducing the degree of multiprogramming, increasing RAM, or using better page replacement algorithms. Thrashing is a sign of system overload and poor memory management.



Cache Memory Organization

Cache memory is a small, fast type of volatile memory located close to the CPU to temporarily store frequently accessed data and instructions. It reduces the average time to access memory by holding copies of data from main memory. Cache memory is organized using various mapping techniques such as direct-mapped cache, fully associative cache, and set-associative cache. A direct-mapped cache maps each block of main memory to exactly one cache line, while a fully associative cache allows any block to be placed anywhere. A set-associative cache is a hybrid approach that divides the cache into several sets and allows a block to go into any line within a specific set. Efficient cache design significantly boosts system performance.
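
A sketch of how a direct-mapped cache decodes an address, assuming illustrative sizes of 16-byte blocks and 64 lines: the low bits give the offset within the block, the middle bits select the line, and the remaining bits form the tag compared on each lookup.

```c
#include <stdio.h>

#define OFFSET_BITS 4u    /* 16-byte blocks  -> low 4 bits */
#define INDEX_BITS  6u    /* 64 cache lines  -> next 6 bits */

int main(void) {
    unsigned addr = 0x2ABC;

    unsigned offset = addr & ((1u << OFFSET_BITS) - 1);
    unsigned index  = (addr >> OFFSET_BITS) & ((1u << INDEX_BITS) - 1);
    unsigned tag    = addr >> (OFFSET_BITS + INDEX_BITS);

    /* two addresses with the same index but different tags conflict,
       since each memory block maps to exactly one line */
    printf("addr 0x%X -> tag 0x%X, line %u, offset %u\n",
           addr, tag, index, offset);
    return 0;
}
```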

Locality of Reference

Locality of Reference refers to the tendency of programs to access a relatively small portion of their address space at any given time. There are two main types: Temporal Locality and Spatial Locality. Temporal Locality means recently accessed memory locations are likely to be accessed again soon, while Spatial Locality means nearby memory locations are likely to be accessed soon. This behavior is leveraged by caching and memory management systems to optimize performance. Understanding locality helps in designing better caching algorithms and page replacement policies, as it predicts which data will be used in the near future. It is a foundational principle in computer architecture and operating systems.
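
A small C illustration of spatial locality: because C stores two-dimensional arrays in row-major order, the first loop below touches consecutive addresses while the second strides across memory, which is why the row-wise order is typically much faster on large arrays.

```c
#include <stdio.h>

#define N 1000

static int a[N][N];

int main(void) {
    long sum = 0;

    /* good spatial locality: consecutive elements of each row */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            sum += a[i][j];

    /* poor spatial locality: jumps N * sizeof(int) bytes per access */
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            sum += a[i][j];

    printf("sum = %ld\n", sum);
    return 0;
}
```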
