IT 402- COMPUTER ARCHITECTURE
UNIT-2 IMPORTANT QUESTION AND ANSWER
Q.1 Explain Auxiliary Memory.
Answer:
Auxiliary memory is the lowest-cost, highest-capacity, and
slowest-access storage in a computer system. It is where programs
and data are kept for long-term storage or when not in immediate
use. The most common auxiliary memory devices used in computer
systems are magnetic disks and tapes.
Magnetic Disks
A magnetic disk is a circular plate made of metal or plastic and
coated with magnetizable material. Often both sides of the disk
are used, and several disks may be stacked on one spindle with
read/write heads available on each surface. All disks rotate
together at high speed and are not stopped or started for access
purposes. Bits are stored on the magnetized surface in spots along
concentric circles known as tracks. The tracks are commonly divided
into sections known as sectors.
In most systems, the minimum quantity of information that can be
transferred is a sector. The subdivision of one disk surface into
tracks and sectors is shown in the figure.
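The track/sector layout above is often addressed with the conventional CHS-to-LBA conversion, which turns a (cylinder, head, sector) position into a single linear sector number. A minimal sketch follows; the geometry values in the example are illustrative assumptions, not a real drive's specification:

```python
def chs_to_lba(cylinder, head, sector, heads_per_cylinder, sectors_per_track):
    """Convert a (cylinder, head, sector) disk address to a linear
    block address (LBA). Sectors are conventionally numbered from 1."""
    return ((cylinder * heads_per_cylinder + head) * sectors_per_track
            + (sector - 1))

# Assumed geometry for illustration: 2 surfaces, 18 sectors per track.
print(chs_to_lba(0, 0, 1, heads_per_cylinder=2, sectors_per_track=18))  # -> 0
print(chs_to_lba(1, 0, 1, heads_per_cylinder=2, sectors_per_track=18))  # -> 36
```

Because the sector is the minimum transfer unit, the LBA is exactly the count of whole sectors that precede the requested one.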
Magnetic Tape
A magnetic tape transport consists of the electrical, mechanical,
and electronic components that provide the parts and control
mechanism for a magnetic tape unit. The tape itself is a strip of
plastic coated with a magnetic recording medium.
Bits are recorded as magnetic spots on the tape along several
tracks. Usually, seven or nine bits are recorded simultaneously to
form a character together with a parity bit. Read/write heads are
mounted one per track so that data can be recorded and read as a
sequence of characters.
Magnetic tape units can be stopped, started to move forward or in
reverse, or rewound. However, they cannot be started or stopped
fast enough between individual characters.
For this reason, information is recorded in blocks referred to as
records. Gaps of unrecorded tape are inserted between records where
the tape can be stopped.
The tape starts moving while in a gap and attains its constant
speed by the time it reaches the next record. Each record on tape
has an identification bit pattern at the beginning and end. By
reading the bit pattern at the beginning, the tape control
identifies the record number.
Q.2 Define virtual memory. Explain with diagram.
Answer: Virtual memory is a memory management technique used
by operating systems to give the appearance of a large, continuous
block of memory to applications, even if the physical memory
(RAM) is limited. It allows larger applications to run on systems
with less RAM.
The main objective of virtual memory is to support
multiprogramming. Its main advantage is that a running process does
not need to be entirely in memory, so programs can be larger than
the available physical memory. Virtual memory provides an
abstraction of main memory, eliminating concerns about storage
limitations.
A memory hierarchy, consisting of a computer system's memory
and a disk, enables a process to operate with only some portions of
its address space in RAM to allow more processes to be in
memory.
A virtual memory is what its name indicates: an illusion of a
memory that is larger than the real memory. The software component
of virtual memory is called the virtual memory manager. The basis
of virtual memory is the noncontiguous memory allocation model:
the virtual memory manager removes some components from memory to
make room for other components.
The size of virtual storage is limited by the addressing scheme
of the computer system and the amount of secondary memory
available, not by the actual number of main storage locations.
Working of virtual memory:
Virtual Memory is a technique that is implemented using both
hardware and software. It maps memory addresses used by a
program, called virtual addresses, into physical addresses in
computer memory.
● All memory references within a process are logical addresses
that are dynamically translated into physical addresses at run
time. This means that a process can be swapped in and out of
the main memory such that it occupies different places in the
main memory at different times during the course of
execution.
● A process may be broken into a number of pieces and these
pieces need not be continuously located in the main memory
during execution. The combination of dynamic run-time
address translation and the use of a page or segment table
permits this.
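The run-time translation described above can be sketched as follows. This is a toy model, not an OS implementation; the page size and the page-table contents are assumed values for illustration:

```python
PAGE_SIZE = 4096  # assumed 4 KiB pages

# Hypothetical page table: page number -> frame number (None = not resident)
page_table = {0: 5, 1: 9, 2: None, 3: 2}

def translate(virtual_addr):
    """Split a virtual address into (page number, offset) and look up
    the frame in the page table; a non-resident page is a page fault."""
    page, offset = divmod(virtual_addr, PAGE_SIZE)
    frame = page_table.get(page)
    if frame is None:
        raise LookupError(f"page fault on page {page}")
    return frame * PAGE_SIZE + offset

print(translate(4096 + 100))   # page 1 -> frame 9: 9*4096 + 100 = 36964
```

Because the mapping is looked up on every reference, the OS may move a page to a different frame between accesses and the program never notices, which is exactly the swapping behavior described above.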
Q.3 Explain cache memory with any one mapping technique.
Answer: Cache memory is a small, high-speed storage area in
a computer. The cache is a smaller and faster memory that
stores copies of the data from frequently used main memory
locations. There are various independent caches in a CPU,
which store instructions and data.
● The most important use of cache memory is to reduce the average
time to access data from the main memory.
● The concept of cache works because there exists locality of
reference (the same items or nearby items are more likely to be
accessed next) in processes.
Characteristics of Cache Memory
● Extremely fast memory type that acts as a buffer between
RAM and the CPU.
● Holds frequently requested data and instructions, ensuring
that they are immediately available to the CPU when needed.
● Costlier than main memory or disk memory but more
economical than CPU registers.
● Used to speed up processing and synchronize with the high-
speed CPU.
Levels of Memory:
● Level 1 or Registers: memory located inside the CPU itself, in
which the data the processor is operating on at that moment is
stored. The most commonly used registers are the Accumulator,
Program Counter, Address Register, etc.
● Level 2 or Cache memory: It is the fastest memory that
has faster access time where data is temporarily stored for
faster access.
● Level 3 or Main Memory: It is the memory on which the
computer currently works. It is small compared to secondary
memory, and once power is off, data no longer stays in this
memory.
● Level 4 or Secondary Memory: It is external memory
that is not as fast as the main memory but data stays
permanently in this memory.
Associative Mapping
Associative mapping is a type of cache mapping where any
block of main memory can be stored in any cache line.
Unlike direct-mapped cache, where each memory block is
restricted to a specific cache line based on its index, fully
associative mapping gives the cache the flexibility to place
a memory block in any available cache line. This improves
the hit ratio but requires a more complex system for
searching and managing cache lines.
The address structure in fully associative mapping differs from
direct mapping: the memory address has no index field, and the
entire block number serves as the tag. Any block of memory can be
placed in any cache line, so there is no fixed position for memory
blocks in the cache.
Cache Memory Structure in Associative Mapping
To determine whether a block is present in the cache, the
tag is compared with the tags stored in all cache lines. If a
match is found, it is a cache hit, and the data is retrieved
from that cache line. If no match is found, it's a cache miss,
and the required data is fetched from main memory.
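A minimal simulation of that lookup might look like the following. The LRU replacement policy is an assumption for illustration (the text above does not fix a replacement policy), and block contents are stand-in strings:

```python
from collections import OrderedDict

class FullyAssociativeCache:
    """Toy fully associative cache: any memory block may occupy any
    line; on a miss with a full cache, the least-recently-used line
    is evicted (assumed policy)."""
    def __init__(self, num_lines):
        self.lines = OrderedDict()   # tag -> block data, in LRU order
        self.num_lines = num_lines

    def access(self, tag):
        if tag in self.lines:            # tag compared against ALL lines
            self.lines.move_to_end(tag)  # refresh LRU order
            return "hit"
        if len(self.lines) >= self.num_lines:
            self.lines.popitem(last=False)  # evict the LRU line
        self.lines[tag] = f"block {tag}"    # fetch from main memory
        return "miss"

cache = FullyAssociativeCache(num_lines=2)
print([cache.access(t) for t in [7, 3, 7, 9, 3]])
# -> ['miss', 'miss', 'hit', 'miss', 'miss']
```

Note that block 7 hits on its second access even though block 3 arrived in between; in a direct-mapped cache with the same two lines, blocks mapping to the same index would have evicted each other.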
Q.4 Explain Associative Memory.
Answer:
An associative memory is a memory unit whose stored data can be
accessed by the content of the data itself rather than by an
address or memory location. Associative memory is also known as
Content Addressable Memory (CAM).
The block diagram of associative memory is shown in the
figure. It consists of a memory array and match logic for m words
with n bits per word. The argument register A and key register K
each have n bits, one for each bit of a word.
The match register M has m bits, one for each memory word.
Each word in memory is compared in parallel with the content of
the argument register.
The words that match the bits of the argument register set a
corresponding bit in the match register. After the matching
process, the bits in the match register that have been set
indicate that their corresponding words have matched.
Reading is then accomplished by sequential access to memory for
those words whose corresponding bits in the match register have
been set.
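The content-based lookup described above can be sketched in a few lines. This is a software model only; real CAM hardware compares every word simultaneously rather than in a loop:

```python
def cam_search(memory, argument):
    """Return the match register: bit i is 1 when word i equals the
    argument. Hardware performs all m comparisons in parallel;
    this model iterates for clarity."""
    return [1 if word == argument else 0 for word in memory]

memory = [0b1011, 0b0110, 0b1011, 0b0001]   # 4 words, 4 bits each
print(cam_search(memory, 0b1011))           # -> [1, 0, 1, 0]
```

The set bits in the returned list play the role of the match register M: they identify which words to read out sequentially.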
Q.5 What is paging and segmentation?
Answer: Paging
Paging is a technique used for non-contiguous memory allocation.
It is a fixed-size partitioning scheme: both main memory and
secondary memory are divided into equal fixed-size partitions.
The partitions of secondary memory are known as pages and the
partitions of main memory are known as frames.
Features of Paging
● Fixed-Size Division: Memory is divided into fixed-size
pages, simplifying memory management.
● Hardware-Defined Page Size: Page size is set by hardware
and is uniform across all pages.
● OS-Managed: The operating system handles paging,
including maintaining page tables and free frame lists.
● Eliminates External Fragmentation: Paging avoids
external fragmentation but can suffer from internal
fragmentation.
● Invisible to User: Paging is transparent to programmers and
users, simplifying software development.
Paging is a memory management method used to fetch processes from
secondary memory into main memory in the form of pages. In paging,
each process is split into parts where the size of each part is
the same as the page size.
The size of the last part may be less than the page size. The
pages of the process are stored in the frames of main memory
depending on their availability.
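A toy sketch of loading a process page by page into whatever frames are free follows; the process size, page size, and frame numbers are made-up values for illustration:

```python
import math

def load_process(process_size, page_size, free_frames):
    """Split a process into fixed-size pages and place each page in
    any available frame, returning the resulting page table.
    The frames need not be contiguous."""
    num_pages = math.ceil(process_size / page_size)  # last page may be partly empty
    if num_pages > len(free_frames):
        raise MemoryError("not enough free frames")
    return {page: free_frames.pop(0) for page in range(num_pages)}

# Hypothetical example: a 10,000-byte process, 4,096-byte pages,
# and frames 3, 7, 1 currently free (non-contiguous).
print(load_process(10_000, 4_096, [3, 7, 1]))  # -> {0: 3, 1: 7, 2: 1}
```

Since 10,000 bytes need three 4,096-byte pages, the unused tail of the last frame is wasted space: the internal fragmentation mentioned in the features list above.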
Segmentation
Segmentation is another non-contiguous memory allocation
scheme, similar to paging. However, unlike paging which
divides a process into fixed-size pages segmentation divides
memory into variable-sized segments that correspond to
logical units such as functions, arrays, or data structures.
Features of Segmentation
● Variable-Size Division: Memory is divided into logical
segments of varying sizes based on program structure.
● User/Programmer-Defined Sizes: Segment sizes are
defined by the programmer or compiler, reflecting logical
program units.
● Compiler-Managed: Segmentation is primarily managed by
the compiler, with OS support for memory allocation.
● Supports Sharing and Protection: Segmentation facilitates
sharing code/data between processes and easy implementation
of protection.
● Visible to User: Segmentation is visible to programmers,
allowing better control over logical memory organization.
In segmentation, both main memory and secondary memory
are not divided into equal-sized partitions. Instead, they are
split into segments of varying sizes. These segments are
tracked using a data structure called the segment table.
The segment table stores information about each segment,
primarily:
● Base: The starting physical address of the segment in
memory.
● Limit: The length (or size) of the segment.
When accessing memory, the CPU generates a logical address
composed of:
● A Segment Number
● A Segment Offset
The MMU (Memory Management Unit) uses the segment
number to find the corresponding base and limit in the
segment table. If the offset is less than the limit, the address is
considered valid, and the physical address is computed by
adding the offset to the base. If the offset exceeds the limit, an
error (segmentation fault) occurs, indicating an invalid address
access attempt.
The above figure shows the translation of a logical address to a
physical address.
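The base/limit check performed by the MMU can be sketched as follows; the segment table entries are hypothetical values for illustration:

```python
# Hypothetical segment table: segment number -> (base, limit)
segment_table = {0: (1400, 1000), 1: (6300, 400), 2: (4300, 1100)}

def translate(segment, offset):
    """MMU-style translation: the offset must be less than the
    segment's limit; otherwise a segmentation fault is raised."""
    base, limit = segment_table[segment]
    if offset >= limit:
        raise MemoryError(f"segmentation fault: offset {offset} >= limit {limit}")
    return base + offset

print(translate(2, 53))    # valid: 4300 + 53 = 4353
```

An access such as `translate(1, 400)` fails, because offset 400 is not less than segment 1's limit of 400; this is the invalid-access case described above.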
The key register provides a mask for selecting a particular field
or key in the argument word. If the key register contains all 1's,
the entire argument is compared with each memory word.
Otherwise, only those bits of the argument that have 1's in their
corresponding positions of the key register are compared.
Thus the key provides a mask identifying a piece of information
that specifies how the reference to memory is made.
The following figure shows the relation between the memory array
and the external registers in an associative memory.
The cells in the array are denoted by the letter C with two
subscripts. The first subscript gives the word number and the
second gives the bit position in the word, so cell Cij is the cell
for bit j in word i.
A bit Aj in the argument register is compared with all the bits in
column j of the array, provided that Kj = 1. This is done for all
columns j = 1, 2, . . . , n.
If a match occurs between all the unmasked bits of the argument
and the bits in word i, the corresponding bit Mi in the match
register is set to 1. If one or more unmasked bits of the argument
and the word do not match, Mi is cleared to 0.
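This masked comparison can be modeled in a few lines; again this is a software sketch of logic that hardware evaluates for all words in parallel:

```python
def match_register(memory, argument, key):
    """Set M[i] = 1 when word i agrees with the argument in every
    bit position where the key register holds a 1; masked-out bit
    positions (key bit 0) are ignored in the comparison."""
    return [1 if (word & key) == (argument & key) else 0
            for word in memory]

memory = [0b1011, 0b0111, 0b1010, 0b0011]
# Key 0b0011 masks out the two high bits, so only the low two bits compare.
print(match_register(memory, 0b1011, 0b0011))   # -> [1, 1, 0, 1]
```

With the key set to all 1's (0b1111 here), the function degenerates to an exact whole-word match, matching the all-1's case described above.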