Memory Interleaving
In this method, the main memory is divided into ‘n’ equal-sized modules, and the CPU has a separate Memory Address Register and Memory Buffer Register for each memory module. In addition, the CPU has ‘n’ instruction registers and a memory access system. When a program is loaded into the main memory, its successive instructions are stored in successive memory modules.
For example, if n = 4 and the four memory modules are M1, M2, M3, and M4, then the 1st instruction will be stored in M1, the 2nd in M2, the 3rd in M3, the 4th in M4, the 5th in M1, the 6th in M2, and so on.
Now, during the execution of the program, when the processor issues a memory fetch command, the memory access system creates ‘n’ consecutive memory addresses and places them in the Memory Address Registers in the correct order. A memory read command reads all the ‘n’ memory modules simultaneously, retrieves the ‘n’ consecutive instructions, and loads them into the ‘n’ instruction registers. Thus each fetch for a new instruction results in the loading of ‘n’ consecutive instructions into the ‘n’ instruction registers of the CPU.
Since instructions are normally executed in the sequence in which they are written, the availability of ‘n’ successive instructions in the CPU avoids a memory access after each instruction execution, and the total execution time is reduced.
Obviously, the prefetched successive instructions are of no use when a branch instruction is encountered during execution. This is because the branch target requires a new set of ‘n’ successive instructions, which overwrite the previously stored instructions, some of which were loaded but never executed. Even so, the method is quite effective in minimising the memory-processor speed mismatch, because branch instructions do not occur frequently in a program.
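The round-robin placement described above can be sketched in a few lines. The modulo rule and the helper names below are illustrative, not part of the text:

```python
def module_for(address, n):
    """With n-way interleaving, successive addresses fall in
    successive modules, wrapping around in round-robin order."""
    return address % n

# One instruction fetch loads n consecutive instructions,
# one from each of the n modules, in a single parallel read.
def fetch_block(address, n):
    return [(address + i, module_for(address + i, n)) for i in range(n)]

# n = 4: addresses 0..5 map to modules M1, M2, M3, M4, M1, M2.
print([module_for(a, 4) + 1 for a in range(6)])  # [1, 2, 3, 4, 1, 2]
```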
Associative Memory
The time required to find an item stored in memory can be reduced considerably if stored data
can be identified for access by the contents of the data itself rather than by an address. A
memory unit accessed by content of the data is called an associative memory or content addressable
memory (CAM). This type of memory is accessed simultaneously and in parallel on the basis of data
content rather than by specific address or location. When a word is written in an associative memory, no address is given. The memory is capable of finding an empty, unused location in which to store the word. When a word is to be read from an associative memory, the content of the word, or part of the word, is specified. The memory locates all words that match the specified content and marks them for reading.
Because of its organization, the associative memory is uniquely suited to do parallel searches by
data association. Moreover, searches can be done on an entire word or on a specific field within a
word. An associative memory is more expensive than a random access memory because each cell must have storage capability as well as logic circuits for matching its content with an external argument. For this reason, associative memories are used in applications where the search time is critical and must be very short.
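A content-addressable search of this kind can be simulated in software. The masked-match scheme below is a common CAM idiom for searching on a specific field of a word; the word values and the loop (standing in for the parallel hardware comparison) are illustrative:

```python
def cam_search(memory, pattern, mask):
    """Content-addressable search: every stored word is compared
    against the search argument (in hardware, all comparisons run
    in parallel), matching only the bit positions set in the mask."""
    return [i for i, word in enumerate(memory)
            if (word & mask) == (pattern & mask)]

words = [0b1011_0001, 0b1011_1111, 0b0001_0001]
# Search on the high nibble only (one "field" of each word).
hits = cam_search(words, 0b1011_0000, 0b1111_0000)
print(hits)  # [0, 1]
```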
VIRTUAL MEMORY
Virtual memory is a concept used in some large computer systems that permits the user to construct programs as though a large memory space were available, equal to the totality of secondary memory. Each address generated by the CPU goes through an address mapping from the so-called virtual address to a physical address in the main memory. Virtual memory is used to give programmers the illusion that they have a very large memory at their disposal, even though the computer actually has a relatively small main memory. A virtual memory system provides a mechanism for translating program-generated addresses into correct main memory locations. This is done dynamically, while programs are being executed in the CPU. The translation, or mapping, is handled automatically by the hardware by means of a mapping table.
Address Space and Memory Space
An address used by a programmer will be called a virtual address, and the set of such addresses
the address space. An address in the main memory is called a physical address. The set of such
locations is called the memory space. Thus, the address space is the set of addresses generated
by programs as they reference instructions and data; the memory space consists of the actual
main memory locations directly addressable for processing.
Consider a computer with a main-memory capacity of 64K words (K = 1024). Sixteen bits are needed to specify a physical address in memory, since 64K = 2^16. Suppose that the computer has auxiliary memory for storing information equivalent to the capacity of 16 main memories. Denoting the address space by N and the memory space by M, we then have for this example N = 16 × 64K = 1024K and M = 64K.
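The arithmetic can be checked directly; the only inputs are the figures quoted above:

```python
K = 1024
M = 64 * K   # memory space: 64K words of main memory
N = 16 * M   # address space: 16 main memories of auxiliary storage

# 64K = 2^16 -> 16-bit physical address; 1024K = 2^20 -> 20-bit virtual address.
print(M == 2**16, N == 2**20)  # True True
print(N // K)                  # 1024, i.e. N = 1024K words
```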
In a multiprogramming computer system, programs and data are transferred to and from auxiliary
memory and main memory based on demands imposed by the CPU. Suppose that program 1 is
currently being executed in the CPU. Program 1 and a portion of its associated data are moved
from secondary memory into the main memory. Portions of programs and data need not be in
contiguous locations in memory since information is being moved in and out, and empty spaces
may be available in scattered locations in memory.
In our example, the address field of an instruction code will consist of 20 bits, but physical memory addresses must be specified with only 16 bits. Thus the CPU will reference instructions and data with a 20-bit address, but the information at this address must be taken from physical memory, because access to auxiliary storage for individual words would be prohibitively slow. A mapping table is then needed, as shown in the figure, to map a 20-bit virtual address to a 16-bit physical address. The mapping is a dynamic operation, which means that every address is translated immediately as a word is referenced by the CPU.
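One common way to organise such a mapping table is to split each address into a page number and an offset within the page. The text does not fix a page size, so the 1K-word pages below (10-bit offset, leaving a 10-bit virtual page number and a 6-bit physical frame number) and the table entries are assumptions for illustration:

```python
PAGE_BITS = 10            # assumed page size: 1K words (not fixed by the text)
PAGE_SIZE = 1 << PAGE_BITS

# Hypothetical mapping table: virtual page number -> physical frame number.
# Only pages currently resident in main memory have entries.
page_table = {0: 3, 1: 0, 5: 2}

def translate(virtual_addr):
    """Map a 20-bit virtual address to a 16-bit physical address:
    look up the page in the table, keep the offset unchanged."""
    page, offset = divmod(virtual_addr, PAGE_SIZE)
    frame = page_table[page]   # KeyError here would mean the page is not resident
    return frame * PAGE_SIZE + offset

# Virtual address 5 lies in page 0, which maps to frame 3.
print(translate(5))  # 3 * 1024 + 5 = 3077
```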