COA Paper Summer 2023
1Q(a)
Q: Write the name of basic computer registers with their functionalities.
ANS:
Register (Symbol) – Number of bits – Function
Data Register (DR) – 16 – Holds the memory operand
Address Register (AR) – 12 – Holds the address for memory
Accumulator (AC) – 16 – Processor register for arithmetic and logic results
Instruction Register (IR) – 16 – Holds the instruction code
Program Counter (PC) – 12 – Holds the address of the next instruction
Temporary Register (TR) – 16 – Holds temporary data
Input Register (INPR) – 8 – Holds the input character
Output Register (OUTR) – 8 – Holds the output character
1Q(b)
Q: Discuss 4-bit binary adder with neat diagram.
ANS:
• A binary adder is a digital circuit that produces the arithmetic sum of two binary numbers of any length.
• It is constructed using full-adder circuits connected in cascade.
• The output carry from one full-adder is connected to the input carry of the next full-adder.
• The following block diagram demonstrates the interconnections of four full-adder circuits to support
a 4-bit binary adder.
• The augend bits of A and the addend bits of B are designated by subscript numbers from right to left, with subscript 0 denoting the low-order bit. The carries are connected in a chain through the full-adders.
• The input carry to the binary adder is C0 and the output carry is C4. The S outputs of the full-adders
create the needed sum bits.
• An n-bit binary adder requires n full-adders. The output carry from each full-adder is connected to the input carry of the next higher-order full-adder. The n data bits for the A inputs come from one register, such as R1, and the n data bits for the B inputs come from another register, such as R2. The sum can be transferred to a third register or to one of the source registers (R1 or R2), replacing its previous content.
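The ripple-carry arrangement described above can be sketched in a few lines of Python. This is only an illustrative model (the function names are invented, not from the text), with the bits supplied LSB first:

def full_adder(a, b, cin):
    # One full-adder stage: returns (sum bit, carry out).
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def binary_adder_4bit(A, B, c0=0):
    # Chain four full adders; A and B are bit lists [A0..A3], LSB first.
    carry = c0
    S = []
    for i in range(4):
        s, carry = full_adder(A[i], B[i], carry)
        S.append(s)
    return S, carry              # S holds the sum bits S0..S3, carry is C4

# Example: 0110 (6) + 0111 (7) = 1101 (13), output carry C4 = 0
S, C4 = binary_adder_4bit([0, 1, 1, 0], [1, 1, 1, 0])
print(S, C4)                     # [1, 0, 1, 1] (i.e. 1101 read MSB first), 0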
1Q(c)
Q: Enlist various kinds of addressing modes. Explain any five of same and support your answer by
taking small example.
ANS:
2Q(c)
Q: Write two-address, one-address, and zero-address instruction programs for the following arithmetic expression:
X = (A + B) * (C – D / E) + F * G
ANS:
Two-Address Instructions
1. LOAD A, R1 (Load A into register R1)
2. ADD B, R1 (Add B to R1)
3. LOAD D, R2 (Load D into register R2)
4. DIV E, R2 (Divide R2 by E)
5. LOAD C, R3 (Load C into register R3)
6. SUB R2, R3 (Subtract R2 from R3)
7. MUL R1, R3 (Multiply R1 by R3)
8. LOAD F, R4 (Load F into register R4)
9. MUL G, R4 (Multiply R4 by G)
10. ADD R4, R3 (Add R4 to R3)
11. STORE R3, X (Store the result in X)
One-Address Instructions
1. LOAD A (Load A into AC)
2. ADD B (Add B to AC)
3. STORE T1 (Store result in T1)
4. LOAD D (Load D into AC)
5. DIV E (Divide AC by E)
6. STORE T2 (Store result in T2)
7. LOAD C (Load C into AC)
8. SUB T2 (Subtract T2 from AC)
9. STORE T3 (Store result in T3)
10. LOAD T1 (Load T1 into AC)
11. MUL T3 (Multiply AC by T3)
12. STORE T4 (Store result in T4)
13. LOAD F (Load F into AC)
14. MUL G (Multiply AC by G)
15. ADD T4 (Add T4 to AC)
16. STORE X (Store the result in X)
Zero-Address Instructions (Stack-Based)
1. PUSH A (Push A onto stack)
2. PUSH B (Push B onto stack)
3. ADD (Pop two values, add, push result)
4. PUSH C (Push C onto stack)
5. PUSH D (Push D onto stack)
6. PUSH E (Push E onto stack)
7. DIV (Pop E and D, compute D / E, push result)
8. SUB (Pop the quotient and C, compute C – D / E, push result)
9. MUL (Pop two values, multiply, push result)
10. PUSH F (Push F onto stack)
11. PUSH G (Push G onto stack)
12. MUL (Pop two values, multiply, push result)
13. ADD (Pop two values, add, push result)
14. POP X (Pop result and store in X)
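The stack evaluation above can be verified with a small Python sketch of a zero-address machine; the operand values below are arbitrary assumptions chosen only to test the program:

# Minimal stack machine check for X = (A + B) * (C - D / E) + F * G
A, B, C, D, E, F, G = 2, 3, 10, 8, 4, 5, 6     # arbitrary test values
stack = []

def push(v):
    stack.append(v)

def binop(op):
    # Pop the top two values, apply op(second, top), push the result.
    top = stack.pop()
    second = stack.pop()
    push(op(second, top))

push(A); push(B); binop(lambda x, y: x + y)     # A + B
push(C); push(D); push(E)
binop(lambda x, y: x / y)                       # D / E
binop(lambda x, y: x - y)                       # C - D / E
binop(lambda x, y: x * y)                       # (A + B) * (C - D / E)
push(F); push(G); binop(lambda x, y: x * y)     # F * G
binop(lambda x, y: x + y)
X = stack.pop()                                 # POP X
print(X, (A + B) * (C - D / E) + F * G)         # both print 70.0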
2Q(c)OR
Q: Assume A = + 6 and B = + 7, apply Booth algorithm for multiplying A and B. Make necessary assumptions if
required.
ANS:
Here no register size given so, assume 4 bits
A = (6)10 → (0110)2
B = (7)10 → (0111)2
From the Booth flowchart, when Qn Qn+1 = 10 we need AC = AC - A. To perform this subtraction by addition we use -A, the 2's complement of A:
-A = 2's complement of A = 1010
The multiplication then proceeds as follows (initially AC = 0000, QR = B = 0111, Qn+1 = 0, SC = 4):

Qn Qn+1   Operation                         AC     QR     Qn+1   SC
-  -      Initial values                    0000   0111   0      4
1  0      AC = AC - A (AC = AC + 1010)      1010   0111   0
          Arithmetic shift right (ASHR)     1101   0011   1      3
1  1      No operation; ASHR                1110   1001   1      2
1  1      No operation; ASHR                1111   0100   1      1
0  1      AC = AC + A (AC = AC + 0110)      0101   0100   1
          ASHR                              0010   1010   0      0

Product = AC QR = 0010 1010 = (42)10 = 6 x 7.
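The same procedure can be cross-checked with a short Python sketch of Booth's algorithm; the register names follow the AC, QR, Qn+1 convention used above, and this is only a minimal model returning the raw 2n-bit product pattern:

def booth_multiply(A, B, n=4):
    # Booth's algorithm with n-bit registers; returns the 2n-bit product pattern.
    mask = (1 << n) - 1
    BR, NEG_BR = A & mask, (-A) & mask          # multiplicand and its 2's complement
    AC, QR, Q1 = 0, B & mask, 0                 # accumulator, multiplier, Qn+1 bit
    for _ in range(n):                          # SC = n iterations
        qn = QR & 1
        if (qn, Q1) == (1, 0):
            AC = (AC + NEG_BR) & mask           # AC = AC - BR
        elif (qn, Q1) == (0, 1):
            AC = (AC + BR) & mask               # AC = AC + BR
        combined = (AC << (n + 1)) | (QR << 1) | Q1     # AC, QR, Qn+1 as one unit
        sign = AC >> (n - 1)
        combined = (combined >> 1) | (sign << (2 * n))  # arithmetic shift right
        AC = (combined >> (n + 1)) & mask
        QR = (combined >> 1) & mask
        Q1 = combined & 1
    return (AC << n) | QR

print(booth_multiply(6, 7))      # 42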
1.SISD
• SISD stands for 'Single Instruction and Single Data Stream'. It represents the organization of a single
computer containing a control unit, a processor unit, and a memory unit.
• Instructions are executed sequentially, and the system may or may not have internal parallel processing
capabilities.
• Most conventional computers have the SISD architecture, like the traditional von Neumann computers.
• Parallel processing, in this case, may be achieved by means of multiple functional units or by pipeline processing.
• Instructions are decoded by the control unit, which then sends them to the processing unit for execution.
• The data stream flows bidirectionally between the processor and memory.
2.SIMD
• SIMD stands for 'Single Instruction and Multiple Data Stream'. It represents an organization that
includes many processing units under the supervision of a common control unit.
• All processors receive the same instruction from the control unit but operate on different items of
data.
• The shared memory unit must contain multiple modules so that it can communicate with all the
processors simultaneously.
• SIMD is mainly dedicated to array processing machines. However, vector processors can also be
seen as a part of this group.
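The SIMD idea (one instruction, many data items) can be illustrated conceptually in Python, where each list element stands for the local data of one processing element and the same operation is applied to all of them in lockstep; this is only an analogy, not a hardware model:

data_items = [3, 7, 1, 9]                    # one data item per processing element
instruction = lambda x: x + 10               # the single common instruction
results = [instruction(x) for x in data_items]
print(results)                               # [13, 17, 11, 19]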
3. MISD
• MISD stands for 'Multiple Instruction and Single Data Stream'. Several processing units execute different instructions on the same data stream; this organization is mainly of theoretical interest and is rarely used in practice.
First Pass of the Assembler
The flowchart describes the tasks performed by the assembler during the first pass. The steps involved during the first pass are given below:
1. LC is initialized to 0.
2. A line of symbolic code is scanned and analyzed to determine if it has a label. (Note: the presence of a comma indicates a label.)
3. If the line of code has no label, the assembler checks the symbol in the instruction field. If an ORG
pseudo instruction is present, the assembler initializes LC to the number that follows ORG.
4. Then the assembler goes to process the next line (goes to step 2).
5. If the line has an END pseudo instruction, the assembler terminates the first pass and goes to the second
pass. (Note: A line with ORG or END should not have a label).
6. If the line of code has a label, it is stored in the address symbol table along with its corresponding binary
equivalent number specified by the content of LC.
7. LC is incremented by 1.
8. The assembler then continues to analyze the next line of code.
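The steps listed above can be sketched as a simplified first pass in Python; the comma-after-label convention and the ORG/END handling follow the description above, while the instruction set and the decimal ORG operand are simplifying assumptions:

def first_pass(lines):
    # Build the address symbol table as described in steps 1-8.
    LC = 0                                    # step 1: location counter
    symbol_table = {}
    for line in lines:                        # step 2: scan line by line
        text = line.strip()
        if ',' in text:                       # a comma indicates a label
            label = text.split(',', 1)[0].strip()
            symbol_table[label] = LC          # step 6: store the label with the LC value
            LC += 1                           # step 7
            continue
        op = text.split()[0]
        if op == 'ORG':                       # step 3: ORG re-initializes LC
            LC = int(text.split()[1])
            continue
        if op == 'END':                       # step 5: terminate the first pass
            break
        LC += 1                               # ordinary instruction: step 7
    return symbol_table

print(first_pass(['ORG 100', 'LDA SUB', 'MIN, DEC 83', 'SUB, DEC -23', 'END']))
# {'MIN': 101, 'SUB': 102}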
3Q(c)
Q: Elaborate CPU-IOP communication.
ANS:
• There is a communication channel between the IOP and the CPU through which the two coordinate the I/O tasks; this topic comes under computer architecture.
• This channel defines which commands are executed by the IOP and which by the CPU while a program is being performed.
• The CPU does not execute the I/O instructions itself; it only initiates the operations, and the instructions are executed by the IOP.
• The CPU instructs the IOP to carry out an I/O transfer; the IOP requests the CPU's attention through an interrupt.
• The communication is started by the CPU, which issues a "test I/O path" instruction to the IOP; the exchange then proceeds as shown in the diagram:
• The sequence of operations carried out during CPU and IOP communication is:
1. The CPU checks the existence of the I/O path by sending an instruction.
2. In response, the IOP places a status word in memory stating the condition of the IOP and the I/O device (busy, ready, etc.).
3. CPU checks the status word and if all conditions are OK, it sends the instruction to start I/O
transfer along with the memory address where the IOP program is stored.
4. After this CPU continues with another program.
5. The IOP now conducts the I/O transfer using DMA and prepares a status report.
6. On completion of I/O transfer, IOP sends an interrupt request to the CPU.
7. The CPU responds to the interrupt by issuing an instruction to read the status from the IOP. The
status indicates whether the transfer has been completed or if any errors occurred during the transfer.
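The seven steps above can be mimicked by a toy Python exchange; the class and method names here are invented purely for illustration and do not come from the text:

class IOP:
    def status(self):
        # Step 2: the IOP reports its status word.
        return {'ready': True, 'busy': False}
    def start_transfer(self, program_addr):
        # Steps 3 and 5: run the I/O program via DMA and prepare a status report.
        print('IOP: running I/O program at address', hex(program_addr), 'via DMA')
        return {'transfer_complete': True, 'errors': 0}

class CPU:
    def run(self, iop):
        status = iop.status()                   # step 1: test the I/O path
        if status['ready'] and not status['busy']:
            report = iop.start_transfer(0x200)  # step 3: start the I/O transfer
            # step 4: the CPU would continue with another program here
            # steps 6-7: on the IOP interrupt, the CPU reads the status report
            print('CPU: transfer OK' if report['errors'] == 0 else 'CPU: transfer error')

CPU().run(IOP())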
3Q(a)OR
Q: Explain pipeline conflicts in brief.
ANS: Pipeline Conflicts:
There are some factors that cause the pipeline to deviate from its normal performance. Some of these factors are given below:
1. Timing Variations
All stages cannot take the same amount of time. This problem generally occurs in instruction processing, where different instructions have different operand requirements and thus different processing times.
2. Data Hazards
When several instructions are in partial execution and they reference the same data, a problem arises. We must ensure that the next instruction does not attempt to access the data before the current instruction has finished with it, because this would lead to incorrect results.
3. Branching
In order to fetch and execute the next instruction, we must know what that instruction is. If the present
instruction is a conditional branch, and its result will lead us to the next instruction, then the next instruction
may not be known until the current one is processed.
4. Interrupts
Interrupts insert unwanted instructions into the instruction stream and thereby affect the execution of instructions.
5. Data Dependency
It arises when an instruction depends upon the result of a previous instruction but this result is not yet
available.
3Q(b)OR
Q: Discuss three state bus buffers with neat diagram.
ANS:
• Three-state (tri-state) bus buffers are essential components in digital circuits that allow multiple
devices to share a common bus without interfering with each other.
• A three-state bus buffer can exist in one of three states: logic high (1), logic low (0), or high-
impedance (Z).
• The high-impedance state effectively disconnects the buffer from the bus, allowing other devices to
use the bus.
• Functions of Three-State Bus Buffers
o Drive Control: Enables a device to drive data onto the bus when required.
o Isolation: Prevents multiple devices from driving the bus simultaneously, avoiding bus
contention.
o Bus Sharing: Allows multiple devices to share a single communication bus efficiently.
• To form a single bus line, all the outputs of the 4 buffers are connected together.
• The control input will now decide which of the 4 normal inputs will communicate with the bus line.
• The decoder is used to ensure that only one control input is active at a time.
• The diagram of a 3-state buffer can be seen below.
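The bus-line arrangement described above can be modelled with a short Python sketch (a 2x4 decoder selecting which of the four buffers drives the line; all names are illustrative):

def decoder_2x4(select):
    # Return four enable lines with exactly one of them active.
    return [1 if i == select else 0 for i in range(4)]

def bus_line(inputs, enables):
    # An enabled buffer drives its input onto the bus; disabled buffers are high-impedance.
    driving = [a for a, en in zip(inputs, enables) if en]
    if len(driving) != 1:
        return 'Z'                      # floating bus or bus contention
    return driving[0]

inputs = [1, 0, 1, 0]                   # A0..A3, the four normal inputs
enables = decoder_2x4(2)                # the decoder selects buffer 2
print(bus_line(inputs, enables))        # 1, the value of A2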
3Q(c)OR
Q: Write a detailed note on associative memory.
ANS:
• An associative memory can be considered as a memory unit whose stored data can be identified for
access by the content of the data itself rather than by an address or memory location.
• Associative memory is often referred to as Content Addressable Memory (CAM).
• When a write operation is performed on associative memory, no address or memory location is given
to the word. The memory itself is capable of finding an empty unused location to store the word.
• On the other hand, when the word is to be read from an associative memory, the content of the word,
or part of the word, is specified. The words which match the specified content are located by the
memory and are marked for reading.
• The following diagram shows the block representation of an Associative memory.
• The key register provides a mask for choosing a particular field or key in the argument word. The entire argument is compared with each memory word if the key register contains all 1's.
• Otherwise, only those bits of the argument that have 1's in the corresponding positions of the key register are compared. Thus the key provides a mask for identifying a piece of data and specifies how the reference to memory is made.
• The following figure shows the relation between the memory array and the external registers in an associative memory.
• The cells present inside the memory array are marked by the letter C with two subscripts. The first
subscript gives the word number and the second specifies the bit position in the word. For instance,
the cell Cij is the cell for bit j in word i.
• A bit Aj in the argument register is compared with all the bits in column j of the array, provided that Kj = 1. This process is done for all columns j = 1, 2, ..., n.
• If a match occurs between all the unmasked bits of the argument and the bits in word i, the
corresponding bit Mi in the match register is set to 1. If one or more unmasked bits of the argument
and the word do not match, Mi is cleared to 0.
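The match logic described above can be sketched in a few lines of Python; the word size and register contents are arbitrary assumptions:

def cam_match(words, argument, key):
    # Set M[i] = 1 for every word whose unmasked bits equal the argument's bits.
    match = []
    for word in words:
        # Compare only the bit positions where the key register holds a 1.
        hit = (word & key) == (argument & key)
        match.append(1 if hit else 0)
    return match

words = [0b10100111, 0b10100001, 0b01100111]    # memory array of 8-bit words
argument = 0b10100000                           # argument register A
key = 0b11110000                                # key register K: compare the high 4 bits only
print(cam_match(words, argument, key))          # [1, 1, 0]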
3Q(a)
Q: Explain DMA in brief
ANS:
• A DMA controller is a hardware device that allows I/O devices to access memory directly, with little participation of the processor.
• The DMA controller uses the usual interface circuits to communicate with the CPU and the I/O devices.
• Direct memory access uses dedicated hardware for accessing the memory; that hardware is called the DMA controller.
• Its job is to transfer data between the I/O devices and main memory with very little interaction from the processor.
• The DMA controller is a control unit whose job is transferring data, and it works as an interface between the data bus and the I/O devices.
• As mentioned, the DMA controller transfers the data without the intervention of the processor; the processor only initiates and supervises the transfer.
• The DMA controller also contains an address unit, which generates the memory address and selects an I/O device for the data transfer.
• Here we are showing the block diagram of the DMA Controller.
Types of Direct Memory Access (DMA)
• Single-Ended DMA
• Dual-Ended DMA
• Arbitrated-Ended DMA
• Interleaved DMA
Single-Ended DMA: Single-ended DMA controllers operate by reading from or writing to a single memory address. They are the simplest form of DMA.
Dual-Ended DMA: Dual-ended DMA controllers can read and write from two memory addresses. Dual-ended DMA is more advanced than single-ended DMA.
Arbitrated-Ended DMA: Arbitrated-ended DMA works by reading from and writing to several memory addresses. It is more advanced than dual-ended DMA.
Interleaved DMA: Interleaved DMA controllers read from one memory address and write to another memory address.
4Q(b)
Q: Write a note on SIMD array processor.
ANS:
1. Registers/CPU:
• Registers are small, high-speed memory units located in the CPU.
• They are used to store the most frequently used data and instructions.
• Registers have the fastest access time and the smallest storage capacity, typically ranging from 16 to
64 bits.
2. Cache Memory
• Cache memory is a small, fast memory unit located close to the CPU.
• It stores frequently used data and instructions that have been recently accessed from the main
memory.
• Cache memory is designed to minimize the time it takes to access data by providing the CPU with
quick access to frequently used data.
3. Main Memory
• Main memory, also known as RAM (Random Access Memory), is the primary memory of a
computer system.
• It has a larger storage capacity than cache memory, but it is slower. Main memory is used to store
data and instructions that are currently in use by the CPU.
4. Magnetic Disk
• Magnetic disks are circular plates fabricated from metal or plastic and coated with a magnetizable material.
• Magnetic disks operate at high speed inside the computer and are used frequently.
5. Magnetic Tape
• Magnetic tape is a magnetic recording medium consisting of a thin plastic film coated with a magnetizable material. It is generally used for the backup of data.
• In the case of magnetic tape, the access time is slower, and therefore it requires some amount of time to access the required portion of the strip.
4Q(a)OR
Q: In certain scientific computations it is necessary to perform the arithmetic operation (Ai + Bi) * (Ci
+ Di) with a stream of numbers. Specify pipeline configuration to carry out this task. List the contents
of all registers in the pipeline for i=1 through 4.
ANS:
The pipeline needs three stages: four input registers feed two adders, whose outputs feed a multiplier.
Stage 1: R1 ← Ai, R2 ← Bi, R3 ← Ci, R4 ← Di (load the four operands)
Stage 2: R5 ← R1 + R2, R6 ← R3 + R4 (two adders)
Stage 3: R7 ← R5 * R6 (multiplier)

Contents of the registers for i = 1 through 4:

Clock pulse   R1  R2  R3  R4     R5      R6        R7
1             A1  B1  C1  D1     -       -         -
2             A2  B2  C2  D2     A1+B1   C1+D1     -
3             A3  B3  C3  D3     A2+B2   C2+D2     (A1+B1)*(C1+D1)
4             A4  B4  C4  D4     A3+B3   C3+D3     (A2+B2)*(C2+D2)
5             -   -   -   -      A4+B4   C4+D4     (A3+B3)*(C3+D3)
6             -   -   -   -      -       -         (A4+B4)*(C4+D4)
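The register contents in the table above can be reproduced with a small Python simulation of the three-stage pipeline; the operand values are arbitrary assumptions used only to drive the check:

A = [1, 2, 3, 4]; B = [5, 6, 7, 8]; C = [2, 3, 4, 5]; D = [6, 7, 8, 9]
R1 = R2 = R3 = R4 = R5 = R6 = R7 = None
for clock in range(1, 7):                        # six clock pulses cover i = 1..4
    R7 = R5 * R6 if R5 is not None else None     # stage 3: multiplier uses the old R5, R6
    if R1 is not None:
        R5, R6 = R1 + R2, R3 + R4                # stage 2: the two adders use the old R1..R4
    else:
        R5 = R6 = None
    i = clock - 1
    if i < 4:
        R1, R2, R3, R4 = A[i], B[i], C[i], D[i]  # stage 1: load the next set of operands
    else:
        R1 = R2 = R3 = R4 = None
    print(clock, R1, R2, R3, R4, R5, R6, R7)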
Microprogrammed Control Organization
o The control memory address register specifies the address of the microinstruction.
o The Control memory is assumed to be a ROM, within which all control information is permanently
stored.
o The control register holds the microinstruction fetched from the memory.
o The micro-instruction contains a control word that specifies one or more micro-operations for the data
processor.
o While the micro-operations are being executed, the next address is computed in the next address
generator circuit and then transferred into the control address register to read the next microinstruction.
o The next address generator is often referred to as a micro-program sequencer, as it determines the
address sequence that is read from control memory.
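The fetch loop described above can be pictured with a tiny Python sketch; the control memory contents and the next-address values here are invented solely for illustration:

control_memory = {0: ('fetch',   1),      # (control word, next address)
                  1: ('decode',  2),
                  2: ('execute', 0)}
CAR = 0                                    # control address register
for _ in range(6):
    control_word, next_addr = control_memory[CAR]   # read the microinstruction
    print('micro-operations:', control_word)        # the control word drives the data processor
    CAR = next_addr                                  # the sequencer updates CAR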
5Q(a)
Q: Perform A – B (subtract) operation for the following numbers using signed magnitude number
format. (Write necessary assumptions if required) A = + 11 and B = - 6.
Ans:
• A = +11
• B = -6
Step 1: Convert the numbers to signed magnitude format:
In signed magnitude format, the leftmost bit is the sign bit:
• 0 for positive
• 1 for negative
Let's assume we're working with 6-bit binary numbers (1 sign bit + 5 magnitude bits).
Convert A = +11 to binary:
• The magnitude of 11 in binary (5 bits) is 01011.
• Since it is positive, the signed magnitude representation is: 0 01011.
Convert B = -6 to binary:
• The magnitude of 6 in binary (5 bits) is 00110.
• Since it is negative, the signed magnitude representation is: 1 00110.
Step 2: Formulate the subtraction A - B
The subtraction A - B is equivalent to A + (-B).
Since B is already negative, subtracting B means adding the positive magnitude of B:
A - B = A + 6
Step 3: Convert 6 to signed magnitude and perform the addition
Convert 6 to signed magnitude (positive):
• The magnitude of 6 in binary (5 bits) is 00110.
• Since it is positive, the signed magnitude representation is: 0 00110.
Now, add the magnitudes of A = +11 and +6 using binary addition:
  01011
+ 00110
-------
  10001
Step 4: Interpret the result
• Since both operands of the addition are positive, the sign bit of the result is 0, indicating a positive number.
• The magnitude bits are 10001, which in decimal is 17.
Therefore, the result of A - B in signed magnitude format is: 0 10001
Conclusion: The result of the operation A - B when A = +11 and B = -6 using the signed magnitude number format is +17, which is represented as 0 10001 in signed magnitude binary.
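The same computation can be checked with a small Python helper; the 6-bit format (1 sign bit + 5 magnitude bits) is the assumption already made above:

def to_signed_magnitude(value, magnitude_bits=5):
    # Sign bit followed by the magnitude in binary.
    sign = '1' if value < 0 else '0'
    return sign + format(abs(value), '0{}b'.format(magnitude_bits))

def sm_subtract(a, b, magnitude_bits=5):
    # A - B in signed magnitude, computed through ordinary integers as a check.
    return to_signed_magnitude(a - b, magnitude_bits)

print(to_signed_magnitude(11))     # 001011  (+11)
print(to_signed_magnitude(-6))     # 100110  (-6)
print(sm_subtract(11, -6))         # 010001  (+17)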
5Q(b)
Q: Explain status bit conditions with neat diagram.
ANS:
• The status register comprises the status bits.
• The bits of the status register are modified according to the operations performed in the ALU.
• The figure displays a block diagram of an 8-bit ALU with a 4-bit status register, whose bits are set as follows:
o Bit C (carry) is set to 1 if the end carry C8 is 1 and cleared to 0 otherwise.
o Bit S (sign) is set to 1 if the highest-order bit F7 of the result is 1 and cleared to 0 otherwise.
o Bit Z (zero) is set to 1 if the output of the ALU contains all 0's and cleared to 0 otherwise.
o Bit V (overflow) is set to 1 if the exclusive-OR of the last two carries (C8 and C7) is 1; this indicates an overflow for signed 2's complement numbers.
Cache Coherence Problem: Suppose there are three processors, each having a cache, and consider the following scenario:
• Processor 1 reads X: it obtains 24 from memory and caches it.
• Processor 2 reads X: it obtains 24 from memory and caches it.
• Now processor 1 writes X = 64: its locally cached copy is updated. If processor 3 now reads X, what value should it get?
• Memory and processor 2 think it is 24, while processor 1 thinks it is 64; this inconsistency is the cache coherence problem.
There are various cache coherence protocols in multiprocessor systems. These are:
1. MSI protocol (Modified, Shared, Invalid)
2. MOSI protocol (Modified, Owned, Shared, Invalid)
3. MESI protocol (Modified, Exclusive, Shared, Invalid)
4. MOESI protocol (Modified, Owned, Exclusive, Shared, Invalid)
These important terms are discussed as follows:
• Modified – It means that the value in the cache is dirty, i.e., the value in the current cache is different from the one in main memory.
• Exclusive – It means that the value present in the cache is the same as that present in main memory, i.e., the value is clean.
• Shared – It means that the cache holds the most recent copy of the data, and that copy is shared among all the caches and main memory as well.
• Owned – It means that the current cache holds the block and is now the owner of that block, i.e., it has all rights over that particular block.
• Invalid – It states that the current cache block is invalid and must be fetched from another cache or from main memory.
5Q(a)OR
Q: Write the difference(s) between arithmetic shift left and logical shift left instruction. Support your
answer with proper illustration.
ANS:
Logical shift left:
• It shifts all bits one position to the left and transfers a 0 into the least significant position through the serial input. The symbols '<<' and '>>' are used for the logical left and right shifts.
• The operand is treated as a plain bit pattern, so no attention is paid to the sign bit.
• Following are the two ways to perform the logical shift: 1. Logical left shift 2. Logical right shift.
Arithmetic shift left:
• The arithmetic shift micro-operation shifts a signed binary number to the left or to the right.
• An arithmetic shift left also inserts a 0 into the least significant position, but the sign bit must remain unchanged; if the sign bit changes after the shift, an overflow has occurred.
• Following are the two ways to perform the arithmetic shift: 1. Arithmetic left shift 2. Arithmetic right shift.
Illustration (4-bit register): shifting 0110 (+6) left gives 1100. As a logical shift left the result is simply the pattern 1100, but as an arithmetic shift left the sign bit has changed from 0 to 1 (+6 would become -4), so an overflow is indicated.
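The difference can also be seen in a short Python illustration for a 4-bit register; the sign-change test used for the arithmetic left shift is the standard overflow check:

def logical_shift_left(x, bits=4):
    # Shift in a 0 at the least significant bit; the sign of the pattern is ignored.
    return (x << 1) & ((1 << bits) - 1)

def arithmetic_shift_left(x, bits=4):
    # Same data movement, but report an overflow if the sign bit changes.
    result = (x << 1) & ((1 << bits) - 1)
    sign_before = (x >> (bits - 1)) & 1
    sign_after = (result >> (bits - 1)) & 1
    return result, sign_before != sign_after

x = 0b0110                                         # +6 in a 4-bit register
print(format(logical_shift_left(x), '04b'))        # 1100
print(arithmetic_shift_left(x))                    # (12, True): pattern 1100 with overflow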
Difference between RISC and CISC:
1. RISC: Focus on software. CISC: Focus on hardware.
2. RISC: Uses only a hardwired control unit. CISC: Uses both hardwired and microprogrammed control units.
3. RISC: Transistors are used for more registers. CISC: Transistors are used for storing complex instructions.
4. RISC: Fixed-size instructions. CISC: Variable-size instructions.
5. RISC: Can perform only register-to-register arithmetic operations. CISC: Can perform REG to REG, REG to MEM, or MEM to MEM operations.
6. RISC: Requires more registers. CISC: Requires fewer registers.
7. RISC: Code size is large. CISC: Code size is small.
8. RISC: An instruction is executed in a single clock cycle. CISC: An instruction takes more than one clock cycle.
9. RISC: An instruction fits in one word. CISC: Instructions can be larger than the size of one word.
10. RISC: Simple and limited addressing modes. CISC: Complex and more addressing modes.
11. RISC stands for Reduced Instruction Set Computer. CISC stands for Complex Instruction Set Computer.
12. RISC: The number of instructions is smaller compared to CISC. CISC: The number of instructions is larger compared to RISC.
13. RISC: Consumes low power. CISC: Consumes more power.
14. RISC: Is highly pipelined. CISC: Is less pipelined.
15. RISC: Requires more RAM. CISC: Requires less RAM.
16. RISC: Has fewer addressing modes. CISC: Has more addressing modes.
5Q(c)OR
Q: Explain any two types of mapping procedures when considering the organization of cache memory.
ANS:
• In cache memory organization, mapping procedures are essential for determining how data from
main memory is placed into the cache.
• There are several mapping techniques, but two common types are Direct Mapping and Associative
Mapping.
1. Direct Mapping –
▪ Direct Mapping is the simplest cache mapping technique. In this method, each block
of main memory maps to exactly one cache line.
OR
▪ In direct mapping, the cache consists of normal high-speed random-access memory. Each location in the cache holds a data word, and the cache address at which it is stored is given by the lower significant bits of the main memory address.
▪ This enables the cache location to be selected directly from the lower significant bits of the memory address.
▪ The remaining higher significant bits of the address are stored in the cache with the
data to complete the identification of the cached data.
▪ As shown in the figure, the address from the processor is divided into two fields: a tag and an index.
▪ The tag consists of the higher significant bits of the address, and these bits are stored with the data in the cache. The index consists of the lower significant bits of the address.
Whenever the memory is referenced, the following sequence of events occurs:
▪ The index is first used to access a word in the cache.
▪ The tag stored in the accessed word is read.
▪ This tag is then compared with the tag in the address.
▪ If the two tags are the same, this indicates a cache hit, and the required data is read from the cache word.
▪ If the two tags are not the same, this indicates a cache miss, and the reference is made to main memory to find the data.
▪ For a memory read operation, the word is then transferred into the cache. It is possible to pass the information to the cache and the processor simultaneously.
Advantages:
• Simple to implement.
• Fast lookup time because each memory block has a predetermined cache location.
Disadvantages:
• Can lead to a high miss rate if multiple memory blocks map to the same cache line (causing
frequent replacements).
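A minimal Python sketch of the direct-mapped lookup sequence described above; the 3-bit index (8 cache lines) and the toy main memory are arbitrary illustrative choices:

INDEX_BITS = 3                                  # 8 cache lines (illustrative)
cache = [{'valid': False, 'tag': None, 'data': None} for _ in range(1 << INDEX_BITS)]

def read(address, main_memory):
    index = address & ((1 << INDEX_BITS) - 1)   # lower significant bits select the line
    tag = address >> INDEX_BITS                 # higher significant bits form the tag
    line = cache[index]
    if line['valid'] and line['tag'] == tag:    # stored tag equals address tag: cache hit
        return line['data'], 'hit'
    data = main_memory[address]                 # cache miss: reference main memory
    cache[index] = {'valid': True, 'tag': tag, 'data': data}   # bring the word into the cache
    return data, 'miss'

memory = {addr: addr * 10 for addr in range(64)}   # toy main memory
print(read(0b010101, memory))                      # (210, 'miss')
print(read(0b010101, memory))                      # (210, 'hit')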