COA PAPER-SUMMER-2023

1Q(a)
Q: Write the name of basic computer registers with their functionalities.
ANS:
Register               Symbol   Number of bits   Function
Data register          DR       16               Holds memory operand
Address register       AR       12               Holds address for the memory
Accumulator            AC       16               Processor register
Instruction register   IR       16               Holds instruction code
Program counter        PC       12               Holds address of the instruction
Temporary register     TR       16               Holds temporary data
Input register         INPR     8                Carries input character
Output register        OUTR     8                Carries output character

1Q(b)
Q: Discuss 4-bit binary adder with neat diagram.
ANS:
• A binary adder is a digital circuit that produces the arithmetic sum of two binary numbers of any
length.
• It is constructed with full-adder circuits connected in cascade.
• The output carry from each full-adder is connected to the input carry of the next full-adder.
• The following block diagram demonstrates the interconnection of four full-adder circuits to form
a 4-bit binary adder.

• The augend bits of A and the addend bits of B are designated by subscript numbers from right to left, with
subscript 0 denoting the low-order bit. The carries are connected in a chain through the full-adders.
• The input carry to the binary adder is C0 and the output carry is C4. The S outputs of the full-adders
generate the required sum bits.
• An n-bit binary adder requires n full-adders. The output carry from each full-adder is connected to the
input carry of the next-higher-order full-adder. The n data bits for the A inputs come from one register
(such as R1), and the n data bits for the B inputs come from another register (such as R2). The sum
can be transferred to a third register or to one of the source registers (R1 or R2), replacing its previous
content.
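To make the cascade concrete, here is a small Python sketch (illustrative only; the names full_adder and ripple_adder_4bit are chosen here, not taken from the answer) that chains four full adders exactly as described:

def full_adder(a, b, cin):
    # sum bit and carry-out of one full adder
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def ripple_adder_4bit(A, B, c0=0):
    # A and B are lists of 4 bits, index 0 = low-order bit (A0..A3, B0..B3)
    carry = c0
    S = []
    for i in range(4):
        s, carry = full_adder(A[i], B[i], carry)
        S.append(s)
    return S, carry                 # S0..S3 and the output carry C4

# Example: 0110 (6) + 0111 (7) = 1101 (13), no output carry
print(ripple_adder_4bit([0, 1, 1, 0], [1, 1, 1, 0]))   # ([1, 0, 1, 1], 0)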
1Q(c)
Q: Enlist various kinds of addressing modes. Explain any five of same and support your answer by
taking small example.
ANS:

1. Immediate Addressing Mode-


In this addressing mode,
• The operand is specified in the instruction explicitly.
• Instead of an address field, an operand field is present that contains the operand.
OR
• The operand is specified directly in the instruction.

• Example: MOV R1, #5


o This instruction moves the immediate value 5 into register R1.
• Explanation:
o The # symbol indicates that 5 is an immediate value and not a memory address. The operand
is directly embedded in the instruction itself.
2. Direct Addressing Mode-

In this addressing mode,


• The address field of the instruction contains the effective address of the operand.
OR
• The address of the operand is specified directly in the instruction.
• Only one reference to memory is required to fetch the operand.
• It is also called absolute addressing mode.

• Example: MOV R1, 1000


o This instruction moves the value located at memory address 1000 into register R1.
• Explanation:
o The operand is found at the memory location specified by the address 1000. The instruction
fetches the data from this address and loads it into the register.

3. Indirect Addressing Mode-

In this addressing mode,


• The address field of the instruction specifies the address of the memory location that contains the
effective address of the operand.
• Two references to memory are required to fetch the operand.
OR
• The address of the operand is held in a register or a memory location.

• Example: MOV R1, (R2)


o This instruction moves the value located at the memory address contained in register R2 into
register R1.
• Explanation:
o The operand's address is stored in register R2. The processor first reads the address from R2,
then accesses the memory location pointed to by this address, and finally loads the data into
R1.
4. Register Direct Addressing Mode-

In this addressing mode,


• The operand is contained in one of the CPU registers.
• The address field of the instruction refers to a CPU register that contains the operand.
• No reference to memory is required to fetch the operand.
• The operand is located in a register.

• Example: MOV R1, R2


o This instruction moves the value in register R2 to register R1.
• Explanation:
o Both the source and destination operands are registers. The data in R2 is copied directly to
R1.

5.Indexed Addressing Mode-

In this addressing mode,


• Effective address of the operand is obtained by adding the content of index register with the address
part of the instruction.
• The effective address of the operand is generated by adding a constant value to the contents of a
register.

Effective Address = Content of Index Register + Address part of the instruction

• Example: MOV R1, 1000(R2)


o This instruction moves the value located at the memory address formed by adding 1000 to the
contents of register R2 into register R1.
• Explanation:
o If R2 contains 200, the effective address would be 1000 + 200 = 1200. The instruction
fetches the data from memory address 1200 and loads it into R1.
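The five modes can be tied together with a short Python sketch; the register contents, the memory contents and the helper name operand() are assumptions made only to mirror the examples above:

def operand(mode, regs, mem):
    if mode == "immediate":        # MOV R1, #5        -> operand is inside the instruction
        return 5
    if mode == "direct":           # MOV R1, 1000      -> one memory reference
        return mem[1000]
    if mode == "indirect":         # MOV R1, (R2)      -> address taken from R2, then memory
        return mem[regs["R2"]]
    if mode == "register":         # MOV R1, R2        -> no memory reference
        return regs["R2"]
    if mode == "indexed":          # MOV R1, 1000(R2)  -> EA = 1000 + [R2] = 1200
        return mem[1000 + regs["R2"]]

regs = {"R1": 0, "R2": 200}                  # assumed register contents
mem = {200: 99, 1000: 55, 1200: 77}          # assumed memory contents
for mode in ["immediate", "direct", "indirect", "register", "indexed"]:
    print(mode, operand(mode, regs, mem))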
2Q(a)
Q: Write sequence of microoperations to execute the following instructions: 1. AND 2.STA
ANS:
1. AND:
• The AND instruction performs a bitwise logical AND operation between the contents of the
accumulator (AC) and a value from memory. The result is stored back in the accumulator.
• Assuming the instruction format is:
o AND Address
where 'Address' specifies the memory location of the operand, here are the microoperations:
1.Fetch the Instruction:
o IR <- [PC] ; Fetch the instruction from memory into the Instruction Register
o PC <- PC + 1; Increment the Program Counter to point to the next instruction
2.Decode the Instruction:
o Decode IR to determine the operation and the address
3.Fetch the Operand:
o MAR <- IR[Address]; Load the address part of the instruction into the Memory Address
Register
o MDR <- [MAR] ; Fetch the operand from memory into the Memory Data Register
4.Execute the AND Operation:
o AC <- AC AND MDR ; Perform bitwise AND operation and store result in the
accumulator
2. STA:
• The STA (Store Accumulator) instruction stores the contents of the accumulator into a specified
memory location.
• Assuming the instruction format is:
o STA Address
where Address specifies the memory location where the accumulator content will be stored, here
are the microoperations:
1. Fetch the Instruction:
o IR <- [PC] ; Fetch the instruction from memory into the Instruction Register
o PC <- PC + 1 ; Increment the Program Counter to point to the next instruction
2. Decode the Instruction:
o Decode IR to determine the operation and the address
3. Store Accumulator to Memory:
o MAR <- IR[Address] ; Load the address part of the instruction into the Memory Address
Register
o MDR <- AC ; Move the contents of the accumulator to the Memory Data Register
o [MAR] <- MDR ; Store the contents of MDR (accumulator) into the memory location
specified by MAR
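A minimal Python sketch of the two register-transfer sequences above; MAR, MDR, AC and the memory contents are plain variables assumed for illustration, not a model of any particular machine:

mem = [0] * 16
mem[5] = 0b1100            # assumed operand for AND, stored at address 5
AC = 0b1010                # assumed initial accumulator value

# AND Address (Address assumed to be 5)
MAR = 5                    # MAR <- IR[Address]
MDR = mem[MAR]             # MDR <- M[MAR]
AC = AC & MDR              # AC  <- AC AND MDR

# STA Address (Address assumed to be 6)
MAR = 6                    # MAR <- IR[Address]
MDR = AC                   # MDR <- AC
mem[MAR] = MDR             # M[MAR] <- MDR

print(bin(AC), mem[6])     # 0b1000 8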
2Q(b)
Q: Write assembly language program to subtract two numbers.
ANS:
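The usual idea on an accumulator machine is to load the subtrahend, take its 2's complement (complement and increment) and add the minuend; on the basic computer this is typically written as LDA B, CMA, INC, ADD A, STA DIF, HLT (the labels A, B and DIF are assumed). A small Python sketch of that arithmetic, with assumed 8-bit values:

def subtract(a, b, bits=8):
    # a - b computed as a + (2's complement of b), as the accumulator program would
    mask = (1 << bits) - 1
    neg_b = ((~b) + 1) & mask        # CMA followed by INC
    return (a + neg_b) & mask        # ADD; carry out of the word is discarded

print(subtract(25, 10))              # 15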

2Q(c)
Q: Write two address, one address and zero address instructions program for the following
arithmetic expression:
X = (A + B) * (C – D / E) + F * G
ANS:
Two-Address Instructions
1. LOAD A, R1 (Load A into register R1)
2. ADD B, R1 (Add B to R1)
3. LOAD D, R2 (Load D into register R2)
4. DIV E, R2 (Divide R2 by E)
5. LOAD C, R3 (Load C into register R3)
6. SUB R2, R3 (Subtract R2 from R3)
7. MUL R1, R3 (Multiply R1 by R3)
8. LOAD F, R4 (Load F into register R4)
9. MUL G, R4 (Multiply R4 by G)
10. ADD R4, R3 (Add R4 to R3)
11. STORE R3, X(Store the result in X)
One-Address Instructions
1. LOAD A (Load A into AC)
2. ADD B (Add B to AC)
3. STORE T1 (Store result in T1)
4. LOAD D (Load D into AC)
5. DIV E (Divide AC by E)
6. STORE T2 (Store result in T2)
7. LOAD C (Load C into AC)
8. SUB T2 (Subtract T2 from AC)
9. STORE T3 (Store result in T3)
10. LOAD T1 (Load T1 into AC)
11. MUL T3 (Multiply AC by T3)
12. STORE T4 (Store result in T4)
13. LOAD F (Load F into AC)
14. MUL G (Multiply AC by G)
15. ADD T4 (Add T4 to AC)
16. STORE X (Store the result in X)
Zero-Address Instructions (Stack-Based)
(C is pushed before D and E so that SUB produces C – D / E; a small simulation of this sequence is sketched below.)
1. PUSH A (Push A onto stack)
2. PUSH B (Push B onto stack)
3. ADD (Pop A and B, push A + B)
4. PUSH C (Push C onto stack)
5. PUSH D (Push D onto stack)
6. PUSH E (Push E onto stack)
7. DIV (Pop E and D, push D / E)
8. SUB (Pop D / E and C, push C – D / E)
9. MUL (Pop the two results, push (A + B) * (C – D / E))
10. PUSH F (Push F onto stack)
11. PUSH G (Push G onto stack)
12. MUL (Pop G and F, push F * G)
13. ADD (Pop the two products, push the final sum)
14. POP X (Pop the result and store it in X)
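A tiny stack-machine sketch in Python (the values of A–G and the helper name run are assumed for illustration) confirms that the zero-address sequence above evaluates the expression:

def run(program, values):
    stack = []
    ops = {"ADD": lambda x, y: x + y, "SUB": lambda x, y: x - y,
           "MUL": lambda x, y: x * y, "DIV": lambda x, y: x / y}
    for instr in program:
        if instr.startswith("PUSH"):
            stack.append(values[instr.split()[1]])
        elif instr.startswith("POP"):
            values[instr.split()[1]] = stack.pop()
        else:
            top, second = stack.pop(), stack.pop()
            stack.append(ops[instr](second, top))     # second OP top
    return values

vals = {"A": 1, "B": 2, "C": 9, "D": 8, "E": 4, "F": 2, "G": 3}
prog = ["PUSH A", "PUSH B", "ADD", "PUSH C", "PUSH D", "PUSH E",
        "DIV", "SUB", "MUL", "PUSH F", "PUSH G", "MUL", "ADD", "POP X"]
print(run(prog, vals)["X"])       # (1+2)*(9-8/4) + 2*3 = 27.0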
2Q(c)OR
Q: Assume A = + 6 and B = + 7, apply Booth algorithm for multiplying A and B. Make necessary assumptions if
required.

ANS:
Here no register size is given, so assume 4-bit registers.
A = (6)10 → (0110)2
B = (7)10 → (0111)2
From the flowchart, the step AC = AC – A is carried out by adding the 2's complement of A, so we first find –A:
–A = 2's complement of A
–A = 1010
Cycle          AC     Q (holds B)   Q-1   Operation
Initial        0000   0111          0     —
First cycle    1010   0111          0     Q0 Q-1 = 10, so AC = AC – A (0000 + 1010 = 1010)
               1101   0011          1     Arithmetic shift right (AC, Q, Q-1)
Second cycle   1110   1001          1     Q0 Q-1 = 11, so arithmetic shift right only
Third cycle    1111   0100          1     Q0 Q-1 = 11, so arithmetic shift right only
Fourth cycle   0101   0100          1     Q0 Q-1 = 01, so AC = AC + A (1111 + 0110 = 0101, carry discarded)
               0010   1010          0     Arithmetic shift right (AC, Q, Q-1)

Final answer: AC Q = (0010 1010)2 = (42)10


6*7 = 42
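A short Python sketch of Booth's algorithm for 4-bit operands (the function name and the bit width are illustrative assumptions) arrives at the same final product as the table above:

def booth_multiply(multiplicand, multiplier, bits=4):
    mask = (1 << bits) - 1
    A, neg_A = multiplicand & mask, (-multiplicand) & mask
    AC, Q, Q_1 = 0, multiplier & mask, 0
    for _ in range(bits):
        pair = (Q & 1, Q_1)
        if pair == (1, 0):
            AC = (AC + neg_A) & mask                # AC <- AC - A
        elif pair == (0, 1):
            AC = (AC + A) & mask                    # AC <- AC + A
        # arithmetic shift right of AC, Q, Q-1 (sign bit of AC is replicated)
        sign = AC >> (bits - 1)
        Q_1 = Q & 1
        Q = ((Q >> 1) | ((AC & 1) << (bits - 1))) & mask
        AC = ((AC >> 1) | (sign << (bits - 1))) & mask
    product = (AC << bits) | Q                      # AC and Q together hold the result
    if product >> (2 * bits - 1):                   # interpret as a signed 8-bit value
        product -= 1 << (2 * bits)
    return product

print(booth_multiply(6, 7))                         # 42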
3Q(a)
Q: Explain Flynn’s classification for computers in brief.
ANS:

1.SISD

• SISD stands for 'Single Instruction and Single Data Stream'. It represents the organization of a single
computer containing a control unit, a processor unit, and a memory unit.
• Instructions are executed sequentially, and the system may or may not have internal parallel processing
capabilities.
• Most conventional computers have SISD architecture like the traditional Von-Neumann computers.
• Parallel processing, in this case, may be achieved by means of multiple functional units or by pipeline
processing.

Where, CU = Control Unit, PE = Processing Element, M = Memory

• Instructions are decoded by the Control Unit and then the Control Unit sends the instructions to the
processing units for execution.
• Data Stream flows between the processors and memory bi-directionally.

2.SIMD

• SIMD stands for 'Single Instruction and Multiple Data Stream'. It represents an organization that
includes many processing units under the supervision of a common control unit.
• All processors receive the same instruction from the control unit but operate on different items of
data.
• The shared memory unit must contain multiple modules so that it can communicate with all the
processors simultaneously.
• SIMD is mainly dedicated to array processing machines. However, vector processors can also be
seen as a part of this group.

3.MISD

• MISD stands for 'Multiple Instruction and Single Data stream'.


• MISD structure is only of theoretical interest since no practical system has been constructed using this
organization.
• In MISD, multiple processing units operate on one single data stream. Each processing unit operates
on the data independently via a separate instruction stream.

Where, M = Memory Modules, CU = Control Unit, P = Processor Units


4.MIMD

• MIMD stands for 'Multiple Instruction and Multiple Data Stream'.


• In this organization, all processors in a parallel computer can execute different instructions and operate
on various data at the same time.
• In MIMD, each processor has a separate program and an instruction stream is generated from each
program.

Where, M = Memory Module, PE = Processing Element, and CU = Control Unit


3Q(b)
Q: Draw the flowchart for first pass of assembler and explain the same in brief.
ANS:

The flowchart describes the tasks performed by the assembler during the first pass. The steps
involved during the first pass are given below:
1. LC is initialized to 0.
2. A line of symbolic code is scanned and analyzed to determine whether it has a label. (Note: the presence
of a comma indicates a label.)
3. If the line of code has no label, the assembler checks the symbol in the instruction field. If an ORG
pseudo instruction is present, the assembler initializes LC to the number that follows ORG.
4. Then the assembler goes to process the next line (goes to step 2).
5. If the line has an END pseudo instruction, the assembler terminates the first pass and goes to the second
pass. (Note: A line with ORG or END should not have a label).
6. If the line of code has a label, it is stored in the address symbol table along with its corresponding binary
equivalent number specified by the content of LC.
7. LC is incremented by 1.
8. The assembler continues to analyze the next line of code.
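A compact Python sketch of this first pass (the comma-after-label convention and the ORG/END handling follow the steps above; the sample source lines are assumed):

def first_pass(lines):
    # build the address symbol table as described in the steps above
    LC = 0
    symtab = {}
    for line in lines:
        fields = line.split()
        if "," in fields[0]:                   # a comma marks a label
            symtab[fields[0].rstrip(",")] = LC
            LC += 1
        elif fields[0] == "ORG":
            LC = int(fields[1])                # reset LC, do not increment
        elif fields[0] == "END":
            break                              # first pass is finished
        else:
            LC += 1
    return symtab

source = ["ORG 100", "LDA SUB", "CMA", "STA DIF", "HLT",
          "SUB, DEC 10", "DIF, HEX 0", "END"]
print(first_pass(source))                      # {'SUB': 104, 'DIF': 105} (addresses in decimal)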
3Q(c)
Q: Elaborate CPU-IOP communication.
ANS:
• There is a communication channel between the IOP and the CPU through which the two coordinate the
execution of I/O tasks.
• This channel defines the commands executed by the IOP and the CPU while performing a program.
• The CPU does not execute the I/O instructions itself; it only initiates the operations, and the
instructions are executed by the IOP.
• The I/O transfer is initiated on the instruction of the CPU, and the IOP requests the CPU's attention
through an interrupt.
• The communication is started by the CPU, which gives a "test IOP path" instruction to the IOP; it then
proceeds as shown in the diagram:

• The sequence of operations carried out during CPU–IOP communication is:
1. The CPU checks the existence of the I/O path by sending an instruction.
2. In response to this, the IOP puts a status word in memory stating the condition of the IOP and the I/O
device (busy, ready, etc.).
3. CPU checks the status word and if all conditions are OK, it sends the instruction to start I/O
transfer along with the memory address where the IOP program is stored.
4. After this CPU continues with another program.
5. IOP now conducts the I/O transfer using DMA and prepares status report.
6. On completion of I/O transfer, IOP sends an interrupt request to the CPU.
7. The CPU responds to the interrupt by issuing an instruction to read the status from the IOP. The
status indicates whether the transfer has been completed or if any errors occurred during the transfer.
3Q(a)OR
Q: Explain pipeline conflicts in brief.
ANS: Pipeline Conflicts:
There are some factors that cause the pipeline to deviate from its normal operation. Some of these factors
are given below:
1. Timing Variations
All stages cannot take the same amount of time. This problem generally occurs in instruction processing,
where different instructions have different operand requirements and thus different processing times.
2. Data Hazards
When several instructions are in partial execution, a problem arises if they reference the same data.
We must ensure that the next instruction does not attempt to access the data before the current instruction
has finished with it, because this would lead to incorrect results.
3. Branching
In order to fetch and execute the next instruction, we must know what that instruction is. If the present
instruction is a conditional branch, and its result will lead us to the next instruction, then the next instruction
may not be known until the current one is processed.
4. Interrupts
Interrupts insert unwanted instructions into the instruction stream and affect the execution of instructions.
5. Data Dependency
It arises when an instruction depends upon the result of a previous instruction but this result is not yet
available.

3Q(b)OR
Q: Discuss three state bus buffers with neat diagram.
ANS:
• Three-state (tri-state) bus buffers are essential components in digital circuits that allow multiple
devices to share a common bus without interfering with each other.
• A three-state bus buffer can exist in one of three states: logic high (1), logic low (0), or high-
impedance (Z).
• The high-impedance state effectively disconnects the buffer from the bus, allowing other devices to
use the bus.
• Functions of Three-State Bus Buffers
o Drive Control: Enables a device to drive data onto the bus when required.
o Isolation: Prevents multiple devices from driving the bus simultaneously, avoiding bus
contention.
o Bus Sharing: Allows multiple devices to share a single communication bus efficiently.
• To form a single bus line, all the outputs of the 4 buffers are connected together.
• The control input will now decide which of the 4 normal inputs will communicate with the bus line.
• The decoder is used to ensure that only one control input is active at a time.
• The diagram of a 3-state buffer can be seen below.
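A behavioural sketch in Python of the arrangement just described, with four buffers and a decoder-style select so that only one buffer drives the line (all names and values here are illustrative assumptions):

def tristate(data, enable):
    # output follows data when enabled, otherwise high-impedance 'Z'
    return data if enable else "Z"

def bus(drivers):
    # resolve a shared line: at most one driver may be enabled at a time
    active = [v for v in drivers if v != "Z"]
    assert len(active) <= 1, "bus contention"
    return active[0] if active else "Z"

select = 2                          # decoder output: only buffer 2 is enabled
inputs = [1, 0, 1, 0]               # the four normal inputs
print(bus([tristate(inputs[i], i == select) for i in range(4)]))   # 1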
3Q(c)OR
Q: Write a detailed note on associative memory.
ANS:

• An associative memory can be considered as a memory unit whose stored data can be identified for
access by the content of the data itself rather than by an address or memory location.
• Associative memory is often referred to as Content Addressable Memory (CAM).
• When a write operation is performed on associative memory, no address or memory location is given
to the word. The memory itself is capable of finding an empty unused location to store the word.
• On the other hand, when the word is to be read from an associative memory, the content of the word,
or part of the word, is specified. The words which match the specified content are located by the
memory and are marked for reading.
• The following diagram shows the block representation of an Associative memory.

• The key register provides a mask for choosing a particular field or key in the argument word. The whole
argument is compared with each memory word if the key register contains all 1's.
• Otherwise, only those bits of the argument that have 1's in the corresponding positions of the key
register are compared. Thus the key provides a mask for identifying a piece of data and determines
how the reference to memory is made.
• The following figure can define the relation between the memory array and the external registers in
associative memory.
• The cells present inside the memory array are marked by the letter C with two subscripts. The first
subscript gives the word number and the second specifies the bit position in the word. For instance,
the cell Cij is the cell for bit j in word i.
• A bit Aj in the argument register is compared with all the bits in column j of the array provided that
Kj = 1. This process is done for all columns j = 1, 2, 3......, n.
• If a match occurs between all the unmasked bits of the argument and the bits in word i, the
corresponding bit Mi in the match register is set to 1. If one or more unmasked bits of the argument
and the word do not match, Mi is cleared to 0.
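The masked comparison can be sketched in a few lines of Python; the 8-bit word size, the argument and key values, and the name cam_search are assumptions made only for illustration:

def cam_search(argument, key, words):
    # match register: bit i is set when word i agrees with the argument
    # in every bit position where the key (mask) has a 1
    return [int(((word ^ argument) & key) == 0) for word in words]

A = 0b10110010                         # argument register
K = 0b11110000                         # key register: compare only the upper 4 bits
memory = [0b10111100, 0b01010101, 0b10110001]
print(cam_search(A, K, memory))        # [1, 0, 1]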
4Q(a)
Q: Explain DMA in brief
ANS:

• A DMA controller is a hardware device that allows I/O devices to access memory directly, with little
participation of the processor.
• The DMA controller uses the usual interface circuits to communicate with the CPU and the
Input/Output devices.
• Direct Memory Access uses dedicated hardware for accessing memory; that hardware is called the
DMA controller.
• Its job is to transfer data between Input/Output devices and main memory with minimal
interaction with the processor.
• The Direct Memory Access controller is therefore a control unit whose task is transferring data.

DMA Controller Diagram in Computer Architecture

• DMA Controller is a type of control unit that works as an interface for the data bus and the I/O
Devices.
• As mentioned, the DMA controller transfers the data without the intervention of the processor; the
processor only initiates and supervises the transfer.
• DMA Controller also contains an address unit, which generates the address and selects an I/O device
for the transfer of data.
• Here we are showing the block diagram of the DMA Controller.
Types of Direct Memory Access (DMA)

There are four popular types of DMA.

• Single-Ended DMA
• Dual-Ended DMA
• Arbitrated-Ended DMA
• Interleaved DMA

Single-Ended DMA: Single-ended DMA controllers operate by reading and writing from a single memory
address. They are the simplest type of DMA controller.

Dual-Ended DMA: Dual-ended DMA controllers can read and write from two memory addresses.
Dual-ended DMA is more advanced than single-ended DMA.

Arbitrated-Ended DMA: Arbitrated-ended DMA works by reading and writing to several memory
addresses. It is more advanced than dual-ended DMA.

Interleaved DMA: Interleaved DMA controllers read from one memory address and write to another
memory address.

4Q(b)
Q: Write a note on SIMD array processor.

ANS:

• SIMD array processor :


o An SIMD array processor is a computer with multiple processing units operating in parallel. Both types
of array processors manipulate vectors, but their internal organization is different.
o The processing units are synchronized to perform the same operation under the control of a
common control unit,
o thus providing a single instruction stream, multiple data stream (SIMD) organization. As
shown in the figure, SIMD contains a set of identical processing elements (PEs), each having a
local memory M.
• Working:
o Here the array processor is built into the computer, unlike an attached array processor, where the
array processor is attached externally to the host computer.
o Initially the master control unit decodes the instruction, generates the control signals and passes
them to all the processing elements (PEs)/ALUs; on receiving the control signals from the
master control unit, all the processing elements know which operation is to be performed.
o The data on which the operations are performed is fetched from main memory into the local
memories of the respective PEs. An SIMD array processor is normally used to compute vector
data. All PEs execute the same instruction simultaneously on different data.
4Q(c)
Q: A computer uses a memory unit with 256K words of 32 bits each. A binary instruction code is
stored in one word of memory. The instruction has four parts: an indirect bit, an operation code, a
register code part to specify one of 64 registers, and an address part.
1. How many bits are there in operation code, the register code part and the address part?
2. Draw the instruction word format and indicate the number of bits in each part.
3. How many bits are there in the data and address inputs of the memory?
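ANS:
The memory has 256K = 2^18 words of 32 bits each, and one instruction occupies one 32-bit word.
1. The address part needs 18 bits (to address 2^18 words), the register code part needs 6 bits (to select one
of 64 registers), and the indirect bit takes 1 bit; the operation code therefore gets the remaining
32 – 1 – 6 – 18 = 7 bits.
2. One consistent instruction word format (32 bits) is: indirect bit (1) | operation code (7) | register code (6)
| address (18).
3. The data input of the memory is 32 bits wide and the address input is 18 bits wide.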
4Q(a)OR
Q: Write a brief note on memory hierarchy.
ANS:

1. Registers/CPU:
• Registers are small, high-speed memory units located in the CPU.
• They are used to store the most frequently used data and instructions.
• Registers have the fastest access time and the smallest storage capacity, typically ranging from 16 to
64 bits.
2. Cache Memory
• Cache memory is a small, fast memory unit located close to the CPU.
• It stores frequently used data and instructions that have been recently accessed from the main
memory.
• Cache memory is designed to minimize the time it takes to access data by providing the CPU with
quick access to frequently used data.
3. Main Memory
• Main memory, also known as RAM (Random Access Memory), is the primary memory of a
computer system.
• It has a larger storage capacity than cache memory, but it is slower. Main memory is used to store
data and instructions that are currently in use by the CPU.
4. Magnetic Disk
• Magnetic disks are circular plates fabricated from metal or plastic and coated with a magnetizable
material.
• Magnetic disks work at high speed inside the computer and are frequently used.
5. Magnetic Tape
• Magnetic tape is a plastic film coated with a magnetic recording medium. It is generally used for the
backup of data.
• In the case of magnetic tape, the access time is slower, because some amount of time is needed to
position the required portion of the strip.
4Q(b)OR
Q: In certain scientific computations it is necessary to perform the arithmetic operation (Ai + Bi) * (Ci
+ Di) with a stream of numbers. Specify pipeline configuration to carry out this task. List the contents
of all registers in the pipeline for i=1 through 4.
ANS:
Pipeline configuration (three segments, with the intermediate registers R1–R7):

Stage 1: R1 ← Ai, R2 ← Bi, R3 ← Ci, R4 ← Di (input registers)
Stage 2: R5 ← R1 + R2, R6 ← R3 + R4 (two adders)
Stage 3: R7 ← R5 * R6 (multiplier)

Contents of the registers for i = 1 through 4:

Clock       Stage 1               Stage 2            Stage 3
pulse no.   R1   R2   R3   R4     R5      R6         R7
1           A1   B1   C1   D1     -       -          -
2           A2   B2   C2   D2     A1+B1   C1+D1      -
3           A3   B3   C3   D3     A2+B2   C2+D2      (A1+B1)*(C1+D1)
4           A4   B4   C4   D4     A3+B3   C3+D3      (A2+B2)*(C2+D2)
5           -    -    -    -      A4+B4   C4+D4      (A3+B3)*(C3+D3)
6           -    -    -    -      -       -          (A4+B4)*(C4+D4)
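A small Python sketch of this three-segment pipeline (the operand values are assumed); on every clock pulse all registers are loaded simultaneously, so each segment works on the values latched at the previous pulse:

def pipeline(A, B, C, D):
    n = len(A)
    R1 = R2 = R3 = R4 = R5 = R6 = R7 = None
    results = []
    for t in range(n + 2):                        # n operand sets + 2 flush pulses
        R7 = R5 * R6 if R5 is not None else None  # stage 3 uses the previous R5, R6
        R5, R6 = (R1 + R2, R3 + R4) if R1 is not None else (None, None)
        if t < n:
            R1, R2, R3, R4 = A[t], B[t], C[t], D[t]
        else:
            R1 = R2 = R3 = R4 = None
        if R7 is not None:
            results.append(R7)
    return results

A, B, C, D = [1, 2, 3, 4], [5, 6, 7, 8], [1, 1, 1, 1], [2, 2, 2, 2]
print(pipeline(A, B, C, D))                       # [18, 24, 30, 36]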
4Q(c)OR
Q: Discuss microprogrammed control organization with neat diagram.
ANS:
1. Micro-programmed Control
• The Microprogrammed Control organization is implemented by using the programming approach.
• In Microprogrammed Control, the micro-operations are performed by executing a program
consisting of micro-instructions.
• The following image shows the block diagram of a Microprogrammed Control organization.

o The Control memory address register specifies the address of the micro-instruction.
o The Control memory is assumed to be a ROM, within which all control information is permanently
stored.
o The control register holds the microinstruction fetched from the memory.
o The micro-instruction contains a control word that specifies one or more micro-operations for the data
processor.
o While the micro-operations are being executed, the next address is computed in the next address
generator circuit and then transferred into the control address register to read the next microinstruction.
o The next address generator is often referred to as a micro-program sequencer, as it determines the
address sequence that is read from control memory.
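A minimal Python sketch of this fetch loop; the control-memory contents, the micro-operation strings and the next-address field below are purely illustrative assumptions:

control_memory = {                              # assumed ROM contents
    0: {"micro_ops": ["AR<-PC"], "next": 1},
    1: {"micro_ops": ["IR<-M[AR]", "PC<-PC+1"], "next": 2},
    2: {"micro_ops": ["decode"], "next": 0},
}

CAR = 0                                         # control address register
for _ in range(6):
    microinstruction = control_memory[CAR]      # latched into the control register
    print(CAR, microinstruction["micro_ops"])   # issue the control word
    CAR = microinstruction["next"]              # next-address generator (sequencer)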
5Q(a)
Q: Perform A – B (subtract) operation for the following numbers using signed magnitude number
format. (Write necessary assumptions if required) A = + 11 and B = - 6.
Ans:
• 𝐴=+11
• 𝐵=−6
Step 1: Convert the numbers to signed magnitude format:
In signed magnitude format, the leftmost bit is the sign bit:
• 0 for positive
• 1 for negative
Let's assume we're working with 6-bit binary numbers (1 sign bit + 5 magnitude bits).
Convert 𝐴=+11 to binary:
• Magnitude of 11 in binary (5 bits) is 01011.
• Since it's positive, the signed magnitude representation is: 0 01011.
Convert B = −6 to binary:
• Magnitude of 6 in binary (5 bits) is 00110.
• Since it's negative, the signed magnitude representation is: 1 00110.
Step 2: Formulate the subtraction 𝐴−𝐵
The subtraction 𝐴−𝐵 is equivalent to 𝐴+(−𝐵).
Since B is already negative, subtracting 𝐵 means adding the positive value of 𝐵:
𝐴−𝐵=𝐴+6
Step 3: Convert 6 to signed magnitude and perform the addition
Convert 6 to signed magnitude (positive):
• Magnitude of 6 in binary (5 bits) is 00110.
• Since it's positive, the signed magnitude representation is: 0 00110.
• Now, we add 𝐴=+11 and +6 using binary addition:
• 01011
+ 00110
10001
Step 4: Interpret the result
The magnitude of the result is 10001, which is 17 in decimal. Since both numbers being added are
positive, the sign bit of the result is 0.
Therefore, the result of A − B in signed magnitude format is: 0 10001
Conclusion: The result of the operation A − B when A = +11 and B = −6 using the signed magnitude number
format is +17, which is represented as 0 10001 in signed magnitude binary.
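The signed-magnitude rule used above (flip the sign of B, then add or subtract magnitudes) can be sketched in Python; the helper name sm_sub and the 5-bit formatting are illustrative:

def sm_sub(sign_a, mag_a, sign_b, mag_b):
    # A - B in signed magnitude: flip the sign of B, then add
    sign_b ^= 1
    if sign_a == sign_b:                       # same signs: add the magnitudes
        return sign_a, mag_a + mag_b
    if mag_a >= mag_b:                         # different signs: subtract the smaller
        return sign_a, mag_a - mag_b
    return sign_b, mag_b - mag_a

sign, mag = sm_sub(0, 11, 1, 6)                # A = +11, B = -6
print(sign, format(mag, "05b"))                # 0 10001  (= +17)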
5Q(b)
Q: Explain status bit conditions with neat diagram.
ANS:
• The status register comprises the status bits.
• The bits of the status register are modified according to the operations performed in the ALU.
• The figure displays a block diagram of an 8-bit ALU with a 4-bit status register.

• If the end carry C8 is 1, then carry (C) is set to 1. If C8 is 0, then C is cleared to 0.


• If the highest order bit F7 is 1, then Sign (S) is set to 1. If F7 is 0, then S is set to 0.
• If the output of ALU is 0, then zero (Z) is set to 1, otherwise, Z is set to 0.
• If the XOR of the last two carries is equal to 1, then overflow (V) is set to 1, otherwise, V is cleared
to 0.
• With 8-bit signed operands, an overflow occurs (V = 1) when the result of the operation is greater than
+127 or less than −128.
• Z is a status bit used to indicate the result obtained after comparing A and B. Here, the XOR
operation is used to compare the two numbers (the ALU output is all 0's, and hence Z = 1, when A = B).
• Conditional Branch Instruction
• The conditional branch instruction checks the conditions for branching using the status bits. Some of
the commonly used conditional branch instructions are shown in the table.
• Conditional Instructions
Mnemonic   Condition
BZ         Branch if zero
BNZ        Branch if not zero
BC         Branch if carry
BNC        Branch if no carry
BP         Branch if plus
BM         Branch if minus
BV         Branch if overflow
BNV        Branch if no overflow
Unsigned compare conditions (A – B)
BHI        Branch if higher
BHE        Branch if higher or equal
BLO        Branch if lower
BLOE       Branch if lower or equal
BE         Branch if equal
BNE        Branch if not equal
Signed compare conditions (A – B)
BGT        Branch if greater than
BGE        Branch if greater or equal
BLT        Branch if less than
BLE        Branch if less or equal
BE         Branch if equal
BNE        Branch if not equal
• Thus, when the status condition is true, the program control is transferred to the address specified in
the instruction, otherwise, the control continues with the instructions that are in the subsequent
locations.
• The conditional instructions are also associated with the program control instructions such as jump,
call, or return.
• The zero status bit checks if the result of the ALU is zero or not. The carry bit checks if the most
significant bit position of the ALU has a carryout.
• It is also used with rotate instruction to check whether or not the bit is shifted from the end position
of a register into a carry position.
• The sign bit indicates the state of the most significant bit of the output from the ALU (S = 0 denotes
positive sign and S = 1 denotes negative sign).
• The branch-if-plus and branch-if-minus instructions are used to check whether the most significant bit
of the result (the sign) is 0 or 1.
• The overflow and underflow instructions are used in conjunction with arithmetic operations
performed on signed numbers.
• The higher and lower words are used to denote the relations between unsigned numbers, whereas the
greater and lesser words are used to denote the relations between signed numbers.
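A Python sketch of how the four status bits can be derived after an 8-bit addition (the operand values are assumed; V is computed as the XOR of the carries into and out of the sign position, as stated above):

def status_bits(a, b):
    result = (a + b) & 0xFF
    C = int(a + b > 0xFF)                       # carry out of bit 7 (C8)
    S = (result >> 7) & 1                       # sign bit = F7
    Z = int(result == 0)                        # output is all zeros
    c_into_sign = ((a & 0x7F) + (b & 0x7F)) >> 7
    V = c_into_sign ^ C                         # XOR of the last two carries
    return result, C, S, Z, V

print(status_bits(0x70, 0x20))                  # (144, 0, 1, 0, 1): 112 + 32 overflows +127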
5Q(c)
Q: Discuss cache coherence problem in detail.
ANS:Cache Coherence
• In a multiprocessor system, data inconsistency may occur among adjacent levels or within the same
level of the memory hierarchy.
• In a shared memory multiprocessor with a separate cache memory for each processor, it is possible to
have many copies of any one instruction operand: one copy in the main memory and one in each
cache memory.
• When one copy of an operand is changed, the other copies of the operand must be changed also.
• Example : Cache and the main memory may have inconsistent copies of the same object.

Suppose there are three processors, each having a cache, and consider the following scenario:
• Processor 1 reads X: it obtains 24 from memory and caches it.
• Processor 2 reads X: it obtains 24 from memory and caches it.
• Processor 1 now writes X = 64, so its locally cached copy is updated. If processor 3 now reads X,
what value should it get?
• Memory and processor 2 think it is 24, while processor 1 thinks it is 64.
There are various Cache Coherence Protocols in multiprocessor system. These are :-
1. MSI protocol (Modified, Shared, Invalid)
2. MOSI protocol (Modified, Owned, Shared, Invalid)
3. MESI protocol (Modified, Exclusive, Shared, Invalid)
4. MOESI protocol (Modified, Owned, Exclusive, Shared, Invalid)
These important terms are discussed as follows:
• Modified – It means that the value in the cache is dirty, that is the value in current cache is different
from the main memory.
• Exclusive – It means that the value present in the cache is same as that present in the main memory,
that is the value is clean.
• Shared – It means that the cache value holds the most recent data copy and that is what shared
among all the cache and main memory as well.
• Owned – It means that the current cache holds the block and is now the owner of that block, that is,
it has all rights over that particular block.
• Invalid – This states that the current cache block itself is invalid and is required to be fetched from
other cache or main memory.
5Q(a)OR
Q: Write the difference(s) between arithmetic shift left and logical shift left instruction. Support your
answer with proper illustration.
ANS:
Logical shift left:
• A logical shift transfers 0 through the serial input. We use the symbol '<<' for the logical left shift
and '>>' for the logical right shift.
• Following are the two ways to perform the logical shift: 1. logical left shift, 2. logical right shift.
• Logical left shift: each bit moves one position to the left; the empty least significant bit (LSB) is
filled with zero (the serial input) and the most significant bit (MSB) is discarded.
• Logical right shift: each bit moves one position to the right; the LSB is discarded and the empty
MSB is filled with zero.

Arithmetic shift left:
• The arithmetic shift micro-operation moves a signed binary number either to the left or to the right
while keeping its interpretation as a signed number.
• Following are the two ways to perform the arithmetic shift: 1. arithmetic left shift, 2. arithmetic right shift.
• Arithmetic left shift: each bit is moved to the left one position; the empty LSB is filled with zero and
the MSB is discarded. The bit movement is the same as in the logical left shift, but an overflow is said
to occur if the sign bit changes.
• Arithmetic right shift: each bit is moved to the right one position; the LSB is discarded and the empty
MSB is filled with the value of the previous MSB (the sign bit is replicated).
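A short Python sketch contrasting the two left shifts (the bit patterns and the 8-bit width are assumed); the bit movement is identical, but the arithmetic version also reports the sign change (overflow):

def logical_shift_left(x, bits=8):
    # 0 enters at the LSB, the old MSB is discarded
    return (x << 1) & ((1 << bits) - 1)

def arithmetic_shift_left(x, bits=8):
    # same bit movement as the logical left shift, but flag an overflow
    # when the sign bit (MSB) changes
    result = (x << 1) & ((1 << bits) - 1)
    overflow = ((x ^ result) >> (bits - 1)) & 1
    return result, overflow

x = 0b01010110                                  # +86 in 8 bits
print(format(logical_shift_left(x), "08b"))     # 10101100
print(arithmetic_shift_left(x))                 # (172, 1): same bits, overflow flagged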
5Q(b)OR
Q: State the differences between RISC and CISC.
ANS:

     RISC                                                  CISC
1.   Focus on software                                     Focus on hardware
2.   Uses only a hardwired control unit                    Uses both hardwired and microprogrammed control units
3.   Transistors are used for more registers               Transistors are used for storing complex instructions
4.   Fixed-size instructions                               Variable-size instructions
5.   Can perform only register-to-register arithmetic      Can perform REG to REG, REG to MEM or MEM to MEM operations
6.   Requires more registers                               Requires fewer registers
7.   Code size is large                                    Code size is small
8.   An instruction executes in a single clock cycle       An instruction takes more than one clock cycle
9.   An instruction fits in one word                       Instructions may be larger than one word
10.  Simple and limited addressing modes                   Complex and more addressing modes
11.  RISC is Reduced Instruction Set Computer              CISC is Complex Instruction Set Computer
12.  Fewer instructions than CISC                          More instructions than RISC
13.  Consumes low power                                    Consumes more/high power
14.  Highly pipelined                                      Less pipelined
15.  Requires more RAM                                     Requires less RAM
16.  Fewer addressing modes                                More addressing modes

5Q(c)OR
Q: Explain any two types of mapping procedures when considering the organization of cache memory.
ANS:
• In cache memory organization, mapping procedures are essential for determining how data from
main memory is placed into the cache.
• There are several mapping techniques, but two common types are Direct Mapping and Associative
Mapping.
1. Direct Mapping –
▪ Direct Mapping is the simplest cache mapping technique. In this method, each block
of main memory maps to exactly one cache line.
OR
▪ In direct mapping, the cache consists of normal high-speed random-access memory.
Each location in the cache holds the data, at a specific address in the cache.
▪ This address is given by the lower significant bits of the main memory address.
▪ This enables the block to be selected directly from the lower significant bit of the
memory address.
▪ The remaining higher significant bits of the address are stored in the cache with the
data to complete the identification of the cached data.
▪ As shown in the above figure, the address from the processor is divided into two fields: a
tag and an index.
▪ The tag consists of the higher significant bits of the address, and these bits are stored
with the data in the cache. The index consists of the lower significant bits of the address.
Whenever memory is referenced, the following sequence of events occurs:
▪ The index is first used to access a word in the cache.
▪ The tag stored in the accessed word is read.
▪ This tag is then compared with the tag in the address.
▪ If the two tags are the same, this indicates a cache hit and the required data is read from the cache
word.
▪ If the two tags are not the same, this indicates a cache miss. Then the reference is made to
main memory to find the data.
▪ For a memory read operation, the word is then transferred into the cache. It is possible
to pass the information to the cache and the processor simultaneously.
Advantages:
• Simple to implement.
• Fast lookup time because each memory block has a predetermined cache location.
Disadvantages:
• Can lead to a high miss rate if multiple memory blocks map to the same cache line (causing
frequent replacements).
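A minimal sketch of a direct-mapped lookup in Python (the number of lines, the one-word block size and the sample addresses are assumptions for illustration); it also shows the conflict misses mentioned above when two addresses share an index:

LINES = 8                                   # cache lines; index = address mod 8
cache = [{"valid": False, "tag": None, "data": None} for _ in range(LINES)]

def access(address, memory):
    index = address % LINES                 # lower significant bits of the address
    tag = address // LINES                  # higher significant bits of the address
    line = cache[index]
    if line["valid"] and line["tag"] == tag:
        return line["data"], "hit"
    # miss: fetch from main memory, replacing whatever currently uses this index
    line.update(valid=True, tag=tag, data=memory[address])
    return line["data"], "miss"

memory = {17: "x", 25: "y"}                 # 17 and 25 both map to index 1
print(access(17, memory), access(25, memory), access(17, memory))
# ('x', 'miss') ('y', 'miss') ('x', 'miss')  -- the two blocks keep evicting each other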

2. Set Associative Mapping -


▪ Set associative mapping is a more flexible, but more complex, method: a block of main
memory can be loaded into any line of one particular set of the cache.
OR
▪ In set associative mapping, the cache is divided into sets of blocks. The number of
blocks in a set is known as the associativity or set size. Each block in each set has a stored
tag; this tag, together with the index, completely identifies the block.
▪ Thus, set associative mapping allows a limited number of blocks with the same index but
different tags to be held in the cache at the same time.
▪ An example of a four-way set associative cache, having four blocks in each set, is shown
in the following figure.
▪ In this type of cache, the following steps are used to access the data from a cache:
▪ The index of the address from the processor is used to access the set.
▪ Then the comparators are used to compare all tags of the selected set with the
incoming tag.
▪ If a match is found, the corresponding location is accessed.
▪ If no match is found, an access is made to the main memory.
▪ The tag address bits are always chosen to be the most significant bits of the full
address, the block address bits are the next significant bits and the word/byte address
bits are the least significant bits.
▪ The number of comparators required in the set associative cache is given by the
number of blocks in a set.
▪ The set can be selected quickly and all the blocks of the set can be read out
simultaneously with the tags before waiting for the tag comparisons to be made. After
a tag has been identified, the corresponding block can be selected.
Advantages:
• Reduces the likelihood of cache misses, since a memory block can be placed in any line of
its set rather than in one fixed line.
• Efficient use of cache space.
Disadvantages:
• More complex and slower than direct mapping, because all the tags of the selected set must
be compared on each access.
• Requires more hardware (one comparator per block in a set), leading to higher cost.
