COA Notes

Summarized notes

UNDERSTAND COMPUTER ORGANIZATION AND ARCHITECTURE

COMPUTER SCIENCE

LEVEL 6

The term "computer architecture" refers to a set of rules stating how computer software and hardware are combined and how they interact to make a computer functional. The computer architecture also specifies which technologies the computer is able to handle.

Computer architecture is a specification, which describes how software and hardware interact together to
produce a functioning platform.

When a person thinks of the word "architecture", the mind will probably turn to the construction of buildings or houses; with the same principle in mind, computer architecture concerns how a computer system is constructed internally and externally.

There are three main categories in computer architecture:

 System design:
 The system design comprises the hardware parts, which include multiprocessors, memory controllers, the CPU, data processors, and direct memory access. The system design can be considered to be the actual computer system.
 Instruction set architecture:
 This revolves around the CPU. It includes the CPU's capabilities and functions, as well as its data formats, programming language support, processor register types, and the instructions used by computer programmers.
 The CPU is the part of a computer which makes a program run, whether it is the operating system or an application like Photoshop.
 Microarchitecture:
 The microarchitecture in a system defines the storage elements/data paths and how they are implemented into the instruction set architecture; the microarchitecture is also responsible for data processing.
All of these come together in a certain order to make the system functional.
What is the von Neumann Architecture?
In 1945, the mathematician John von Neumann showed that a computer could have a fixed, simple structure and still be able to execute any kind of computation without hardware modification, provided that the computer is properly programmed with instructions it is able to execute.

Von Neumann's primary advancement was "conditional control transfer", which allowed a program sequence to be interrupted and then reinitiated at any point. This advancement also allowed data to be stored with instructions in the same memory unit.

This was beneficial because instructions could then be arithmetically modified in the same way as data.

The von Neumann architecture describes a design model for a stored program digital computer that incorporates
only one single processing unit and one single separate storage structure, which will hold both instructions and
data.

The von Neumann architecture refers to one that keeps the data as well as the programmed instructions in read-
write RAM (Random Access Memory).

Characteristics of von Neumann Architecture


As mentioned above, the von Neumann Architecture is based on the fact that the program data and the
instruction data are stored in the same memory unit. This can also be referred to as the “stored program
concept”.

This design is still used in computers produced nowadays:


Central Processing Unit (CPU):
 The CPU is an electronic circuit, which executes instructions of the computer program.
 The CPU can also be referred to as a microprocessor or a processor.
Within the CPU, there is the ALU, CU, and the registers, which are described in more detail below:
Control Unit:
 Controls the operation of the ALU, memory, and input/output, instructing them how to respond to the instructions it has just read and interpreted from the memory unit. The control unit directs the operations of the CPU by carrying out the following jobs:
 Coordinating and controlling activities of the CPU
 Managing data flow between other components and the CPU
 Acknowledging and accepting the next instruction
 Decoding instructions
 Storing the resulting data back into a memory unit
Arithmetic and Logical Unit (ALU):
 Allows logical and arithmetic operations to be carried out such as addition and subtraction.
 (Logical operators are: AND, OR, NOT, XOR)
Memory Unit:
 Consists of RAM, which is partitioned out into addresses and their contents, both in binary form.
 RAM (Random Access Memory) is a fast type of memory, unlike hard drives; it is also directly accessible by the CPU.
 Because RAM is directly accessible, the CPU can function a lot quicker and hence more efficiently.
Registers:
 A small block in the CPU consisting of high-speed storage memory cells that store data before it is processed; all logical, arithmetic, and shift operations occur here.
 The register set consists of five components:
 Program Counter (PC): Holds the address of the next instruction to be executed
 Accumulator (AC): Storage area where logic and arithmetic results are stored
 Memory Address Register (MAR): Holds the address of a location of the data that is to be read from
or written to
 Memory Data Register (MDR): Temporary storage location that stores data that has been read, or
data that still needs to be written
 Current Instruction Register (CIR): Area where the current instruction is being carried out. The
operation is divided into operand and opcode.
 Operand: Contains data or the address of the data (where the operation will be performed)
 Opcode: Specifies type of instruction to be executed
Buses:
 These are a set of parallel wires which connect two or more components inside the CPU. Within the CPU there are three types of buses, collectively referred to as the system bus. The types of buses are: the data bus, the control bus, and the address bus.
 Data bus: This is a bi-directional bus, which means bits can be carried in both directions. This bus is used to transport data and instructions between the processor, the memory unit, and the input/output.
 Address bus: Transmits the memory address specifying where the relevant data needs to be sent or
retrieved from. (The address bus does not carry the data, only the address)
 Control bus: This is also a bi-directional bus used to transmit “control signals”/commands from the CPU
(and status signals from other components) in order to control and coordinate all the activities within the
computer.
Input/Outputs:
 Information passed from the user/information received by the user.
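The interaction of these registers during execution can be sketched as a toy fetch-decode-execute loop. This is a hypothetical machine invented for illustration (the opcodes LOAD, ADD, STORE, and HALT are not any real instruction set):

```python
# A minimal sketch of the fetch-decode-execute cycle, using the registers
# described above: PC, MAR, MDR, CIR, and AC. Memory holds (opcode, operand)
# pairs; the opcodes here are hypothetical.

def run(memory, data):
    pc, ac = 0, 0                      # Program Counter, Accumulator
    while True:
        mar = pc                       # MAR <- address of next instruction
        mdr = memory[mar]              # MDR <- instruction read over the data bus
        cir = mdr                      # CIR <- current instruction
        pc += 1                        # PC now points at the following instruction
        opcode, operand = cir          # decode into opcode and operand
        if opcode == "LOAD":           # AC <- data[operand]
            ac = data[operand]
        elif opcode == "ADD":          # AC <- AC + data[operand]
            ac += data[operand]
        elif opcode == "STORE":        # data[operand] <- AC
            data[operand] = ac
        elif opcode == "HALT":
            return data

program = [("LOAD", 0), ("ADD", 1), ("STORE", 2), ("HALT", 0)]
print(run(program, {0: 2, 1: 3, 2: 0}))   # data[2] ends up holding 2 + 3 = 5
```

Note how every instruction, whatever its opcode, passes through the same MAR/MDR path: this is the "stored program concept" in miniature, with code and data sharing one memory interface.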

Advantages and Disadvantages of von Neumann Architecture

| Advantages | Disadvantages |
|---|---|
| The control unit retrieves instructions and data in the same way from one memory unit. This simplifies the development and design of the control unit. | Parallel execution of programs is not allowed, due to serial instruction processing. |
| The above advantage also means that data from memory and from devices are accessed in the same way, increasing efficiency. | Only one bus can be accessed at a time. This results in the CPU being idle (a processor is faster than the data bus). This is considered to be the von Neumann bottleneck. |
| Programmers have control of memory organisation. | Although instructions and data being stored in the same place can be viewed as an advantage as a whole, it can result in instructions being overwritten, and therefore data loss, due to an error in a program. |
| | If a defective program fails to release memory when it no longer requires it (or finishes with it), the computer may crash as a result of insufficient memory being available. |

Von Neumann bottleneck


Over the years, processors have increased in processing speed while memory has improved in capacity rather than speed; this has resulted in the term "von Neumann bottleneck". The CPU spends a great amount of time idle (doing nothing) while waiting for data to be fetched from memory. No matter how fast the processor is, throughput ultimately depends on the rate of transfer; in fact, a faster processor just means a greater proportion of idle time.
Approaches to overcome this bottleneck include:

 Caching:
 Storing data in a small amount of fast memory that is more easily accessible than main memory. The type of data stored here is the data which is frequently used.
 Prefetching:
 The transport of some data into cache before it is requested. This will speed access in the event of a
request of the data.
 Multithreading:
 Managing many requests at the same time in separate threads.
 New types of RAM:
 Such as DDR SDRAM (Double Data Rate Synchronous Dynamic Random Access Memory).
 This type of RAM activates output on both the falling edge and the rising edge of the system clock,
instead of just the rising edge.
 This can potentially double the data transfer rate.
 RAMBUS:
 A subsystem connecting RAM controller, RAM, and the bus (path) connecting RAM to the
microprocessor and devices within the computer that utilise it.
 Processing in Memory (PIM):
 PIMs integrate a processor and memory in a single microchip.
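As an illustration of why caching mitigates the bottleneck, the following sketch counts how long a CPU would stall on a repeated access pattern with and without a cache. The cycle costs are assumed, made-up numbers, not real hardware timings:

```python
# Toy model: repeated accesses hit a fast cache instead of stalling on
# main memory. Cycle costs below are invented for illustration.

MAIN_MEMORY_CYCLES = 100   # assumed cost of a main-memory access
CACHE_CYCLES = 1           # assumed cost of a cache hit

def access_cost(addresses, use_cache):
    cache, cycles = set(), 0
    for addr in addresses:
        if use_cache and addr in cache:
            cycles += CACHE_CYCLES          # hit: no stall
        else:
            cycles += MAIN_MEMORY_CYCLES    # miss: the CPU waits on memory
            cache.add(addr)                 # frequently used data is kept cached
    return cycles

trace = [0, 1, 0, 1, 0, 1, 0, 1]            # a loop touching the same two addresses
print(access_cost(trace, use_cache=False))  # 800 cycles, CPU mostly idle
print(access_cost(trace, use_cache=True))   # 206 cycles: 2 misses + 6 hits
```

The same intuition extends to prefetching: moving data into the cache before it is requested converts would-be misses into hits.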

What is Harvard Architecture?


Harvard architecture is named after the "Harvard Mark I" relay-based computer, an IBM-built machine at Harvard University.

The computer stored instructions on punched tape (24 bits wide), while data was stored in electromechanical counters. The CPU of these early computer systems contained the data storage entirely, and it provided no access to the instruction storage as data.

Harvard architecture is a type of architecture, which stores the data and instructions separately, therefore
splitting the memory unit.
The CPU in a Harvard architecture system is enabled to fetch data and instructions simultaneously, due to the
architecture having separate buses for data transfers and instruction fetches.

Characteristics of Harvard Architecture


Both types of architectures contain the same components, however, the main difference is that in a Harvard
architecture the instruction fetches and data transfers can be performed at the same time (simultaneously) (as
the system has two buses, one for data transfers and one for instruction fetches).
Advantages and Disadvantages of Harvard Architecture

| Advantages | Disadvantages |
|---|---|
| Due to instructions and data being transferred on different buses, there is a smaller chance of data corruption. | The memory dedicated to each (data and instructions) must be balanced by the manufacturer, because if there is free data memory, it cannot be used for instructions, and vice versa. |
| Instruction fetches and data transfers can be performed at the same time. | This advantage (to the left) results in a more complex architecture, as it requires two buses. This means it takes more time to manufacture and it makes these systems more expensive. |
| Harvard architecture offers high performance, as it allows a simultaneous flow of data and instructions. These are kept in separate memories and travel via separate buses. | Despite the high performance, this architecture is very complex, especially for main board manufacturers to implement. |
| There is a greater memory bandwidth that is more predictable, due to the architecture having separate memory for instructions and data. | To achieve the advantage on the left, Harvard architecture requires a control unit for two buses, which increases complexity and makes development more difficult, all of which increases the price of the system. |

Von Neumann Architecture vs. Harvard Architecture:

| Von Neumann Architecture | Harvard Architecture |
|---|---|
| Based on the stored-program computer concept | Based on the Harvard Mark I relay-based computer model |
| Uses the same physical memory address space for instructions and data | Uses separate memory address spaces for instructions and data |
| The processor requires two clock cycles to execute an instruction (one to fetch the instruction, one to fetch the data) | The processor requires only one cycle to complete an instruction |
| Consists of a simpler control unit design, which means less complex development is required and the system is less costly | The control unit works with two buses, which results in a more complicated system; this adds to the development cost, resulting in a more expensive system |
| Instruction fetches and data transfers cannot be performed at the same time | Instruction fetches and data transfers can be performed at the same time |
| Used in laptops, personal computers, and workstations | Used in signal processing and microcontrollers |
Modified Harvard Architecture

A pure Harvard architecture suffers from the disadvantage that a mechanism must be provided to load the program to be executed into instruction memory, leaving any data to be operated upon in the data memory.

However, modern systems often use a read-only technology for the instruction memory and a read/write technology for the data memory.

This allows a system to allow the execution of a pre-loaded program as soon as power is applied.

However, the data will be in an unknown state, therefore it cannot provide any pre-defined values to the
program.

The solution to this is to provide machine language instructions so that the contents of the instruction memory
can be read as if they were data, as well as providing a hardware pathway.

Most adoptions of Harvard architecture nowadays are a modified form. This loosens the strict separation between data and code, whilst still maintaining the high-performance concurrent data and instruction access of the original Harvard architecture.

The modified Harvard architecture is a variation of the original Harvard architecture. However, the difference
between the two of them is, the modified architecture allows the contents of the instruction memory to be
accessed as data.

The three main modifications applied to a Modified Harvard Architecture are:

 Split-cache architecture:
 Very similar to the von Neumann architecture, this modification builds a memory hierarchy with CPU
caches for instructions and data at lower levels of hierarchy.
 Instruction-memory-as-data architecture:
 This modification allows the content of the instruction memory to be accessed as data. This can be carried out safely because data still cannot directly be executed as instructions.
 (Though there is debate as to whether or not this can actually be named a "Modified" Harvard architecture.)
 Data-memory-as-instruction architecture:
 Executing instructions fetched from any memory segment, unlike Harvard architecture, which can only
execute instructions, fetched from the program memory segment.
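The instruction-memory-as-data idea can be sketched as follows. The `PMLOAD` opcode is a hypothetical operation invented for this example, standing in for whatever mechanism a real modified Harvard machine provides:

```python
# Sketch of "instruction-memory-as-data": instruction and data memory stay
# separate (Harvard style), but a special PMLOAD operation lets a program
# read the instruction memory as data.

instr_mem = [("PMLOAD", 1), ("HALT", 0)]   # read-only program memory
data_mem = [0] * 4                          # read/write data memory

pc = 0
while True:
    opcode, operand = instr_mem[pc]
    pc += 1
    if opcode == "PMLOAD":
        # A pure Harvard machine forbids this path; a modified Harvard machine
        # provides it so constants stored alongside code can be read into
        # data memory.
        data_mem[0] = instr_mem[operand]
    elif opcode == "HALT":
        break

print(data_mem[0])   # ('HALT', 0): instruction memory read back as data
```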
Summary and Facts
The von Neumann architecture was a large advancement over the program-controlled computers used in the 1940s. Such computers were programmed by inserting patch leads and setting switches to route data and control signals between different functional units.

Whereas nowadays, the majority of computer systems share the same memory for both data and program
instructions.

The CPU in a Harvard architecture system is enabled to fetch data and instructions simultaneously, due to the
architecture having separate buses for data transfers and instruction fetches.

What is the von Neumann Architecture?


The von Neumann architecture refers to one that keeps the data as well as the programmed instructions in read-
write RAM (Random Access Memory).

Characteristics of von Neumann Architecture:


 Central Processing Unit (CPU)
 Control Unit
 Arithmetic and Logical Unit (ALU)
 Memory Unit
 Registers:
 Program Counter (PC)
 Accumulator (AC)
 Memory Address Register (MAR)
 Memory Data Register (MDR)
 Current Instruction Register (CIR)
 Buses:
 Data bus
 Address bus
 Control bus
 Input/Outputs
Advantages:
 Less expensive/complex compared to Harvard architecture
 Efficient
Disadvantages:
 Von Neumann Bottleneck
 Greater chance of data loss
What is Harvard Architecture?
 The Harvard architecture is a computer system that contains two separate areas for data and
commands/instructions.
 Harvard architecture is a type of architecture, which stores the data and instructions separately, therefore
splitting the memory unit.
Advantages:
 Less chance of data corruption
 High performance
 Greater memory bandwidth
Disadvantages:
 Complex
 Expensive
Modified Harvard Architecture:
The modified Harvard architecture is a variation of the original Harvard architecture. However, the difference
between the two of them is, the modified architecture allows the contents of the instruction memory to be
accessed as data.

The three main modifications applied to a Modified Harvard Architecture are:

 Split-cache architecture
 Instruction-memory-as-data architecture
 Data-memory-as-instruction architecture
RISC and CISC Processors
The general definition of a processor or microprocessor is: a small chip placed inside a computer as well as other electronic devices.

In very simple terms, the main job of a processor is to receive input and then provide the appropriate output
(depending on the input).

Modern-day processors have become so advanced that they can handle trillions of calculations per second,
increasing efficiency and performance.

Both RISC and CISC architectures were developed largely as a breakthrough to cover the semantic gap: the gap present between machine language and high-level languages.

Therefore the main objective of creating these two architectures is to improve the efficiency of software
development, and by doing so, there have been several programming languages that have been developed as a
result, such as Ada, C++, C, and Java etc.

These programming languages provide a high level of power and abstraction.

Therefore to allow for efficient compilation of these high-level language programs, RISC and CISC are used.

What are RISC processors?


Reduced Instruction Set Computer (RISC), is a type of computer architecture that operates on a small, highly
optimised set of instructions, instead of a more specialised set of instructions, which can be found in other types
of architectures. This architecture means that the computer microprocessor will have fewer cycles per
instruction.

The phrase "reduced instruction set" may be incorrectly interpreted as "reduced number of instructions". This is not the case; the term actually means that the amount of work done by each instruction is decreased in terms of the number of cycles.
Alan Turing's 1946 design for the Automatic Computing Engine had many characteristics that resembled RISC architecture, and many traits of RISC architectures were seen in 1960s machines that embodied the load/store approach.

That being said, the term RISC was first used by David Patterson of the Berkeley RISC project, who is considered a pioneer in RISC processor design. Patterson is currently the Vice-Chair of the Board of Directors of the RISC-V Foundation.

A RISC chip doesn’t require many transistors, which makes them less costly to design and produce. One of
RISCs main characteristics is that the instruction set contains relatively simple and basic instructions from which
more complex instructions can be produced.

RISC processors/architectures are used across a wide range of platforms nowadays, ranging from tablet
computers to smartphones, as well as supercomputers (i.e. Summit top500 list in 2018).

The characteristics of RISC processors


Some of the terminology which can be handy to understand:

 LOAD: Moves data from the memory bank to a register.


 PROD: Finds product of two operands located within the register.
 STORE: Moves data from a register to the memory banks.
Addressing modes: An addressing mode is an aspect of instruction set architecture in most CPU designs.
 The RISC architecture utilises simple instructions.
 RISC synthesises complex data types from the few simple data types it supports.
 RISC makes use of simple addressing modes and fixed-length instructions for pipelining.
 RISC allows any register to be used in any context.
 Most RISC instructions take only one cycle of execution time.
 The workload of the computer is reduced because only the "LOAD" and "STORE" instructions access memory.
 RISC reduces interactions with memory; it does this by having a large number of registers.
 Pipelining in RISC is carried out relatively simply. This is due to the execution of instructions being done in a uniform interval of time (i.e. one clock cycle).
 More RAM is required to store assembly-level instructions.
 Reduced instructions need a smaller number of transistors in RISC.
 RISC typically utilises a Harvard-style memory architecture.
 To execute the conversion operation, a compiler is used. This allows high-level language statements to be converted into machine code.
 RISC processors utilise pipelining.
 Pipelining is a process that involves improving the performance of the CPU. The process is completed
by fetching, decoding, and executing cycles of three separate instructions at the same time.
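The benefit of pipelining described above can be sketched with a back-of-the-envelope cycle count (an idealised model: no stalls or hazards):

```python
# With a 3-stage fetch/decode/execute pipeline, a new instruction can start
# every cycle once the pipeline is full.

def sequential_cycles(stages, instructions):
    return stages * instructions            # each instruction runs start to finish

def pipelined_cycles(stages, instructions):
    return stages + (instructions - 1)      # fill the pipe, then one result per cycle

print(sequential_cycles(3, 10))   # 30 cycles without pipelining
print(pipelined_cycles(3, 10))    # 12 cycles with a full 3-stage pipeline
```

This is exactly why fixed-length, single-cycle instructions matter: uniform timing keeps the pipeline stages in lock-step.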
A RISC architecture system contains a small core logic processor, which enables engineers to increase the
register set and increase internal parallelism by using the following techniques:

Thread Level Parallelism:


Thread level parallelism increases the number of parallel threads executed by the CPU.

Thread level parallelism can also be identified as “Task Parallelism”, which is a form of parallel computing for
multiple computer processors, using a technique for distributing the execution of processes and threads across
different parallel processor nodes. This type of parallelism is mostly used in multitasking operating systems, as
well as applications that depend on processes and threads.
Instruction Level Parallelism:
Instruction-level parallelism increases the speed at which the CPU executes instructions. This type of parallelism measures how many of the instructions in a computer program can be executed simultaneously.

However, instruction-level parallelism is not to be confused with concurrency. Instruction-level parallelism is about the parallel execution of a sequence of instructions belonging to a specific thread of execution of a process.

Concurrency, by contrast, is about threads of one or different processes being assigned to CPU cores in strict alternation or in true parallelism (provided that there are enough CPU cores).
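The thread-based model described above can be sketched with Python's standard library. Note that CPython's global interpreter lock limits true CPU parallelism for pure-Python work, so this shows the programming model rather than a hardware speed-up:

```python
# Thread-level (task) parallelism: several independent tasks are managed
# at the same time in separate threads.
from concurrent.futures import ThreadPoolExecutor

def task(n):
    return n * n            # a stand-in for independent per-thread work

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(task, range(8)))

print(results)   # [0, 1, 4, 9, 16, 25, 36, 49]
```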

Advantages of RISC processors


 Due to the architecture having a small, simple set of instructions, high-level language compilers can produce more efficient code.
 The simplicity of the RISC architecture gives developers the freedom to utilise the space on the microprocessor.
 RISC processors make use of the registers to pass arguments and to hold local variables.
 RISC instructions use only a few parameters, and RISC processors do not use variable-length instructions; the fixed-length instructions are easy to pipeline.
 Using RISC, allows the execution time to be minimised, whilst increasing the speed of the overall
operation, maximising efficiency.
 As mentioned above, RISC is relatively simple, this is due to having very few instructional formats, and a
small number of instructions and a few addressing modes required.
Disadvantages of RISC processors
 The performance of RISC processors depends on the compiler or the programmer, as subsequent instructions might rely on the previous instruction finishing its execution.
 RISC processors require very fast memory systems to feed various instructions, thus a large memory
cache is required.
What are CISC processors?
CISC, which stands for "Complex Instruction Set Computer", is a computer architecture where single instructions can execute several low-level operations: for instance, a load from memory, an arithmetic operation, and a memory store. CISC processors are also capable of executing multi-step operations or addressing modes with single instructions.
CISC, as with RISC, is a type of microprocessor that contains specialised simple/complex instructions.

Until recent times, all major manufacturers of microprocessors used CISC-based designs to develop their products. CISC was introduced around the early 1970s, when it was used for simple electronic platforms, such as stereos, calculators, and video games, rather than personal computers; the CISC technology was more suitable for these types of applications.
However, CISC microprocessors eventually found their way into personal computers, to meet the increasing needs of PC users. CISC manufacturers started to shift their efforts from general-purpose designs to a high-performance computing orientation.

Advantageously, CISC processors helped in simplifying the code and making it shorter in order to reduce the
memory requirements.

In CISC processors, every single instruction bundles several low-level operations. This makes CISC programs short, but the instructions themselves complex.

Some examples of CISC processors are the IBM 370/168 and the Intel 80486. Non-trivial systems such as government databases have also been built on CISC processors.
The characteristics of CISC processors
As mentioned above, the main objective of CISC processors is to minimise the program size by decreasing the
number of instructions in a program.

However, to do this, CISC has to embed some of the low-level instructions in a single complex instruction. This means that when such an instruction is decoded, it generates several microinstructions to execute.

The components of the CISC architecture are described below:


Microprogram Control Unit:
The microprogram control unit uses a series of microinstructions of the microprogram stored in its "control memory", and generates control signals.

Control Unit:
The control unit accesses the control signals produced by the microprogram control unit, and operates the functioning of the processor's hardware.

Instructions and data path:


The instructions and data path fetches the opcode and operands of the instructions from memory.

Cache and main memory:


This is the location where the program instructions and operands are stored.
Instructions in CISC are complex and occupy more than a single word in memory. As we saw with RISC, CISC also uses LOAD/STORE to access memory operands; however, CISC additionally has a "MOVE" instruction, which is used to gain access to memory operands.
One advantageous characteristic of the "MOVE" operation is that it has a wider scope. This allows CISC instructions to directly access memory operands.

CISC instruction sets also have additional addressing modes:


 Auto-increment mode:
 The address of an operand is the content of the register. It is automatically incremented after accessing
the registers content, in order to point to the memory location of the next operand.
 Auto-decrement mode:
 Like auto-increment, the address of an operand is the content of the register. However, with auto-decrement, the content of the register is first decremented, and then the content of the register is used as the address of the operand.
 Relative mode:
 The program counter is used instead of a general-purpose register. This allows a large range of memory to be referenced.
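The auto-increment and auto-decrement modes above can be sketched by simulating a register and a small memory. This is a toy model, not any real instruction set; the register is held in a one-element list so the functions can update it:

```python
# Simulating auto-increment and auto-decrement addressing modes.

memory = [10, 20, 30, 40]

def auto_increment(reg):
    # The operand's address is the register's content; the register is then
    # incremented to point at the memory location of the next operand.
    value = memory[reg[0]]
    reg[0] += 1
    return value

def auto_decrement(reg):
    # The register's content is decremented first, then used as the
    # operand's address.
    reg[0] -= 1
    return memory[reg[0]]

r = [0]
print(auto_increment(r), auto_increment(r))   # 10 20 (r now points at index 2)
r = [4]
print(auto_decrement(r), auto_decrement(r))   # 40 30 (r now points at index 2)
```

The two modes together are what makes walking forwards or backwards through an array, or pushing and popping a stack, a single instruction on CISC machines.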
Advantages of CISC processors
 Memory requirement is minimised due to the small code size.
 The execution of a single instruction will also execute and complete several low level tasks.
 Memory access is more flexible due to the complex addressing mode.
 Memory locations can be directly accessed by CISC instructions.
 Microprogramming is easy to implement and less expensive than wiring a control unit.
 If new commands are to be added to the chip, the structure of the instruction set does not need to be
changed. This is because the CISC architecture uses general purpose hardware to carry out commands.
 The compiler doesn’t have to be complicated, as the microprogram instruction sets can be written to
match the high-level language constructs.
Disadvantages of CISC processors
 Although the code size is minimised, the code requires several clock cycles to execute a single instruction, therefore decreasing the efficiency of the system.
 The implementation of pipelining in CISC is regarded to be complicated.
 In order to simplify the software, the hardware structure needs to be more complex.
 CISC was designed to minimise the memory requirement when memory was smaller and more expensive.
However nowadays memory is inexpensive and the majority of new computer systems have a large
amount of memory, compared to the 1970’s when CISC first emerged.
RISC vs. CISC
| RISC | CISC |
|---|---|
| RISC focuses on software | CISC focuses on hardware |
| Single-clock, reduced instructions only, which means the instructions are simple compared to CISC | Multi-clock, complex instructions |
| Operates register to register; "LOAD" and "STORE" are independent instructions | Operates memory to memory; "LOAD" and "STORE" are incorporated in instructions, and it also uses "MOVE" |
| RISC has large code sizes and operates at low cycles per instruction | CISC has small code sizes and high cycles per instruction |
| Spends more transistors on memory registers | The transistors in a CISC processor are used to store complex instructions |
| Less memory access | More memory access |
| Implementing pipelining on RISC is easier | CISC instructions are of variable length, have multiple operands, and use complex addressing modes and complex instructions, which increases complexity. As defined above, a CISC instruction can occupy more than a memory word, taking several cycles to execute an operand fetch; implementing pipelining on CISC is therefore complicated |
Although the above showcases differences between the two architectures, the main difference between RISC
and CISC is the CPU time taken to execute a given program.

CPU execution time is calculated using this formula:

CPU time = (number of instructions) × (average cycles per instruction) × (seconds per cycle)

RISC architectures will shorten the execution time by reducing the average clock cycle per one instruction.

However, CISC architectures try to reduce execution time by reducing the number of instructions per program.
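Plugging made-up but representative numbers into this formula shows the trade-off. The instruction counts and cycles-per-instruction values below are invented for illustration, not measurements:

```python
# Worked example of: CPU time = instructions x cycles-per-instruction
#                               x seconds-per-cycle

def cpu_time(instructions, cycles_per_instruction, seconds_per_cycle):
    return instructions * cycles_per_instruction * seconds_per_cycle

clock = 1e-9                                  # assumed 1 GHz clock: 1 ns per cycle
risc = cpu_time(1_200_000, 1.2, clock)        # more, but simpler, instructions
cisc = cpu_time(800_000, 3.0, clock)          # fewer, but multi-cycle, instructions

print(f"RISC: {risc * 1e3:.2f} ms")           # RISC: 1.44 ms
print(f"CISC: {cisc * 1e3:.2f} ms")           # CISC: 2.40 ms
```

With these numbers the RISC program executes 50% more instructions yet finishes sooner, because its average cycles per instruction is far lower: the same formula, attacked from two different directions.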
Summary and Facts
A Reduced Instruction Set Computer (RISC) can be considered an evolution of the alternative, Complex Instruction Set Computing (CISC). With RISC, in simple terms, the goal is to have simple instructions that do less but execute very quickly, providing better performance.

What are RISC processors?


 Reduced Instruction Set Computer (RISC), is a type of computer architecture which operates on small,
highly optimised set of instructions, instead of a more specialised set of instructions, which can be found in
other types of architectures. This architecture means that the computer microprocessor will have fewer
cycles per instruction.
 RISC processors/architectures are used across a wide range of platforms nowadays, ranging from tablet
computers to smartphones, as well as supercomputers
 Thread Level Parallelism:
 Thread level parallelism increases the number of parallel threads executed by the CPU.
 Instruction Level Parallelism:
 Instruction-level parallelism increases the speed at which the CPU executes instructions.
Advantages and Disadvantages of RISC processors
Advantages:
 Greater performance due to simplified instruction set
 Uses pipelining efficiently
 RISC processors are easier to design than CISC processors
 Less expensive, as they use smaller chips
Disadvantages:
 Performance of the processor will depend on the code being executed
 RISC processors require very fast memory systems to feed different instructions. This requires a large
memory cache.
The characteristics of RISC processor structure:
 Hardwired Control Unit
 Data Path
 Instruction Cache
 Data Cache
 Main Memory
 Only Load and store instructions have access to memory
 Fewer number of addressing modes
 RISC includes a less complex pipelining architecture compared to CISC
What are CISC processors?
 CISC, which stands for “Complex Instruction Set Computer”, is computer architecture where single
instructions can execute several low level operations. CISC processors are also capable of executing
multi-step operations or addressing modes with single instructions.
 CISC, as with RISC, is a type of microprocessor that contains specialised simple/complex instructions.
 The primary objective of CISC processors is to complete a task in as few lines of assembly as possible. To accomplish this, the processor hardware must be built to comprehend and execute a series of operations.
Advantages and disadvantages of CISC processors:
Advantages:
 Allows for simple small scripts
 Using CISC, complex commands are readable
 Most code is built to be implemented on CISC
Disadvantages:
 CISC processors are larger as they contain more transistors
 May take multiple cycles per line of code, decreasing efficiency
 Lower clock speed
 Complex use of pipelining
 Compared to RISC, they are more complex, which means they are more expensive
The characteristics of CISC processor structure:
 Microprogram Control Unit
 Control Unit
 Instructions and data path
 Cache and main memory
CISC instruction sets also have additional addressing modes:
 Auto-increment mode
 Auto-decrement mode
 Relative Mode
 CISC uses STORE/LOAD/MOVE
