
INTRODUCTION TO

COMPUTER
ORGANIZATION
- History of computers
- Functional units of the computer
Introduction

• Digital computer (or simply computer): a fast electronic calculating machine that accepts digitized input information, processes it according to a list of internally stored instructions, and produces the resulting output information.
• The list of instructions is called a computer program.
• Desktop computers have processing units, storage units, visual display and audio output units, and a keyboard that can all be located easily on a home or office desk.
• The storage media include hard disks, solid-state drives, CDs, DVDs, Blu-rays, pen drives, etc.
• Domestic use typically calls for a desktop, laptop, tablet, workstation, etc.
• Industrial/organizational users require servers and enterprise systems (lower-end usage) or supercomputers (high-end usage).
History of Computers

• Early Beginnings
• 3000 B.C. – Abacus invented in Babylon; first known calculating device.
• 1800 B.C. – Babylonians create algorithms for numerical problems.
• 500 B.C. – Egyptians develop bead-and-wire abacus.
• 200 B.C. – Japanese computing trays introduced.
• 1617 – John Napier introduces logarithms for easier
multiplication/division.
History of Computers

• Mechanical Innovations
• 1624 – Wilhelm Schickard invents first four-function calculator-clock.
• 1642 – Blaise Pascal invents numerical calculator (adds only).
• 1671–1694 – Leibniz develops calculator that can add & multiply.
• 1752 – Benjamin Franklin's electricity experiments open the path for electrical machines.
• 1886 – First commercial adding machine by William Burroughs.
History of Computers

• Towards Electronic Computing
• 1896 – Hollerith creates a sorting machine for data processing.
• 1936–1938 – Konrad Zuse builds the Z1, the first programmable calculator, in Germany.
• 1936 – Alan Turing conceptualizes the Turing Machine.
• 1948 – IBM 604 electronic calculator introduced.
• 1953 – First high-speed printer developed.
Digital Computer Types

Digital Computer:
• Microcomputer
• Minicomputer
• Mainframe
• Supercomputer
Microcomputer

◼ A microcomputer is a small, relatively inexpensive computer with a microprocessor as its central processing unit (CPU).
◼ The Commodore 64, also known as the C64 or the CBM 64, is an 8-bit home computer introduced in January 1982 by Commodore International.

Ref: InfoWorld, 1 February 1982


Mini Computer

◼ A minicomputer, or colloquially mini, is a class of smaller general-purpose computers that developed in the mid-1960s.[1]
◼ The PDP-8 is a 12-bit minicomputer that was produced by Digital Equipment Corporation (DEC).
◼ It was the first commercially successful minicomputer.

1. Henderson, Rebecca M.; Newell, Richard G., eds. (2011). Accelerating Energy Innovation: Insights from Multiple Sectors. Chicago:
University of Chicago Press. p. 180. ISBN 978-0226326832.
Mainframes

◼ A mainframe computer, informally called a mainframe or big iron,[1] is a computer used primarily by large organizations for critical applications like
• bulk data processing for tasks such as censuses,
• industry and consumer statistics,
• enterprise resource planning,
• large-scale transaction processing.

1. Vance, Ashlee (July 20, 2005). "IBM Preps Big Iron Fiesta". The Register. Retrieved October 2, 2020.
Figure: A single-frame IBM z15 mainframe. Larger-capacity models can have up to four frames.
Generations of Computers

• First Generation (1940–1956): Vacuum tubes, magnetic drums, machine language, huge size.
• Second Generation (1956–1963): Transistors replace tubes, faster,
smaller, assembly languages introduced.
• Third Generation (1964–1971): Integrated Circuits, operating systems,
keyboard & monitor interfaces.
• Fourth Generation (1971–Present): Microprocessors, PCs, GUIs,
networking, internet.
• Fifth Generation (Present–Future): Artificial Intelligence, quantum
computing, nanotechnology.
First Generation (1940–1956)

• Used vacuum tubes for circuitry & magnetic drums for memory.
• Consumed lots of electricity, generated heat, prone to malfunctions.
• Input: punched cards, paper tape; Output: printouts.
• Examples: UNIVAC, ENIAC.
• Father of Computing: Charles Babbage (Difference & Analytical Engine
concepts).
Electronic Numerical Integrator and Computer
(ENIAC)

• World’s first general-purpose electronic digital computer.
• Designed and built at the University of Pennsylvania.
• Developed during World War II to help the U.S. Army with firing
tables for weapons.
• Problem: Manual calculation of ballistics was slow — 200
people working with calculators took days for one table.
Development of ENIAC

• Proposed by John Mauchly (professor) and J. Presper Eckert (graduate student).
• Proposal accepted by the U.S. Army in 1943.
• Completed in 1946.
• Specs:
- Weight: 30 tons.
- Space: 1,500 square feet.
- Components: 18,000+ vacuum tubes.
- Power: 140 kilowatts.
Performance of ENIAC

• Speed: 5,000 additions per second, much faster than electromechanical computers.
• Data Representation: Decimal system, not binary.
• Memory: 20 accumulators, each holding a 10-digit decimal
number.
• Each digit stored using a ring of 10 vacuum tubes (only one
tube ON at a time).
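The ring-of-10-tubes encoding above can be sketched as a one-hot representation, where each decimal digit is a set of ten positions with exactly one ON at a time (a toy model of the idea, not ENIAC's actual circuitry):

```python
# Sketch of ENIAC-style decimal storage: each digit is a "ring" of 10
# positions (vacuum tubes), with exactly one position ON at a time.
def digit_to_ring(d):
    """Represent a decimal digit 0-9 as a one-hot list of 10 tube states."""
    assert 0 <= d <= 9
    return [1 if i == d else 0 for i in range(10)]

def ring_to_digit(ring):
    """Recover the digit: the index of the single ON tube."""
    assert sum(ring) == 1
    return ring.index(1)

# A 10-digit accumulator is ten such rings (100 tubes per number).
number = 8675309042
rings = [digit_to_ring(int(c)) for c in f"{number:010d}"]
assert int("".join(str(ring_to_digit(r)) for r in rings)) == number
```

This also shows why the scheme was wasteful: ten tubes per digit, versus four flip-flops for a binary-coded digit.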
Limitations of ENIAC

• Programming was manual:
– Required setting switches.
– Plugging/unplugging cables.
• Consumed a lot of electricity and required frequent
maintenance due to vacuum tube failures.
• Not user-friendly compared to later stored-program
computers.
Von Neumann Architecture

• A fundamental design approach first implemented in the IAS computer is known as the stored-program concept. This idea is usually attributed to the mathematician John von Neumann.
• Alan Turing developed the idea at about the same time. The first publication of the idea was in a 1945 proposal by von Neumann for a new computer, the EDVAC (Electronic Discrete Variable Computer).
• In 1946, von Neumann and his colleagues began the design of a new stored-program computer, referred to as the IAS (Institute for Advanced Study) computer, at Princeton.
• The IAS computer, although not completed until 1952, is the prototype of all subsequent general-purpose computers.
IAS / von Neumann Structure

Von Neumann Basic Structure

It consists of
■ A main memory, which stores both data and
instructions
■ An arithmetic and logic unit (ALU) capable of
operating on binary data
■ A control unit, which interprets the instructions in
memory and causes them to be executed
■ Input–output (I/O) equipment operated by the
control unit
Von Neumann Basic Structure

• The figure reveals that both the control unit and the ALU contain storage locations, called registers, defined as follows:
• Memory buffer register (MBR): Contains a word to be stored in memory or sent to the I/O unit, or is used to receive a word from memory or from the I/O unit.
• Memory address register (MAR): Specifies the address in memory of the word to be written from or read into the MBR.
• Instruction register (IR): Contains the 8-bit opcode of the instruction being executed.
• Instruction buffer register (IBR): Employed to hold temporarily the right-hand
instruction from a word in memory.
• Program counter (PC): Contains the address of the next instruction pair to be
fetched from memory.
• Accumulator (AC) and multiplier quotient (MQ): Employed to hold temporarily
operands and results of ALU operations. For example, the result of multiplying
two 40-bit numbers is an 80-bit number; the most significant 40 bits are stored in
the AC and the least significant in the MQ.
Von Neumann Basic Structure

• The control unit operates the IAS by fetching instructions from memory and executing them one at a time. The memory of the IAS consists of 4,096 storage locations, called words, of 40 binary digits (bits) each.
• Both data and instructions are stored there. Numbers are
represented in binary form, and each instruction is a binary code.
• Each number is represented by a sign bit and a 39-bit value.
• A word may alternatively contain two 20-bit instructions, with each instruction consisting of an 8-bit operation code (opcode) specifying the operation to be performed and a 12-bit address designating one of the words in memory (numbered from 0 to 4,095).
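The word layout described above can be sketched with a little bit manipulation (the opcode values used below are illustrative, not actual IAS opcodes):

```python
# Sketch of the IAS word layout: a 40-bit word holding two 20-bit
# instructions, each an 8-bit opcode plus a 12-bit address (0..4095).
def pack_instruction(opcode, address):
    assert 0 <= opcode < 2**8 and 0 <= address < 2**12
    return (opcode << 12) | address          # 20-bit instruction

def pack_word(left_instr, right_instr):
    return (left_instr << 20) | right_instr  # 40-bit word

def unpack_word(word):
    left, right = (word >> 20) & 0xFFFFF, word & 0xFFFFF
    return [((i >> 12) & 0xFF, i & 0xFFF) for i in (left, right)]

# Hypothetical opcodes, for illustration only.
word = pack_word(pack_instruction(0x0A, 100), pack_instruction(0x21, 101))
assert unpack_word(word) == [(0x0A, 100), (0x21, 101)]
```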
IAS Memory Formats
The IAS
Instruction
Set
Harvard Architecture

◼ In a normal computer that follows the von Neumann architecture, instructions and data are both stored in the same memory, so the same buses are used to fetch instructions and data. This means the CPU cannot do both things at once (read an instruction and read/write data).
◼ Harvard Architecture is a computer architecture with separate storage and separate buses (signal paths) for instructions and data.
◼ It was developed to overcome the bottleneck of the von Neumann architecture.
◼ The main advantage of having separate buses for instructions and data is that the CPU can fetch instructions and read/write data at the same time.
Structure of Harvard Architecture
Structure of Harvard Architecture

• Buses: Buses are used as signal pathways. Types of buses:
• Data Bus: Carries data among the main memory system, processor, and I/O devices.
• Data Address Bus: Carries the address of data from the processor to the main memory system.
• Instruction Bus: Carries instructions among the main memory system, processor, and I/O devices.
• Instruction Address Bus: Carries the address of instructions from the processor to the main memory system.
• Operational Registers: Different types of registers are used for storing the addresses of different types of instructions. For example, the Memory Address Register and Memory Data Register are operational registers.
• In practice, a Modified Harvard Architecture is used, with two separate caches (data and instruction). This is common in x86 and ARM processors.
Difference Between Von Neumann and
Harvard Architecture

Von Neumann Architecture:
◼ An older computer architecture based on the stored-program computer concept.
◼ The same physical memory is used for instructions and data; the CPU cannot fetch an instruction and read/write data at the same time.
◼ There is a common bus for data and instruction transfer, so it is cheaper.
◼ Two clock cycles are required to execute a single instruction. Used in personal computers and small computers.

Harvard Architecture:
◼ A modern computer architecture based on the relay-based Harvard Mark I model.
◼ Separate physical memories are used for instructions and data; the CPU can fetch an instruction and read/write data at the same time.
◼ Separate buses are used for transferring data and instructions, so it is costlier than von Neumann.
◼ An instruction can be executed in a single cycle. Used in microcontrollers and signal processing.
Second Generation (1956–1963)

• Transistors replaced vacuum tubes – smaller, faster, more reliable.
• Introduced assembly languages & early high-level languages (COBOL, FORTRAN).
• Used magnetic core memory instead of drums.
• Designed for specific industries, like atomic energy.
Third Generation (1964–1971)

• Integrated Circuits (ICs): Multiple transistors on a single chip.
• Faster processing, reduced size & cost.
• User interaction via keyboards and monitors.
• Introduction of operating systems for multitasking.
Fourth Generation (1971–Present)

• Microprocessors integrate CPU, memory, and I/O control on one chip.
• Example: Intel 4004 (1971).
• IBM PCs (1981), Apple Macintosh (1984).
• Growth of networking and Internet.
• GUIs, mouse, and portable devices emerge.
Fifth Generation (Present & Future)

• Focus on Artificial Intelligence (AI).
• Examples in use: voice recognition, machine learning.
• Technologies: parallel processing, superconductors, quantum
computing.
• Goal: natural language understanding, self-learning systems.
Types of Computers

• Personal Computers (PCs): Common, used at home and offices; contain CPU, memory, I/O devices.
• Workstations: More powerful than PCs, used for engineering
and design applications.
• Enterprise Systems and Servers: Used by businesses to process
large volumes of data.
• Supercomputers: Used for simulations, scientific computing,
and weather forecasting.
Data v/s Instructions

➢ Information handled by the computer can be classified as data or instructions.
➢ A set of instructions is a program.
➢ Instructions are explicit commands that govern the transfer of information within a computer as well as between the computer and its I/O devices.
➢ Instructions specify the arithmetic and logic operations to be performed by the processor. For example: ADD A, B or MULT A, B.
➢ The program (set of instructions) stored in the memory is fetched one instruction at a time and executed by the processor.
➢ Instructions are written by a programmer to perform a desired task with the help of a computer.
➢ Data, in the context of programming, represents the operands used by the instructions. For example, in ADD A, B, ADD is the operation and A, B are the operands.
Functional Units Overview

• A computer has five main functional units:
– Input,
– Memory,
– Arithmetic and Logic Unit (ALU),
– Output, and
– Control Unit.
• These units work together to process instructions and data.
• The processor includes the ALU and Control Unit.
• Input and Output units are collectively referred to as I/O units.
Input Unit

➢ The input unit accepts coded information from human operators, from electromechanical devices such as keyboards, or from other computers over the internet.
➢ Converts data into binary and sends it to memory or processor.
➢ Examples: joystick, microphone, touchpad, barcode reader.
Memory Unit

➢ The information received is either stored in the computer's memory for later reference or immediately used by the arithmetic and logic circuitry to perform the desired operations. The processing steps are determined by a program stored in the memory.
➢ Stores both programs and data for execution.
➢ Primary Storage: Fast, semiconductor memory (RAM, caches).
➢ Secondary Storage: Large capacity (HDDs, SSDs, optical disks).
➢ Accesses memory locations using addresses, often word-sized
blocks.
Memory Unit (Cont.): Primary Storage

• Consists of fast semiconductor memory like RAM and caches.
• Directly accessible by the CPU during program
execution.
• Volatile: Loses data when power is turned off.
• Stores instructions and data that are currently in use.
• Examples:
– RAM (Random Access Memory) – Main memory.
– Cache Memory – Small, high-speed memory located close to
the CPU
Memory Unit (Cont.): Primary Storage

• Random-Access Memory (RAM)
– Main working memory.
– Stores data & instructions for currently running programs.
– Access time: nanoseconds.
– Directly accessed by CPU.
– Volatile: loses content when system shuts down.

• Cache Memory
– Very high-speed memory located close to or inside the CPU.
– Stores frequently accessed data and instructions.
– Reduces latency by avoiding slower main memory access.
– Multiple levels: L1 (smallest, fastest), L2, L3 (larger, slower).
Memory Unit (Cont.): Secondary Storage

• Provides long-term, non-volatile data storage.
• Used to store data and programs not in active use.
• Typically slower than primary memory but with much larger capacity.
• Examples:
• Hard Disk Drives (HDDs)
• Solid State Drives (SSDs)
• Optical Disks (CDs, DVDs)
• Magnetic Tapes (for archival purposes)
Memory Unit (Cont.): Secondary Storage

• Hard Disk Drives (HDD):
• Uses magnetic platters and mechanical arms.
• Slower access due to moving parts.
• Large capacity (e.g., TBs).
• Cost-effective per GB.
• Solid State Drives (SSD):
• No moving parts; uses NAND flash memory.
• Much faster than HDDs (faster boot/load times).
• More durable and shock-resistant.
• More expensive per GB than HDDs.
• Optical Disks (CD/DVD):
• Use laser technology to read/write.
• Mainly used for media distribution or backup.
• Slower and less common in modern systems.
Memory Unit (Cont.): Registers in the Processor

• Registers are small, fast storage locations inside the CPU.
• Used for:
– Holding intermediate values during computation.
– Controlling the flow of instruction execution.
• Instruction Register (IR): Holds current instruction being
executed.
• Program Counter (PC): Points to the next instruction.
• Other general-purpose registers: Used during computation.
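The roles of the PC and IR described above can be sketched with a toy fetch loop (the instruction names are illustrative, not a real instruction set):

```python
# Toy fetch loop: the PC points at the next instruction, the IR
# holds the one currently being executed.
program = ["LOAD A", "ADD B", "STORE A", "HALT"]
pc, ir, trace = 0, None, []
while True:
    ir = program[pc]    # fetch into the Instruction Register
    pc += 1             # PC now points at the next instruction
    trace.append(ir)    # "execute" (here: just record it)
    if ir == "HALT":
        break
assert trace == ["LOAD A", "ADD B", "STORE A", "HALT"]
```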
Arithmetic and Logic Unit (ALU)

• Most operations by a computer are executed by the ALU.
• Ex: ADD A, B
• Step 1: Decode the instruction.
• Step 2: Fetch A from primary/secondary memory into processor registers for execution.
• Step 3: Fetch B from primary/secondary memory into processor registers for execution.
• Step 4: Perform the addition operation.
• Step 5: Store the result back / use it immediately via some processor registers.
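The five steps above can be sketched with a toy memory and register file (the register names R1–R3 are illustrative):

```python
# Minimal walk-through of the five steps for "ADD A, B".
memory = {"A": 7, "B": 5}                # toy memory with two operands
registers = {}                           # toy processor register file

instruction = ("ADD", "A", "B")
op, src1, src2 = instruction             # Step 1: decode the instruction
registers["R1"] = memory[src1]           # Step 2: fetch A into a register
registers["R2"] = memory[src2]           # Step 3: fetch B into a register
registers["R3"] = registers["R1"] + registers["R2"]  # Step 4: ALU adds
memory[src1] = registers["R3"]           # Step 5: store the result back
assert memory["A"] == 12
```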
Control Unit

• Coordinates the activities of all functional units.
• Generates timing and control signals for data flow and operations.
• May be distributed across the system; not always a standalone
unit.
Processor and Memory Communication

• Memory Address Register (MAR): Holds the address to access.
• Memory Data Register (MDR): Holds the data to read/write.
• PC and IR support sequencing and execution of instructions.
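The MAR/MDR transfers can be sketched as follows (a simplified model; real memory interfaces add control signals and timing):

```python
# Sketch of processor-memory transfers through MAR and MDR:
# the CPU places an address in MAR; data moves through MDR.
class Memory:
    def __init__(self, size):
        self.cells = [0] * size
        self.mar = 0   # Memory Address Register
        self.mdr = 0   # Memory Data Register

    def read(self):    # cells[MAR] -> MDR
        self.mdr = self.cells[self.mar]
        return self.mdr

    def write(self):   # MDR -> cells[MAR]
        self.cells[self.mar] = self.mdr

mem = Memory(16)
mem.mar, mem.mdr = 4, 99
mem.write()            # store 99 at address 4
mem.mar = 4
assert mem.read() == 99
```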
Connection between the Processor and the Memory
Output Unit

• The results are sent back to the outside world through the
output unit.
• All these actions are coordinated by the control unit.
• Examples: printers, monitors, speakers.
• Some devices are both input and output (e.g., touchscreens).
Interrupt Handling

• An interrupt signals the processor to stop the current task for urgent service.
• The processor saves its state and executes an interrupt service
routine.
• After handling, it resumes the interrupted task.
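The save/service/resume sequence can be sketched as follows (a minimal model; real processors save state on a stack and restore it with a dedicated return-from-interrupt instruction):

```python
# Sketch of interrupt handling: save processor state, run the
# interrupt service routine (ISR), then restore state and resume.
def run_with_interrupt(pc, registers, isr):
    saved = (pc, dict(registers))       # save state (PC + register copy)
    isr(registers)                      # execute the service routine
    pc, registers = saved               # restore state and resume
    return pc, registers

def urgent_isr(regs):
    regs["R0"] = 0xFF                   # the ISR clobbers a register...

pc, regs = run_with_interrupt(100, {"R0": 1}, urgent_isr)
assert (pc, regs["R0"]) == (100, 1)     # ...but the saved state survives
```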
Summary of Functional Units

• Input Unit: Accepts data from external sources.
• Memory Unit: Stores instructions and data.
• ALU: Executes computations and logic operations.
• Output Unit: Sends data to the user.
• Control Unit: Directs and coordinates system activities.
History of Processors

1823 Baron Jons Jackob Berzelius discovered silicon (Si), which today is the basic component of
processors.

1903 Nikola Tesla patented electrical logic circuits called "gates" or "switches" in 1903.

1947 John Bardeen, Walter Brattain, and William Shockley invented the first transistor at the Bell
Laboratories on December 23, 1947.

1948 John Bardeen, Walter Brattain, and William Shockley patented the first transistor in 1948.

1956 John Bardeen, Walter Brattain, and William Shockley were awarded the Nobel Prize in
physics for their work on the transistor.

1958 The first working integrated circuit was developed by Robert Noyce of Fairchild
Semiconductor and Jack Kilby of Texas Instruments. The first IC was demonstrated on
September 12, 1958. (Geoffrey Dummer is credited as being the first person to
conceptualize and build a prototype of the integrated circuit.)

1960 IBM developed the first automatic mass-production facility for transistors in New York in
1960.
History of Processors

1965 On April 19, 1965, Gordon Moore made an observation about integrated circuits that
became known as Moore's Law.

1968 Intel Corporation was founded by Robert Noyce and Gordon Moore in 1968.

1969 AMD (Advanced Micro Devices) was founded on May 1, 1969.

1971 Intel with the help of Ted Hoff introduced the first microprocessor, the Intel 4004 on
November 15, 1971. The 4004 had 2,300 transistors, performed 60,000 OPS (operations per
second), addressed 640 bytes of memory, and cost $200.00.


History of Processors

1972 Intel introduced the 8008 processor on April 1, 1972.

1974 Intel's improved microprocessor chip was introduced on April 1, 1974; the 8080 became
a standard in the computer industry.

1976 Intel introduced the 8085 processor in March 1976.

1978 The Intel 8086 was introduced on June 8, 1978.

1979 The Intel 8088 was released on June 1, 1979.

1979 The Motorola 68000, a 16/32-bit processor, was released and later chosen as the
processor for the Apple Macintosh and Amiga computers.
References

• Computer Organization, Carl Hamacher, Zvonko Vranesic, Safwat Zaky, Fifth edition, McGraw-Hill Education.
• Computer System Architecture, Morris Mano, Third edition, Pearson
publications.
• Computer Organization and Architecture – Designing for Performance,
William Stallings, Ninth edition, Pearson publications.
