
Computer Architecture & Assembly Language

CSC-250
COURSE LECTURE

COMPUTER SCIENCE DEPARTMENT


SIBAU KANDHKOT CAMPUS
History of Computers
• MECHANICAL
• ELECTRO-MECHANICAL
• ELECTRONIC
• MODERN
ENIAC
• Electronic Numerical Integrator And Computer
• Decimal (not binary)
• 20 accumulators of 10 digits
• Programmed manually by switches
• 18,000 vacuum tubes
• 30 tons
• 15,000 square feet
• 140 kW power consumption
• 5,000 additions per second
• Built by Eckert and Mauchly at the University of Pennsylvania
• Trajectory tables for weapons
• Started 1943, finished 1946 (too late for the war effort)
• Used until 1955

ACCUMULATOR: An accumulator is a register for short-term, intermediate storage of arithmetic and logic data in a computer's CPU. The term "accumulator" is rarely used in reference to contemporary CPUs, having been replaced around the turn of the millennium by the term "register." In modern computers, any register can function as an accumulator.
von Neumann/Turing

• Stored Program concept


• Main memory storing programs and data
• ALU operating on binary data
• Control unit interpreting instructions from memory and executing
• Input and output equipment operated by control unit
• Princeton Institute for Advanced Studies (IAS) computer
• Completed 1952
Structure of the von Neumann Machine
John von Neumann and the IAS machine, 1952
Commercial Computers
• 1947 - Eckert-Mauchly Computer Corporation
• UNIVAC I (Universal Automatic Computer)
• US Bureau of Census 1950 calculations
• Became part of Sperry-Rand Corporation
• Late 1950s - UNIVAC II
• Faster
• More memory
IBM (International Business Machines Corporation)
• Punched-card processing equipment
• 1953 - the 701
• IBM's first stored-program computer
• Scientific calculations
• 1955 - the 702
• Business applications
• Led to the 700/7000 series
Transistors
• Replaced vacuum tubes
• Smaller
• Cheaper
• Less heat dissipation
• Solid State device
• Made from Silicon (Sand)
• Invented 1947 at Bell Labs
• William Shockley et al.
Transistor Based Computers
• Second generation machines
• NCR & RCA produced small transistor machines
• IBM 7000
• DEC - 1957
• Produced PDP-1
Microelectronics
• Literally - “small electronics”
• A computer is made up of gates, memory cells and interconnections
• These can be manufactured on a semiconductor
• e.g. silicon wafer
Generations of Computers
• Vacuum tube - 1946-1957
• Transistor - 1958-1964
• Small scale integration - 1965 on
• Up to 100 devices on a chip
• Medium scale integration - to 1971
• 100-3,000 devices on a chip
• Large scale integration - 1971-1977
• 3,000-100,000 devices on a chip
• Very large scale integration - 1978-1991
• 100,000-100,000,000 devices on a chip
• Ultra large scale integration - 1991 on
• Over 100,000,000 devices on a chip
Moore’s Law
• Increased density of components on chip
• Gordon Moore –co-founder of Intel
• Number of transistors on a chip will double every year
• Since the 1970s, development has slowed a little
• Number of transistors doubles every 18 months
• Cost of a chip has remained almost unchanged
• Higher packing density means shorter electrical paths, giving higher performance
• Smaller size gives increased flexibility
• Reduced power and cooling requirements
• Fewer interconnections increase reliability
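As a rough worked illustration of the doubling rule above, the Python sketch below projects a transistor count forward assuming one doubling every 18 months; the 2,300-transistor baseline (Intel 4004, 1971) is chosen here purely for illustration and is not part of the slide.

# Rough Moore's Law projection: one doubling of transistor count per 18 months.
# The 2,300-transistor baseline (Intel 4004, 1971) is illustrative only.
def projected_transistors(base_count, base_year, target_year, doubling_period_years=1.5):
    doublings = (target_year - base_year) / doubling_period_years
    return base_count * 2 ** doublings

for year in (1971, 1980, 1990, 2000):
    print(year, round(projected_transistors(2_300, 1971, year)))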
Rock's Law
• Rock's Law, named after Arthur Rock, is a corollary to Moore's Law: "The cost of capital equipment to build semiconductors will double every four years."
• Rock’s Law arises from the observations of a financier who has seen the price tag of
new chip facilities escalate from about $12,000 in 1968 to $12 million in the late
1990s.
• At this rate, by the year 2035, not only will the size of a memory element be smaller
than an atom, but it would also require the entire wealth of the world to build a
single chip!
• So even if we continue to make chips smaller and faster, the ultimate question may be whether we can afford to build them.
1st Generation Computers
• Used vacuum tubes for logic and storage (very little storage available)
• A vacuum-tube circuit storing 1 byte
• Programmed in machine language
• Often programmed by physical connection (hardwiring)
• Slow, unreliable, expensive
• The ENIAC – often thought of as the first programmable electronic
computer – 1946
• 17,468 vacuum tubes, 1,800 square feet, 30 tons
2nd Generation Computers
• Transistors replaced vacuum tubes
• Magnetic core memory introduced
• Changes in technology brought about cheaper and more reliable
computers (vacuum tubes were very unreliable)
• Because these units were smaller, they were closer together
providing a speedup over vacuum tubes
• Various programming languages introduced (assembly, high-level)
• Rudimentary OS developed
• The first supercomputer, the CDC 6600, was introduced ($10 million)
Second Generation of Computers: Transistors
• Replaced vacuum tubes
• Smaller
• Cheaper
• Less heat dissipation
• Solid State device
• Made from Silicon (Sand)
• Invented 1947 at Bell Labs by William Shockley et al.
3rd Generation Computers
• Integrated circuits (the ability to place circuits onto silicon chips) replaced both transistors and magnetic core memory
• The result was easily mass-produced components, reducing the cost of computer manufacturing significantly
• Also increased speed and memory capacity
• Computer families introduced
• Minicomputers introduced
• More sophisticated programming languages and OS developed
• Popular computers included the PDP-8, PDP-11 and IBM 360; Cray produced its first supercomputer, the Cray-1
• Silicon chips now contained both logic (CPU) and memory
• Large-scale computer usage led to time-sharing OS
Third Generation of Computers: Integrated Circuits
• A computer is made up of gates, memory cells and interconnections
• All of these can be manufactured either separately (discrete components) or on the same piece of semiconductor (a silicon wafer)
4th Generation Computers (1971-Present): Microprocessors
• Miniaturization took over: from 10-100 components per chip to 10,000+
• Thousands of ICs were built onto a single silicon chip (VLSI), which allowed Intel, in 1971, to create the world's first microprocessor, the 4004, a fully functional 4-bit system that ran at 108 kHz.
• Intel also introduced the RAM chip, accommodating 4Kb of memory on a
single chip. This allowed computers of the 4th generation to become smaller
and faster than their solid-state predecessors
• Computers also saw the development of GUIs, the mouse and handheld
devices
Microprocessors - Intel
• 1971 - 4004
• First microprocessor
• All CPU components on a single chip
• 4-bit
• Multiplication by repeated addition, no hardware multiplier!
• Followed in 1972 by the 8008
• 8-bit
• Both designed for specific applications
• 1974 - 8080
• Intel's first general-purpose microprocessor
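Since the 4004 had no hardware multiplier, multiplication had to be carried out by repeated addition; the snippet below is a minimal Python illustration of that idea, not actual 4004 code.

# Multiplication by repeated addition, as on a CPU with no hardware multiplier.
def multiply_by_repeated_addition(a, b):
    product = 0
    for _ in range(b):   # add 'a' to the running total 'b' times
        product += a
    return product

assert multiply_by_repeated_addition(7, 6) == 42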
Computer Level Hierarchy
The von Neumann Architecture
• Named after John von Neumann of Princeton, who designed a computer architecture whereby data and instructions are retrieved from memory, operated on by an ALU, and moved back to memory (or I/O)
• This architecture is the basis for most modern computers (only parallel processors and a few other unique architectures use a different model)
• Hardware consists of 3 units:
1. CPU (control unit, ALU, registers)
2. Memory (stores programs and data)
3. I/O system (including secondary storage)
• Instructions in memory are executed sequentially unless a program instruction explicitly changes the order
Von Neumann Architectures
• There is a single pathway used to move both data and
instructions between memory, I/O and CPU
• the pathway is implemented as a bus
• the single pathway creates a bottleneck
• known as the von Neumann bottleneck

• A variation of this architecture is the Harvard architecture, which separates data and instructions into two pathways
• Another variation, used in most computers, is the system bus version, in which there are separate buses between the CPU and memory and between memory and I/O
Fetch-execute cycle
• The von Neumann architecture operates on the fetch-execute cycle
• Fetch an instruction from memory as indicated by the Program Counter
register
• Decode the instruction in the control unit
• Data operands needed for the instruction are fetched from memory
• Execute the instruction in the ALU storing the result in a register
• Move the result back to memory if needed
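The loop below is a toy Python sketch of this fetch-decode-execute cycle for a single-accumulator machine; the LOAD/ADD/STORE/HALT instruction set and the memory layout are invented here for illustration and are not taken from the lecture.

# Toy fetch-decode-execute loop for a single-accumulator von Neumann machine.
# The LOAD/ADD/STORE/HALT instruction set and memory layout are invented here.
memory = {
    0: ("LOAD", 10),    # acc <- mem[10]
    1: ("ADD", 11),     # acc <- acc + mem[11]
    2: ("STORE", 12),   # mem[12] <- acc
    3: ("HALT", None),
    10: 5, 11: 7, 12: 0,          # data operands
}

pc, acc = 0, 0                    # program counter and accumulator registers
while True:
    opcode, operand = memory[pc]  # fetch: the PC indicates the next instruction
    pc += 1                       # sequential execution unless changed
    if opcode == "LOAD":          # decode and execute
        acc = memory[operand]
    elif opcode == "ADD":
        acc += memory[operand]
    elif opcode == "STORE":
        memory[operand] = acc
    elif opcode == "HALT":
        break

print(memory[12])                 # -> 12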
The von Neumann Model

• This is a general depiction of a von Neumann system:
• These computers employ a fetch-decode-execute cycle to run programs as follows...
The von Neumann Model
• The control unit fetches the next instruction from memory using the program
counter to determine where the instruction is located
The von Neumann Model
• The instruction is decoded into a language that the ALU can
understand.
The von Neumann Model
• Any data operands required to execute the instruction are fetched
from memory and placed into registers within the CPU.
The von Neumann Model
• The ALU executes the instruction and places results in registers or
memory.
Non-von Neumann Models
• Conventional stored-program computers have undergone
many incremental improvements over the years
• specialized buses
• floating-point units
• cache memories
• But enormous improvements in computational power
require departure from the classic von Neumann
architecture
• Adding processors is one approach
Non-von Neumann Models
• In the late 1960s, high-performance computer systems
were equipped with dual processors to increase
computational throughput.
• In the 1970s supercomputer systems were introduced
with 32 processors.
• Supercomputers with 1,000 processors were built in the
1980s.
• In 1999, IBM announced its Blue Gene system containing
over 1 million processors.
Parallel Computing
• Parallel processing allows a computer to simultaneously work on subparts of a
problem.
• Multicore processors have 2 or more processor cores sharing a single die.
• Each core has its own ALU and set of registers, but all processors share memory
and other resources.
• “Dual core” differs from “dual processor.”
• Dual-processor machines have two processors, but each processor plugs into the motherboard separately.
• Multi-core systems provide the ability to multitask
• E.g., browse the Web while burning a CD
• Multithreaded applications spread mini-processes, threads, across one or more
processors for increased throughput.
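A minimal Python sketch of splitting a problem into subparts and handing them to a pool of worker threads is shown below; the partial-sum workload is invented for illustration, and in CPython CPU-bound work would usually use processes rather than threads.

# Splitting a problem into subparts and running them on a pool of worker threads.
# Note: in CPython, CPU-bound work typically uses ProcessPoolExecutor instead,
# because the global interpreter lock limits thread-level parallelism.
from concurrent.futures import ThreadPoolExecutor

def partial_sum(bounds):
    lo, hi = bounds
    return sum(range(lo, hi))

chunks = [(0, 250_000), (250_000, 500_000), (500_000, 750_000), (750_000, 1_000_000)]
with ThreadPoolExecutor(max_workers=4) as pool:
    total = sum(pool.map(partial_sum, chunks))

print(total == sum(range(1_000_000)))   # -> True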
Computing as a Service: Cloud Computing

• The ultimate aim of every computer system is to deliver functionality to its users.
• Computer users typically do not care about terabytes of storage and gigahertz of
processor speed.
• Many companies outsource their data centers to 3rd-party specialists, who agree
to provide computing services for a fee.
• These arrangements are managed through service-level agreements (SLAs).
• Rather than pay a third party to run a company-owned data center, another
approach is to buy computing services from someone else’s data center and
connect to it via the Internet.
• This is the idea behind a collection of service models known as Cloud computing.
Cloud Computing
• Enabling on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction
Cloud Computing
• Cloud computing models:
• Software as a Service,
• The consumer of this service buys application services
• Platform as a Service,
• Provides server hardware, operating systems, database services, security
components, and backup and recovery services
• Infrastructure as a Service
• provides only server hardware, secure network access to the servers, and
backup and recovery services. The customer is responsible for all system
software including the operating system and databases
Grid computing
• Combination of computer resources from multiple administrative
domains to reach a common goal.

• What distinguishes grid computing from conventional high-performance computing systems such as cluster computing is that grids tend to be more loosely coupled, heterogeneous, and geographically dispersed.

• Although a grid can be dedicated to a specialized application, it is more common that a single grid will be used for a variety of different purposes.
Cluster computing
• A group of linked computers working together closely, thus in many respects forming a single computer. The components of a cluster are commonly, but not always, connected to each other through fast LANs.

• Clusters are usually deployed to improve performance and availability over that of a single computer, while typically being much more cost-effective than single computers of comparable speed or availability.
CORE VS PROCESSOR
ASSIGNMENT
• MEASURE THE PERFORMANCE OF SYSTEM INSTRUCTION
Computer Performance Measures
Program Execution Time
For a specific program compiled to run on a specific machine “A”, the following
parameters are provided:

• The total instruction count of the program.


• The average number of cycles per instruction (average CPI).
• Clock cycle of machine “A”
How can one measure the performance of this machine running this program?
• The machine is said to be faster or has better performance running this program if the total
execution time is shorter.
• Thus the inverse of the total measured program execution time is a possible performance
measure or metric:

PerformanceA = 1 / Execution TimeA

• How to compare performance of different machines?


• What factors affect performance? How to improve performance?
Comparing Computer Performance
Using Execution Time
• To compare the performance of two machines “A”, “B” running a given specific program
PerformanceA = 1 / Execution TimeA
PerformanceB = 1 / Execution TimeB
• Machine A is n times faster than machine B means:

Speedup = n = PerformanceA / PerformanceB = Execution TimeB / Execution TimeA
• Example:
For a given program:
Execution time on machine A: ExecutionA = 1 second
Execution time on machine B: ExecutionB = 10 seconds
PerformanceA / PerformanceB = Execution TimeB / Execution TimeA
= 10 / 1 = 10
The performance of machine A is 10 times the performance of machine B when
running this program, or: Machine A is said to be 10 times faster than machine B
when running this program.
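The relationship above can be checked with a few lines of Python; the execution times are taken directly from the example.

# Performance as the inverse of execution time, and speedup of A over B.
def performance(execution_time_s):
    return 1.0 / execution_time_s

time_a, time_b = 1.0, 10.0                            # seconds, from the example
speedup = performance(time_a) / performance(time_b)   # equals time_b / time_a
print(speedup)                                        # -> 10.0: A is 10 times faster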
CPU Execution Time The CPU Equation
• A program is comprised of a number of instructions executed , I
• Measured in: instructions/program
• The average instruction takes a number of cycles per instruction (CPI) to be
completed.
• Measured in: cycles/instruction, CPI
• CPU has a fixed clock cycle time C = 1/clock rate
• Measured in: seconds/cycle
• CPU execution time is the product of the above three parameters as follows:

CPU time = Seconds / Program = (Instructions / Program) x (Cycles / Instruction) x (Seconds / Cycle)

T = I x CPI x C
Example
• A Program is running on a specific machine with the following
parameters:
• Total executed instruction count: 10,000,000 instructions
• Average CPI for the program: 2.5 cycles/instruction.
• CPU clock rate: 200 MHz.
What is the execution time for this program?

CPU time = Instruction count x CPI x Clock cycle
         = 10,000,000 x 2.5 x (1 / clock rate)
         = 10,000,000 x 2.5 x 5 x 10^-9
         = 0.125 seconds
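The same calculation can be expressed directly from T = I x CPI x C; the short Python sketch below reproduces the 0.125-second result.

# CPU time = instruction count x CPI x clock cycle time  (T = I x CPI x C)
def cpu_time(instruction_count, cpi, clock_rate_hz):
    clock_cycle_s = 1.0 / clock_rate_hz
    return instruction_count * cpi * clock_cycle_s

print(cpu_time(10_000_000, 2.5, 200e6))   # -> 0.125 seconds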

Example
• From the previous example: A Program is running on a specific machine with the
following parameters:
• Total executed instruction count, I: 10,000,000 instructions
• Average CPI for the program: 2.5 cycles/instruction.
• CPU clock rate: 200 MHz.
• Using the same program with these changes:
• A new compiler used: New instruction count 9,500,000
New CPI: 3.0
• Faster CPU implementation: New clock rate = 300 MHZ
• What is the speedup with the changes?

Speedup = Old Execution Time / New Execution Time
        = (I_old x CPI_old x Clock cycle_old) / (I_new x CPI_new x Clock cycle_new)

Speedup = (10,000,000 x 2.5 x 5 x 10^-9) / (9,500,000 x 3 x 3.33 x 10^-9)
        = 0.125 / 0.095 = 1.32
or 32% faster after the changes.
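The same speedup can be verified with the formula above; the sketch below simply reuses the example's numbers.

# Speedup after the compiler and clock-rate changes (numbers from the example).
old_time = 10_000_000 * 2.5 * (1 / 200e6)   # 0.125 s
new_time = 9_500_000 * 3.0 * (1 / 300e6)    # 0.095 s
print(round(old_time / new_time, 2))        # -> 1.32, i.e. roughly 32% faster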