
Faculty of Engineering and Technology

Electrical Engineering Department

EED220: Logic Design

Lecture 3
Signed Numbers & Binary Codes
Mostafa Salah, Ph.D.
[email protected]
Contents
• Signed Binary Number
• Number Representations
• Fixed Point
• Floating Point
• Binary Codes

2
Signed Binary Number

Example: The number 9 in 8-bit representations.

Signed-magnitude system:
• If positive, 0 followed by the magnitude → Signed magnitude +9: 00001001
• If negative, 1 followed by the magnitude → Signed magnitude –9: 10001001

Signed-complement system:
• If positive, leave the number unchanged:
  Signed 1's complement +9: 00001001
  Signed 2's complement +9: 00001001
• If negative, complement the number:
  Signed 1's complement –9: 11110110
  Signed 2's complement –9: 11110111
3
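The three encodings above can be checked with a short Python sketch (the function name `to_representations` is mine, for illustration):

```python
def to_representations(value, n=8):
    """Return the signed-magnitude, 1's-complement, and 2's-complement
    encodings of `value` as n-bit binary strings."""
    mag = abs(value)
    if value >= 0:
        bits = format(mag, f"0{n}b")      # positive: same in all three systems
        return bits, bits, bits
    sign_mag = "1" + format(mag, f"0{n-1}b")          # sign bit 1 + magnitude
    ones = format((1 << n) - 1 - mag, f"0{n}b")       # flip every bit of +|value|
    twos = format((1 << n) - mag, f"0{n}b")           # 1's complement + 1
    return sign_mag, ones, twos
```

Running it on ±9 reproduces the six bit patterns of this slide.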
Signed Binary Number
System              Range
Unsigned            0 to 2^n – 1
Signed Magnitude    –(2^(n–1) – 1) to 2^(n–1) – 1
Two's Complement    –2^(n–1) to 2^(n–1) – 1
One's Complement    –(2^(n–1) – 1) to 2^(n–1) – 1

[Figure: the 4-bit binary encodings compared as unsigned, 2's complement, and signed magnitude]

4
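The range formulas in the table translate directly into code; a minimal sketch (the function name `ranges` is mine):

```python
def ranges(n):
    """Representable (min, max) range of an n-bit word in each system."""
    return {
        "unsigned":         (0, 2**n - 1),
        "signed_magnitude": (-(2**(n - 1) - 1), 2**(n - 1) - 1),
        "twos_complement":  (-2**(n - 1), 2**(n - 1) - 1),
        "ones_complement":  (-(2**(n - 1) - 1), 2**(n - 1) - 1),
    }
```

For n = 4 this gives 0 to 15 unsigned, –8 to 7 in 2's complement, and –7 to 7 in the other two systems, matching the figure.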
Signed Binary Number
• The signed 2's-complement system has only one representation for 0, which is always positive.
• The signed-magnitude system is used in ordinary arithmetic but is awkward when employed in computer arithmetic because of the separate handling of the sign and the magnitude.
• Therefore, the signed-complement system is normally used.
• The 1's complement imposes some difficulties in arithmetic (for example, it has two representations of 0), but it is useful as a logical operation, since the change of 1 to 0 or 0 to 1 is equivalent to a logical complement operation.

5
Arithmetic Addition
• The addition of two signed binary numbers, with negative numbers represented in signed 2's-complement form, is obtained from the addition of the two numbers, including their sign bits.
• A carry out of the sign-bit position is discarded.

   +6 → 00000110        +6 → 00000110
+ 13 → 00001101      – 13 → 11110011
---------------      ---------------
+ 19 → 00010011       – 7 → 11111001

   –6 → 11111010        –6 → 11111010
– 13 → 11110011      + 13 → 00001101
---------------      ---------------
– 19 → 11101101       + 7 → 00000111

6
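The rule "add including the sign bits, then discard the carry out of the sign bit" can be sketched in Python, where masking to n bits plays the role of discarding the carry (the function name `add_2s` is mine):

```python
def add_2s(a, b, n=8):
    """Add two values in n-bit signed 2's-complement arithmetic.
    The carry out of the sign bit is discarded by masking to n bits."""
    raw = (a + b) & ((1 << n) - 1)                      # modular n-bit addition
    # If the sign bit is set, re-interpret the pattern as a negative value.
    return raw - (1 << n) if raw & (1 << (n - 1)) else raw
```

All four additions on this slide fall out of the same function, which is the point of the 2's-complement system.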
Arithmetic Subtraction
• Subtraction of two signed binary numbers when negative numbers are in
2’s‐complement form is simple and can be stated as follows:
Take the 2’s complement of the subtrahend (including the sign bit) and add it to the
minuend (including the sign bit). A carry out of the sign‐bit position is discarded.

(±A) – (+B) = (±A) + (–B)
(±A) – (–B) = (±A) + (+B)

✓ To convert a positive number into a negative number (or vice versa), take its 2's complement.

→ Computers need only one common hardware circuit to handle both addition and subtraction.
7
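The subtraction rule — take the 2's complement of the subtrahend and add — is a two-line sketch on top of ordinary modular addition (the function name `sub_2s` is mine):

```python
def sub_2s(a, b, n=8):
    """Subtract in n-bit 2's complement: negate the subtrahend by taking
    its 2's complement, then add; the carry out of the sign bit is discarded."""
    neg_b = (~b + 1) & ((1 << n) - 1)        # 2's complement of b
    raw = (a + neg_b) & ((1 << n) - 1)       # add, discard carry out of sign bit
    return raw - (1 << n) if raw & (1 << (n - 1)) else raw
```

No dedicated subtractor is needed, which illustrates the "one common hardware circuit" remark above.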
Number Representations!
• The key to representing fractional numbers, like 26.5, is the concept of a binary point. A binary point is like the decimal point in the decimal system: it acts as a divider between the integer part and the fractional part of a number.

• The very same concept of the decimal point can be applied to our binary representation, making a "binary point".
• Bits to the right of the binary point carry weights of 2^–1, 2^–2, 2^–3, and so on.

Example: The bit patterns of 2.75 and 22 are exactly the same; the only difference is the position of the binary point.

            2^4  2^3  2^2  2^1  2^0  .  2^–1  2^–2  2^–3
(2.75)10     0    0    0    1    0   .   1     1     0
(22)10       1    0    1    1    0   .   0     0     0

8
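Evaluating bit patterns on either side of a binary point can be sketched as follows (the function name `bits_to_value` is mine):

```python
def bits_to_value(int_bits, frac_bits):
    """Value of a bit pattern split at the binary point: integer bits
    have weights 2^k downward; fractional bits weigh 2^-1, 2^-2, ..."""
    value = int(int_bits, 2) if int_bits else 0
    for i, bit in enumerate(frac_bits, start=1):
        value += int(bit) * 2**-i            # weight 2^-i for the i-th bit
    return value
```

Feeding it the two rows of the table above returns 2.75 and 22 from the same underlying bits 10110.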
Number Representations! – Fixed Point Number
• Fixed-Point Number Representation
  ▪ To represent a real number in computers (or any hardware in general), a fixed-point number type is defined simply by implicitly fixing the binary point at some position of the numeral.
  ▪ To define a fixed-point type conceptually, the notation fixed<w,b> is used, where
    ▪ w denotes the number of bits used as a whole (the width of the number),
    ▪ b denotes the position of the binary point counting from the least significant bit (counting from 0).

Example: fixed<8,3> denotes an 8-bit fixed-point number, of which the 3 rightmost bits are fractional.

Example: Given the pattern 0 0 0 1 0 1 1 0:
• It represents (0.6875)10 in fixed<8,5>.
• It represents (5.5)10 in fixed<8,2>.
9
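Decoding a fixed<w,b> pattern is just an integer read followed by a scale by 2^–b; a minimal sketch for unsigned patterns (the function name `fixed_decode` is mine):

```python
def fixed_decode(bits, w, b):
    """Decode an unsigned fixed<w,b> bit string: the b rightmost bits are
    fractional, so the raw integer value is divided by 2**b."""
    assert len(bits) == w, "pattern width must match w"
    return int(bits, 2) / 2**b
```

The same pattern 00010110 decodes to 0.6875 under fixed<8,5> and to 5.5 under fixed<8,2>, exactly as in the example.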
Number Representations! – Fixed Point Number
• Fixed-Point Number Representation

Pros:
• Straightforward and as efficient as integer arithmetic in computers.
• Fixed-point arithmetic comes for free on computers.

Cons:
• Loss of range and precision compared with floating-point number representations.
• For example, in a fixed<8,1> representation, the fractional part is only precise to 0.5. A number like 0.75 cannot be represented. One can represent 0.75 with fixed<8,2>, but then range is lost on the integer part.

10
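The precision loss described in the cons can be demonstrated by quantizing a value to the nearest representable fixed<w,b> step (a sketch for unsigned values; the function name `fixed_quantize` is mine):

```python
def fixed_quantize(x, w, b):
    """Round x to the nearest representable unsigned fixed<w,b> value."""
    step = 2**-b                              # resolution of the format
    raw = round(x / step)                     # nearest multiple of the step
    raw = max(0, min(raw, 2**w - 1))          # clamp into the w-bit range
    return raw * step
```

With b = 1 the value 0.75 lands on a neighbouring multiple of 0.5, while b = 2 represents it exactly — the trade-off described above.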
Number Representations! – Floating Point Number
• Floating Point Number Representation
▪ Instead of using the fixed-point representation, which would require
many significant digits, it is better to use the floating-point
representation.
▪ Numbers are represented by a:
▪ Mantissa comprising the significant digits and
▪ Exponent of the radix R.

Number = Mantissa × R^Exponent

The numbers are often normalized, such that the radix point is placed to the right of the first nonzero digit, as in 5.234 × 10^43.

11
Number Representations! – Floating Point Number
• Binary floating-point representation has been standardized by the Institute of Electrical and Electronics Engineers (IEEE).
• Two sizes of formats are specified in this standard: a single-precision 32-bit format and a double-precision 64-bit format.

Prof. Kahan was instrumental in creating the IEEE 754-1985 standard for floating-point computation.

12
Number Representations! – Floating Point Number
▪ Single-Precision Floating-Point Format
• Sign bit, S
• 8-bit exponent field, E
• 23-bit mantissa field, M

• The exponent can be either positive or negative, to be able to represent both very large and very small numbers.
• The IEEE standard specifies the exponent in the excess-127 format: the value 127 is added to the actual exponent, so that

Exponent = E − 127

• The range of E is 0 to 255. The extreme values E = 0 and E = 255 are reserved to denote exact zero and infinity. Therefore, the normal range of the exponent is −126 to 127, which is represented by values of E from 1 to 254.

13
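Extracting the E field from a 32-bit word and undoing the excess-127 bias can be sketched in a few lines (the function name `field_exponent` is mine):

```python
def field_exponent(word):
    """Extract the 8-bit E field of a single-precision word and convert
    it from excess-127 format to the actual exponent (E - 127)."""
    e = (word >> 23) & 0xFF        # bits 30..23 hold the biased exponent
    return e, e - 127
```

For the pattern 0xC0B40000 used later in this lecture, the field reads 129 and the actual exponent is 2.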
Number Representations! – Floating Point Number
▪ Single-Precision Floating-Point Format
• The IEEE standard calls for a normalized mantissa, which means that the most-significant bit is always equal to 1.
• Thus, it is not necessary to include this bit explicitly in the mantissa field.
• Therefore, if M is the bit vector in the mantissa field, the actual value of the mantissa is 1.M, which gives a 24-bit mantissa.
• Consequently, the floating-point format represents the number

Value = (–1)^S × 1.M × 2^(E−127)

• The size of the mantissa field allows the representation of numbers that have the precision of about seven decimal digits. The exponent field range of 2^−126 to 2^127 corresponds to about 10^±38.

14
Number Representations! – Floating Point Number
▪ Single-Precision Floating-Point Format
Example: Convert the single-precision value 0xC0B40000 into decimal.

(C0B40000)16 = (1100 0000 1011 0100 0000 0000 0000 0000)2

S = 1 → negative number
E = (10000001)2 = 129 → Exponent = E − 127 = 2
M = 01101000…0 → 1.M = 1.01101

Value = (–1)^S × 1.M × 2^(E−127)
Value = (–1)^1 × 1.01101 × 2^2 = –(2^0 + 2^–2 + 2^–3 + 2^–5) × 2^2 = –5.625
15
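The hand conversion above can be reproduced programmatically. A minimal sketch for normalized numbers only — zero, subnormals, infinity, and NaN are not handled here (the function name `decode_single` is mine):

```python
def decode_single(word):
    """Decode a normalized IEEE-754 single-precision 32-bit word.
    Applies Value = (-1)^S * 1.M * 2^(E-127)."""
    s = (word >> 31) & 0x1          # sign bit
    e = (word >> 23) & 0xFF         # excess-127 exponent field
    m = word & 0x7FFFFF             # 23-bit mantissa field
    mantissa = 1 + m / 2**23        # restore the implied leading 1: value 1.M
    return (-1)**s * mantissa * 2.0**(e - 127)
```

Decoding 0xC0B40000 gives –5.625, matching the worked example.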
Number Representations! – Floating Point Number
▪ Single-Precision Floating-Point Format
Example: Convert the decimal 329.390625 to single-precision floating point.

(329.390625)10 = (101001001.011001)2 = 1.01001001011001 × 2^8

Positive number → S = 0
Exponent = 8 → E = 8 + 127 = 135 = (10000111)2
M = 01001001011001 (the implied 1 of the mantissa means we don't include the leading 1)

Single-precision floating point: 0 10000111 01001001011001000000000
16
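The reverse direction — decimal to field bits — can also be sketched for normalized values, truncating the mantissa rather than rounding (the function name `encode_single` is mine):

```python
import math

def encode_single(x):
    """Encode a nonzero normalized value into IEEE-754 single-precision
    fields (S, E-field bits, M-field bits). Mantissa is truncated;
    zero, subnormals, and special values are not handled."""
    s = 0 if x > 0 else 1
    x = abs(x)
    exp = math.floor(math.log2(x))        # position of the leading 1
    frac = x / 2**exp - 1                 # drop the implied 1 of 1.M
    m = int(frac * 2**23)                 # 23 mantissa bits, truncated
    return s, format(exp + 127, "08b"), format(m, "023b")
```

Encoding 329.390625 reproduces the three fields shown on this slide.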
Number Representations! – Floating Point Number
▪ Single-Precision Floating-Point Format
More Examples:

• (–0.75)10 → 1 01111110 10000000000000000000000

• (0xC1580000)16 → 1 10000010 10110000000000000000000 → (–13.5)10

Special Cases:

Single Precision     Exponent   Mantissa   Value
Normalized Number    1 to 254   Anything   (–1)^S × 1.M × 2^(E−127)
Zero                 0          0          ±0 (according to the value of S)
Infinity             255        0          ±∞ (according to the value of S)
NaN                  255        Nonzero    NaN
17
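The special-case table translates into a small classifier. A sketch (the function name `classify_single` is mine; the E = 0 with nonzero M case, a subnormal, is not in the slide's table and appears here only as a fallback comment):

```python
def classify_single(word):
    """Classify a 32-bit single-precision pattern per the special-case table."""
    e = (word >> 23) & 0xFF
    m = word & 0x7FFFFF
    if e == 0 and m == 0:
        return "zero"
    if e == 255:
        return "infinity" if m == 0 else "NaN"
    if 1 <= e <= 254:
        return "normalized"
    return "subnormal"      # E = 0, M != 0 (not covered by the table above)
```

For example, 0x7F800000 (all-ones exponent, zero mantissa) classifies as infinity.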
Number Representations! – Floating Point Number
▪ Double-Precision Floating-Point Format
• Sign bit, S
• 11-bit exponent field, E
• 52-bit mantissa field, M

• The IEEE standard specifies the exponent in the excess-1023 format:

Exponent = E − 1023

18
Number Representations! – Floating Point Number
• Single-Precision Floating-Point Boundaries
  • Largest positive/negative number = ±(2 – 2^–23) × 2^127 ≈ ±2 × 10^38
  • Smallest positive/negative number = ±1 × 2^–126 ≈ ±2 × 10^–38

• Double-Precision Floating-Point Boundaries
  • Largest positive/negative number = ±(2 – 2^–52) × 2^1023 ≈ ±2 × 10^308
  • Smallest positive/negative number = ±1 × 2^–1022 ≈ ±2 × 10^–308
19
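The single-precision boundary formulas can be evaluated directly (the function name `single_bounds` is mine):

```python
def single_bounds():
    """Largest and smallest positive normalized single-precision values:
    (2 - 2^-23) * 2^127 (all-ones mantissa, E = 254) and 1.0 * 2^-126 (E = 1)."""
    largest = (2 - 2**-23) * 2.0**127
    smallest = 1.0 * 2.0**-126
    return largest, smallest
```

This gives roughly 3.4 × 10^38 and 1.2 × 10^–38, consistent with the ±2 × 10^±38 order-of-magnitude figures above.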
Number Representations! – Floating Point Number
▪ Computation Error
Example:
(0.6)10 = (0.1001 1001 1001 1001 1001…)2
        = 2^–1 × (1.001 1001 1001 1001 1001…)2
Then:
Positive number → S = 0
Exponent = –1 → E = –1 + 127 = 126 = (01111110)2
M = 00110011001100110011001 (truncated to 23 bits)

Single-precision floating point: 0 01111110 00110011001100110011001 → 0.59999996423…

Error = 0.6 – 0.59999996423… ≈ 3.6 × 10^–8
20
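This error is easy to observe by round-tripping 0.6 through a 32-bit float with the standard `struct` module. Note that `struct` (like IEEE hardware) rounds to nearest, giving 0.60000002…, slightly above 0.6, whereas the slide truncates the mantissa and lands slightly below; either way the error is on the order of 10^–8 (the function name `single_roundtrip` is mine):

```python
import struct

def single_roundtrip(x):
    """Store x as an IEEE single-precision float ('f') and read it back,
    exposing the representation error for values like 0.6."""
    return struct.unpack("<f", struct.pack("<f", x))[0]
```

Comparing `single_roundtrip(0.6)` with 0.6 shows a discrepancy of about 2.4 × 10^–8.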
Number Representations! – Fixed Vs Floating
[Photo: the technology giants Steve Jobs and Bill Gates]

"Steve Wozniak is a brilliant guy. He wrote the best BASIC* on the planet; it is perfect in every way except for one thing: it just uses fixed point, it is not floating point."

Steve Wozniak
Co-founder of Apple Inc.

https://www.businessinsider.com/the-bill-gates-steve-jobs-feud-frenemies-2017-6
21
References
‒ M. Mano and M. Ciletti, Digital Design, with an introduction to the
Verilog HDL. 5th Ed. Pearson, 2013.
‒ John F. Wakerly, Digital Design: Principles and Practices. 4th Ed. Pearson,
2005.
‒ R. Katz and G. Borriello, Contemporary Logic Design. 2nd Ed. Pearson, 2005.
‒ S. Brown and Z. Vranesic, Fundamentals of Digital Logic with Verilog Design. 3rd Ed. SEM, 2013.
‒ Machine Structure Course. Online:
https://inst.eecs.berkeley.edu/~cs61c/sp06/handout/
‒ B. Parhami, Computer Arithmetic: Algorithms and Hardware Designs. 2nd
Ed. Oxford U. Press, 2010.
‒ Floating-Point Arithmetic, IEEE standard 754, 2019.

22
