
Day 3

Created Wednesday 19 February 2020

Multiplication and Division (next week we'll do boolean expressions).


We can do binary multiplication and division by 2 very easily using an arithmetic shift operation. We call it this because we preserve the sign bit.

A left arithmetic shift inserts a zero in for the rightmost bit and shifts everything else left one bit; in effect, it multiplies by 2.
So, 01011 is +11. A shift to the left multiplies by 2 like so: 10110 = +22

A right arithmetic shift copies the sign bit into the leftmost position and shifts everything else right one bit (refer to the slide). This divides by 2. So 01100 = +12 becomes 00110 = +6.
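Here's a minimal Python sketch of both shifts on 5-bit strings (the 5-bit width and function names are my own, not from the slides):

def arith_shift_left(bits):
    # Drop the leftmost bit, shift everything left, insert a 0 on the right.
    return bits[1:] + "0"

def arith_shift_right(bits):
    # Copy the sign bit into the leftmost position and drop the rightmost bit.
    return bits[0] + bits[:-1]

print(arith_shift_left("01011"))   # 10110  (+11 -> +22)
print(arith_shift_right("01100"))  # 00110  (+12 -> +6)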

Multiplying Two Binary Numbers

Copy the multiplicand (the top number) wherever there is a 1 in the multiplier (the one on the bottom), then use zeros to fill in blank columns. Add the rows.
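A rough Python sketch of the same shift-and-add idea (non-negative integers only; the function name is mine):

def binary_multiply(multiplicand, multiplier):
    product = 0
    shift = 0
    while multiplier:
        if multiplier & 1:                    # a 1 in this column of the multiplier
            product += multiplicand << shift  # copy the multiplicand, shifted into place
        multiplier >>= 1
        shift += 1
    return product

print(binary_multiply(0b1011, 0b101))  # 11 * 5 = 55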

Division

// say the dividend is 110110 and the divisor is 101. I'm using | to denote division here.
101|110110

Does the divisor fit into (i.e., is it <=) the current portion of the dividend? No: add a 0 to the quotient. Yes: add a 1 to the quotient and copy the divisor beneath. Subtract, bring down another dividend digit. Repeat step 1.
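A rough Python sketch of those steps (non-negative integers only; names are my own):

def binary_divide(dividend, divisor):
    quotient = 0
    remainder = 0
    for i in reversed(range(dividend.bit_length())):
        remainder = (remainder << 1) | ((dividend >> i) & 1)  # bring down the next dividend bit
        quotient <<= 1
        if divisor <= remainder:   # does the divisor fit?
            remainder -= divisor   # subtract
            quotient |= 1          # put a 1 in the quotient
    return quotient, remainder

print(binary_divide(0b110110, 0b101))  # (10, 4), i.e. quotient 1010, remainder 100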

Floating Point Numbers

Now then, things start getting hard here. How do we deal with endlessly repeating digits, like 1/3 = 0.33333...? Because of how floating point works, things that should be equal on paper may not be equal on the computer.

Floating-point numbers allow an arbitrary number of decimal places to the right of the decimal point. We often express them in scientific notation.

Computers use a form of scientific notation for floating-point representation. They have three components:
Sign such as +/-
Mantissa such as 1.25
And then exponent such as 10^-1

So we get +1.25 * 10^-1

Computer representation of a floating-point number consists of three fixed-size fields:
Sign | Exponent | Significand
Note, significand and mantissa don't technically mean the same thing but people use them interchangeably. We use significand to refer to the fractional part of a floating point number.
We have a one-bit sign field
Exponent determines range of values we can represent
Significand determines precision of the representation

The significand is always preceded by an implied binary point. Thus, the significand always contains a fractional binary value.
The exponent indicates the power of 2 by which the significand is multiplied

Expressing in Floating Point

Express 32₁₀ in the simplified 14-bit floating-point model.
32 is 2^5, so in (binary) scientific notation we get: 32 = 1.0 * 2^5 = 0.1 * 2^6
In a moment, we'll explain why we prefer the version farthest to the right.
Using this info, we put 110 (= 6₁₀) in the exponent field and 1 in the significand as shown:
sign | exponent | significand
0 | 00110 | 10000000

Negative Exponents

Sadly, there is no way we can express 0.5 (= 2^-1)! We have no sign in the exponent field.

To fix the issue of synonymous forms (having multiple representations of the same number in floating point), we establish a rule that the first digit of the significand must be 1, with no ones to the left of the radix point. This is known as normalization! It results in a unique pattern for each floating-point number.

To provide for negative exponents, we will use a biased exponent! Remember, we use a bias to offset everything to help us represent negative numbers. Let's say we have a 5-bit exponent field, so 2^(5-1) - 1 = 2^4 - 1 = 15. As such, 15 is our bias, and our exponent will use excess-15 representation. Now then, in our model, exponent values of less than 15 are negative. Thus, we can represent fractional numbers like 0.5.
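A tiny Python sketch of excess-15, just to make the bias concrete (names are mine):

BIAS = 2**(5 - 1) - 1   # = 15 for a 5-bit exponent field

def encode_exponent(actual):
    return actual + BIAS   # value stored in the exponent field

def decode_exponent(stored):
    return stored - BIAS   # actual power of 2

print(encode_exponent(6))    # 21 -> 10101 in the field
print(decode_exponent(12))   # -3 -> a stored 01100 means 2^-3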

So then, 32 = 1.0 * 2^5 = 0.1 * 2^6
To use our excess-15 biased exponent, we add 15 to 6, giving 21₁₀ = 10101₂
So then, we'd represent it like so in floating-point:
0|10101|10000000

How about 0.0625₁₀ in the revised 14-bit floating-point model?
We know that 0.0625 is 2^-4. So, in (binary) scientific notation we have 0.0625 = 1.0 * 2^-4 = 0.1 * 2^-3
To use our excess-15 biased exponent, we add 15 to -3, giving 12₁₀ = 01100₂
So we have this:
0|01100|10000000
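Here's a rough Python sketch that packs a number into this 14-bit model (1 sign bit, 5-bit excess-15 exponent, 8-bit significand with the implied binary point and a leading 1). It's my own throwaway helper, only meant to reproduce the examples above, and it assumes a non-zero value that fits the model:

def encode_14bit(value):
    sign = "0" if value >= 0 else "1"
    mag = abs(value)
    # Normalize so the significand looks like 0.1xxxxxxx, i.e. mag is in [0.5, 1).
    exponent = 0
    while mag >= 1.0:
        mag /= 2.0
        exponent += 1
    while mag < 0.5:
        mag *= 2.0
        exponent -= 1
    exp_field = format(exponent + 15, "05b")   # excess-15 exponent field
    sig_bits = ""
    for _ in range(8):                         # peel off 8 bits after the binary point
        mag *= 2.0
        bit = int(mag)
        sig_bits += str(bit)
        mag -= bit
    return sign + "|" + exp_field + "|" + sig_bits

print(encode_14bit(32))       # 0|10101|10000000
print(encode_14bit(0.0625))   # 0|01100|10000000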

Don't forget negative numbers! Refer to the slides on Ch02.

Caveat to Programmers

Due to truncation issues, we cannot assume, for floating point numbers:
(a + b) + c = a + (b + c) or
a * (b + c) = ab + ac

As such, the associative and distributive laws don't necessarily apply to floating-point operations.

Although these should be the same, they may not be, due to how we handle floating-point numbers.

To test for equality to some other number, it is best to declare a "nearness to x" epsilon value.
Instead of x == 2, do: if (abs(x - 2) < epsilon)... assuming we defined epsilon correctly.
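A quick Python example of why that matters (1e-9 is just an assumed tolerance; pick one that fits the scale of your values):

epsilon = 1e-9
a = 0.1 + 0.2
print(a == 0.3)                 # False -- binary floating point can't represent these exactly
print(abs(a - 0.3) < epsilon)   # True  -- the "nearness" test behaves as expected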

Character Codes

Calculations aren't useful unless they can be displayed in a way humans can understand. As such, we need some sort of character encoding scheme.
So we developed character codes! Larger memory meant we could have richer character codes. The earliest coding systems used 6 bits. Binary-coded decimal (BCD) was one of these early codes, used by IBM mainframes in the 1950s to 1960s.

The next few slides have some more background on this and how it developed.

Nowadays, we have ASCII and Unicode.

Error Detection and Correction

How do we make sure the signals that we send will be correct? How do we make sure the number we thought we sent is what they got?

It is physically impossible for data recording or transmission to be 100% correct.

One of the ways we can make sure a number is right is to append a check digit at the end of a long number. This can provide some protection against data input errors. The last characters of UPC barcodes and ISBNs are check digits.
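For instance, here's a small Python sketch of the UPC-A check digit rule (digits in the odd positions are weighted 3, even positions 1, and the check digit brings the total to a multiple of 10; the function name is mine):

def upc_check_digit(first11):
    odd = sum(int(d) for d in first11[0::2])   # 1st, 3rd, 5th, ... digits
    even = sum(int(d) for d in first11[1::2])  # 2nd, 4th, 6th, ... digits
    return (10 - (3 * odd + even) % 10) % 10

print(upc_check_digit("03600029145"))  # 2, so the full UPC is 036000291452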

Longer data streams require more economical and sophisticated error detection mechanisms.

Checksums and CRCs are examples of systematic error detection. ...

Modulo 2 Arithmetic

Works like clock arithmetic. In clock arithmetic, if we add 2 hours to 11:00, we get 1:00. It essentially wraps around.

In mod 2, if we add 1 to 1 we get 0. Y'know, just like binary addition, but with the carry dropped (it's an XOR).

Slide 111 of Ch02 ppt shows an example of dividing with modulo 2 arithmetic.
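Here's a rough Python sketch of the same operation (mod-2 division, where "subtract" is just XOR with no borrows), using my own made-up values rather than the slide's:

def mod2_divide(dividend, divisor):
    width = divisor.bit_length()
    while dividend.bit_length() >= width:
        shift = dividend.bit_length() - width
        dividend ^= divisor << shift   # "subtract" (XOR) the divisor, left-aligned
    return dividend                    # whatever is left over is the remainder

print(bin(mod2_divide(0b1001011, 0b1011)))  # 0b101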