Hi All,

I'm trying to learn some theory for a course, and I don't understand this example. I'm not sure whether it's deliberately poorly worded to trick me, or whether I've just missed something.

Let's assume we have the following 24-bit binary string: 001101100011100100110101

When this is stored on a hypothetical 12-bit computer, 6 bits are reserved for the mantissa (significand), 5 for the characteristic, and 1 for the sign.
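In case it helps, here's how I'm decoding a single 12-bit word, sketched in Python. I'm assuming the layout is sign | characteristic | mantissa and that the characteristic is stored in excess-15, with the mantissa read as a fraction 0.m1 m2 ... m6; the example doesn't actually spell those conventions out, so I may have that part wrong:

# Decoding one 12-bit word in the format described above.
# My assumptions (not stated explicitly in the example):
#   - layout is sign | characteristic | mantissa = 1 + 5 + 6 bits
#   - the characteristic is stored in excess-15 (bias of 15)
#   - the mantissa is a pure fraction 0.m1 m2 ... m6 (no hidden bit)

word = "001101100011"               # the first 12 bits of the string

sign = -1 if word[0] == "1" else 1
characteristic = int(word[1:6], 2)  # 01101 -> 13
mantissa = int(word[6:], 2) / 2**6  # 100011 -> 35/64 = 0.546875

value = sign * mantissa * 2**(characteristic - 15)
print(value)                        # prints 0.13671875

Running that on the first 12 bits at least gives me 0.13671875, which matches the first number in the quote below.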

The example goes on to list a number of things that can be derived from this string. However, the one thing I don't understand is where they've gotten this:

"The string could represent the two (smallest) floating point numbers, 0.1367185 & -0.00040435791015625. Any real number between 0.13671875 and 0.140625 will have the same floating point representation. In addition, any real number between -0.000404357791015625 and -0.0004119873046875 will have the same computer representation".

My questions are:

1) How do they arrive at these two "smallest" floating point numbers? I can see what the first one is (0.13671875); it's merely the normalised form. But I don't see where the second one comes from at all.

2) How do they derive the range (-0.00040435791015625 to -0.0004119873046875)?

Thanks!