What is gained when $x^y$ is computed as $2^{y\log_2 x}$ (for $x > 0$) rather than the "direct" (but not naive) way?
You get a smaller base (not sure if that helps, but I'll leave that to be corrected) ... on the other hand, computing $\log_2 x$ introduces (additional) error.
This is from a chapter on the IEEE-754 standard, so call $x$ and $y$ "floats". Though, at the level of 23 bits (or 52 for double precision), they're all really just integers, aren't they?
The textbook uses the above (questioned) fact just to show that even 52 bits (double precision) aren't good enough for IEEE and we need 64 bits (extended precision). But it doesn't show why $x^y$ is better computed as $2^{y\log_2 x}$.
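For what it's worth, the precision issue can be seen numerically. A minimal Python sketch (my own, not from the textbook) comparing $2^{y\log_2 x}$ done entirely in double precision against the library `pow`; the point is that the ~1 ulp rounding error in $\log_2 x$ gets multiplied by $y$:

```python
import math

def pow_via_log2(x, y):
    """Compute x**y as 2**(y*log2(x)), entirely in double precision."""
    return 2.0 ** (y * math.log2(x))

x, y = 10.0, 300.0
naive = pow_via_log2(x, y)
good = math.pow(x, y)
rel_err = abs(naive - good) / good
# The ~1e-16 relative rounding error in log2(x) is amplified by the
# size of y*log2(x) (~1000 here), so the final result can be off by
# many ulps -- which is the textbook's argument for carrying the
# intermediate y*log2(x) in extended precision.
print(rel_err)
```

When $\log_2 x$ happens to be exact (e.g. $x = 2$), the error vanishes; the trouble is the general case.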
Perhaps this will be of some use?
Info: (gmp) Normal Powering Algorithm
There might be further notes in the source code of GMP.
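For reference, the "normal powering algorithm" the GMP manual refers to is plain binary exponentiation (square-and-multiply over the bits of the exponent). A minimal Python sketch of that idea only — not GMP's actual code, which adds sliding windows and other refinements:

```python
def binary_pow(base, exp):
    """Square-and-multiply: scan the exponent's bits, least significant
    first; square for every bit, multiply in only for set bits."""
    assert exp >= 0
    result = 1
    while exp:
        if exp & 1:       # current bit set -> fold this power in
            result *= base
        base *= base      # square for the next bit position
        exp >>= 1
    return result

print(binary_pow(3, 13))  # 1594323
```

This takes O(log exp) multiplications instead of exp - 1, which is presumably the "previous method" the Wikipedia paragraph alludes to.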
The GMP documentation didn't help me, though.
Should've looked at Wikipedia first, though this paragraph is still unclear (to me):

"Moreover, the identity reduces the exponentiation to looking up the logarithm of c, multiplying it with the exponent (possibly using the previous method*) and looking up the antilogarithm of the product."

Looking up, as in "looking up in the tables"? Is this then what makes it efficient, the fact that the computer has those tables already stored? Without a stored table, this method isn't any more efficient, because the antilogarithm is itself an exponentiation, right? (Or is there some subtle difference, like between an antiderivative and an integral, I think ...?)
*
It is (or rather can be) more efficient because you are using base 2 and a binary float representation, so logarithms are only ever required for numbers in the range 1 to 2: the main part of the log can be read straight out of the exponent field (-ish, if we ignore the normalised/non-normalised complication). The same sort of tricks can be played in the reverse direction, for the antilogarithm.
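CB's point can be sketched in a few lines of Python (my own illustration; `math.frexp` splits a float into a mantissa in [0.5, 1) and an integer exponent, and `math.log2` stands in for the small lookup table):

```python
import math

def log2_via_exponent(x):
    """log2(x) = e + log2(m) where x = m * 2**e with 0.5 <= m < 1.
    Rescaling to 2m in [1, 2) shows that only that narrow range
    ever needs an actual logarithm (a small table in practice)."""
    assert x > 0.0
    m, e = math.frexp(x)                 # x == m * 2**e, 0.5 <= m < 1
    return (e - 1) + math.log2(2.0 * m)  # 1 <= 2m < 2

for x in (0.3, 7.0, 1e20):
    print(log2_via_exponent(x), math.log2(x))
```

The integer part comes essentially for free from the exponent bits; only the fractional part, `log2(2m)`, needs real work.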
CB