## Computing efficiently

1. What is gained when $\displaystyle x^y$ is computed as $\displaystyle 2^{y\log_2 x}$ (for $\displaystyle x>0$) rather than the "direct" (but not naive) way?

You get a smaller base (not sure if that helps, but I'll leave it to be corrected) ... on the other hand, $\displaystyle \log$ introduces (additional) error.

2. Originally Posted by courteous
What is gained when $\displaystyle x^y$ is computed as $\displaystyle 2^{y\log_2 x}$ (for $\displaystyle x>0$) rather than the "direct" (but not naive) way?

You get a smaller base (not sure if that helps, but I'll leave it to be corrected) ... on the other hand, $\displaystyle \log$ introduces (additional) error.
I don't think I'll have the answer in any case, but you should probably specify whether x,y are integers/floats/other.

3. This is from a chapter on IEEE-754 standard, so call $\displaystyle x$ and $\displaystyle y$ "floats". Though, at the level of 23 (or 52 for double precision) bits, they're all really just integers, aren't they?

The textbook uses the above (questioned) fact just to show that even 52 bits (double precision) aren't good enough for IEEE and we need 64 bits (Extended precision). But it doesn't show why $\displaystyle x^y$ is better computed as $\displaystyle 2^{y\log_2 x}$.

4. Originally Posted by courteous
This is from a chapter on IEEE-754 standard, so call $\displaystyle x$ and $\displaystyle y$ "floats". Though, at the level of 23 (or 52 for double precision) bits, they're all really just integers, aren't they?

The textbook uses the above (questioned) fact just to show that even 52 bits (double precision) aren't good enough for IEEE and we need 64 bits (Extended precision). But it doesn't show why $\displaystyle x^y$ is better computed as $\displaystyle 2^{y\log_2 x}$.
Perhaps this will be of some use?

Info: (gmp) Normal Powering Algorithm

There might be further notes in the source code of GMP.

The GMP documentation didn't help me, I'm afraid.

Should've looked at Wikipedia first, though this paragraph is still unclear (to me):
Moreover, $\displaystyle c^d = b^{(\log_b c)d}$ reduces the exponentiation to looking up the logarithm of $\displaystyle c$, multiplying it with $\displaystyle d$ (possibly using the previous method*) and looking up the antilogarithm of the product.
Looking up, as in "looking up in the tables"? Is this then what makes it efficient, the fact that the computer has those tables already stored?
Without this stored table, this method isn't any more efficient, because the antilogarithm is itself an exponentiation, right (or is there some subtle difference, like that between antiderivative and integral)?

*$\displaystyle cd = b^{\log_b c}\cdot b^{\log_b d} = b^{\log_b c + \log_b d}$
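For what it's worth, here is a quick numerical sketch in Python of both identities (with `math.log2` and the `**` operator standing in for the log and antilog table lookups, which is my substitution for illustration, not something from the article):

```python
import math

b = 2.0
c, d = 6.0, 7.0

# Multiplication via logarithms: c*d = b^(log_b c + log_b d).
# One addition in "log space" replaces the multiplication.
product = b ** (math.log2(c) + math.log2(d))
assert math.isclose(product, c * d)

# Exponentiation via logarithms, as in the quoted paragraph:
# c^d = b^((log_b c) * d).
power = b ** (math.log2(c) * d)
assert math.isclose(power, c ** d)

print(product, power)
```

Both come out equal to $\displaystyle 42$ and $\displaystyle 6^7$ up to floating-point rounding, which is exactly the "additional error" worry from post 1.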

6. Originally Posted by courteous
The GMP documentation didn't help me, I'm afraid.

Should've looked at Wikipedia first, though this paragraph is still unclear (to me):

Looking up, as in "looking up in the tables"? Is this then what makes it efficient, the fact that the computer has those tables already stored?
Without this stored table, this method isn't any more efficient, because the antilogarithm is itself an exponentiation, right (or is there some subtle difference, like that between antiderivative and integral)?

*$\displaystyle cd = b^{\log_b c}\cdot b^{\log_b d} = b^{\log_b c + \log_b d}$
Isn't exponentiation with base 2 just a bitshift?

2^0 -> 1
2^1 -> 10
2^2 -> 100

Or am I missing something? I haven't really worked with floating point in this way.
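A quick sketch in Python of what I mean (the integer case; for floats there is no shift, since scaling by a power of two just bumps the stored exponent, which `math.ldexp` exposes):

```python
import math

# For a nonnegative integer n, 2**n is literally a left shift of 1:
# 1 -> 10 -> 100 -> ... in binary.
for n in range(8):
    assert (1 << n) == 2 ** n

# For a float, multiplying by 2**n adds n to the exponent field
# instead of shifting significand bits; math.ldexp does exactly that.
assert math.ldexp(1.0, 10) == 1024.0
```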

7. Originally Posted by courteous
What is gained when $\displaystyle x^y$ is computed as $\displaystyle 2^{y\log_2 x}$ (for $\displaystyle x>0$) rather than the "direct" (but not naive) way?

You get a smaller base (not sure if that helps, but I'll leave it to be corrected ) ... on the other hand, $\displaystyle \log$ introduces (additional) error.
It is (or rather can be) more efficient because you are using base 2 together with a binary float representation, so logs are only ever required for numbers in the range 1 to 2: the main part of the log can be read straight out of the exponent field (-ish, if we ignore the normalisation/non-normalisation complication), and the same sort of tricks can be played in the reverse direction.

CB
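A sketch of this in Python (using `math.frexp`, which is my choice here; it returns a mantissa in $[0.5, 1)$, and a rescale puts it into the 1-to-2 range mentioned above):

```python
import math

x = 78125.0

# frexp decomposes x = m * 2**e with 0.5 <= m < 1,
# so log2(x) = e + log2(m).
m, e = math.frexp(x)

# Rescale so the mantissa lies in [1, 2): the integer part of the log
# then comes straight from the exponent field, and a log table (or
# polynomial) only ever has to cover the interval [1, 2).
m2, e2 = 2.0 * m, e - 1

log2_x = e2 + math.log2(m2)
assert math.isclose(log2_x, math.log2(x))
print(e2, m2)
```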

8. Originally Posted by CaptainBlack
..., so logs are only ever required for numbers in the range 1 to 2, ...
Could you give some (simple) example?
For instance, $\displaystyle 5^7=78125=2^{7\log_2 5}$.
In memory that would look like $\displaystyle 0000.0101^{0000.0111}=101^{111}=10^{111\log 101}= ?$
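For what it's worth, the decimal version of that example does come out right in double precision; rounding the float result recovers the exact integer here:

```python
import math

# 5**7 computed the "log" way: 2**(7 * log2 5).
val = 2.0 ** (7 * math.log2(5))

# The float result lands within rounding distance of the true value,
# so rounding recovers it exactly in this case.
assert round(val) == 78125 == 5 ** 7
assert abs(val - 78125) < 1e-5
```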