What kind of algorithm "opens" (evaluates) large natural-number bases raised to natural-number exponents?
By hand, 3^5 is easy, as it demands only four repeated multiplications (3*3*3*3*3) to find the precise answer (= 243).
But 3^4567 does not fit on the screen of a typical calculator.
Even if we use computers, they usually give us approximate values as powers of ten (like 1.56789 * 10^234; this is just an example of what a calculator or computer screen might show, not a real approximation of 3^4567).
So, what I am looking for is an algorithm that gives precise values for these stupendously large numbers, not an approximate method.
Is there any shortcut method, or do our computers actually just use raw power and multiply by 3 over and over, 4566 times in this example? As far as I know, Taylor series can be used to find decimal values of logarithms precisely enough that they can be used to "open", say, 3^4567 with the required precision...
But does calculating an appropriate Taylor series offer any advantage over raw repeated multiplication?
I do not know, but it is my guess that it does not.
I am aware, and I apologize, that my question may sound irritating to many MHF readers, but I have a certain application in mind that requires an answer to this question.
Hey kpkkpk.
You might want to read how common computers store numbers, since it does the sort of things that you are trying to do:
Floating point - Wikipedia, the free encyclopedia
Thank you!
From the source you showed me I found a lot of information about how precise the computations performed by computers are.
Still, I do not understand the actual algorithm a computer performs when it computes, say, 3^4567.
If I measure the time the computer takes to calculate (= "open"), say, 3^13 and compare it to another computation, say, 3^4567, I assume the latter takes more time... so obviously some kind of repetition occurs inside these machines.
Perhaps I should try to find an answer on some programming forum instead?
Floating-point arithmetic should not be used for exact calculations with big integers. See this thread, where doing so leads to a misleading result. Many programming languages have libraries for arbitrary-precision integers. If I remember correctly, Scheme's regular integers can have arbitrary size, and Java has the BigInteger class.
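In Python, for example, the built-in integers are already arbitrary-precision, so getting the exact value is a one-liner (a quick illustration, not a prescription for any particular language):

```python
# Python's built-in int is arbitrary-precision, so 3**4567 is computed
# exactly, with every digit kept, rather than rounded the way a float is.
exact = 3 ** 4567

print(len(str(exact)))  # 2180 decimal digits
print(exact % 10)       # last digit: 7 (powers of 3 end in 3, 9, 7, 1, ...)
```

This is exactly the "precise value" you asked about: all 2180 digits are there, not a rounded 1.5... * 10^2179.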
Taylor series are used for computing (approximations to) real values of functions that cannot be expressed using standard arithmetic operations, such as trigonometric functions and logarithms.
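To make that concrete, here is a small sketch of approximating a logarithm by a series. It uses the expansion ln((1+x)/(1-x)) = 2*(x + x^3/3 + x^5/5 + ...), with x = 1/2 so that the argument is 3; this is purely an illustration, not necessarily what any particular calculator does:

```python
import math

# ln((1+x)/(1-x)) = 2*(x + x^3/3 + x^5/5 + ...); with x = 1/2 the
# argument (1+x)/(1-x) equals 3, so partial sums approximate ln(3).
def ln3_series(terms=40):
    x = 0.5
    return 2 * sum(x ** (2 * k + 1) / (2 * k + 1) for k in range(terms))

print(abs(ln3_series() - math.log(3)))  # tiny error, well below 1e-12
```

Note that this gives an *approximate real number*, which is the opposite of what the original question asks for; it is the right tool for logarithms, but not for exact integer powers.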
A more economical way to compute 3^4567 than performing 4566 multiplications is to express 4567 in binary: 4567 = 4096 + 256 + 128 + 64 + 16 + 4 + 2 + 1 = 1000111010111_2, and compute the powers of 3 that correspond to the 1's: 3^1, 3^2, 3^4, 3^16, 3^64, 3^128, 3^256, and 3^4096. The final result is the product of all these numbers. Note that subsequent powers of 3 are obtained from previous ones by squaring them: e.g., 3^4 = (3^2)^2. Altogether that is 12 squarings plus 7 further multiplications, i.e. 19 multiplications instead of 4566.
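This scheme is usually called exponentiation by squaring (or square-and-multiply). A minimal Python sketch, assuming nothing beyond built-in integers:

```python
def pow_by_squaring(base, exp):
    """Exact base**exp, scanning the exponent's binary digits from the right."""
    result = 1
    while exp > 0:
        if exp & 1:         # current binary digit of exp is 1: include this power
            result *= base
        base *= base        # square: base^1 -> base^2 -> base^4 -> ...
        exp >>= 1
    return result

print(pow_by_squaring(3, 13))  # 1594323
```

For 3^4567 the loop runs only 13 times (once per binary digit of 4567), which is why even huge exponents are fast.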