As π is carried to more and more decimal places, the value gets closer and closer to the true value, without ever exceeding it. At some place, a succession of zeroes is bound to be encountered. That would be a sign that the value is so close that a 1 could easily push it over. Would that not be a good place to stop and say "close enough"? What is the longest succession of zeroes that has ever been discovered in π?
Thinking more about this, it occurs to me that when you get to the hundredth decimal place, you're talking about a "googolth." As a googol is a number that exceeds by many orders of magnitude the estimated number of particles in the universe, it may be that the precision exceeds the granularity of reality, in which case it is a good place to stop and say "close enough," even if a long succession of zeroes has not been encountered.
The Feynman Point is a sequence of six 9s that begins at the 762nd decimal place of π. It is preceded by a 4, which could reasonably be rounded up to 5, and the expansion could end there. Taking into account the point made in my previous post, I think it would be reasonable to say that this truncated, rounded value is the value of π. Digits after that could be regarded as representing quantum noise.
Feynman point - Wikipedia, the free encyclopedia
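It is easy to locate the Feynman Point yourself. Here is a short Python sketch that computes π with Machin's formula using only integer arithmetic (the helper names `arctan_inv` and `pi_digits` are mine, not from the thread):

```python
def arctan_inv(x, one):
    """Return atan(1/x) scaled by `one`, via the Taylor series in integer arithmetic."""
    power = one // x          # first term: 1/x
    total = power
    x2 = x * x
    n = 1
    sign = 1
    while power:
        power //= x2          # next odd power of 1/x
        n += 2
        sign = -sign          # series terms alternate in sign
        total += sign * (power // n)
    return total

def pi_digits(n):
    """First n decimal digits of pi as a string '3.1415...', via Machin's formula."""
    one = 10 ** (n + 10)      # 10 guard digits absorb truncation error
    pi = 4 * (4 * arctan_inv(5, one) - arctan_inv(239, one))
    s = str(pi)               # '31415926...'
    return s[0] + '.' + s[1:n]

digits = pi_digits(800)
pos = digits.index('999999') - 1   # decimal place where the run of six 9s starts
print(pos)                         # 762, the Feynman Point
```

The run of six 9s is the first such run in the expansion, which is why searching for the substring works.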
I think that when you have a succession of zeros or of the highest digit after the "decimal" point in any number system, it indicates a close approach to the value being sought. For example, in hexadecimal, 0.4FFFFFF indicates a much closer approach to 0.5 than 0.481A32C, even though both are 0.5, rounded to one hexadecimal place. Likewise, 0.5000000A is much closer to 0.5 than 0.57123ABC, even though both are 0.5, rounded to one hexadecimal place. So, the beginning of a succession of zeros or of the highest digit is a good place to round the number off, in any number system except binary, where rounding degrades to truncating. For example, both 1.1010 and 1.1011, rounded to three places after the binary point, have to be 1.101
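Just to check the hexadecimal arithmetic above in Python (note that "0.5" here means hexadecimal 0.5, i.e. 5/16 = 0.3125 decimal; the helper name `hex_frac` is mine):

```python
def hex_frac(s):
    """Decimal value of a hexadecimal fraction like '0.4FFFFFF'."""
    frac = s.split('.')[1]
    return int(frac, 16) / 16 ** len(frac)

target = 0x5 / 16  # hexadecimal 0.5 = 0.3125 decimal
for s in ('0.4FFFFFF', '0.481A32C', '0.5000000A', '0.57123ABC'):
    print(s, abs(hex_frac(s) - target))
```

The runs of Fs and 0s give errors on the order of 10^-9, while the other two values miss hexadecimal 0.5 by a few hundredths, as the post claims.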
The structure of spacetime has nothing to do with the value of π. π is the ratio of the circumference to the diameter of a circle in Euclidean geometry. That this approximates the geometry of space at some scale is convenient, and so we use it, but it is only an approximation when so used. For practical purposes we usually stop well before the 100th decimal digit of π.
You are confusing physics with mathematics - they are not the same thing
CB
You may find this thread interesting on that point:

"At some place, a succession of zeroes is bound to be encountered."
http://www.mathhelpforum.com/math-he...gits-e-pi.html
(we looked at the digits of e, but the same rationale would apply to any finite sequence of digits including 00000000000000000)
Okay, so it's a base-n-centric view of things then. Note that best rational approximations are not base-specific. And I disagree that rounding in binary must be truncation. It is entirely possible to round 1.1011 to 1.110. Notice that 1.110 = 1.101 + 0.001. So there's no need for you to make an exception for binary; for example, compare rounding 1.1011 down to 1.101 with rounding it up to 1.110.
Anyway suppose you're approximating 5.19999912345, then there's still less error when choosing 5.1999991 than 5.2, and for real-world calculations it could be necessary to use the first rather than the second (an example that comes to mind is purifying metals for manufacture, where for example 99.99912345% pure may not be good enough), so this "approaching a value with each successive base-n digit" business is, I think, just an expression of a desire that numbers be more "round" than they often are. (The last part is a bit of speculative psychology on my part, I suppose. Don't take it as an argument, just a guess of sorts.)
There is no practical, real-world need to ever use pi to more than 6 places - any more accuracy than that is wasted, since you can't draw or measure a circle to more than 6 decimal places of accuracy. However, just for fun, let's take this to the limit: IF you could build a measuring device that was accurate to one angstrom, and IF you could draw a circle within 1 angstrom of being perfectly round, and IF that circle was the size of the known universe (about 15 billion light years in radius), THEN you could calculate the circle's circumference to within 1 angstrom of accuracy by using a value of pi that's good to 36 places (since 15 Giga LY = 1.4E36 angstroms). So clearly, knowing pi to anything more than 36 decimal places is purely for mathematical amusement. Once you've got pi to 36 decimal places, it really doesn't matter how many more digits you decide to add on - whether you choose a long string of zeroes or not, it's purely arbitrary.
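That 36-place bound is easy to sanity-check with Python's `decimal` module. The 50-place constant below is the standard value of π; the resulting error figure is my own arithmetic, not from the post:

```python
from decimal import Decimal, getcontext

getcontext().prec = 60

# pi to 50 decimal places (a well-known constant)
PI = Decimal('3.14159265358979323846264338327950288419716939937510')
pi_36 = PI.quantize(Decimal('1e-36'))   # pi rounded to 36 decimal places

radius = Decimal('1.4e36')              # ~15 billion light-years, in angstroms
error = abs(2 * radius * (PI - pi_36))  # circumference error, in angstroms
print(error)                            # about 0.55 angstroms
```

So a 36-place π does indeed pin down the circumference of a universe-sized circle to under an angstrom.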
I considered rounding 1.1011 to 1.110, but it didn't look right, as it involves changing a digit in a more significant place, versus just incrementing the digit being rounded, which is impossible in binary. However, now that you have pointed it out, I see that 1.110 = 1.101 + 0.001. What didn't occur to me is that even in decimal, rounding up a 9 entails incrementing the next more significant digit.
I still think that rounding 1.4999999 to 1.5 is better than rounding 1.4512345 to 1.5. In the former case, the rounding sacrifices very little accuracy, while in the latter case, the rounding results in a rough approximation.
Just to elaborate: Rounding always involves either truncation or incrementation.
Suppose we are rounding a number X.XXXX to X.XXX, where the X's are digits in any base. In base 10, we round 5.4321 down, giving 5.4321 → 5.432, which is truncation. And we round 2.3456 up, giving 2.3456 → 2.345 + 0.001, which is incrementation. Similarly, 6.7899 → 6.789 + 0.001, which involves carrying. Having X.XXX5 means either direction gives the same error, so we must decide whether to put X.XXX5 → X.XXX or X.XXX5 → X.XXX + 0.001.
Similarly in base 2, we must decide whether to put 1.1011 → 1.101 or 1.1011 → 1.101 + 0.001, where the latter involves carrying.
Well yes of course there's less error in that case. But suppose we have two irrational numbers we want to approximate,
a = 1.142979519999991234127389...
b = 1.13587198279817298471298712999999123789124...
Suppose we are considering these two approximations
a ≈ a' = 1.14297952
b ≈ b' = 1.1358719827981729847
The error |a-a'| is greater than the error |b-b'| even though a was rounded at a "good" spot and b was rounded at a "bad" spot.
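Checking that claim with exact decimal arithmetic (variable names are mine):

```python
from decimal import Decimal

a  = Decimal('1.142979519999991234127389')
a_ = Decimal('1.14297952')
b  = Decimal('1.13587198279817298471298712999999123789124')
b_ = Decimal('1.1358719827981729847')

print(abs(a - a_))                 # ~8.8E-15
print(abs(b - b_))                 # ~1.3E-20
print(abs(a - a_) > abs(b - b_))   # True
```

So the "good" rounding spot for a still leaves about five orders of magnitude more error than the "bad" spot for b, simply because b was rounded at a later decimal place.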
You're saying that π is an approximation, and I take that to mean it is an approximation to the "real" value of π that applies to the geometry of space. Is it possible that that real value is rational? And, by implication, whatever that real value is, π is equal to it only out to a certain number of decimal places. Beyond that decimal place, additional digits are only the result of a mathematical exercise and have no correlation to reality. Approximately where, in your opinion, is the decimal place at which π deviates from the real value?
I know the difference between math and physics. What I meant was that in a real-world application, π to 761 decimal places is already so close to even the "real" value, if we knew how to calculate it, that any additional precision would be insignificant compared to quantum fluctuations of spacetime.