First, sorry about the plain ASCII - LaTeX is acting up again. I was reading my data structures and algorithms textbook and came across the following naïve primality-testing algorithm:
  x <- GCD(n, 210)
  if x = 1 then return TRUE
  else return FALSE

It was said that if 8 < n < 120, the algorithm doesn't fail at all. But if n is chosen from [1, i], where i is "large", it fails about 1/2 * 2/3 * 4/5 * 6/7 (close to 23%) of the time. The question is: where does that probability come from? No other information was given, so I'm totally puzzled. Maybe the approximation follows from the Prime Number Theorem or something. Any clarification is welcome. Thanks!
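In case it helps, here is a quick Python sketch I put together to poke at the claim numerically (the names naive_test and sieve are mine, not the textbook's). It compares the GCD test against a sieve of Eratosthenes over [1, i]. Note the product 1/2 * 2/3 * 4/5 * 6/7 = 48/210 ~ 22.9% matches the fraction of integers coprime to 210 = 2*3*5*7; at i = 10^6 the measured failure rate is still noticeably below that, presumably because primes are not yet rare enough at that range.

```python
import math

def naive_test(n):
    # The textbook's test: declare n "prime" iff gcd(n, 210) == 1.
    # 210 = 2 * 3 * 5 * 7, the product of the first four primes.
    return math.gcd(n, 210) == 1

def sieve(limit):
    # Standard sieve of Eratosthenes; returns a primality lookup table
    # where prime[n] is True exactly when n is prime (0 <= n <= limit).
    prime = [False, False] + [True] * (limit - 1)
    for p in range(2, math.isqrt(limit) + 1):
        if prime[p]:
            for multiple in range(p * p, limit + 1, p):
                prime[multiple] = False
    return prime

i = 10**6
prime = sieve(i)
# A "failure" is any n where the GCD test disagrees with actual primality.
failures = sum(1 for n in range(1, i + 1) if naive_test(n) != prime[n])
print(failures / i)   # about 0.15 at this i; seems to climb as i grows
print(48 / 210)       # (1/2)(2/3)(4/5)(6/7), the ~23% figure from the book
```

So the ~23% seems to be a limiting value for large i rather than something you see at moderate i.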