Hi,

the method you are using is very, very inefficient. Your code is performing an enhanced version of trial division, which is one of the slowest algorithms known.

Now, there are two types of algorithms to determine the primality of an integer:

- Deterministic algorithms, that can prove that an integer is prime, but are usually quite slow

- Probabilistic algorithms, that assert that an integer is "probably" prime with arbitrary probability, and are very fast

All deterministic primality testing algorithms require more information about the integer to be able to work (usually the prime factorization of n - 1), except two: trial division and AKS. The former is horribly slow, and the latter is a relatively recent algorithm which is "reasonably fast", but is a real hassle to implement (let's say it's not your average algorithm in terms of complexity).
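For reference, trial division is trivial to write, which is about its only virtue. A minimal sketch (the function name is mine):

```python
def is_prime_trial(n: int) -> bool:
    """Deterministic but slow: tries every odd divisor up to sqrt(n)."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False  # found a proper divisor, so n is composite
        d += 2
    return True
```

Fine for small numbers, hopeless for the sizes used in cryptography.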

The probabilistic ones work as follows: you give them the number to check, and a parameter which describes how accurate the algorithm should be. In return the algorithm either outputs COMPOSITE, which guarantees that the integer is not prime, or outputs PROBABLY PRIME, which means that the integer is likely to be prime within bounds described by the parameter. These algorithms are comparatively quite simple to implement, and really fast. Of course the drawback is that they can potentially identify a composite number as a prime, but more on that a bit below. Here is an example of one of the first probabilistic algorithms: Fermat's test. It's based on Fermat's little theorem: if n is prime and a is not divisible by n, then a^(n - 1) ≡ 1 (mod n) - but this doesn't mean composite integers can't exhibit the same behavior. Thus you can construct an algorithm:

INPUT n (number to check for primality), k (accuracy parameter)

for k iterations, do

..choose a random number "a" between 2 and n - 2

..if a divides n, output COMPOSITE

..compute a^(n - 1) mod n

..if the result is different from 1, output COMPOSITE

output PROBABLY PRIME (this happens if the integer made it through the loop)
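The pseudocode above translates almost line for line into Python (the function name is mine; the three-argument built-in pow does the modular exponentiation efficiently):

```python
import random

def fermat_test(n: int, k: int) -> bool:
    """Fermat primality test: False means certainly composite,
    True means probably prime."""
    if n < 4:
        return n in (2, 3)
    for _ in range(k):
        a = random.randint(2, n - 2)
        if n % a == 0:
            return False  # a divides n, so n is composite
        if pow(a, n - 1, n) != 1:
            return False  # Fermat's little theorem fails, so n is composite
    return True
```

Note that a single COMPOSITE answer is conclusive; only PROBABLY PRIME carries uncertainty.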

Now here's the catch with this algorithm - apart from the random aspect of it, there is a certain class of integers known as Carmichael numbers, which will pass the test for every "a" coprime to them. And unfortunately, those numbers are not prime.

Fortunately this motivated the invention of several improved algorithms, and the pinnacle of probabilistic primality testing is called Miller-Rabin. While it is not much more complicated, it has the benefit of being substantially more accurate, and (more importantly) there is no analogue of Carmichael numbers that can fool this test. Now about the accuracy parameter k: with Miller-Rabin it means that if you let it run "k" times (refer to the loop in the pseudocode above), then the probability that a composite number "slips through" and is flagged as prime is at most 4^(-k) (in the case of Miller-Rabin - it varies among algorithms). For practical purposes, this means that after 30 or so iterations, the probability that the algorithm made a mistake is actually lower than the probability of your hardware itself making a mistake, thus giving a fairly appreciable guarantee that the algorithm will be reliable. And of course, it is very fast (testing a number with several thousand digits should take much less than a minute).
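A sketch of Miller-Rabin in Python (again the function name is mine): instead of just checking a^(n - 1) mod n, it writes n - 1 = 2^s * d with d odd and watches how a^d reaches 1 under repeated squaring, which is what closes the Carmichael loophole:

```python
import random

def miller_rabin(n: int, k: int = 30) -> bool:
    """Miller-Rabin test: False means certainly composite, True means
    probably prime with error probability at most 4**(-k)."""
    if n < 4:
        return n in (2, 3)
    if n % 2 == 0:
        return False
    # write n - 1 = 2^s * d with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(k):
        a = random.randint(2, n - 2)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue  # this witness "a" is consistent with n being prime
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # "a" witnesses that n is composite
    return True
```

Note that it correctly rejects 561, the smallest Carmichael number, which the Fermat test would likely accept.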

You can look up the Miller-Rabin algorithm for a more in-depth description, and there are hundreds if not thousands of code snippets available for virtually every language.

Hope I didn't confuse you too much, but if you are looking for an all-around solution to quickly check if any given integer is prime, Miller-Rabin is your friend.

-----

For your second question, look up Ulam's Spiral, which your images are a transform of. Essentially, a mathematician named Ulam noticed that prime numbers, when arranged along a spiral on a two-dimensional grid, tend to exhibit patterns (such as lining up on specific diagonals), and no one knows how to fully explain it.