I think the main idea is as follows:

To write p/q as a decimal, you perform long division, first rewriting p as p.00000000...

When dividing by q, there are only q possible remainders (0 through q−1). If the remainder is ever 0, the decimal terminates. Otherwise, since there are only finitely many possible remainders, some remainder must eventually occur twice. From the first repeated remainder onward the digits repeat, because each remainder completely determines the next digit and the next remainder (you probably have to exclude the very first division step here, since it produces the integer part and is different from the others).
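This argument translates directly into code: carry out the long division, remember at which digit position each remainder first appeared, and stop as soon as a remainder recurs (the cycle) or hits 0 (termination). The sketch below is my own illustration, and the function name `decimal_expansion` is made up for this example.

```python
def decimal_expansion(p, q):
    """Long-divide p by q (q > 0), returning (integer_part, digits, repeat_start).

    `digits` are the decimal digits after the point; `repeat_start` is the
    index in `digits` where the repeating block begins, or None if the
    expansion terminates.
    """
    integer_part, r = divmod(p, q)   # the "very first division": integer part
    digits = []
    seen = {}                        # remainder -> index of digit it produced
    while r != 0 and r not in seen:
        seen[r] = len(digits)
        d, r = divmod(10 * r, q)     # bring down a 0, divide again
        digits.append(d)
    # a repeated remainder means the digits cycle from where it first appeared
    repeat_start = seen[r] if r != 0 else None
    return integer_part, digits, repeat_start
```

For example, 1/7 gives digits 142857 repeating from the start, while 1/4 terminates as 0.25 because the remainder reaches 0.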