Rather than a single example, here is a more general recipe that shows how to systematically construct random variables with this property.
Define two random variables U and V with, say, V = U^2. Now perform a principal component analysis on them to decorrelate them. The transformed random variables X and Y are uncorrelated by construction, so they always satisfy the property you stated, E[XY] = E[X]E[Y], even though they remain dependent (jointly, X and Y still determine U, and hence each other, through the parabola V = U^2).
You can give U and V any (nonlinear) relationship you like, and the property will still hold for the transformed random variables.
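As a sketch of this recipe (my construction, using numpy; U uniform on [0, 1] is an arbitrary choice that makes Cov(U, U^2) nonzero, so the PCA step actually does something):

```python
import numpy as np

rng = np.random.default_rng(0)

# U with nonzero Cov(U, U^2): uniform on [0, 1]
u = rng.uniform(0.0, 1.0, size=100_000)
v = u ** 2
data = np.stack([u, v])  # shape (2, n): rows are the two variables

# PCA decorrelation: project the centered data onto the eigenvectors
# of the sample covariance matrix
cov = np.cov(data)
eigvals, eigvecs = np.linalg.eigh(cov)
centered = data - data.mean(axis=1, keepdims=True)
x, y = eigvecs.T @ centered

# The transformed variables are uncorrelated: E[XY] ~= E[X]E[Y]
lhs = np.mean(x * y)
rhs = np.mean(x) * np.mean(y)
print(abs(lhs - rhs))  # ~0 up to floating-point error

# ...yet X and Y are still dependent: the transform is invertible,
# so every (x, y) pair lies on a rotated copy of the parabola v = u^2
recovered = eigvecs @ np.stack([x, y]) + data.mean(axis=1, keepdims=True)
print(np.allclose(recovered, data))
```

The decorrelation works because projecting onto the eigenvectors diagonalizes the covariance matrix, which forces the off-diagonal (cross-covariance) term to zero; it does nothing to higher-order dependence.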
Principal Component Analysis does for random variables roughly what Gram-Schmidt does for vectors: it produces an orthogonal set of components (orthogonal, not orthonormal — the components have unequal variances), ordered by how much of the data's total variance each one explains.
If you are keen, it would be worth consulting a good book on the subject.