I have been working through D. L. Johnson's "Elements of Logic via Numbers and Sets" and have come across this problem, amongst others, which has me stumped.
"Prove by contradiction that the cube of the largest of three consecutive integers cannot be equal to the sum of the cubes of the other two."
Assume, then (for contradiction) that the above is not the case, i.e. that
$$(n+2)^3 = n^3 + (n+1)^3$$
for some integer $n$, where $n$, $n+1$, $n+2$ are the three consecutive integers. Expanding both sides gives $n^3 + 6n^2 + 12n + 8 = 2n^3 + 3n^2 + 3n + 1$, and hence
$$n^3 - 3n^2 - 9n - 7 = 0.$$
I suppose that we wish to show that this equation holds for no integer $n$, to provide the necessary contradiction. I'm not sure, though, how I might do this. Any hints much appreciated.
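(Not part of the original question, but a quick way to verify the algebra above is the following sympy sketch, assuming the parameterisation $n$, $n+1$, $n+2$ used in the question.)

```python
# Sanity-check of the expansion, assuming the integers are n, n+1, n+2.
from sympy import symbols, expand

n = symbols('n')

# Difference between the two sides of (n + 2)^3 = n^3 + (n + 1)^3;
# it vanishes exactly when n^3 - 3n^2 - 9n - 7 = 0.
f = expand((n + 2)**3 - (n**3 + (n + 1)**3))
print(f)  # -> -n**3 + 3*n**2 + 9*n + 7
```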
Got it. Both the turning points are below the $n$-axis, so there can be only one real root. How, though, did you conclude that it must lie between $n = 3$ and $n = 6$? I can understand the reasoning behind the choice of $n = 3$, since we know the curve to lie beneath the $n$-axis there, but why $n = 6$? Was this just a random stab in the dark? What would have happened if the root lay at, say, $n = 4000/3$ (i.e. a much larger rational, non-integer value)? Would one use the Newton-Raphson method, or a similar iterative process, until the 'window' was sufficiently small to evaluate $f(n)$ at a few values?
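In case it helps, here is a minimal bisection sketch (my own illustration, not from the answer referred to above), assuming the cubic $f(n) = n^3 - 3n^2 - 9n - 7$ derived in the question. It shows how an iterative bracketing method, in the spirit of the Newton-Raphson suggestion, shrinks the window until at most one integer candidate remains:

```python
import math

# Bracketing the single real root of f(n) = n^3 - 3n^2 - 9n - 7
# (the cubic derived in the question; f(3) = -34 < 0 and f(6) = 47 > 0).

def f(x: float) -> float:
    return x**3 - 3 * x**2 - 9 * x - 7

lo, hi = 3.0, 6.0
while hi - lo > 0.5:                # width <= 0.5 leaves at most one integer inside
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)

print(lo, hi)                       # -> 4.875 5.25
k = math.ceil(lo)                   # the only integer candidate in the bracket
print(k, f(k))                      # -> 5 -2.0, so the root is not an integer
```

Of course, for this particular cubic the original bracket $[3, 6]$ already leaves only $n = 4$ and $n = 5$ as integer candidates, and $f(4) = -27$, $f(5) = -2$ rule both out without any iteration.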
One approach might be to note that, by Vieta's formulas, the product of all 3 roots of this equation has to be 7 (the negative of the constant term $-7$).

So, by the rational root theorem, the only possible rational roots are $\pm 1$ and $\pm 7$, and you can test by exhaustion whether any of these is a root of the cubic. You'll find they're not. Therefore the cubic has no rational roots, and in particular no integer roots.

Because the sum of the roots is 3 and the sum of the products of the roots taken 2 at a time is $-9$ (both of which are integers), I think you may be able to show that there are other limitations on the roots, eliminating any remaining possibilities in the same way.
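To make the exhaustion concrete, here is a short check (my sketch, assuming the cubic $n^3 - 3n^2 - 9n - 7 = 0$ from the question); by the rational root theorem any rational root of a monic integer polynomial must be an integer divisor of the constant term:

```python
# Exhaustive test of the rational-root candidates for n^3 - 3n^2 - 9n - 7 = 0.
# Any rational root must divide the constant term 7, so +-1 and +-7 suffice.

def f(n: int) -> int:
    return n**3 - 3 * n**2 - 9 * n - 7

for c in (1, -1, 7, -7):
    print(c, f(c))   # -> -18, -2, 126, -434: none is zero

assert all(f(c) != 0 for c in (1, -1, 7, -7))  # hence no integer root: contradiction
```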