So we all (hopefully) know the definition of gcd. And so if we want to prove that an integer $d$ is the gcd of two integers $a$ and $b$, then in general, we must show:
(a) $d \mid a$ and $d \mid b$, and,
(b) If there exists $c$ such that $c \mid a$ and $c \mid b$, then $c \mid d$.
Now, I want clarification on proving (b). My professor last semester always did something like this. In problems where we were working with linear combinations, he would have the equation $d = ax + by$ for some integers $x$ and $y$. No problem. But to show that $c \mid d$, he would say: "Suppose there is some $c$ such that $c \mid a$ and $c \mid b$. Then $a = cm$ and $b = cn$ for some integers $m$ and $n$. Thus, $d = ax + by = cmx + cny = c(mx + ny)$, so $c \mid d$."
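Just to make sure I'm reading the argument right, here's a little numerical check I wrote for myself (the numbers $a = 12$, $b = 18$ and the combination are my own, not from the course): every common divisor $c$ of $a$ and $b$ really does divide any $d$ of the form $ax + by$.

```python
def divides(c, n):
    """True if c divides n."""
    return n % c == 0

a, b = 12, 18
x, y = -1, 1                # one particular combination: d = 12*(-1) + 18*1 = 6
d = a * x + b * y

# collect every common divisor of a and b
common_divisors = [c for c in range(1, min(a, b) + 1)
                   if divides(c, a) and divides(c, b)]
print(common_divisors)                               # [1, 2, 3, 6]

# each of them divides d, exactly as the professor's algebra predicts
print(all(divides(c, d) for c in common_divisors))   # True
```

So the algebra checks out numerically, which is exactly why the trick bothers me, as I explain below.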
Now I always had a problem with that. Why does this work? It seems like an underhanded trick to me, one that could work for $d$'s that are not the gcd.
Case in point: let's say I am thinking about $\gcd(3,2)$; we know this is $1$. And $1 = 3(1) + 2(-1)$, so here $d = 1$. And I can run the proof above, and it works for my $d$ ($= 1$).
But what if I was really imagining the combination $2 = 3(2) + 2(-2)$, and so I took $d = 2$? Then I can run the generic proof above for $d = 2$ and conclude that my $d$ (which is $2$) is the gcd of $3$ and $2$.
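Running my own worry through the same kind of check as above: with $d = 2 = 3(2) + 2(-2)$, the step-(b) conclusion does go through (every common divisor of $3$ and $2$ divides $2$), but note that this $d$ fails the step-(a) test, since $2 \nmid 3$. A sketch (my own numbers):

```python
a, b = 3, 2
d = a * 2 + b * (-2)        # the "wrong" combination: d = 2

common_divisors = [c for c in range(1, min(a, b) + 1)
                   if a % c == 0 and b % c == 0]
print(common_divisors)                            # [1]

# condition (b) still holds for d = 2: every common divisor divides it
print(all(d % c == 0 for c in common_divisors))   # True

# but condition (a) fails: 2 does not divide 3
print(a % d == 0 and b % d == 0)                  # False
```

So at least for concrete numbers, the bogus $d = 2$ slips past step (b) but gets caught by step (a).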
Of course, if we know the values of $a$ and $b$, this would be absurd, but a lot of the proofs I am doing are with pure variables. How can I be confident that my linear combination is the one that actually equals the gcd, and thus run this proof with a clear conscience?
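For concreteness, when I experiment with small numbers I get my combinations from the standard extended Euclidean algorithm, which always hands back a combination equal to the gcd itself (a sketch; the function name is my own):

```python
def extended_gcd(a, b):
    """Return (g, x, y) with g = gcd(a, b) and g = a*x + b*y."""
    if b == 0:
        return a, 1, 0
    # recurse on (b, a mod b), then back-substitute the coefficients
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

g, x, y = extended_gcd(3, 2)
print(g, x, y)              # 1 1 -1, i.e. 1 = 3*(1) + 2*(-1)
print(3 * x + 2 * y == g)   # True
```

But that only reassures me computationally; it doesn't resolve the logical worry above when everything is a variable.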
Or am I imagining a problem that really isn't there?
Thanks for listening to my rant... and oh, yeah, the question too!