I want to know how to prove that adj(AB) = adj(B)adj(A), letting adj (A) be the adjugate matrix of A.
I feel the equation adj(A)A = |A|E does not help much.
P.S.: I mistyped the thread title.
Adj(B)Adj(A)(AB) = Adj(B)(Adj(A)A)B = Adj(B)|A|B = |A|(Adj(B)B) = |A||B|E = |AB|E = Adj(AB)(AB)
Thus Adj(AB)(AB) = |AB|E = Adj(B)Adj(A)(AB).
If (AB) is invertible, we can cancel (AB) on the right and we are done. If not, we run into the same problem as o_O's proof.
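For anyone who wants to see the identity in action before worrying about the proof, here is a quick numerical spot-check. The helper `adjugate` is a hypothetical name, built straight from the cofactor definition (adj(M) is the transpose of the cofactor matrix); the particular matrices A and B are arbitrary choices, not from the thread.

```python
import numpy as np

def adjugate(M):
    """Adjugate via cofactors: adj(M)[j, i] = (-1)**(i+j) * det(M with row i, col j deleted)."""
    n = M.shape[0]
    adj = np.empty_like(M, dtype=float)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(M, i, axis=0), j, axis=1)
            adj[j, i] = (-1) ** (i + j) * np.linalg.det(minor)
    return adj

A = np.array([[2., 1., 0.], [0., 3., 1.], [1., 0., 1.]])
B = np.array([[1., 2., 0.], [0., 1., 1.], [2., 0., 1.]])

# adj(A)A = |A|E -- the identity the proof above starts from
assert np.allclose(adjugate(A) @ A, np.linalg.det(A) * np.eye(3))
# adj(AB) = adj(B)adj(A) -- the identity in question
assert np.allclose(adjugate(A @ B), adjugate(B) @ adjugate(A))
```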
I wanted to thank the forum for both of these answers. I did a google search for adj(AB) adj(B) and arrived here, which I found charming. I've rarely had less hope for an internet search; and I got the answer right away!
I wanted to address Mr. Fantastic's tone. It seems to me that this is the manner of speaking that we agree not to participate in. "What about a general proof" is not an impolite question. But your post is deliberately condescending, takes up a huge amount of space, and neglects to answer. So, why post at all?
And I think your criterion for "trivial" is sorely misplaced. I am self-instructing in linear algebra. I am working through a Dover reprint of a 1960s text by Hans Schneider and George Barker.
The approach is purely axiomatic. Every step is stated as a Definition, Proposition, Theorem, or Lemma. No step is missed, and there is a proof for every one (except the definitions). What strikes me about linear algebra is that every step is trivial. It is, in its entirety, self-evident. Nevertheless, learning a new algebra is not easy, and mathematically self-evident is a poor criterion for human instruction. Schneider disagrees with you, and I think he knew his subject. Even the ramifications of multiplication by zero merit, in each case, an exact proof and subsequent collection of the special cases into a general rule.
Simply, this: Zero is a necessary case to consider, and the implications are not so easily dismissed. The determinant is an excellent example. If the determinant is zero, we have choices to make. Are we multiplying two matrices of the same rank? If so, should we reduce the dimension of our multiplication, or is it necessary to keep the problem in n equations? The case of zero determinant requires analysis, and my arcane and intellectually demanding textbook considers all cases.
Your quotations are supposed to be a lesson, and they're the wrong one. I find in upper-division textbooks exactly the opposite recognition: that learning a new algebra requires meticulous exposition, and careful handling of all identities. Graduate coursework may open with an exposition of the necessary postulates for addition. And mastery is mastery. I have never once failed to improve at the piano by doing elementary exercises. I have never once felt that identities were always obvious. Quite the contrary; it is the simple identity that we miss. Otherwise we wouldn't have to learn math at all. After all, the entirety of vector algebra is self-evident from the definition of addition and scalar multiplication, right? Was it self-evident to you, or did maybe you need a little help with every step?
So, for the original poster: the case of zero is contained in the posted answer because both sides of the equation there reduce to |AB|E, which is zero in that case. If you are unable to separate adj(AB) into discrete terms in A and B (both adj and the determinant), then your problem is here: start from identities like A adj(A) = |A|I, and then you can separate |AB| into |A||B|. Then you can satisfy yourself that the zero case for either |A| or |B| makes |AB| zero as well. This is also self-evident from the fact that if either of A or B is singular, the product AB is singular.
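A numerical spot-check of the singular case may also be reassuring: the identity adj(AB) = adj(B)adj(A) still holds when |A| = 0, even though the cancellation step in the proof above is no longer available. The helper `adj2` is a hypothetical name using the closed-form 2x2 adjugate; the matrices are arbitrary illustrative choices.

```python
import numpy as np

def adj2(M):
    """Adjugate of a 2x2 matrix [[a, b], [c, d]] is [[d, -b], [-c, a]]."""
    return np.array([[M[1, 1], -M[0, 1]], [-M[1, 0], M[0, 0]]])

A = np.array([[1., 2.], [2., 4.]])   # singular: |A| = 0
B = np.array([[0., 1.], [1., 3.]])   # invertible

AB = A @ B
assert np.isclose(np.linalg.det(AB), 0.0)         # AB is singular, as claimed
assert np.allclose(adj2(AB), adj2(B) @ adj2(A))   # the identity still holds
```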
It depends on what parts of the algebra you already know.
For this proof, it is also important to identify that the determinant is a scalar. This is tricky, because it is a function. It may not have been evident at the time why the existence and uniqueness of the determinant function needed to be proved; it might be worth reviewing this. The function always exists, it is unique, and it evaluates to a scalar. This is crucial. As with any other scalar, you can move the determinant terms freely within the matrix products. If the determinant were not a scalar, this identity would not hold.
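The two facts used above can be checked numerically in a few lines: a determinant, being a scalar, commutes with matrix factors, and it is multiplicative (|AB| = |A||B|). The random matrices here are arbitrary test inputs, not anything from the thread.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

# |A| is a scalar, so it can be moved freely through a matrix product:
assert np.allclose(B @ (np.linalg.det(A) * np.eye(3)), np.linalg.det(A) * B)
# and the determinant is multiplicative: |AB| = |A||B|
assert np.allclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B))
```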
Anyway, I hope that helps anyone else who (like the original poster and me) needed this thread to put all the pieces together.
I get it. I read your post quite carefully. Too carefully. The little line between the red-flagged text is your signature line! I see. Well, that makes a lot more sense. It was a mean-spirited post the way I read it: red-text leading to red text, and a diversionary quote about why no one would bother to answer the trivial case: because it was obvious.
Hahahah, alright, fair enough. I am suitably embarrassed. And I apologize. Sorry I misread you! Indeed, I was worried, thinking this was a mod's answer. Hence my long post. (It is your profile which is enormous!) Okay. Well, as long as I've got your attention, let me say/ask this:
I am self-instructing in math, and it's not easy. I have no one to query, and I will probably post regularly in this forum. I will often do as above: spell out how I think a problem needs to be looked at in order to be answered. When I don't understand something, I have to phrase logical questions, seek answers, then spell out my reasoning so I can identify errors and make corrections. Explicitly voicing this process helps enormously. I need these explanations to be accurate, or I'll get lost. If I'm not right, it might as well be magic.
So, if you've got your scrutinizing eye on me, great! I can say that I write as above (the linear algebra parts) to instruct myself, and I NEED corrections. I will be extending my inner procedure into the forums in the hopes of contributing, but I can only do that if my procedures are corrected. I will happily edit posts to avoid lengthening a thread. I state the mathematical elements as if I know what I'm doing because I have no choice. If I don't state my presumptions clearly, I can't find the errors.
So, corrections of any explanations of mine, ever, are deeply appreciated.