# Right and Left Square Matrix Inverses


• Nov 26th 2010, 05:38 AM
leolol
Right and Left Square matrix inverses
Hi guys,

I would like to intuitively understand why the following is true:
Let A be a square, invertible matrix, let C be a left inverse of A (so CA = I), and let B be a right inverse of A (so AB = I).
Then it follows that B = C.

I am aware of the simple proof:
C = CI = C(AB) = (CA)B = IB = B

Yet this gives no intuition about the logic behind the claim.
The left inverse acts on A's rows, while the right inverse acts on A's columns.
This seems almost like magic.

How would you explain this in a child's terms?

Thanks.
• Nov 26th 2010, 06:28 AM
HallsofIvy
I cannot imagine anything being simpler than that! The "intuition" about the logic is precisely the meaning of "inverse": it has nothing to do with "columns" or "rows".
• Nov 26th 2010, 07:24 AM
Ackbeet
I believe some authors define the inverse such that it commutes with the original matrix. In that case, it is superfluous to talk about right and left inverses, because they are, by definition, the same thing.
• Nov 26th 2010, 07:41 AM
leolol
I'm going by the natural definition of left and right inverses. For square matrices they are the same, and only then can you say "inverse" without specifying left or right.

Sure, we can look at this at a high level, as a general algebraic structure satisfying associativity and having a unit element. But then we'd be ignoring the beautiful properties of matrices.
AB means a series of linear transformations on the columns of A, according to B.
If AB = I, one cannot immediately see why BA = I as well: BA means a series of linear transformations on the rows of A according to B, which is, so to speak, a completely different operation.
The simple proof above does not answer my question, at least not directly.
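The "two different operations" point can be made concrete with a small pure-Python sketch (hypothetical 2×2 matrices): each column of AB is a linear combination of A's columns, while each row of BA is a linear combination of A's rows.

```python
# Hypothetical 2x2 example (pure Python, no libraries): right
# multiplication combines the columns of A; left multiplication
# combines the rows of A.

def matmul(X, Y):
    """Product of two square matrices stored as lists of rows."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]

AB = matmul(A, B)
# Column 0 of AB is B[0][0]*(column 0 of A) + B[1][0]*(column 1 of A).
assert [AB[i][0] for i in range(2)] == [5 * A[i][0] + 7 * A[i][1] for i in range(2)]

BA = matmul(B, A)
# Row 0 of BA is B[0][0]*(row 0 of A) + B[0][1]*(row 1 of A).
assert BA[0] == [5 * A[0][j] + 6 * A[1][j] for j in range(2)]
```

So AB and BA really are assembled from different pieces of A, which is why AB = I forcing BA = I is not obvious from the formula alone.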
• Nov 26th 2010, 07:52 AM
Ackbeet
Tell you what: why don't you give us a list of assumptions (axioms) that you both believe and understand, and that are relevant to the question at hand. List also any other theorems you buy into. Then state the theorem you don't understand, and then perhaps we could produce a constructive proof of the theorem that might enable you to understand.
• Nov 26th 2010, 07:58 AM
leolol
Given only the definition of matrix multiplication, why is the following true for square matrices:
If AB = I, then BA = I?
• Nov 29th 2010, 05:35 PM
Ackbeet
Hmm. That could be challenging, although it seems intuitive that you should be able to prove the result from that one assumption. I must admit I don't know off-hand how to do it, but perhaps some ramblings will stimulate your thinking.

So the usual definition of matrix multiplication for square matrices is as follows. Let $\displaystyle A,B$ be square, $\displaystyle n\times n$ matrices. The product matrix $\displaystyle C=AB$ is the matrix consisting of entries as follows:

$\displaystyle \displaystyle C_{ij}=\sum_{k=1}^{n}A_{ik}B_{kj}.$
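This definition can be transcribed literally into code; a pure-Python sketch, with the identity matrix built as 1s on the diagonal and 0s elsewhere:

```python
# A literal transcription of the definition above (a sketch):
# C[i][j] = sum over k of A[i][k] * B[k][j].

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def identity(n):
    # I[i][j] = 1 if i == j else 0
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

A = [[1, 2], [3, 4]]
assert matmul(A, identity(2)) == A   # I is a right identity
assert matmul(identity(2), A) == A   # and a left identity
```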

Let us assume that $\displaystyle AB=I.$ We wish to show that $\displaystyle BA=I.$

Now, define the Kronecker delta symbol $\displaystyle \delta_{ij}$ as follows:

$\displaystyle \delta_{ij}=\begin{cases}1\quad i=j\\ 0\quad i\not=j\end{cases}.$

It is standard to show that

$\displaystyle I_{ij}=\delta_{ij}.$ Thus, by assumption, we have that

$\displaystyle \displaystyle \sum_{k=1}^{n}A_{ik}B_{kj}=\delta_{ij}.$

Now, we know that $\displaystyle B^{T}A^{T}=(AB)^{T}=I^{T}=I.$ This you can prove using the definition of matrix multiplication. How would that look in summation notation?
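For what it's worth, the transpose identity is easy to check numerically; a pure-Python sketch with arbitrary 2×2 values:

```python
# Sanity check (hypothetical 2x2 values) of the identity
# (AB)^T = B^T A^T.

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transpose(X):
    n = len(X)
    return [[X[j][i] for j in range(n)] for i in range(n)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 1]]
assert transpose(matmul(A, B)) == matmul(transpose(B), transpose(A))
```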
• Nov 29th 2010, 06:54 PM
Drexel28
Quote:

Originally Posted by leolol
Given only the definition of matrix multiplication, why is the following true for square matrices:
If AB = I, then BA = I?

Try lifting $\displaystyle A$ to a linear transformation.
• Nov 29th 2010, 07:52 PM
Drexel28
Quote:

Originally Posted by Drexel28
Try lifting $\displaystyle A$ to a linear transformation.

Or consider that if $\displaystyle AB=I$ then $\displaystyle 1=\det\left(I\right)=\det\left(AB\right)=\det\left(A\right)\det\left(B\right)$, and so $\displaystyle \det\left(A\right),\det\left(B\right)\ne 0$, which means $\displaystyle A,B$ are invertible.

Thus, we note that $\displaystyle AB=I\implies B=A^{-1}$ and so $\displaystyle B^{-1}=\left(A^{-1}\right)^{-1}=A$ and so $\displaystyle I=BA$.
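A concrete 2×2 instance of this argument (a sketch; the matrices are hypothetical, chosen so that AB = I):

```python
# Determinant argument for 2x2 matrices: AB = I forces
# det(A)*det(B) = 1, so both determinants are nonzero, and BA = I
# follows. Pure Python, no libraries.

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

A = [[2.0, 1.0], [1.0, 1.0]]
B = [[1.0, -1.0], [-1.0, 2.0]]   # chosen so that A @ B == I

I = [[1.0, 0.0], [0.0, 1.0]]
assert matmul(A, B) == I
assert det2(A) * det2(B) == det2(I) == 1.0
assert matmul(B, A) == I          # BA = I indeed follows
```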
• Nov 30th 2010, 02:36 AM
Ackbeet
I don't think we are allowed to use determinants.
• Nov 30th 2010, 08:03 AM
Drexel28
Quote:

Originally Posted by Ackbeet
I don't think we are allowed to use determinants.

Then consider that we can view the matrices $\displaystyle A,B$ as endomorphisms, for simplicity's sake from $\displaystyle \mathbb{R}^n$ to itself. The existence of a left inverse for $\displaystyle B$ implies that $\displaystyle B$ is injective, thus $\displaystyle B\left(\mathbb{R}^n\right)$ is $\displaystyle n$-dimensional, and so by an elementary theorem it follows that $\displaystyle B\left(\mathbb{R}^n\right)=\mathbb{R}^n$. Hence $\displaystyle B$ is surjective, thus bijective, and so $\displaystyle B$ is invertible. Similarly, since $\displaystyle A$ possesses a right inverse we know that $\displaystyle A$ is surjective, and it's easy to show that this is impossible if $\displaystyle A$ is not injective; thus $\displaystyle A$ is also invertible.
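The injectivity step can be spelled out in one line (for any vectors $\displaystyle x,y$):

$\displaystyle Bx=By\implies A(Bx)=A(By)\implies (AB)x=(AB)y\implies x=y,$

so a left inverse forces injectivity, and in finite dimension rank–nullity then gives surjectivity.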

The rest of my proof stands.
• Nov 30th 2010, 08:12 AM
Ackbeet
I'm not sure I followed all that, so you tell me: does this proof satisfy the conditions of post # 6?
• Nov 30th 2010, 08:23 AM
Drexel28
Quote:

Originally Posted by Ackbeet
I'm not sure I followed all that, so you tell me: does this proof satisfy the conditions of post # 6?

No, it does not. But this is really a statement about endomorphisms of finite-dimensional spaces, whether the OP would like to admit it or not. Even if there were a purely matrix-analytic proof, it would obscure the reason behind the result.