# Thread: Right and Left Square matrix inverses

1.

Hi guys,

I would like to intuitively understand why the following is true:
Let A be a square, invertible matrix, let C be a left inverse of A, and let B be a right inverse of A.
Then it follows that B = C.

I am aware of the simple proof:
C = CI = C(AB) = (CA)B = IB = B

Yet this gives no intuition about the logic behind the claim.
The left inverse acts on A's rows, the right one on A's columns.
This seems almost like magic.
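For concreteness, here is a quick numerical illustration of the claim (a sketch using numpy, with an arbitrary invertible 2x2 matrix made up for the example): compute a right inverse B and a left inverse C by two separate linear solves, and watch them agree.

```python
# Numerical illustration (not a proof): for a concrete invertible A,
# a right inverse B and a left inverse C coincide.
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])      # arbitrary invertible example: det(A) = 1
I = np.eye(2)

B = np.linalg.solve(A, I)       # right inverse: solves A @ B = I
C = np.linalg.solve(A.T, I).T   # left inverse: solves C @ A = I (via A^T C^T = I)

assert np.allclose(A @ B, I)    # B really is a right inverse
assert np.allclose(C @ A, I)    # C really is a left inverse
assert np.allclose(B, C)        # and they coincide, as C = C(AB) = (CA)B = B predicts
```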

How would you explain this in a child's terms?

Thanks.

2. I cannot imagine anything being simpler than that! The "intuition" about the logic is precisely the meaning of "inverse"- it has nothing to do with "columns" or "rows".

3. I believe some authors define the inverse such that it commutes with the original matrix. In that case, it is superfluous to talk about right and left inverses, because they are, by definition, the same thing.

4. I'm going by the natural definition of left or right inverse. In square matrices they are the same, and only then can you say "inverse" without specifying left or right.

Sure, we can look at this in a high-level manner, as a general algebraic structure satisfying associativity and the existence of a unit element. But then we'd be ignoring the beautiful properties of matrices.
AB means a series of linear transformations on the columns of A, according to B.
If AB = I, one cannot immediately see why BA = I as well. BA means a series of linear transformations on the rows of A according to B, which is, so to say, a completely different operation.
The simple proof above simply does not answer my question, at least not directly.

5. Tell you what: why don't you give us a list of assumptions (axioms) that you both believe and understand, and that are relevant to the question at hand. List also any other theorems you buy into. Then state the theorem you don't understand, and then perhaps we could produce a constructive proof of the theorem that might enable you to understand.

6. Given only the definition of matrix multiplication, why is the following true for square matrices:
If AB=I then BA=I

7. Hmm. That could be challenging, although it seems intuitive that you should be able to prove the result from the one assumption. I must admit that I don't know off-hand how to do it, but maybe some ramblings may stimulate some thinking for you.

So the usual definition of matrix multiplication for square matrices is as follows. Let $A,B$ be square, $n\times n$ matrices. The product matrix $C=AB$ is the matrix consisting of entries as follows:

$\displaystyle C_{ij}=\sum_{k=1}^{n}A_{ik}B_{kj}.$
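As a sketch, this defining formula can be written out directly in code (plain Python, no libraries; the 2x2 matrices are made up for illustration, with B chosen so that AB comes out as the identity):

```python
# Matrix multiplication straight from the definition:
# C_ij = sum over k of A_ik * B_kj.
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[1.0, 2.0],
     [3.0, 4.0]]
B = [[-2.0, 1.0],
     [1.5, -0.5]]        # chosen so that AB = I

AB = matmul(A, B)        # [[1.0, 0.0], [0.0, 1.0]]
BA = matmul(B, A)        # also [[1.0, 0.0], [0.0, 1.0]] -- the fact in question
```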

Let us assume that $AB=I.$ We wish to show that $BA=I.$

Now, define the Kronecker delta symbol $\delta_{ij}$ as follows:

$\delta_{ij}=\begin{cases}1 & i=j\\ 0 & i\neq j\end{cases}.$

It is standard to show that

$I_{ij}=\delta_{ij}.$ Thus, by assumption, we have that

$\displaystyle \sum_{k=1}^{n}A_{ik}B_{kj}=\delta_{ij}.$

Now, we know that $B^{T}A^{T}=(AB)^{T}=I^{T}=I.$ This you can prove using the definition of matrix multiplication. How would that look in summation notation?
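(Not the requested summation-notation derivation, just a quick numerical spot-check of the transpose identity being used, with numpy and random matrices:)

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# (AB)^T = B^T A^T: entry (i,j) of (AB)^T is sum_k A_jk B_ki,
# which is exactly entry (i,j) of B^T A^T.
lhs = (A @ B).T
rhs = B.T @ A.T
assert np.allclose(lhs, rhs)
```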

8. Originally Posted by leolol
Given only the definition of matrix multiplication, why is the following true for square matrices:
If AB=I then BA=I
Try lifting $A$ to a linear transformation.

9. Originally Posted by Drexel28
Try lifting $A$ to a linear transformation.
Or consider that if $AB=I$ then $1=\det\left(AB\right)=\det\left(A\right)\det\left(B\right)$ and so $\det\left(A\right),\det\left(B\right)\ne 0$ and so $A,B$ are invertible.

Thus, we note that $AB=I\implies B=A^{-1}$ and so $B^{-1}=\left(A^{-1}\right)^{-1}=A$ and so $I=BA$.
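A numerical sketch of this determinant argument (numpy; the matrix is an arbitrary invertible example):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 1.0]])          # arbitrary invertible example
B = np.linalg.solve(A, np.eye(2))   # right inverse: A @ B = I

# 1 = det(I) = det(AB) = det(A) det(B), so neither determinant is zero
assert np.isclose(np.linalg.det(A) * np.linalg.det(B), 1.0)

# hence B = A^{-1}, and the two-sided conclusion BA = I holds as well
assert np.allclose(B @ A, np.eye(2))
```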

10. I don't think we are allowed to use determinants.

11. Originally Posted by Ackbeet
I don't think we are allowed to use determinants.
Then consider the fact that we can view the matrices $A,B$ as endomorphisms, for simplicity's sake from $\mathbb{R}^n$ to itself. The existence of a left inverse for $B$ implies that $B$ is injective, so $B\left(\mathbb{R}^n\right)$ is $n$-dimensional, and from an elementary theorem it follows that $B\left(\mathbb{R}^n\right)=\mathbb{R}^n$; thus $B$ is surjective, hence bijective, and so $B$ is invertible. Similarly, since $A$ possesses a right inverse we know that $A$ is surjective, and it's easy to show that this is impossible if $A$ is not injective; thus $A$ is also invertible.

The rest of my proof stands.
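A small numerical companion to the rank argument in post 11 (numpy; the 3x3 matrix is an arbitrary invertible example): a left inverse forces injectivity, i.e. full rank, and full rank in $\mathbb{R}^n$ forces surjectivity.

```python
import numpy as np

B = np.array([[2.0, 0.0, 1.0],
              [1.0, 1.0, 0.0],
              [0.0, 1.0, 3.0]])         # arbitrary invertible example
A = np.linalg.solve(B.T, np.eye(3)).T   # A @ B = I, so A is a left inverse of B

assert np.allclose(A @ B, np.eye(3))

# left inverse => B injective => rank(B) = n => B(R^n) = R^n => B bijective
assert np.linalg.matrix_rank(B) == 3

# so B is invertible, and the right-inverse relation BA = I follows
assert np.allclose(B @ A, np.eye(3))
```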

12. I'm not sure I followed all that, so you tell me: does this proof satisfy the conditions of post # 6?

13. Originally Posted by Ackbeet
I'm not sure I followed all that, so you tell me: does this proof satisfy the conditions of post # 6?
No, it does not. But this is really a statement about endomorphisms of finite-dimensional spaces, whether the OP would like to admit it or not. Even if there were a purely matrix-analytic proof, it would obscure the reason for it.