# Linear algebra proof - Elementary matrices

• Mar 31st 2014, 03:41 AM
Goatman
Linear algebra proof - Elementary matrices
I'm having trouble with this proof we need to learn -

Let E = µ(I). Prove that µ(A) = EA for every A ∈ Mnn

A hint some guy gave is to begin with the ij entry of EA and prove that it equals the ij entry of µ(A), by using the
definition of matrix multiplication and the definitions of E and µ(A).

Any help? Thanks a bunch (Poolparty)
• Mar 31st 2014, 03:48 AM
Deveno
Re: Linear algebra proof - Elementary matrices
What is µ?
• Mar 31st 2014, 03:54 AM
Goatman
Re: Linear algebra proof - Elementary matrices
Let Mnn be the set of all real n × n matrices, let 1 ≤ k, l ≤ n be fixed integers with k ≠ l, and let c ≠ 0 be
a real number. Consider the function µ : Mnn → Mnn, whose image µ(A) for each A ∈ Mnn is defined
by

Attachment 30571

for 1 ≤ i, j ≤ n

Oops sorry, this was some information supplied with the theories.
• Mar 31st 2014, 04:33 AM
romsek
Re: Linear algebra proof - Elementary matrices
This is just showing that elementary row operations can be cast as matrix multiplication. A pretty standard part of any linear algebra course.

http://en.wikipedia.org/wiki/Elementary_matrix
• Mar 31st 2014, 05:29 AM
Goatman
Re: Linear algebra proof - Elementary matrices

Yeah I kinda thought that was the case, although the problem I'm having is the actual index notation in trying to prove the fact :P

Not too sure on how to start it even :(
• Mar 31st 2014, 06:29 AM
romsek
Re: Linear algebra proof - Elementary matrices
Quote:

Originally Posted by Goatman

Yeah I kinda thought that was the case, although the problem I'm having is the actual index notation in trying to prove the fact :P

Not too sure on how to start it even :(

The row addition transformation section of the wiki page I linked has the exact matrix you need. They call it $T_{i,j}(m)$
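To make that concrete, here is a small NumPy sketch (my own illustration, not part of the thread) of the row-addition matrix described on the linked page: the identity with one extra entry m in position (i, j), which on left-multiplication adds m times row j to row i. Indices are 0-based, and the sizes and values below are arbitrary choices.

```python
import numpy as np

# Hypothetical sizes and values for illustration (0-based indices)
n, i, j, m = 4, 2, 0, 5.0

# T_{i,j}(m): the identity matrix with an extra m in position (i, j)
T = np.eye(n)
T[i, j] = m

A = np.arange(n * n, dtype=float).reshape(n, n)

# The same row operation performed directly on A
B = A.copy()
B[i] += m * A[j]

# Left-multiplying by T carries out the row operation
assert np.allclose(T @ A, B)
```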
• Mar 31st 2014, 07:12 AM
Deveno
Re: Linear algebra proof - Elementary matrices
To be more precise, $\mu$ should be "tagged" with the subscripts k,l because you get a DIFFERENT function for each pair.

I will suppose that k and l are fixed, and unequal. So, $\mu$ is the function that adds c times the k-th row of a matrix A, to the l-th row, and leaves all other rows unchanged.

We wish to show that $\mu(A) = \mu(I)A$. Every row of A except the l-th is left unchanged by $\mu$, so for each such row we need to show that, likewise, the i-th row of $\mu(I)A$ is the same as the i-th row of A.

By the definition of matrix multiplication, the ij-th entry of $\mu(I)A$ is the dot product of the i-th row of $\mu(I)$ with the j-th column of A, that is:

$$(\mu(I)A)_{ij} = \sum_{r = 1}^n (\mu(I))_{ir}(A)_{rj}$$

If i is not equal to l, then the i-th row of $\mu(I)$ is just the i-th standard (row) vector: (0,...,0,1,0,...,0) with 1 in the i-th place. That is:

$(\mu(I))_{ir} = 0$ for $r \neq i$, and $(\mu(I))_{ii} = 1$. In this case, the only term that survives in the sum is $(\mu(I))_{ii}(A)_{ij} = (A)_{ij}$, so we see that for any row but the l-th, A is indeed left unchanged.

Now we consider what happens for our "exceptional" l-th row. In this case, the l-th row of $\mu(I)$ is $e_l + ce_k$. For example, if l < k, this looks like:

(0,...,0,1,0,...,0,c,0,...,0),

with the 1 in the l-th place and the c in the k-th place.

So now we get TWO terms that survive in the dot product, one is $(A)_{lj}$, and the other is $c(A)_{kj}$, the sum of which is precisely the l,j-th entry of $\mu(A)$.
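As a quick numerical sanity check of the identity $\mu(A) = \mu(I)A$ (my own sketch, not part of the thread), one can implement $\mu$ directly as the row operation and compare it against left-multiplication by $E = \mu(I)$. The sizes, indices (0-based here), and the value of c below are arbitrary choices.

```python
import numpy as np

# Hypothetical parameters: add c times row k to row l (0-based indices)
n, k, l, c = 4, 1, 3, 2.5

def mu(A):
    """The row operation: add c times the k-th row to the l-th row,
    leaving all other rows unchanged."""
    B = A.astype(float).copy()
    B[l] += c * B[k]
    return B

E = mu(np.eye(n))                          # E = mu(I), the elementary matrix
A = np.arange(n * n, dtype=float).reshape(n, n)

# The theorem: applying mu directly agrees with left-multiplying by E
assert np.allclose(mu(A), E @ A)
```

Note that E differs from the identity only in its l-th row, which is $e_l + ce_k$, exactly as in the argument above.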