Let $e$ be an elementary row operation,
and let $A$, $B$ be matrices.
By $e(A)$ we mean the resultant matrix after we perform $e$ on $A$.
Does the following hold true: $e(AB) = e(A)B$?
Also, we know that $e(A) = e(I)A$. One way to prove this is, for each kind of elementary row operation, to establish the fact by performing the computation on the matrix elements.
I was wondering if there is a more conceptual/abstract argument or reason behind why this works.
i don't know what kind of "concept" or even "computation" you're talking about here? $I$ is the identity matrix and so $A = IA$, which gives us $e(A) = e(IA) = e(I)A$.
Also, is there another conceptual argument (based on linear transformations etc.) for $e(AB) = e(A)B$ to be true? The proof I have at my disposal is based on actual computation.
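Both identities can be sanity-checked numerically. The sketch below (using NumPy) picks one arbitrary elementary row operation, adding 2 times row 0 to row 1, and verifies $e(AB) = e(A)B$ and $e(A) = e(I)A$ on random matrices; the specific operation and matrices are my own choices for illustration:

```python
import numpy as np

def e(M):
    """An example elementary row operation: add 2 * (row 0) to row 1."""
    M = M.astype(float).copy()
    M[1] += 2 * M[0]
    return M

rng = np.random.default_rng(0)
A = rng.integers(-5, 5, size=(3, 3)).astype(float)
B = rng.integers(-5, 5, size=(3, 3)).astype(float)
I = np.eye(3)

# e(AB) = e(A)B: performing the operation on a product equals
# performing it on the left factor first.
assert np.allclose(e(A @ B), e(A) @ B)

# e(A) = e(I)A: the operation acts as left-multiplication by e(I).
assert np.allclose(e(A), e(I) @ A)
print("both identities hold for this example")
```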
For part (2), let me try to explain.
When I say $e$ is an elementary row operation, I look at it as some kind of a function on $A$; $e(A)$, in my way of thinking, is the image of $A$ under $e$.
Now it so happens that $e(A) = e(I)A$.
On the R.H.S., I guess $e(I)$ is treated like a matrix (an elementary matrix), and $e(I)A$ is a matrix multiplication.
I guess it is this multiplication that you refer to when you write $e(I)A$.
My question is: why should $e(A) = e(I)A$? Is there a logic to it, or is this a fact which just happens to be true based on how we define
1. Elementary Row Operation
2. Matrix Multiplication
Please ignore this question if it is really not relevant. Maybe I need to get this more clear in my head.
here's a question for you to think about: can you write an elementary matrix in terms of $e_{ij}$?
(as usual, $e_{ij}$ is the square matrix with 1 in the $i$-th row and $j$-th column and 0 everywhere else)
Let me try
In the equations below I have used the above mentioned expansion.
Interchange of rows $i$ and $j$: $E = I - e_{ii} - e_{jj} + e_{ij} + e_{ji}$
Multiplication of row $i$ by a nonzero scalar $c$: $E = I + (c-1)e_{ii}$
Adding a scalar multiple $c$ of row $j$ to row $i$: $E = I + c\,e_{ij}$
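The three standard forms above can be checked numerically. The sketch below builds each elementary matrix from the $e_{ij}$ basis matrices and compares it with the matrix obtained by performing the corresponding row operation on $I$ directly; the size $n = 4$, rows $i = 1$, $j = 3$, and scalar $c = 5$ are arbitrary choices:

```python
import numpy as np

n = 4

def e_basis(i, j):
    """e_ij: 1 in the i-th row, j-th column, 0 everywhere else."""
    M = np.zeros((n, n))
    M[i, j] = 1.0
    return M

I = np.eye(n)
i, j, c = 1, 3, 5.0

# Interchange of rows i and j.
E_swap = I - e_basis(i, i) - e_basis(j, j) + e_basis(i, j) + e_basis(j, i)
expected = I.copy(); expected[[i, j]] = expected[[j, i]]
assert np.allclose(E_swap, expected)

# Multiplication of row i by the scalar c.
E_scale = I + (c - 1) * e_basis(i, i)
expected = I.copy(); expected[i] *= c
assert np.allclose(E_scale, expected)

# Adding c times row j to row i.
E_add = I + c * e_basis(i, j)
expected = I.copy(); expected[i] += c * expected[j]
assert np.allclose(E_add, expected)
print("all three forms match the row operations performed on I")
```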
Am I anywhere close to what you wanted me to try? If yes, is there any specific importance to this method?
Thanks very much!!
the reason that i gave you this question is that using these forms makes the proof easier. that's because we can write the matrix $A$ in the form $A = \sum_{i,j} a_{ij}\, e_{ij}$ and then
we can use the fact that $e_{ij} e_{kl} = \delta_{jk}\, e_{il}$, where $\delta$ is, as usual, the Kronecker delta.
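The product rule $e_{ij} e_{kl} = \delta_{jk}\, e_{il}$ can itself be verified by a brute-force check over all index combinations for a small $n$ (a sketch, again using NumPy):

```python
import numpy as np

n = 3

def e_basis(i, j):
    """e_ij: 1 in the i-th row, j-th column, 0 everywhere else."""
    M = np.zeros((n, n))
    M[i, j] = 1.0
    return M

# e_ij e_kl = delta_jk * e_il: the product is nonzero exactly when
# the "inner" indices j and k match.
for i in range(n):
    for j in range(n):
        for k in range(n):
            for l in range(n):
                delta_jk = 1.0 if j == k else 0.0
                assert np.allclose(e_basis(i, j) @ e_basis(k, l),
                                   delta_jk * e_basis(i, l))
print("e_ij e_kl == delta_jk e_il for all indices")
```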