
Math Help - Problem with Elementary row operations and rank theorems.

  1. #1
    Newbie
    Joined
    Mar 2012
    From
    Canada
    Posts
    3

    Problem with Elementary row operations and rank theorems.

    OK, so I am taking my first course in linear algebra, and even though I am not a math major (physics major, actually), I can't help but wish my teacher and text were more rigorous. So let me start by telling you all about the problem I am having:



    (First question) My book states the following rank theorem: for a system of m equations in n variables, there will be exactly n - r parameters in the solution, r being the rank of the augmented matrix. But then I began wondering why. The proof my book gives is a joke, just words of logic, and I can't seem to understand any of the proofs online. So I thought: couldn't you have, say, a system of 5 equations in 5 variables whose augmented matrix ends up with only 3 leading ones, where one of the non-leading variables (say the fourth) has a column of all zeros, and there is a zero row at the bottom? I hope you understand what I am saying: applying the Gaussian algorithm, that is what you end up with. Surely there would not be n - r parameters here.
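
    To make the scenario concrete, here is a made-up example in SymPy (my numbers, not the book's):

```python
from sympy import Matrix

# 5 equations in 5 unknowns: column 4 of the coefficient matrix is all
# zeros, and there is a zero row at the bottom. Last column is b.
M = Matrix([
    [1, 0, 2, 0, 0, 1],
    [0, 1, 3, 0, 0, 2],
    [0, 0, 0, 0, 1, 3],
    [0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0],
])

R, pivots = M.rref()   # reduced row echelon form and the pivot columns
n = 5                  # number of unknowns
r = len(pivots)        # rank (system is consistent, so rank A = rank [A|b])
print(pivots)          # (0, 1, 4): exactly three leading ones
print(n - r)           # 2 free parameters: x3, and x4 -- the zero column's
                       # variable never holds a pivot, so it is free too
```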

    Now comes my next question, which is somewhat related. Can you perform many elementary row operations simultaneously? For example, given the augmented matrix (here each bracket is a row, and the column number increases left to right):

    [(1,1,2,3), (-2,-1,0,-4), (4,2,4,7)], can I simply do R1 --> R1 - (1/2)R2 and R2 --> R2 - 2R1 at the same time, and end up with a matrix with an entire column cleared out, zeros in its place?

    I know this probably violates some basic property, so I would like someone to explain why that is. Why can't you perform multiple elementary row operations at once? Wouldn't you still end up with an equivalent system, since a solution must satisfy every equation of the original system, so can't you keep adding the original equations together all at once? Also, if someone could help me with my first question, that would be great.
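
    Here is the kind of thing I mean, tried numerically (a NumPy sketch of my example above):

```python
import numpy as np

A = np.array([[ 1.,  1., 2.,  3.],
              [-2., -1., 0., -4.],
              [ 4.,  2., 4.,  7.]])

# Sequential: the second operation sees the *updated* R1.
seq = A.copy()
seq[0] = seq[0] - 0.5 * seq[1]   # R1 <- R1 - (1/2) R2
seq[1] = seq[1] - 2.0 * seq[0]   # R2 <- R2 - 2 R1  (the new R1)

# Simultaneous: both operations read the *original* rows.
sim = A.copy()
sim[0] = A[0] - 0.5 * A[1]
sim[1] = A[1] - 2.0 * A[0]

print(seq)
print(sim)   # row 2 differs, so "simultaneous" is a different transformation
```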

    Ahh, this is killing me.

  2. #2
    Newbie
    Joined
    Mar 2012
    From
    Canada
    Posts
    3

    Re: Problem with Elementary row operations and rank theorems.

    OK, never mind my original question. Can someone show me why one cannot perform two type 3 elementary row operations (where a multiple of one row is added to another row) simultaneously,

    as in

    matrix (R1, R2, R3) --> (R1 + cR2, R2, R3 + dR1), where R1 and R3 are modified simultaneously.
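
    For what it's worth, here is what the simultaneous version does numerically (a NumPy sketch with made-up c and d). For this particular pair it coincides with doing R3 first and R1 second, i.e. it untangles into an ordinary sequence:

```python
import numpy as np

c, d = 3.0, -2.0
A = np.array([[ 1.,  1., 2.,  3.],
              [-2., -1., 0., -4.],
              [ 4.,  2., 4.,  7.]])

# "Simultaneous": both new rows are computed from the ORIGINAL rows.
sim = A.copy()
sim[0] = A[0] + c * A[1]     # R1 <- R1 + c R2
sim[2] = A[2] + d * A[0]     # R3 <- R3 + d R1 (the original R1)

# The same result as a sequence of elementary operations, provided
# R3 is updated first, before R1 is overwritten.
seq = A.copy()
seq[2] = seq[2] + d * seq[0]
seq[0] = seq[0] + c * seq[1]

print(np.allclose(sim, seq))  # True: this simultaneous pair is safe
# (The next reply explains the general test: the transformation matrix
# here is [[1,c,0],[0,1,0],[d,0,1]], with determinant 1 -- invertible.)
```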

  3. #3
    Junior Member
    Joined
    Apr 2011
    From
    Somewhere in cyberspace.
    Posts
    73
    Thanks
    2

    Re: Problem with Elementary row operations and rank theorems.

    I'm not an expert (not even a student of the subject), so take care: this may be serious garbage. I suspect that introductory courses on linear algebra simply cannot treat the subject rigorously, because the proofs are too complex.

    Facts we can use:
    1. The reduced row echelon form of a matrix is unique. For a "simple" proof see: http://www.cs.berkeley.edu/~wkahan/MathH110/RREF1.pdf
    2. All three kinds of elementary row operations are reversible (each concrete operation has an "opposite" operation that restores the original matrix). Very easy to prove.
    3. Each elementary row operation on A may be represented as multiplication of A from the left by another matrix E, which is unique for each operation (something like a proof appears further down, and there is a short sketch right after this list). Since every elementary row operation is reversible, its matrix E multiplied by the matrix E' of its opposite operation must give the identity: EE'=I. This means that all the matrices of elementary row operations are invertible.
    4. The product of invertible matrices is itself invertible (easy to prove).
    5. Fundamental theorem of invertible matrices - http://faculty.evansville.edu/nw89/old_courses/old_courses/571SP2011/files/Fundamental Theorem of Invertible Matrices.pdf
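
    Facts 2 and 3 are easy to see numerically. A tiny NumPy sketch (my own illustration, with an arbitrary coefficient c):

```python
import numpy as np

n = 3
c = 5.0

# Fact 3: the elementary matrix for "R1 <- R1 + c R2" is obtained by
# applying that very operation to the identity matrix.
E = np.eye(n)
E[0] = E[0] + c * E[1]

A = np.random.rand(n, n)
rowop = A.copy()
rowop[0] = rowop[0] + c * rowop[1]
print(np.allclose(E @ A, rowop))   # True: left-multiplying by E = doing the row op

# Fact 2: the "opposite" operation R1 <- R1 - c R2 has matrix E',
# and E'E = I, so E is invertible.
E_opp = np.eye(n)
E_opp[0] = E_opp[0] - c * E_opp[1]
print(np.allclose(E_opp @ E, np.eye(n)))   # True
```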

    First, we shall look at "transformations" of the rows of your matrix; let's call the matrix A. We are looking for a particular kind of transformation: one that preserves the set of solutions of the equation A\vec{x}=\vec{b}. It should be immediately obvious that "adding five to the coefficient of the third variable in the fifth equation" changes the solution set of that equation, and probably of the entire system represented by the matrix equation A\vec{x}=\vec{b}. Removing the obviously inappropriate transformations leaves us with transformations of this type:

    A=\begin{bmatrix} a_{11} & a_{12} & ... & a_{1m} \\ a_{21} & a_{22} & ... & a_{2m} \\ ... \\ a_{n1} & a_{n2} & ... & a_{nm} \end{bmatrix}=\begin{bmatrix}\vec{R_1} \\ \vec{R_2} \\ ... \\ \vec{R_n} \end{bmatrix} \rightarrow \begin{bmatrix}\vec{R'_{1}} \\ \vec{R'_{2}} \\ ... \\ \vec{R'_{n}} \end{bmatrix}

    where:
    \vec{R'_{1}}=b_{11} \vec{R_1} + b_{12} \vec{R_2} +...+ b_{1n} \vec{R_n}
    \vec{R'_{2}}=b_{21} \vec{R_1} + b_{22} \vec{R_2} +...+ b_{2n} \vec{R_n}
    ...
    \vec{R'_{n}}=b_{n1} \vec{R_1} + b_{n2} \vec{R_2} +...+ b_{nn} \vec{R_n}

    Note that we have written the rows of the matrices as row vectors, which is perfectly OK: when we speak of row operations, we really do work with the rows of the matrix as if they were vectors. When we add R_1 and, say, R_2, we do it component-wise, just as with vectors. That is no coincidence. The b_{ij} are the coefficients by which we multiplied our rows, and the notation b_{ij} was not chosen arbitrarily either.

    We can write the above equations in a slightly different way:

    \vec{R'_{1}}=b_{11} \begin{bmatrix} a_{11} & a_{12} & ... & a_{1m}\end{bmatrix} + b_{12}\begin{bmatrix}a_{21} & a_{22} & ... & a_{2m} \end{bmatrix} + ... + b_{1n}\begin{bmatrix}a_{n1} & a_{n2} &  ... & a_{nm}\end{bmatrix}

    \vec{R'_{2}}=b_{21} \begin{bmatrix} a_{11} & a_{12} & ... & a_{1m}\end{bmatrix} + b_{22}\begin{bmatrix}a_{21} & a_{22} & ... & a_{2m} \end{bmatrix} + ... + b_{2n}\begin{bmatrix}a_{n1} & a_{n2} &  ... & a_{nm}\end{bmatrix}

    ...

    \vec{R'_{n}}=b_{n1} \begin{bmatrix} a_{11} & a_{12} & ... & a_{1m}\end{bmatrix} + b_{n2}\begin{bmatrix}a_{21} & a_{22} & ... & a_{2m} \end{bmatrix} + ... + b_{nn}\begin{bmatrix}a_{n1} & a_{n2} &  ... & a_{nm}\end{bmatrix}

    Carrying out the operations:

    \vec{R'_{1}}=\begin{bmatrix} b_{11}a_{11}+b_{12}a_{21}+ ... + b_{1n}a_{n1} & b_{11}a_{12}+b_{12}a_{22}+...+b_{1n}a_{n2}  & ... & b_{11}a_{1m}+b_{12}a_{2m}+...+b_{1n}a_{nm} \end{bmatrix}
    \vec{R'_{2}}=\begin{bmatrix} b_{21}a_{11}+b_{22}a_{21}+ ... + b_{2n}a_{n1} & b_{21}a_{12}+b_{22}a_{22}+...+b_{2n}a_{n2}  & ... & b_{21}a_{1m}+b_{22}a_{2m}+...+b_{2n}a_{nm} \end{bmatrix}
    ...
    \vec{R'_{n}}=\begin{bmatrix} b_{n1}a_{11}+b_{n2}a_{21}+ ... + b_{nn}a_{n1} & b_{n1}a_{12}+b_{n2}a_{22}+...+b_{nn}a_{n2}  & ... & b_{n1}a_{1m}+b_{n2}a_{2m}+...+b_{nn}a_{nm} \end{bmatrix}

    Writing the resulting rows together as a matrix shows something interesting: the transformations we described at the beginning are equivalent to multiplying A from the left by another matrix B, which looks like this:

    B=\begin{bmatrix} b_{11} & b_{12} & ... & b_{1n} \\ b_{21} & b_{22} & ... & b_{2n} \\ ... \\ b_{n1} & b_{n2} & ... & b_{nn} \end{bmatrix}

    Notice that B is necessarily square.
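
    You can sanity-check this claim with NumPy on random matrices (a quick sketch, nothing more):

```python
import numpy as np

n, m = 3, 4
A = np.random.rand(n, m)
B = np.random.rand(n, n)   # the coefficients b_ij, gathered into a matrix

# Each new row, built as an explicit linear combination of the old rows...
Rprime = np.array([sum(B[i, j] * A[j] for j in range(n)) for i in range(n)])

# ...is exactly row i of the product B A.
print(np.allclose(Rprime, B @ A))   # True
```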

    Example: Multiplying R_2 by two is equivalent to multiplying A from the left with the diagonal matrix \begin{bmatrix} 1 & 0 & 0 & ... & 0 \\ 0 & 2 & 0 & ... & 0 \\ 0 & 0 & 1 & ...& 0\\ ... \\ 0 & 0 & 0 & ... & 1 \end{bmatrix}

    Now we ask the big question: if there are square matrices B by which we can multiply both sides of the matrix equation A\vec{x}=\vec{b} to get BA\vec{x}=B\vec{b} and still get the new equation, what do they look like?

    If A\vec{x}=\vec{b} and BA\vec{x}=B\vec{b} have the same solutions, then their respective augmented matrices [A|b] and [BA|Bb] must have the same reduced row echelon form [R|r]. So there is a sequence of elementary row operations transforming [A|b] into [R|r], and from there into [BA|Bb], by reversing the elementary row operations required to transform [BA|Bb] into [R|r] (all three kinds of elementary row operations are reversible). But every elementary row operation may be represented as multiplication of the matrix from the left by some matrix E. So, looking at the coefficient part alone, we may write
    E_kE_{k-1}...E_2E_1A=BA for some matrices E_k,E_{k-1},...,E_2,E_1 (all of them invertible, since each corresponding elementary operation is reversible). But then B must equal the product E_kE_{k-1}...E_2E_1 and hence be invertible itself.
    Now the converse: assume B is invertible. Then BA\vec{x}=B\vec{b} is equivalent to B^{-1}BA\vec{x}=B^{-1}B\vec{b}, which after cancellation becomes A\vec{x}=\vec{b}. So the two must have the same solutions.
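
    A numeric spot-check of the converse direction (my own example; the B below has determinant 1, so it is invertible):

```python
import numpy as np

A = np.array([[3., 1., 2.],
              [2., 1., 4.],
              [5., 6., 2.]])
b = np.array([1., 2., 3.])

B = np.array([[1., 1., 0.],    # det B = 1, so B is invertible
              [0., 1., 0.],
              [2., 0., 1.]])

x  = np.linalg.solve(A, b)          # solution of  A x = b
xB = np.linalg.solve(B @ A, B @ b)  # solution of  BA x = Bb
print(np.allclose(x, xB))           # True: invertible B preserves solutions
```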

    So we have proved something: for a transformation of the type

    \vec{R'_{1}}=b_{11} \vec{R_1} + b_{12} \vec{R_2} +...+ b_{1n} \vec{R_n}
    \vec{R'_{2}}=b_{21} \vec{R_1} + b_{22} \vec{R_2} +...+ b_{2n} \vec{R_n}
    ...
    \vec{R'_{n}}=b_{n1} \vec{R_1} + b_{n2} \vec{R_2} +...+ b_{nn} \vec{R_n}

    to preserve the solution set of the system, its matrix B=\begin{bmatrix} b_{11} & b_{12} & ... & b_{1n} \\ b_{21} & b_{22} & ... & b_{2n} \\ ... \\ b_{n1} & b_{n2} & ... & b_{nn} \end{bmatrix} must be invertible. The fundamental theorem of invertible matrices gives quick ways to determine whether a matrix has an inverse.

    Example: If we do R'_1=R_1+R_2 and R'_2=R_1+R_2, then the matrix \begin{bmatrix}3 & 1 & 2 \\ 2 & 1 & 4 \\ 5 & 6 & 2 \end{bmatrix} becomes \begin{bmatrix}5 & 2 & 6 \\ 5 & 2 & 6 \\ 5 & 6 & 2 \end{bmatrix}.
    The matrix of this transformation is B=\begin{bmatrix} 1 & 1 & 0 \\ 1 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} (you can solve for the coefficients b_{ij} via Gauss-Jordan if they are not obvious). B has two equal rows, hence its determinant is 0, hence it is not invertible, hence the transformation does not preserve the solution set of a system A\vec{x}=\vec{b}.
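
    Checking the example with NumPy confirms the rank collapse (a quick sketch):

```python
import numpy as np

A = np.array([[3., 1., 2.],
              [2., 1., 4.],
              [5., 6., 2.]])
B = np.array([[1., 1., 0.],
              [1., 1., 0.],
              [0., 0., 1.]])

print(np.linalg.matrix_rank(A))      # 3: A x = b has a unique solution
print(np.linalg.matrix_rank(B))      # 2: two equal rows, B is singular
print(np.linalg.matrix_rank(B @ A))  # 2: a whole equation's worth of
                                     # information is lost, so BA x = Bb
                                     # no longer pins down a unique x
```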

    So, in conclusion: there are plenty of transformations besides the elementary ones that preserve the solution set of a system, but determining whether a given transformation preserves solutions is difficult. Moreover, all of them can be reduced to the three elementary kinds, which we can guarantee are invertible, so any sequence of them yields an equivalent system. They just have to be performed sequentially, taking care not to mix them.
    Last edited by mrproper; March 25th 2012 at 12:27 PM.

  4. #4
    Newbie
    Joined
    Mar 2012
    From
    Canada
    Posts
    3

    Re: Problem with Elementary row operations and rank theorems.

    Thank you, that actually made quite a lot of sense, and though I may still have some questions, that tied things together rather nicely. I think in these first courses one must take a lot of things for granted, so long as they are logical, of course. The alternative would be spending a great deal of time on proofs that are rather complex, but I still can't help it sometimes. Thank you.

  5. #5
    Junior Member
    Joined
    Apr 2011
    From
    Somewhere in cyberspace.
    Posts
    73
    Thanks
    2

    Re: Problem with Elementary row operations and rank theorems.

    I worked a little on the LaTeX typesetting of this thing, and in the end I was so bored that I didn't bother to proof-read it again. This:

    If there are square matrices B by which we can multiply both sides of the matrix equation A\vec{x}=\vec{b} to get BA\vec{x}=B\vec{b} and still get the new equation, what do they look like?

    should read:

    "If there are square matrices B by which we can multiply both sides of the matrix equation A\vec{x}=\vec{b} to get BA\vec{x}=B\vec{b} and still have the new equation keep the same set of solutions, what do they look like?"
