Results 1 to 12 of 12

Math Help - Proving span and linear independence of a list of vectors.

  1. #1
    Junior Member
    Joined
    Oct 2009
    Posts
    30

    Proving span and linear independence of a list of vectors.

    Ok I have two very similar questions for you guys:

    1) Prove that if (v_1, . . ., v_n) spans R^n, then so does the list (v_1 - v_2, v_2 - v_3, . . ., v_{n-1} - v_n, v_n).

    2) Prove that if (v_1, . . ., v_n) is linearly independent in R^n, then so is the list (v_1 - v_2, v_2 - v_3, . . ., v_{n-1} - v_n, v_n).

    Proofs are new to me this year, and I seem to be struggling with a lot of them so far.

    Any thoughts on this one?

    Any help is very much appreciated


    Follow Math Help Forum on Facebook and Google+

  2. #2
    Banned
    Joined
    Oct 2009
    Posts
    4,261
    Thanks
    2
    Quote Originally Posted by joe909 View Post

    In this case both questions are one and the same IF you already know that dim R^n = n and that n linearly independent vectors in R^n form a basis.

    Now we know that {v_1,...,v_n} are linearly independent. This means that the linear combination a_1*v_1 +...+ a_n*v_n = 0 is possible iff a_1 =...= a_n = 0, where the a_i are scalars in R (real numbers).

    Well, USING THIS, you have to prove that if b_1*(v_1 - v_2) +...+ b_(n-1)*(v_(n-1) - v_n) + b_n*v_n = 0 then IT MUST BE that the scalars b_1 =...= b_n = 0.
    Hint: open the parentheses, reorder, and USE THE FIRST PART...
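
    Carrying out the hint (a sketch, not the full write-up): opening the parentheses and collecting the coefficient of each v_i gives

    ```latex
    b_1(v_1-v_2) + b_2(v_2-v_3) + \cdots + b_{n-1}(v_{n-1}-v_n) + b_n v_n
      = b_1 v_1 + (b_2-b_1)v_2 + \cdots + (b_n-b_{n-1})v_n .
    ```

    If this equals zero, independence of v_1, ..., v_n forces each collected coefficient to vanish, starting with b_1.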

    Tonio

  3. #3
    Super Member
    Joined
    Aug 2009
    From
    Israel
    Posts
    976
    Quote Originally Posted by joe909 View Post

    For #1, recall that the condition for a set A = \{v_1,v_2,...,v_n\} to span some space V (over a field F) is that \forall v \in V, there exist some scalars \alpha_1,\alpha_2,...,\alpha_n \in F such that v = \alpha_1v_1 + \alpha_2v_2 + ... + \alpha_nv_n. That is to say, any element of V can be expressed as a linear combination of the set's elements.

    In this question we are given that this is correct for the set B=\{v_1,v_2,...,v_n\} and we need to prove that it is also correct for B' = \{v_1-v_2,v_2-v_3,...,v_{n-1}-v_n,v_n\}.

    Since it is correct for B, for any v\in V there exist \alpha_1,\alpha_2,...,\alpha_n \in F such that v=\alpha_1v_1 + \alpha_2v_2 + ... + \alpha_nv_n. We want to find scalars \beta_1,\beta_2,...,\beta_n \in F such that v = \beta_1(v_1-v_2) + \beta_2(v_2-v_3) + ... + \beta_{n-1}(v_{n-1}-v_n) + \beta_nv_n. Try to see what happens if we let
    \beta_1 = \alpha_1
    \beta_2 = \alpha_2 - \beta_1
    .
    .
    \beta_i = \alpha_i - \beta_{i-1} \ \forall 2 \leq i \leq n

    Tonio: I think he has not reached the part about bases yet - this is why he is required to prove both claims... also, it is good practice to do so since he is only now starting with proofs!

  4. #4
    Junior Member
    Joined
    Oct 2009
    Posts
    30
    I'd like to thank you a lot for your help. I'll show you what I have, and if you could let me know if I'm on the right track that would be great.

    So I took your suggestion and I end up with the following:

    let alpha = a

    v = a_1v_1 + (a_2 - 2a_1)(v_2) + (a_3 - 2a_2 - 2a_1)(v_3)...

    I'm not really sure how this qualifies as a proof though. I see that all the a values are scalars, so we combine them into one variable value; is that enough to say that its span is R^n?

    Also I'm not sure how we can make the assumption that \beta_1 = \alpha_1.

    I'm sure these are probably really bad questions, but I'm having a lot of trouble getting this for some reason.

    Thanks again for your help

  5. #5
    Super Member
    Joined
    Aug 2009
    From
    Israel
    Posts
    976
    Quote Originally Posted by joe909 View Post
    I don't really understand what you did in the first part...

    Perhaps you misunderstood what I was saying. In order to prove that B' = \{v_1-v_2, v_2-v_3,...,v_{n-1}-v_n,v_n\} spans \mathbb{R}^n, you need to show that for any v in \mathbb{R}^n there exist some scalars \beta_1,\beta_2,...,\beta_{n-1},\beta_n \in F (it is up to you to find them) such that v = \beta_1(v_1-v_2) + \beta_2(v_2-v_3) + ... + \beta_{n-1}(v_{n-1}-v_n) + \beta_nv_n.

    That is to say, any element in V can be expressed as a linear combination of the elements of B'. So the proof is done by finding those coefficients ( \beta_1,...,\beta_n). We don't assume that \beta_1=\alpha_1 - we just let \beta_1=\alpha_1. Surely we can do that.

    Now we simply need to show that those specific coefficients satisfy the condition we want: that for every v \in \mathbb{R}^n, given that v = \alpha_1v_1 + \alpha_2v_2 + ... + \alpha_nv_n, this statement holds: \beta_1(v_1-v_2) + \beta_2(v_2-v_3) + ... + \beta_{n-1}(v_{n-1}-v_n)+\beta_nv_n = v.

    Now, if we let:
    \beta_1=\alpha_1
    \beta_i = \alpha_i + \beta_{i-1} for all i between 2 and n (this was a mistake in my first post -- there should've been a plus sign here, not a minus)

    And substitute them into \beta_1(v_1-v_2) + \beta_2(v_2-v_3) + ... + \beta_{n-1}(v_{n-1}-v_n)+\beta_nv_n, we get:

    \beta_1(v_1-v_2) + \beta_2(v_2-v_3) + ... + \beta_{n-1}(v_{n-1}-v_n)+\beta_nv_n = \alpha_1(v_1-v_2) + (\alpha_2+\alpha_1)(v_2-v_3) + ... + (\sum_{i=1}^{n-1}\alpha_i)(v_{n-1}-v_n) + (\sum_{i=1}^{n}\alpha_i)v_n (*)

    I'll leave it to you to show that (*) = \alpha_1v_1 + \alpha_2v_2 + ... + \alpha_nv_n and with that the proof will be done.
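
    As a quick numeric sanity check of this coefficient choice (a sketch, not part of the proof -- the vectors and scalars below are arbitrary made-up values), one can pick random v_i and \alpha_i, set \beta_i = \alpha_1 + ... + \alpha_i, and confirm both linear combinations produce the same vector:

    ```python
    # Sketch: verify numerically that with beta_i = alpha_1 + ... + alpha_i,
    # beta_1(v_1-v_2) + ... + beta_{n-1}(v_{n-1}-v_n) + beta_n v_n
    # equals alpha_1 v_1 + ... + alpha_n v_n for arbitrary vectors.
    import random

    n = 5
    vs = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
    alphas = [random.uniform(-1, 1) for _ in range(n)]
    betas = [sum(alphas[:i + 1]) for i in range(n)]  # prefix sums of the alphas

    def scale(c, v):
        return [c * x for x in v]

    def add(u, v):
        return [a + b for a, b in zip(u, v)]

    def sub(u, v):
        return [a - b for a, b in zip(u, v)]

    # RHS: alpha_1 v_1 + ... + alpha_n v_n
    rhs = [0.0] * n
    for a, v in zip(alphas, vs):
        rhs = add(rhs, scale(a, v))

    # LHS: beta_1(v_1-v_2) + ... + beta_{n-1}(v_{n-1}-v_n) + beta_n v_n
    lhs = [0.0] * n
    for i in range(n - 1):
        lhs = add(lhs, scale(betas[i], sub(vs[i], vs[i + 1])))
    lhs = add(lhs, scale(betas[n - 1], vs[n - 1]))

    assert all(abs(a - b) < 1e-9 for a, b in zip(lhs, rhs))
    ```

    Of course this checks only one random instance; the algebra above is what proves it for every v.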

  6. #6
    Junior Member
    Joined
    Oct 2009
    Posts
    30
    Thank you very much. Not only was I able to finish the proof, but I now feel I better understand what I was trying to prove and why the proof works. The tip about choosing the \beta_i helped a lot.

    I really appreciate you spending the time to help me with this problem.

  7. #7
    Super Member
    Joined
    Aug 2009
    Posts
    639
    Sorry, I'm new to this topic as well.

    May I know how you know that \beta_i can be expressed in terms of the alphas, and that it is

    \beta_i = \alpha_i + \beta_{i-1}?

    Can it be expressed in any form of the alphas, or only like that?


    Thanks!

  8. #8
    Super Member
    Joined
    Aug 2009
    From
    Israel
    Posts
    976
    Quote Originally Posted by alexandrabel90 View Post
    Well, you can simply write \beta_i = \sum_{k=1}^{i} \alpha_k

    To prove that \{v_1-v_2,...,v_{n-1}-v_n,v_n\} spans \mathbb{R}^n, we need to show the existence of those \beta_is... that is - we need to find them and show that they satisfy our condition. In other words, we want to find coefficients \beta_1,\beta_2,...,\beta_n that solve this equation: \beta_1(v_1-v_2) + \beta_2(v_2-v_3) + ... + \beta_{n-1}(v_{n-1}-v_n)+\beta_nv_n = \alpha_1v_1 + \alpha_2v_2+...+\alpha_nv_n (**)

    My thought process on choosing those specific coefficients was this: Consider the fact that v_1 is only in one of the elements of B', and its coefficient in the RHS of (**) is \alpha_1, so that will be the coefficient of its one element in the LHS as well. That's why \beta_1=\alpha_1. Now consider v_2 -- its coefficient is now (-\alpha_1) and we want to complete it to \alpha_2 with our remaining coefficient. So we will let \beta_2 = \alpha_2+\alpha_1 ==> the coefficient of v_2 is \alpha_2+\alpha_1-\alpha_1 = \alpha_2 just like we wanted. The same process continues for the rest of the elements of B'.
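
    A tiny sketch (made-up numbers) checking that this closed form \beta_i = \sum_{k=1}^{i} \alpha_k agrees with the recursion \beta_1 = \alpha_1, \beta_i = \alpha_i + \beta_{i-1}:

    ```python
    # Sketch: the recursion beta_1 = alpha_1, beta_i = alpha_i + beta_{i-1}
    # unrolls to the prefix sum beta_i = alpha_1 + ... + alpha_i.
    alphas = [3.0, -1.0, 4.0, 1.5, -5.0]  # arbitrary example scalars

    # recursion
    betas_rec = [alphas[0]]
    for a in alphas[1:]:
        betas_rec.append(a + betas_rec[-1])

    # closed form (prefix sums)
    betas_closed = [sum(alphas[:i + 1]) for i in range(len(alphas))]

    assert betas_rec == betas_closed  # [3.0, 2.0, 6.0, 7.5, 2.5]
    ```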

  9. #9
    Junior Member
    Joined
    Oct 2009
    Posts
    30
    2) Prove that if (v_1, . . ., v_n) is linearly independent in R^n, then so is the list (v_1 - v_2, v_2 - v_3, . . ., v_{n-1} - v_n, v_n).

    For the second question, is it enough to just say that
    0 = a_1v_1 + a_2v_2 + ... + a_nv_n such that a_1 = 0, a_2 = 0, ..., a_n = 0,

    and that we know we can rewrite a_1v_1 + a_2v_2 + ... + a_nv_n
    as b_1(v_1 - v_2) + b_2(v_2 - v_3) + ... + b_nv_n
    by letting
    b_1 = a_1
    b_2 = a_2 + b_1
    etc.,
    so therefore (v_1 - v_2, v_2 - v_3, . . ., v_{n-1} - v_n, v_n) is also linearly independent?
    Is this not kind of cheating though, as we are effectively just letting all the b values = 0? Or is this enough for the proof, or am I possibly on the complete wrong track?

  10. #10
    Super Member
    Joined
    Apr 2009
    Posts
    677
    Quote Originally Posted by joe909 View Post
    Hi - I trust you are correct in your approach but wrong in your argument. You demonstrated one set of values b_1, b_2, ..., b_n which works. There might be other values, e.g.:
    0*v_1 + 0*(2v_1) = 0, but are v_1 and 2v_1 independent?
    2*v_1 + (-1)*(2v_1) = 0 also works.
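
    The same point as a small sketch (hypothetical numbers): exhibiting one all-zero combination proves nothing, because the dependent pair \{v_1, 2v_1\} admits it too, alongside a nontrivial one:

    ```python
    # Sketch: {v1, 2*v1} is linearly DEPENDENT, yet the all-zero
    # combination still gives the zero vector -- so finding one set of
    # zero coefficients is not evidence of independence.
    v1 = [1.0, 2.0, 3.0]
    w = [2 * x for x in v1]  # w = 2*v1, clearly dependent on v1

    trivial = [0 * a + 0 * b for a, b in zip(v1, w)]       # 0*v1 + 0*w
    nontrivial = [2 * a + (-1) * b for a, b in zip(v1, w)]  # 2*v1 + (-1)*w

    assert trivial == [0.0, 0.0, 0.0]
    assert nontrivial == [0.0, 0.0, 0.0]  # zero with NONZERO coefficients
    ```

    Independence requires that the all-zero choice be the only one, which is what must actually be proved.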

  11. #11
    Junior Member
    Joined
    Oct 2009
    Posts
    30
    Quote Originally Posted by aman_cc View Post
    So is this what you are saying:

    Since we know (v_1, . . ., v_n) is linearly independent, we know that
    0 = a_1v_1 + a_2v_2 + ... + a_nv_n such that a_1 = 0, a_2 = 0, ..., a_n = 0, because in order for it to be linearly independent all the scalars must be 0, correct?

    And from the first part we proved that we can convert a_1v_1 + ... + a_nv_n
    into b_1(v_1 - v_2) + ... + b_nv_n by letting b_1 = a_1,
    b_2 = a_2 + b_1.

    However, you are saying that I need to prove that there aren't other possible beta values, other than b_1 = a_1, b_2 = a_2 + b_1, etc., that will add up to 0? That makes sense to me; however I'm not sure how I would even begin going about doing that. Any tips or suggestions?

  12. #12
    Super Member
    Joined
    Apr 2009
    Posts
    677
    Quote Originally Posted by joe909 View Post
    Hint - You need to use the fact that v_1, v_2, ..., v_n are independent!
    I would also structure your problem better/clearer.
    Known fact: a_1v_1 + ... + a_nv_n = 0 => a_1, a_2, ..., a_n = 0
    To prove:
    b_1(v_1 - v_2) + b_2(v_2 - v_3) + ... + b_{n-1}(v_{n-1} - v_n) + b_nv_n = 0 => b_1, b_2, ..., b_n = 0

    If you read it, this formulation is equivalent to what you are doing.
    You have actually done it pretty much. It's trivial.
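
    Written out in the structure suggested above (a sketch in the thread's notation), the "to prove" step goes:

    ```latex
    0 = b_1(v_1-v_2) + b_2(v_2-v_3) + \cdots + b_{n-1}(v_{n-1}-v_n) + b_n v_n
      = b_1 v_1 + (b_2-b_1)v_2 + \cdots + (b_n-b_{n-1})v_n ,
    ```

    and independence of v_1, ..., v_n makes every collected coefficient zero: b_1 = 0, then b_2 - b_1 = 0 so b_2 = 0, and so on up to b_n = 0.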
