
Math Help - tricky linear algebra proof

  1. #1 Junior Member (Sydney, joined Jan 2011, 36 posts)

    tricky linear algebra proof

    If A is an m \times m matrix and A^2 = 0, prove that \text{rank}(A) \le m/2.

  2. #2 Senior Member (Los Angeles, California, joined May 2010, 274 posts, 1 thanks)
    We have that

    \text{rank}(A)+\text{nullity}(A)=m.

    Since A^2=0, the column space of A is contained in the null space of A. Hence \text{nullity}(A)\ge \text{rank}(A), so m=\text{rank}(A)+\text{nullity}(A)\ge 2\,\text{rank}(A), and the result follows.
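    For a concrete check, take A=\begin{bmatrix}0&1\\0&0\end{bmatrix}. Then A^2=0, \text{rank}(A)=1 and \text{nullity}(A)=1, so \text{rank}(A)=1\le m/2=1 and the bound is attained.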

  3. #3 Junior Member (Sydney, joined Jan 2011, 36 posts)
    Hi ojones, thanks for your reply. I can't seem to work out how A^2 = 0 implies that col(A) is contained in null(A). This is probably really easy, but I can't seem to get anywhere.
    Cheers

  4. #4 Senior Member (Los Angeles, California, joined May 2010, 274 posts, 1 thanks)
    If we write A=[\mathbf{a}_1, \cdots ,\mathbf{a}_m], then A^2=[A\mathbf{a}_1, \cdots, A\mathbf{a}_m]. If A^2=0, then A\mathbf{a}_i=\mathbf{0} for all i.
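    For instance, with A=\begin{bmatrix}0&1\\0&0\end{bmatrix} the columns are \mathbf{a}_1=(0,0)^T and \mathbf{a}_2=(1,0)^T, and indeed A\mathbf{a}_1=A\mathbf{a}_2=\mathbf{0}, so each column of A lies in the null space of A.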

  5. #5 MHF Contributor (Tejas, joined Mar 2011, 3,397 posts, 760 thanks)
    col(A) = A(V) = {Av : v in V}, where V = R^m is the whole space.

    so if w is in col(A), w = A(v).

    A(w) = A(A(v)) = A^2(v) = 0.

    thus all w = A(v) in col(A) are in the null space of A.
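    for a quick numerical check: with A the 2x2 matrix with rows (0,1) and (0,0), and v = (3,5), we get w = A(v) = (5,0), which is in col(A), and A(w) = A^2(v) = (0,0).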

  6. #6 Senior Member (Los Angeles, California, joined May 2010, 274 posts, 1 thanks)
    OK, you're using a non-standard, but equivalent, definition of the column space. I'm using the definition that it's the span of the columns of A; you're using the definition that it's the range of A.

  7. #7 MHF Contributor (Tejas, joined Mar 2011, 3,397 posts, 760 thanks)
    it works out the same. suppose the columns of A are v1,v2,...,vn = A(e1),A(e2),...,A(en).

    then w = a1v1+a2v2+...+anvn = a1(A(e1))+a2(A(e2))+...+an(A(en)) = A(a1e1+a2e2+...+anen) = A(a1,a2,...,an).

    so A(w) = A^2(a1,a2,...,an) = 0.

    linear combinations of the vj ARE linear combinations of images under A of the ej, so every vector spanned by the columns of A is the image of a vector spanned by the standard basis.

    conversely, if v is a vector in R^n, then v = a1e1+a2e2+...+anen, so A(v) = a1A(e1)+a2A(e2)+...+anA(en), and A(ej) IS the j-th column of A (that's what ej does: it picks out the j-th entry in every row). so any element in the range of A is in the span of the columns of A.
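
    a small example of the two views agreeing: if A is the 2x2 matrix with columns (1,3) and (2,4), then A(e1) = (1,3) and A(e2) = (2,4) are exactly those columns, and any A(v) with v = a1e1 + a2e2 equals a1(1,3) + a2(2,4), a linear combination of the columns.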

    all this wholesome goodness happens because matrices induce linear maps from R^n to R^m: the map v-->A(v) is linear. and ultimately the linearity is built-in for matrices because of the distributivity in the underlying field of the vector space R^n (in this case, R), and the compatibility of scalar multiplication with vector addition.

    in other words, in any linear algebra setting, there are two ways of looking at things: numerically (based on matrices, coordinates, orthogonality), and algebraically (based on linearity, bases and inner products). engineering and applications of linear algebra tend to emphasize the computational elements: the vector (1,2,4), the matrix A. for each of these notions there is a more abstract way of looking at it (a physicist thinks of a null space, a mathematician thinks "kernel").

  8. #8 Senior Member (Los Angeles, California, joined May 2010, 274 posts, 1 thanks)
    Quote Originally Posted by Deveno View Post
    it works out the same.
    Yes, it does.

    Quote Originally Posted by Deveno View Post
    in other words, in any linear algebra setting, there are two ways of looking at things: numerically (based on matrices, coordinates, orthogonality), and algebraically (based on linearity, bases and inner products). engineering and applications of linear algebra tend to emphasize the computational elements: the vector (1,2,4), the matrix A. for each of these notions there is a more abstract way of looking at it (a physicist thinks of a null space, a mathematician thinks "kernel").
    I think what you mean here is that you can either work with a concrete example or you can work in generality, or abstraction. The null space and kernel are just different names for the same thing; this is not an example illustrating the differences between mathematicians and physicists. A better example would be the Dirac delta function. The way it was defined by Dirac, it made no sense mathematically. He didn't care, however, because it worked. It was the mathematician Schwartz who made sense of it.

  9. #9 MHF Contributor (Tejas, joined Mar 2011, 3,397 posts, 760 thanks)
    i read (part of) a book the other day called "group theory for physicists". it got away from homomorphisms just as soon as possible, and went hell-bent for leather on representations and character tables.

    isn't the dirac delta (what-ever-it-is, distribution, i guess) awesome? i mean, physically, we know that something like a "unit pulse" exists, which points to the notion of function not capturing the intuition we thought it did.

    i don't mean to slight physicists, btw. some of them know a good deal more mathematics than i do. i was just trying to illustrate the difference between "concrete" and "abstract". for an abstract theory to be meaningful, there ought to be a faithful concrete representation of it. for a concrete theory to be logical, there ought to be an elegant abstract framework beneath it.

  10. #10 Senior Member (Los Angeles, California, joined May 2010, 274 posts, 1 thanks)
    Quote Originally Posted by Deveno View Post
    isn't the dirac delta (what-ever-it-is, distribution, i guess) awesome? i mean, physically, we know that something like a "unit pulse" exists, which points to the notion of function not capturing the intuition we thought it did.
    Well, the delta function is not a function; it's a distribution, as you say. There is no function that is zero everywhere except at a single point yet has integral 1.
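    To make that precise, the delta is the distribution defined by its action on test functions: \langle\delta,\varphi\rangle=\varphi(0), which is what the informal notation \int_{-\infty}^{\infty}\delta(x)\varphi(x)\,dx=\varphi(0) is shorthand for.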

    Quote Originally Posted by Deveno View Post
    i don't mean to slight physicists, btw. some of them know a good deal more mathematics than i do. i was just trying to illustrate the difference between "concrete" and "abstract". for an abstract theory to be meaningful, there ought to be a faithful concrete representation of it. for a concrete theory to be logical, there ought to be an elegant abstract framework beneath it.
    I guess!
