Math Help - Question about a proof in a particular text on linear algebra ...

  1. #1
    Junior Member
    Joined
    Apr 2011
    Posts
    62
    Thanks
    1

    Question about a proof in a particular text on linear algebra ...

    Greetings again,

    I am wondering about the proof of a theorem. I suspect that the proof, as described in the textbook I'm reading (see below for the details), is either imprecise (maybe intentionally, since this is an introductory-level book) or, what is infinitely more probable, my understanding of diagonalizable matrices, eigenvectors and eigenspaces has some flaws. The theorem concerns a method for approximating the dominant eigenvalue. In the textbook it is called the "power method", and it seems to me that the same thing is described on Wikipedia as the "power iteration" algorithm, or von Mises iteration.

    Theorem: Let A be an n\times n diagonalizable matrix with dominant eigenvalue \lambda_{1}. Then there exists a nonzero vector x_{0} such that the sequence of vectors x_{k} defined by x_{1}=Ax_{0}, x_{2}=Ax_{1}, x_{3}=Ax_{2}, ... , x_{k}=Ax_{k-1}, ... approaches a dominant eigenvector of A.

    Proof: We may assume that the eigenvalues of A have been labeled so that:
    |\lambda_{1}| > |\lambda_{2}| \geq |\lambda_{3}| \geq ...\geq |\lambda_{n}|

    Let v_{1}, v_{2}, v_{3}, ... , v_{n} be the corresponding eigenvectors. Since v_{1}, v_{2}, v_{3}, ... , v_{n} are linearly independent, they form a basis for \mathbb{R}^{n}. Consequently, we can write x_{0} as a linear combination of these eigenvectors, say:
    x_{0}=c_{1}v_{1}+c_{2}v_{2}+ ... + c_{n}v_{n}

    Applying x_{k}=Ax_{k-1} repeatedly gives x_{k}=A^{k}x_{0}, and since A^{k}v_{i}=\lambda_{i}^{k}v_{i} for each eigenvector, substituting the above equation yields:
    x_{k}=A^{k}x_0=c_{1} \lambda_{1}^{k}v_{1}+c_{2} \lambda_{2}^{k}v_{2}+ ... + c_{n} \lambda_{n}^k v_{n}

    We next factor out \lambda_{1}^k to get

    x_{k} = \lambda_{1}^{k} ( c_{1}v_{1} + c_{2} (\frac{\lambda_{2}}{\lambda_{1}})^{k} v_{2} + ... + c_{n} (\frac{\lambda_{n}}{\lambda_{1}})^{k} v_{n} )
    Doing so, we use the fact that \lambda_{1} is not equal to zero.
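
    (For concreteness, here is how I understand the iteration in code: a minimal sketch in Python with numpy, where the matrix A and the starting vector are just my own toy examples, not from the book. Rescaling each iterate is my addition to keep the numbers bounded; it only changes the length of x_{k}, not its direction.)

    Code:
    import numpy as np

    # a diagonalizable matrix with eigenvalues 4 and 2, so lambda_1 = 4 is dominant
    A = np.array([[3.0, 1.0],
                  [1.0, 3.0]])

    x = np.array([1.0, 0.0])  # an arbitrary non-zero starting vector x_0

    for k in range(50):
        x = A @ x                  # x_k = A x_{k-1}, as in the theorem
        x = x / np.linalg.norm(x)  # rescale; the direction is unchanged

    # the direction converges to the dominant eigenvector (1, 1)/sqrt(2),
    # and the Rayleigh quotient x.(Ax) recovers the dominant eigenvalue (~4.0)
    print(x, x @ (A @ x))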

    So far, so good. But I cannot see why the dominant eigenvalue cannot be zero. Obviously, if there is more than one distinct eigenvalue, then for \lambda_{1} to be dominant its absolute value must be greater than the absolute value of every other eigenvalue, so it cannot be zero. But what if the only eigenvalue is 0? It is certainly not wrong for a matrix to have an eigenvalue equal to zero, and it is certainly not wrong for a diagonalizable matrix to have fewer than n distinct eigenvalues. At first look it seems to me that only the zero matrix can have zero as its only eigenvalue (and it has all of \mathbb{R}^n as the corresponding eigenspace). But if we substitute the zero matrix for A, things certainly won't behave as in the theorem: x_{k}=A^{k}x_{0} is the zero vector for every k \geq 1, and the zero vector is not an eigenvector at all.
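
    (A tiny numpy experiment to confirm that last claim about the zero matrix:)

    Code:
    import numpy as np

    A = np.zeros((2, 2))      # the zero matrix: 0 is its only eigenvalue
    x = np.array([1.0, 2.0])  # any non-zero starting vector
    print(A @ x)              # [0. 0.]: every x_k with k >= 1 is the zero vector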

    Uhhh. Help???

    The book in question is David Poole's Linear Algebra (Second Edition). (A great book, as far as a layman like me can say.) This is Theorem 4.28 on page 309.

  2. #2
    MHF Contributor

    Joined
    Mar 2011
    From
    Tejas
    Posts
    3,150
    Thanks
    591

    Re: Question about a proof in a particular text on linear algebra ...

    yes, a matrix can have 0 as an eigenvalue. if, however, 0 is the DOMINANT eigenvalue, this means that 0 is the ONLY eigenvalue. since A is assumed diagonalizable, this means there is an invertible matrix P for which:

    A = P0P^{-1} = 0.

    if A = 0, then EVERY non-zero vector is a dominant eigenvector, so we get one right away (pick any non-zero x_{0}. whoa! a dominant eigenvector!). this is a trivial case, and can be disregarded. so we can assume that A has a non-zero eigenvalue.

    there do exist non-zero matrices whose only eigenvalue is 0, but these matrices are not diagonalizable (such matrices are called nilpotent). here is an example: A =

    [0 1]
    [0 0].
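
    to see this fail numerically, here is a quick check (a minimal sketch in python with numpy; the matrix is the one above):

    Code:
    import numpy as np

    # the example above: its only eigenvalue is 0, and it is not diagonalizable
    A = np.array([[0.0, 1.0],
                  [0.0, 0.0]])

    print(A @ A)              # the zero matrix: A is nilpotent (A^2 = 0)

    x = np.array([1.0, 1.0])  # any starting vector
    print(A @ x)              # x_1 = (1, 0)
    print(A @ (A @ x))        # x_2 = (0, 0): the iteration collapses to zero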

    let me try to give you an idea of what something like this theorem could be used for. imagine you have an incoming sound signal, composed of various component wave-forms. you have a signal processor, which acts as a linear operator. an eigenvector, in this case, is a wave-shape which is amplified with minimal distortion (theoretically 0). therefore, if you arrange for the sound signal input to be matched to the eigenvectors of the processor, you will get a clean sound, but with, say, the treble boosted. technology related to this is actually used in noise filters, to sort out signals that carry useful information from "scattered" (non-eigenvector) wave-forms.

    similar technology is used to make cars quieter on the road (inside the cabin), or to test a steel beam for imperfections (the beam is struck with a hammer, and the eigenvalues are "heard". an experienced steel worker knows what good eigenvalues sound like; they listen for the dominant eigenvalue, which should have a clear, crisp, bell-like tone).

    determining the largest (dominant) eigenvalue is key to the speed and efficiency of search engines like google. when that low-rider car with the blown-out speakers drives by and shakes the street... yes, you can blame eigenvectors. systems of differential equations often have linear models, and what matters in these models is the size of the eigenvalues. even if these systems are "chaotic", at the dominant eigenvalues they often display coherent and predictable behavior.

    often, some underlying symmetry in the modelling of a situation justifies the assumption that the matrix we assign to it is diagonalizable. diagonalization is GOOD, because diagonal matrices are much easier to compute with than arbitrary ones (fewer computations). if we already know a matrix is diagonalizable, but haven't actually diagonalized it (computing the inverse of an n\times n matrix is not a user-friendly task for, let's say, n > 6), computing the eigenvectors numerically is sometimes useful (especially if you just want "the dominant eigenspace"); a rough sketch of this is given below. remember, the matrices you will do problems with in your text are "toy matrices", 2\times 2 or 3\times 3 for the most part. in the "real world", matrices can have hundreds or thousands of rows/columns.
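
    here is that rough sketch (python with numpy; the random symmetric matrix, the planted dominant eigenvalue, and the iteration count are all just illustrative choices). symmetric matrices are always diagonalizable, and the planted spike keeps |\lambda_{2}/\lambda_{1}| well below 1, so the ratios in the proof above die off quickly:

    Code:
    import numpy as np

    rng = np.random.default_rng(0)
    n = 500

    B = rng.standard_normal((n, n))
    A = (B + B.T) / 2            # random symmetric, hence diagonalizable

    v = rng.standard_normal(n)
    v /= np.linalg.norm(v)
    A += 100.0 * np.outer(v, v)  # plant a well-separated dominant eigenvalue near 100

    # power iteration: only matrix-vector products, no inverses or factorizations
    x = rng.standard_normal(n)
    for _ in range(200):
        x = A @ x
        x /= np.linalg.norm(x)

    estimate = x @ (A @ x)             # rayleigh quotient, estimates lambda_1
    exact = np.linalg.eigvalsh(A)[-1]  # full diagonalization, for comparison
    print(estimate, exact)             # the two agree closely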

  3. #3
    Junior Member
    Joined
    Apr 2011
    Posts
    62
    Thanks
    1

    Re: Question about a proof in a particular text on linear algebra ...

    Deveno, thanks for being, as always, clear, precise and thorough!

    Not being an engineer or physicist, I don't know much about sound waves, but the applications to Google PageRank are surely a killer and of great interest to me. Besides, Markov chains look very interesting and have been on my ultimate reading list for some time...

