
Thread: matrices of linear equations

  1. #1 Member

    matrices of linear equations

    Let V be an n-dimensional vector space, and assume $\displaystyle V=U\oplus W$ for subspaces U and W. Suppose $\displaystyle T:V\rightarrow V$ is a linear transformation such that $\displaystyle T(U)\subseteq U$ and $\displaystyle T(W)\subseteq W$.
    Show that the matrix of T, with respect to a basis of V that contains bases of U and W, has the block shape:
    $\displaystyle \left( \begin{array}{cc} A & 0 \\ 0 & B \end{array} \right)$

    I am having trouble seeing how the matrix that represents T could be a $\displaystyle 2\times 2$ matrix.
    So far in my working I have set the basis of U to be $\displaystyle \{v_1,v_2,...,v_k\}$ and the basis of W to be $\displaystyle \{v_{k+1},v_{k+2},...,v_n\}$, and I can see that T(u) (where $\displaystyle u\in U$) can be written as a linear combination of the elements of the basis of U. But I can't see where to go from there. Help would really be appreciated.

  2. #2 MHF Contributor Bruno J.
    No, it's not a $\displaystyle 2\times 2$ matrix! Perhaps it wasn't written clearly in your book. Here $\displaystyle A,B$ denote square matrices of dimensions $\displaystyle k,l$ (where $\displaystyle k=\dim U$, $\displaystyle l=\dim W$), and $\displaystyle 0$ denotes a matrix of the appropriate size filled with $\displaystyle 0$'s.

    You have the correct first step. Now, by assumption, $\displaystyle T(v_1),\dots, T(v_k)$ can each be written in terms of $\displaystyle v_1, \dots, v_k$ only... what will that look like in the matrix?
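    In the notation above, the hint amounts to the following column picture (a sketch, assuming the basis is ordered with the basis of $\displaystyle U$ first, $\displaystyle k=\dim U$, and writing $\displaystyle a_{ij}$ for the coefficient of $\displaystyle v_i$ in $\displaystyle T(v_j)$): for each $\displaystyle j\le k$ we have $\displaystyle T(v_j)\in U$, so
    $\displaystyle T(v_j)=a_{1j}v_1+\cdots+a_{kj}v_k+0\,v_{k+1}+\cdots+0\,v_n,$
    which means the $\displaystyle j$-th column of the matrix has zeros in its last $\displaystyle n-k$ entries; doing this for every $\displaystyle j\le k$ fills the lower-left block with zeros.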

  3. #3 Member
    So:
    $\displaystyle T(v_1)=a_{11}v_1 + a_{21}v_2 +...+a_{k1}v_k$ and so on for each $\displaystyle v_j$. So the matrix will be:
    $\displaystyle A=[a_{ij}]$ where i and j represent the i-th row and the j-th column.
    Then I could repeat this step for the elements in the basis of W. But I still can't see how that will give the shape given.

  4. #4 MHF Contributor Drexel28
    Quote Originally Posted by worc3247
    So:
    $\displaystyle T(v_1)=a_{11}v_1 + a_{21}v_2 +...+a_{k1}v_k$ and so on for each $\displaystyle v_j$. So the matrix will be:
    $\displaystyle A=[a_{ij}]$ where i and j represent the i-th row and the j-th column.
    Then I could repeat this step for the elements in the basis of W. But I still can't see how that will give the shape given.

    This is called being reduced and has a much more general form (if you're interested, see here for a proof of the analogous theorem). The idea is that if $\displaystyle \{x_1,\cdots,x_m\}$ is a basis for $\displaystyle U$ and $\displaystyle \{x_{m+1},\cdots,x_n\}$ is a basis for $\displaystyle W$, then $\displaystyle \{x_1,\cdots,x_m,x_{m+1},\cdots,x_{n}\}$ is a basis for $\displaystyle V$. But since $\displaystyle T(x_k)\in U$ for $\displaystyle k=1,\cdots,m$, we may conclude that $\displaystyle \alpha_{i,k}=0$ for $\displaystyle k=1,\cdots,m$ and $\displaystyle i=m+1,\cdots,n$. Can you finish?
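    Written out, the conclusion so far (a sketch in this post's notation, with $\displaystyle m=\dim U$) is that the $\displaystyle (i,j)$ entries of $\displaystyle [T]_{\mathcal{B}}$ with $\displaystyle j\le m<i$ all vanish, so the matrix already has the shape
    $\displaystyle [T]_{\mathcal{B}}=\left( \begin{array}{cc} A & * \\ 0 & * \end{array} \right), \qquad A=[\alpha_{i,j}]_{1\le i,j\le m},$
    and the remaining step is the symmetric argument for $\displaystyle W$, which kills the upper-right entries as well.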

  5. #5 Member
    Quote Originally Posted by Drexel28
    conclude that $\displaystyle \alpha_{i,k}=0$ for $\displaystyle k=1,\cdots,m$ and $\displaystyle i=m+1,\cdots,n$. Can you finish?
    Ok, I understood you up until this part. Where did the $\displaystyle \alpha $ come from and why does it equal 0 for those values?

  6. #6 MHF Contributor Drexel28
    Quote Originally Posted by worc3247
    Ok, I understood you up until this part. Where did the $\displaystyle \alpha $ come from and why does it equal 0 for those values?
    For an ordered basis $\displaystyle \mathcal{B}=(x_1,\cdots,x_n)$ we know that there exist unique scalars $\displaystyle \alpha_{i,j},\text{ }i,j\in[n]$ such that $\displaystyle T(x_j)=\sum_{i=1}^{n}\alpha_{i,j}x_i$. We then define the matrix $\displaystyle \left[T\right]_{\mathcal{B}}$ to be the one whose $\displaystyle (i,j)^{\text{th}}$ coordinate is $\displaystyle \alpha_{i,j}$, right? But take $\displaystyle x_1$, for example (going back to our particular case). We know that $\displaystyle T(x_1)=\sum_{i=1}^{n}\alpha_{i,1}x_i\in U$. But for this to be true we must have $\displaystyle \alpha_{m+1,1}=\cdots=\alpha_{n,1}=0$. Make sense?
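    Spelled out for $\displaystyle x_1$ (same notation, with $\displaystyle m=\dim U$): since $\displaystyle T(x_1)\in U=\operatorname{span}\{x_1,\dots,x_m\}$, there are scalars $\displaystyle \beta_1,\dots,\beta_m$ with $\displaystyle T(x_1)=\beta_1x_1+\cdots+\beta_mx_m$. Comparing with the unique expansion in the full basis,
    $\displaystyle \sum_{i=1}^{n}\alpha_{i,1}x_i=\beta_1x_1+\cdots+\beta_mx_m+0\,x_{m+1}+\cdots+0\,x_n,$
    uniqueness of coordinates gives $\displaystyle \alpha_{i,1}=\beta_i$ for $\displaystyle i\le m$ and $\displaystyle \alpha_{i,1}=0$ for $\displaystyle i>m$. The same comparison works for $\displaystyle x_2,\dots,x_m$.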

  7. #7 Member
    And likewise for the subspace W you will have $\displaystyle \alpha_{i,j}=0$ for $\displaystyle i=1,\dots,m$ and $\displaystyle j=m+1,\dots,n$, which would then give the matrix shape needed? I think...
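    For the record, here is why the $\displaystyle W$ half works (same notation as above): for each $\displaystyle j=m+1,\dots,n$ we have $\displaystyle T(x_j)\in W=\operatorname{span}\{x_{m+1},\dots,x_n\}$, so
    $\displaystyle T(x_j)=\sum_{i=m+1}^{n}\alpha_{i,j}x_i, \qquad \text{i.e. } \alpha_{i,j}=0 \text{ for } i=1,\dots,m,$
    which zeroes the upper-right block, just as the $\displaystyle U$ half zeroed the lower-left block.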

  8. #8 MHF Contributor
    Same thing, said in a slightly different way: you have a basis $\displaystyle \{v_1, v_2, ..., v_k\}$ for U and a basis $\displaystyle \{v_{k+1}, v_{k+2}, ..., v_n\}$ for W, as you said originally. Of course, $\displaystyle \{v_1, v_2, ..., v_k, v_{k+1}, ..., v_n\}$ is then a basis for $\displaystyle V= U\oplus W$.

    You find the matrix representing T, in this basis, by applying T to each basis vector in turn. The coefficients of the expansion of $\displaystyle T(v_i)$ in those basis vectors give the i-th column of the matrix. In particular, $\displaystyle T(v_1)= a_{11}v_1+ a_{21}v_2+ ...+ a_{k1}v_{k}+ 0v_{k+1}+ ...+ 0v_n$ because $\displaystyle T(U)\subseteq U$: each of the vectors $\displaystyle v_1, v_2, ..., v_k$ is in U, so $\displaystyle T(v_1), T(v_2), ..., T(v_k)$ are also in U and have no component in W. That's why there are only 0's in rows k+1 to n of the first k columns.

    Similarly, because $\displaystyle v_{k+1}, ..., v_n$ are all in W and $\displaystyle T(W)\subseteq W$, the vectors $\displaystyle T(v_{k+1}), ..., T(v_n)$ are all in W and so have no component in U. The coefficients of $\displaystyle v_1, ..., v_k$ are all 0, and so rows 1 to k of columns k+1 to n are all 0.
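    A quick numerical sanity check of the same block structure, as a sketch only (the vectors, the map, and the names below are made up for illustration and are not from the thread):

    import numpy as np

    # U = span(u1, u2) and W = span(w1) inside V = R^3; T maps U into U and W into W.
    u1 = np.array([1.0, 1.0, 0.0])
    u2 = np.array([1.0, -1.0, 0.0])
    w1 = np.array([0.0, 0.0, 1.0])

    # Define T by its action on the chosen basis, keeping each subspace invariant.
    T_u1 = 2 * u1 + 3 * u2    # stays in U
    T_u2 = -1 * u1 + 4 * u2   # stays in U
    T_w1 = 5 * w1             # stays in W

    # Column j of the matrix of T holds the coordinates of T(v_j) in the basis (u1, u2, w1).
    P = np.column_stack([u1, u2, w1])             # basis vectors as columns
    images = np.column_stack([T_u1, T_u2, T_w1])  # images written in the standard basis
    M = np.linalg.solve(P, images)                # coordinates with respect to (u1, u2, w1)

    print(np.round(M, 10))
    # Expected (roughly):
    # [[ 2. -1.  0.]
    #  [ 3.  4.  0.]
    #  [ 0.  0.  5.]]
    # i.e. a k x k block A, an l x l block B, and zero blocks elsewhere.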

  9. #9 Member
    Got it! Thank you so much!
