# Thread: matrices of linear equations

1. ## matrices of linear equations

Let V be an n-dimensional vector space, and assume $\displaystyle V=U\oplus W$ for subspaces U and W. Suppose $\displaystyle T:V\rightarrow V$ is a linear transformation such that $\displaystyle T(U)\subseteq U$ and $\displaystyle T(W)\subseteq W$.
Show that the matrix of T with respect to a basis of V that contains bases of U and W, has block shape:
$\displaystyle \left( \begin{array}{cc} A & 0 \\ 0 & B \end{array} \right)$

I am having trouble seeing how the matrix that represents T would be a 2x2 matrix.
So far in my working I have set the basis of U to be $\displaystyle \{v_1,v_2,...,v_k\}$ and the basis of W to be $\displaystyle \{v_{k+1},v_{k+2},...,v_n\}$, and I can see that $\displaystyle T(u)$ (where $\displaystyle u\in U$) can be written as a linear combination of the elements in the basis of U. But I can't see where to go from there. Help would really be appreciated.

2. No, it's not a $\displaystyle 2\times 2$ matrix! Perhaps it wasn't written clearly in your book. Here $\displaystyle A,B$ denote square matrices of sizes $\displaystyle k\times k$ and $\displaystyle l\times l$ (where $\displaystyle k=\dim U$ and $\displaystyle l=\dim W$), and $\displaystyle 0$ denotes a matrix of the appropriate size filled with $\displaystyle 0$'s.

You have the correct first step. Now by assumption, $\displaystyle T(v_1),\dots, T(v_k)$ can each be written in terms of $\displaystyle v_1, \dots, v_k$ alone... what will that look like in the matrix?

3. So:
$\displaystyle T(v_1)=a_{11}v_1 + a_{21}v_2 +...+a_{k1}v_k$ and so on for each $\displaystyle v_j$. So the matrix will be:
$\displaystyle A=[a_{ij}]$ where $\displaystyle a_{ij}$ sits in the i-th row and j-th column (the coefficients of $\displaystyle T(v_j)$ fill the j-th column).
Then I could repeat this step for the elements in the basis of W. But I still can't see how that will give the shape given.

4. Originally Posted by worc3247
So:
$\displaystyle T(v_1)=a_{11}v_1 + a_{21}v_2 +...+a_{k1}v_k$ and so on for each $\displaystyle v_j$. So the matrix will be:
$\displaystyle A=[a_{ij}]$ where $\displaystyle a_{ij}$ sits in the i-th row and j-th column (the coefficients of $\displaystyle T(v_j)$ fill the j-th column).
Then I could repeat this step for the elements in the basis of W. But I still can't see how that will give the shape given.

This is called being reduced and has a much more general form (if you're interested, see here for a proof of the analogous theorem). The idea is that if $\displaystyle \{x_1,\cdots,x_m\}$ is a basis for $\displaystyle U$ and $\displaystyle \{x_{m+1},\cdots,x_n\}$ is a basis for $\displaystyle W$, then $\displaystyle \{x_1,\cdots,x_m,x_{m+1},\cdots,x_{n}\}$ is a basis for $\displaystyle V$. But, since $\displaystyle T(x_k)\in U$ for $\displaystyle k=1,\cdots,m$, we may conclude that $\displaystyle \alpha_{i,k}=0$ for $\displaystyle k=1,\cdots,m$ and $\displaystyle i=m+1,\cdots,n$. Can you finish?

5. Originally Posted by Drexel28
conclude that $\displaystyle \alpha_{i,k}=0$ for $\displaystyle k=1,\cdots,m$ and $\displaystyle i=m+1,\cdots,n$. Can you finish?
Ok, I understood you up until this part. Where did the $\displaystyle \alpha$ come from and why does it equal 0 for those values?

6. Originally Posted by worc3247
Ok, I understood you up until this part. Where did the $\displaystyle \alpha$ come from and why does it equal 0 for those values?
For an ordered basis $\displaystyle \mathcal{B}=(x_1,\cdots,x_n)$ we know that there exist unique scalars $\displaystyle \alpha_{i,j},\text{ }i,j\in[n]$ such that $\displaystyle \displaystyle T(x_j)=\sum_{i=1}^{n}\alpha_{i,j}x_i$. We then define the matrix $\displaystyle \left[T\right]_{\mathcal{B}}$ to be the one whose $\displaystyle (i,j)^{\text{th}}$ entry is $\displaystyle \alpha_{i,j}$, right? But, take $\displaystyle x_1$ for example (going back to our particular case). We know that $\displaystyle \displaystyle T(x_1)=\sum_{i=1}^{n}\alpha_{i,1}x_i\in U$. But, for this to be true we must have that $\displaystyle \alpha_{m+1,1}=\cdots=\alpha_{n,1}=0$. Make sense?
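Collecting these zeros for all of the first $\displaystyle m$ columns, and (by the same argument applied to $\displaystyle W$) for the last $\displaystyle n-m$ columns, gives exactly the claimed block shape. As a sketch in this notation:

```latex
\left[T\right]_{\mathcal{B}}
= \begin{pmatrix}
    \alpha_{1,1} & \cdots & \alpha_{1,m} & 0 & \cdots & 0 \\
    \vdots       &        & \vdots       & \vdots &   & \vdots \\
    \alpha_{m,1} & \cdots & \alpha_{m,m} & 0 & \cdots & 0 \\
    0            & \cdots & 0            & \alpha_{m+1,m+1} & \cdots & \alpha_{m+1,n} \\
    \vdots       &        & \vdots       & \vdots &   & \vdots \\
    0            & \cdots & 0            & \alpha_{n,m+1}   & \cdots & \alpha_{n,n}
  \end{pmatrix}
= \begin{pmatrix} A & 0 \\ 0 & B \end{pmatrix}
```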

7. And likewise for the subspace W you will have $\displaystyle \alpha_{i,j} = 0$ for $\displaystyle i=1,\dots,m$ and $\displaystyle j=m+1,\dots,n$, which would then give the matrix shape needed? I think...

8. Same thing, said in a little different way: you have a basis $\displaystyle \{v_1, v_2, ..., v_k\}$ for U and a basis $\displaystyle \{v_{k+1}, v_{k+2}, ..., v_n\}$ for W, as you said originally. Of course, $\displaystyle \{v_1, v_2, ..., v_k, v_{k+1}, ..., v_n\}$ is a basis for $\displaystyle V= U\oplus W$.

You find the matrix representing T, in this basis, by applying T to each basis vector in turn. The coefficients of the expansion of $\displaystyle T(v_i)$ in those basis vectors give the i-th column of the matrix. In particular, $\displaystyle T(v_1)= a_{11}v_1+ a_{21}v_2+ ...+ a_{k1}v_{k}+ 0v_{k+1}+ ...+ 0v_n$ because $\displaystyle T(U)\subseteq U$: each of the vectors $\displaystyle v_1, v_2, ..., v_k$ is in U, so $\displaystyle T(v_1), T(v_2), ..., T(v_k)$ are also in U and have no component in W. That's why there are only "0"s in rows k+1 to n of the first k columns.

Similarly, because $\displaystyle v_{k+1}, ..., v_n$ are all in W and $\displaystyle T(W)\subseteq W$, the vectors $\displaystyle T(v_{k+1}), ..., T(v_n)$ are all in W and so have no component in U. Their coefficients of $\displaystyle v_1, ..., v_k$ are all 0, and so the first k rows of columns k+1 to n are all 0.
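Not part of the original thread, but as a quick numerical sanity check (assuming numpy is available), one can build a matrix with exactly this block shape and verify the invariance in the other direction: a vector supported on the first k basis slots (i.e. a vector of U in the adapted basis) is mapped to a vector supported on the same slots, and likewise for the last n-k slots.

```python
import numpy as np

rng = np.random.default_rng(0)
k, n = 2, 5  # illustrative choices: dim U = 2, dim W = 3

# Matrix of T in a basis adapted to V = U (+) W: block diagonal by construction.
A = rng.standard_normal((k, k))          # action of T on U
B = rng.standard_normal((n - k, n - k))  # action of T on W
T = np.block([[A, np.zeros((k, n - k))],
              [np.zeros((n - k, k)), B]])

# u has coordinates only in the first k basis vectors, i.e. u is in U;
# T @ u again has zero coordinates in the last n-k slots, i.e. T(u) is in U.
u = np.zeros(n)
u[:k] = rng.standard_normal(k)
print(np.allclose((T @ u)[k:], 0))  # True: T(U) contained in U

# Likewise for a vector of W (coordinates only in the last n-k slots).
w = np.zeros(n)
w[k:] = rng.standard_normal(n - k)
print(np.allclose((T @ w)[:k], 0))  # True: T(W) contained in W
```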

9. Got it! Thank you so much!