# Proof for a basis of a linear transformation

• Feb 2nd 2010, 10:59 AM
Runty
Proof for a basis of a linear transformation
Suppose that $T,S:R^n \rightarrow R^n$ are inverses.

If { $v_1 ,v_2 ,..., v_k$} is a basis for a subspace $V$ of $R^n$ and $w_1 = T(v_1), w_2 = T(v_2),..., w_k = T(v_k)$, prove that { $w_1, w_2,..., w_k$} is a basis for $T(V)$.

In addition, give an example to show that this need not be true if T does not have an inverse.
• Feb 2nd 2010, 03:37 PM
Defunkt
Quote:

Originally Posted by Runty
Suppose that $T,S:R^n \rightarrow R^n$ are inverses.

If { $v_1 ,v_2 ,..., v_k$} is a basis for a subspace $V$ of $R^n$ and $w_1 = T(v_1), w_2 = T(v_2),..., w_k = T(v_k)$, prove that { $w_1, w_2,..., w_k$} is a basis for $T(V)$.

In addition, give an example to show that this need not be true if T does not have an inverse.

First, show that $\{w_1,...,w_k\}$ spans $T(V)$. Hint:
we can write any $v \in V$ as $v = \sum_{i=1}^k a_iv_i, ~ a_i \in \mathbb{R}$, therefore: $T(v) = T(\sum_{i=1}^k a_iv_i) = ...$

Now, show that $\{w_1,...,w_k\}$ is linearly independent: Assume it is not, and reach a contradiction.

Therefore it is a basis.
• Feb 4th 2010, 05:30 AM
Runty
Quote:

Originally Posted by Defunkt
First, show that $\{w_1,...,w_k\}$ spans $T(V)$. Hint:
we can write any $v \in V$ as $v = \sum_{i=1}^k a_iv_i, ~ a_i \in \mathbb{R}$, therefore: $T(v) = T(\sum_{i=1}^k a_iv_i) = ...$

Now, show that $\{w_1,...,w_k\}$ is linearly independent: Assume it is not, and reach a contradiction.

Therefore it is a basis.

I suppose this answer could work, but I'd like to, if possible, avoid using summation notation.
• Feb 4th 2010, 02:04 PM
Defunkt
Quote:

Originally Posted by Runty
I suppose this answer could work, but I'd like to, if possible, avoid using summation notation.

$v = a_1v_1 + ... + a_kv_k$

$T(v) = T(a_1v_1 + ... + a_kv_k) = a_1T(v_1) + ... + a_kT(v_k)$
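The argument above is easy to sanity-check numerically. A minimal sketch (the matrix $T$ and the basis below are arbitrary choices for illustration, not from the thread):

```python
import numpy as np

# An invertible T sends a basis to a basis.
T = np.array([[2.0, 1.0],
              [1.0, 1.0]])          # invertible: det = 1
v1 = np.array([1.0, 0.0])           # {v1, v2} is a basis of R^2
v2 = np.array([1.0, 1.0])

w1, w2 = T @ v1, T @ v2             # images of the basis vectors
# {w1, w2} is a basis iff the matrix [w1 w2] has full rank.
rank = np.linalg.matrix_rank(np.column_stack([w1, w2]))
print(rank)  # 2, so {w1, w2} is linearly independent, hence a basis
```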
• Feb 6th 2010, 06:38 PM
wopashui
Quote:

Originally Posted by Defunkt
$v = a_1v_1 + ... + a_kv_k$

$T(v) = T(a_1v_1 + ... + a_kv_k) = a_1T(v_1) + ... + a_kT(v_k)$

Since $T,S : \mathbb{R}^n \rightarrow \mathbb{R}^n$, $T$ can be represented by an $n \times n$ matrix, so you can also prove this by applying the big theorem (the Invertible Matrix Theorem).
• Feb 7th 2010, 10:31 AM
Runty
Quote:

Originally Posted by Defunkt
$v = a_1v_1 + ... + a_kv_k$

$T(v) = T(a_1v_1 + ... + a_kv_k) = a_1T(v_1) + ... + a_kT(v_k)$

Okay, that solves the first part. But I still need an example to show that this isn't necessarily true, provided that $T$ does not have an inverse.

Honestly, I find this whole question to be pretty obscure.
• Feb 7th 2010, 11:14 AM
Defunkt
Quote:

Originally Posted by Runty
Okay, that solves the first part. But I still need an example to show that this isn't necessarily true, provided that $T$ does not have an inverse.

Honestly, I find this whole question to be pretty obscure.

Take any transformation that is not invertible; for example, $T:\mathbb{R}^2 \to \mathbb{R}^2$ defined by $T((x,y)) = (x,0)$. Since T is not invertible, $Ker T \neq \{0\}$. In fact, $Ker T = \{(x,y) \in \mathbb{R}^2 : x = 0\}$.

Then, $Ker T$ is spanned by $(0, 1)$; however, $T(0,1) = (0,0)$, and the zero vector alone is not a basis.

This will work for any transformation that is not invertible:
Since it is not invertible, $Ker T \neq \{0\}$, but $Tw = 0$ for any $w \in Ker T$, so any basis of $Ker T$ is mapped to the zero vector.
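For what it's worth, this counterexample can be checked numerically as well; a minimal sketch using the projection above, written as a matrix:

```python
import numpy as np

# The projection T(x, y) = (x, 0) from the post, as a matrix.
T = np.array([[1.0, 0.0],
              [0.0, 0.0]])

v = np.array([0.0, 1.0])   # {v} is a basis of V = Ker T
w = T @ v                  # image of the basis vector
print(w)                   # [0. 0.] -- the zero vector
# A set containing only the zero vector has rank 0, so it is no basis.
print(np.linalg.matrix_rank(w.reshape(-1, 1)))  # 0
```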
• Feb 7th 2010, 11:16 AM
Runty
Quote:

Originally Posted by Defunkt
Take any transformation that is not invertible; for example, $T:\mathbb{R}^2 \to \mathbb{R}^2$ defined by $T((x,y)) = (x,0)$. Since T is not invertible, $Ker T \neq \{0\}$. In fact, $Ker T = \{(x,y) \in \mathbb{R}^2 : x = 0\}$.

Then, $Ker T$ is spanned by $(0, 1)$; however, $T(0,1) = (0,0)$, and the zero vector alone is not a basis.

This will work for any transformation that is not invertible:
Since it is not invertible, $Ker T \neq \{0\}$, but $Tw = 0$ for any $w \in Ker T$, so any basis of $Ker T$ is mapped to the zero vector.

By $KerT$, do you mean determinant? I've never seen the term $Ker$ used before.
• Feb 7th 2010, 08:27 PM
math2009
$
c_1\vec{w}_1+\cdots +c_k\vec{w}_k=0
\rightarrow c_1T(\vec{v}_1)+\cdots +c_kT(\vec{v}_k)=0
\rightarrow T(c_1\vec{v}_1+\cdots +c_k\vec{v}_k)=0
$

Let $A$ be the (invertible) standard matrix of $T$. Then

$
A(c_1\vec{v}_1+\cdots +c_k\vec{v}_k)=\vec{0}
\rightarrow c_1\vec{v}_1+\cdots +c_k\vec{v}_k=A^{-1}\vec{0}=\vec{0}
$

$
\rightarrow c_1=\cdots =c_k=0
$
since $\{\vec{v}_1,\cdots,\vec{v}_k\}$ is linearly independent; therefore $\{\vec{w}_1,\cdots,\vec{w}_k\}$ is linearly independent.

Since $T$ is invertible, $\dim T(V) = \dim V = k$, so

$
\dim span\{\vec{w}_1,\cdots,\vec{w}_k\} = k = \dim T(V)~ \therefore
$
it's a basis of $T(V)$.
• Feb 8th 2010, 05:00 AM
HallsofIvy
Quote:

Originally Posted by Runty
By $KerT$, do you mean determinant? I've never seen the term $Ker$ used before.

"ker(T)" is the "kernel" of T, also called the "null space" of T. It is the subspace of all vectors, v, such that Tv= 0.

If T is invertible, then Tv= 0 gives $T^{-1}T(v)= T^{-1}(0)$ or v= 0. That is, if T is invertible, its kernel (null space) consists only of the 0 vector.

In fact, the converse also holds: if the 0 vector is the only vector in the kernel of T, then T is invertible.

"null space" is used exclusively in linear algebra. "kernel" of an operator is also used in group theory, ring theory, etc.