Thread: Proof for a basis of a linear transformation

1. Proof for a basis of a linear transformation

Suppose that $\displaystyle T,S:R^n \rightarrow R^n$ are inverses.

If $\displaystyle \{v_1, v_2, \dots, v_k\}$ is a basis for a subspace $\displaystyle V$ of $\displaystyle R^n$ and $\displaystyle w_1 = T(v_1), w_2 = T(v_2), \dots, w_k = T(v_k)$, prove that $\displaystyle \{w_1, w_2, \dots, w_k\}$ is a basis for $\displaystyle T(V)$.

In addition, give an example to show that this need not be true if T does not have an inverse.

2. Originally Posted by Runty
Suppose that $\displaystyle T,S:R^n \rightarrow R^n$ are inverses.

If $\displaystyle \{v_1, v_2, \dots, v_k\}$ is a basis for a subspace $\displaystyle V$ of $\displaystyle R^n$ and $\displaystyle w_1 = T(v_1), w_2 = T(v_2), \dots, w_k = T(v_k)$, prove that $\displaystyle \{w_1, w_2, \dots, w_k\}$ is a basis for $\displaystyle T(V)$.

In addition, give an example to show that this need not be true if T does not have an inverse.
First, show that $\displaystyle \{w_1,...,w_k\}$ spans $\displaystyle T(V)$. Hint:
we can write any $\displaystyle v \in V$ as $\displaystyle v = \sum_{i=1}^k a_iv_i, ~ a_i \in \mathbb{R}$, therefore: $\displaystyle T(v) = T(\sum_{i=1}^k a_iv_i) = ...$

Now, show that $\displaystyle \{w_1,...,w_k\}$ is linearly independent: Assume it is not, and reach a contradiction.

Therefore it is a basis.

3. Originally Posted by Defunkt
First, show that $\displaystyle \{w_1,...,w_k\}$ spans $\displaystyle T(V)$. Hint:
we can write any $\displaystyle v \in V$ as $\displaystyle v = \sum_{i=1}^k a_iv_i, ~ a_i \in \mathbb{R}$, therefore: $\displaystyle T(v) = T(\sum_{i=1}^k a_iv_i) = ...$

Now, show that $\displaystyle \{w_1,...,w_k\}$ is linearly independent: Assume it is not, and reach a contradiction.

Therefore it is a basis.
I suppose this answer could work, but I'd like to, if possible, avoid using summation notation.

4. Originally Posted by Runty
I suppose this answer could work, but I'd like to, if possible, avoid using summation notation.
$\displaystyle v = a_1v_1 + ... + a_kv_k$

$\displaystyle T(v) = T(a_1v_1 + ... + a_kv_k) = a_1T(v_1) + ... + a_kT(v_k)$

5. Originally Posted by Defunkt
$\displaystyle v = a_1v_1 + ... + a_kv_k$

$\displaystyle T(v) = T(a_1v_1 + ... + a_kv_k) = a_1T(v_1) + ... + a_kT(v_k)$

Since $\displaystyle T,S : R^n \to R^n$ are inverses, $\displaystyle T$ is represented by an invertible $\displaystyle n \times n$ matrix, so you can also just apply the big theorem (the Invertible Matrix Theorem) to prove it.

6. Originally Posted by Defunkt
$\displaystyle v = a_1v_1 + ... + a_kv_k$

$\displaystyle T(v) = T(a_1v_1 + ... + a_kv_k) = a_1T(v_1) + ... + a_kT(v_k)$
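If it helps to see this linearity argument concretely, here is a small numeric sketch (the matrix $\displaystyle A$ and basis below are made-up illustrations, not part of the problem): for an invertible map, the images of the basis vectors satisfy the same linear-combination identity and remain independent.

```python
import numpy as np

# Hypothetical example: an invertible map T on R^2, represented by matrix A.
A = np.array([[2.0, 1.0],
              [1.0, 1.0]])  # det = 1, so A is invertible

# A basis {v1, v2} for V = R^2.
v1 = np.array([1.0, 0.0])
v2 = np.array([0.0, 1.0])

# Their images w_i = T(v_i).
w1 = A @ v1
w2 = A @ v2

# Linearity: T(a1*v1 + a2*v2) = a1*w1 + a2*w2 for any coefficients.
a1, a2 = 3.0, -2.0
lhs = A @ (a1 * v1 + a2 * v2)
rhs = a1 * w1 + a2 * w2
assert np.allclose(lhs, rhs)

# {w1, w2} is linearly independent: the matrix with columns w1, w2 has full rank.
rank = np.linalg.matrix_rank(np.column_stack([w1, w2]))
print(rank)  # 2
```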
Okay, that solves the first part. But I still need an example to show that this isn't necessarily true, provided that $\displaystyle T$ does not have an inverse.

Honestly, I find this whole question to be pretty obscure.

7. Originally Posted by Runty
Okay, that solves the first part. But I still need an example to show that this isn't necessarily true, provided that $\displaystyle T$ does not have an inverse.

Honestly, I find this whole question to be pretty obscure.
Take any transformation that is not invertible; for example, $\displaystyle T:\mathbb{R}^2 \to \mathbb{R}^2$ defined by $\displaystyle T((x,y)) = (x,0)$. Since T is not invertible, $\displaystyle Ker T \neq \{0\}$. In fact, $\displaystyle Ker T = \{(x,y) \in \mathbb{R}^2 : x = 0\}$.

Then, $\displaystyle Ker T$ is spanned by $\displaystyle (0, 1)$; however, $\displaystyle T(0,1) = (0,0)$, and $\displaystyle \{(0,0)\}$ is not a basis, since the zero vector can never be part of a linearly independent set.

This will work for any transformation that is not invertible:
Since it is not invertible, $\displaystyle Ker T \neq \{0\}$, but for any $\displaystyle w \in Ker T$, $\displaystyle Tw = 0$; therefore any basis of $\displaystyle Ker T$ is mapped to the zero vector, which cannot belong to a basis.
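To make the counterexample concrete, here is a quick numeric check of the projection above, written as a matrix (representing $\displaystyle T$ by a matrix is an assumption of this sketch, not part of the original post):

```python
import numpy as np

# The non-invertible projection T(x, y) = (x, 0) as a matrix.
P = np.array([[1.0, 0.0],
              [0.0, 0.0]])

print(np.linalg.matrix_rank(P))  # 1, so P is not invertible

# (0, 1) is a basis for Ker T, but its image is the zero vector...
v = np.array([0.0, 1.0])
w = P @ v
print(w)  # [0. 0.]

# ...and a set containing only the zero vector is linearly dependent,
# so the image of this basis cannot be a basis.
assert np.linalg.matrix_rank(w.reshape(-1, 1)) == 0
```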

8. Originally Posted by Defunkt
Take any transformation that is not invertible; for example, $\displaystyle T:\mathbb{R}^2 \to \mathbb{R}^2$ defined by $\displaystyle T((x,y)) = (x,0)$. Since T is not invertible, $\displaystyle Ker T \neq \{0\}$. In fact, $\displaystyle Ker T = \{(x,y) \in \mathbb{R}^2 : x = 0\}$.

Then, $\displaystyle Ker T$ is spanned by $\displaystyle (0, 1)$, however $\displaystyle T(0,1) = (0,0)$ which is not a base.

This will work for any transformation that is not invertible:
Since it is not invertible, $\displaystyle Ker T \neq \{0\}$ but for any $\displaystyle w \in Ker T$, $\displaystyle Tw = 0$, and therefore the image of any base of $\displaystyle Ker T$ will be mapped to the zero vector.
By $\displaystyle KerT$, do you mean determinant? I've never seen the term $\displaystyle Ker$ used before.

9. $\displaystyle c_1\vec{w}_1+\cdots +c_k\vec{w}_k=0\rightarrow c_1T(\vec{v}_1)+\cdots +c_kT(\vec{v}_k)=0$

$\displaystyle \rightarrow c_1T(\vec{v}_1)+\cdots +c_kT(\vec{v}_k)=0 \rightarrow T(c_1\vec{v}_1)+\cdots +T(c_k\vec{v}_k)=0 \rightarrow T(c_1\vec{v}_1+\cdots +c_k\vec{v}_k)=0$

$\displaystyle \rightarrow A(c_1\vec{v}_1+\cdots +c_k\vec{v}_k)=0 \rightarrow c_1\vec{v}_1+\cdots +c_k\vec{v}_k=A^{-1}0=0$ (where $\displaystyle A$ is the invertible matrix representing $\displaystyle T$)

$\displaystyle \rightarrow c_1=\cdots =c_k=0 \rightarrow \{\vec{w}_1,\cdots,\vec{w}_k\}$ is linearly independent

$\displaystyle \dim \operatorname{span}\{\vec{w}_1,\cdots,\vec{w}_k\} = k = \dim V = \dim T(V)~ \therefore$ it is a basis of $\displaystyle T(V)$
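As a sanity check of this argument in the case $\displaystyle k < n$, here is a numeric sketch (the matrix and subspace below are hypothetical examples chosen for illustration):

```python
import numpy as np

# A made-up invertible map T on R^3, as a matrix A.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0]])
assert abs(np.linalg.det(A)) > 1e-12  # A is invertible

# Columns {v1, v2}: a basis for a 2-dimensional subspace V of R^3.
V = np.column_stack([np.array([1.0, 0.0, 1.0]),
                     np.array([0.0, 1.0, 1.0])])

W = A @ V  # columns are w1 = T(v1), w2 = T(v2)

# The images stay linearly independent, so dim span{w1, w2} = 2 = dim V.
print(np.linalg.matrix_rank(W))  # 2
```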

10. Originally Posted by Runty
By $\displaystyle KerT$, do you mean determinant? I've never seen the term $\displaystyle Ker$ used before.
"ker(T)" is the "kernel" of T, also called the "null space" of T. It is the subspace of all vectors, v, such that Tv= 0.

If T is invertible, then Tv= 0 gives $\displaystyle T^{-1}T(v)= T^{-1}(0)$ or v= 0. That is, if T is invertible, its kernel (null space) consists only of the 0 vector.

In fact, the converse also holds: if the 0 vector is the only vector in the kernel of T, then T is invertible (for linear maps from $\displaystyle R^n$ to $\displaystyle R^n$, injectivity implies invertibility).

"null space" is used exclusively in linear algebra. "kernel" of an operator is also used in group theory, ring theory, etc.