# Thread: Prove that the inverse image of V is a subspace in X.

1. ## Prove that the inverse image of V is a subspace in X.

Prove that if $f$ is a linear transformation from a vector space $X$ to a vector space $Y$, then for any subspace $V$ in $Y$, $f^{-1}[V]$ is a subspace in $X$.

2. Originally Posted by mathwizard
Prove that if $f$ is a linear transformation from a vector space $X$ to a vector space $Y$, then for any subspace $V$ in $Y$, $f^{-1}[V]$ is a subspace in $X$.
Let $\bold{x},\bold{y}\in f^{-1}[V]$ and $k\in F$, the base field for the vector spaces. By definition this means $f(\bold{x}),f(\bold{y}) \in V$. Since $V$ is a subspace, $f(\bold{x})+f(\bold{y}) \in V$, and since $f$ is a linear transformation, $f(\bold{x}+\bold{y}) = f(\bold{x})+f(\bold{y}) \in V$, which means $\bold{x}+\bold{y} \in f^{-1}[V]$. Thus, $f^{-1}[V]$ is closed under vector addition. Likewise, since $f$ is linear, $f(k\bold{x}) = kf(\bold{x}) \in V$, which means $k\bold{x}\in f^{-1}[V]$. Thus, $f^{-1}[V]$ is closed under scalar multiplication. All the other properties for being a vector space are inherited because $f^{-1}[V]\subseteq X$. Thus, $f^{-1}[V]$ is a subspace of $X$.
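The closure argument above can also be checked numerically on a concrete case; here is a minimal sketch in which the linear map $f\colon \mathbb{R}^3 \to \mathbb{R}^2$, its matrix $A$, and the choice $V = \operatorname{span}\{(1,0)\}$ are all hypothetical, picked only for illustration:

```python
import numpy as np

# Hypothetical linear map f: R^3 -> R^2, represented by the matrix A.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 0.0, 1.0]])

def f(x):
    return A @ x

def in_V(w, tol=1e-9):
    # V = span{(1, 0)} in R^2: membership means the second coordinate is 0.
    return abs(w[1]) < tol

# Two vectors in f^{-1}[V]: their images land in V.
x = np.array([1.0, 0.0, 0.0])
y = np.array([0.0, 1.0, 0.0])
assert in_V(f(x)) and in_V(f(y))

# Closure under addition: f(x + y) = f(x) + f(y) lies in V.
assert in_V(f(x + y))

# Closure under scalar multiplication: f(k x) = k f(x) lies in V.
k = 3.5
assert in_V(f(k * x))

# The zero vector of R^3 is in f^{-1}[V], since f(0) = 0 is in V.
assert in_V(f(np.zeros(3)))
```

A check like this is of course no substitute for the proof, but it makes the two closure properties concrete for one specific map and subspace.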

3. Originally Posted by ThePerfectHacker
Let $\bold{x},\bold{y}\in f^{-1}[V]$ and $k\in F$, the base field for the vector spaces. By definition this means $f(\bold{x}),f(\bold{y}) \in V$. Since $V$ is a subspace, $f(\bold{x})+f(\bold{y}) \in V$, and since $f$ is a linear transformation, $f(\bold{x}+\bold{y}) = f(\bold{x})+f(\bold{y}) \in V$, which means $\bold{x}+\bold{y} \in f^{-1}[V]$. Thus, $f^{-1}[V]$ is closed under vector addition. Likewise, since $f$ is linear, $f(k\bold{x}) = kf(\bold{x}) \in V$, which means $k\bold{x}\in f^{-1}[V]$. Thus, $f^{-1}[V]$ is closed under scalar multiplication. All the other properties for being a vector space are inherited because $f^{-1}[V]\subseteq X$. Thus, $f^{-1}[V]$ is a subspace of $X$.
Thanks for your input; your proof confirms my belief that the same proof in my linear algebra textbook is incorrect; it mixed up $\bold{x}$ & $\bold{y}$ with $f(\bold{x})$ & $f(\bold{y})$ (and I wasted a long time trying to make sense of that flawed proof).

PS: I think there's one part where your proof is incomplete: it hasn't shown that the zero vector is in $f^{-1}[V]$.

4. Originally Posted by mathwizard
PS: I think there's one part where your proof is incomplete: it hasn't shown that the zero vector is in $f^{-1}[V]$.
Yes it has. If $\bold{x}\in f^{-1}[V]$ we proved $k\bold{x} \in f^{-1}[V]$. Now let $k=0$.

5. Originally Posted by ThePerfectHacker
Yes it has. If $\bold{x}\in f^{-1}[V]$ we proved $k\bold{x} \in f^{-1}[V]$. Now let $k=0$.
By that logic, why is there a separate condition that a subspace must contain the zero vector, when one of the other two conditions (namely, closure under scalar multiplication) already takes care of it (by letting $k=0$ as you said)?

6. Originally Posted by mathwizard
why is there a separate condition that a subspace must contain the zero vector, when one of the other two conditions (namely, closure under scalar multiplication) already takes care of it (by letting $k=0$ as you said)?
The condition is necessary in order to rule out the possibility of the set being empty (the empty set is by convention not considered to be a subspace). If you're trying to show that a set $W$ is a subspace, then you can only make use of the implication $\bold{x}\in W\Rightarrow 0\bold{x}\in W\Rightarrow \bold{0}\in W$ if you know that there exists at least one vector $\bold{x}$ in $W$.
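For the preimage in this thread, that nonemptiness worry never arises: every subspace $V$ of $Y$ contains $\bold{0}_Y$, and linearity forces $f$ to send zero to zero, so one can write the zero-vector step out fully as

$$f(\bold{0}_X) = f(0\cdot\bold{0}_X) = 0\cdot f(\bold{0}_X) = \bold{0}_Y \in V \quad\Longrightarrow\quad \bold{0}_X \in f^{-1}[V] \neq \emptyset.$$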