linear algebra problem set 4

Hello. I am having problems with our 4th assignment in linear algebra. It's like I'm completely clueless about what to do. I hope you can help me prove these.

(1) Let *V* be a vector space over a field *F*. Let *W* be a subspace of *V*. Prove that dim *V* = dim *W* + dim (*V/W*) using the definition of a basis.

(2) Let *U*, *V*, *W* be finite-dimensional vector spaces over a field *F*.

Let dim *U*=n; dim *V*=m; and dim *W*=p.

Let *f: U->U*; *g: U->U*; *h: U->V*; and *k: V->W* be linear transformations.

Show that:

a) rank of (*f+g*) <= rank of *f* + rank of *g*

b) rank of (*kh*) <= min{rank of *k*, rank of *h*}

c) nullity of (*kh*) <= nullity of *h* + nullity of *k*

d) rank of *f* + rank of *g* - n <= rank of (*fg*) <= min{rank of *f*, rank of *g*}

e) nullity of (*hg*) >= max {nullity of *g*, nullity of *h*}

Re: linear algebra problem set 4

(1). start with a basis of W. extend this to a basis of V.

so you should have something like:

B = {w_{1},....,w_{k},v_{1},...,v_{n-k}}, where dim(W) = k, and dim(V) = n.

so what you want to prove is dim(V/W) = n-k. well, {v_{1},...,v_{n-k}} is a set of n-k vectors, so if:

{v_{1}+W,....,v_{n-k}+W} is a basis for V/W, we're good to go.

to prove linear independence, you need to show that if there are c_{1},...,c_{n-k} with:

c_{1}(v_{1}+W) +...+ c_{n-k}(v_{n-k} + W) = W

then c_{1} =...= c_{n-k} = 0.

show that the above means that c_{1}v_{1} +...+ c_{n-k}v_{n-k} is in W.

now assume that not all c_{j} = 0, but we have c_{1}v_{1} +...+ c_{n-k}v_{n-k} in W.

then (since the w_{i} form a basis for W) we have:

c_{1}v_{1} +...+ c_{n-k}v_{n-k} = b_{1}w_{1} +...+ b_{k}w_{k},

so -b_{1}w_{1} -...- b_{k}w_{k} + c_{1}v_{1} +...+ c_{n-k}v_{n-k} = 0.

why does this contradict the linear independence of B?

to show that {v_{1}+W,....,v_{n-k}+W} spans V/W, pick any u+W in V/W.

write u = a_{1}w_{1} +...+ a_{k}w_{k} + a_{k+1}v_{1}+...+a_{n}v_{n-k}

(which we can do since B is a basis for V).

note that if u = w + v, where w is in W, then u + W = (w + v) + W = (v + w) + W = (v + W) + (w + W) = v + W + W = v + W

(W = 0 + W is the identity of V/W, and w + W = W if and only if w - 0 = w is in W).
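by the way, if numbers help, here's a small numeric illustration of dim(V) = dim(W) + dim(V/W) (a sketch in numpy, with a made-up W inside R^4; representing the quotient map by a matrix whose null space is W is an extra device of this example, not part of the problem):

```python
import numpy as np

# numeric illustration (not a proof) of dim V = dim W + dim(V/W),
# with V = R^4 and W spanned by two arbitrarily chosen vectors
W_rows = np.array([[1., 0., 1., 0.],
                   [0., 1., 0., 1.]])        # rows span W
n = W_rows.shape[1]                          # dim V = 4
k = np.linalg.matrix_rank(W_rows)            # dim W = 2

# the quotient map q : V -> V/W can be represented by any matrix Q
# whose null space is exactly W; one choice: rows spanning the
# orthogonal complement of W, read off from the SVD
_, _, Vt = np.linalg.svd(W_rows)
Q = Vt[k:]                                   # (n - k) x n, null space = W

dim_quotient = np.linalg.matrix_rank(Q)      # dim(V/W) = rank of q
print(k + dim_quotient == n)                 # dim W + dim(V/W) == dim V
```

the rows of Q play the role of coordinates on V/W: two vectors have the same Q-image exactly when they differ by an element of W, just like u + W = v + W exactly when u - v is in W.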

(2). these sorts of exercises are tedious, and involve working through what rank(f) means: rank(f) = dim(f(U)) for a linear map f:U→V (f(U) is also called im(f)).

so if rank(f) = k, for example, this means we have {u_{1},...,u_{k}} with B = {f(u_{1}),...,f(u_{k})} a basis for f(U).

and if rank(g) = m, we have {v_{1},...,v_{m}} with B' = {g(v_{1}),...,g(v_{m})} a basis for g(U).

show that every vector of (f+g)(U) lies in the span of B U B' (it does: (f+g)(u) = f(u) + g(u), and f(u) is a combination of B, g(u) a combination of B').

hence rank(f+g) = dim((f+g)(U)) ≤ dim(span(B U B')) ≤ |B U B'| ≤ |B| + |B'| = dim(f(U)) + dim(g(U)) = rank(f) + rank(g).
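you can also see this containment numerically (just a sanity check on sample matrices, not a proof): the columns of F+G lie in the column space of the block matrix [F | G], so rank(F+G) is squeezed between the two sides of the inequality.

```python
import numpy as np

# sample matrices standing in for f and g (arbitrary choices)
rng = np.random.default_rng(1)
F = rng.integers(-2, 3, (4, 4))
G = rng.integers(-2, 3, (4, 4))

rank = np.linalg.matrix_rank
lhs = rank(F + G)                    # rank(f+g)
mid = rank(np.hstack([F, G]))        # dim of the span of B U B'
rhs = rank(F) + rank(G)              # |B| + |B'|
print(lhs <= mid <= rhs)             # mirrors the chain of inequalities
```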

i don't want to spoil all your fun, so try working through the rest and let us know where you get stuck.
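before proving (a)-(e), it can help to sanity-check all five on random matrices (again, this proves nothing; the sizes n = 5, m = 4, p = 3 are arbitrary choices of mine):

```python
import numpy as np

# numeric sanity check of (a)-(e) -- NOT a proof, just a way to see
# the inequalities in action on one random sample
rng = np.random.default_rng(0)
n, m, p = 5, 4, 3

F = rng.integers(-2, 3, (n, n))   # f : U -> U
G = rng.integers(-2, 3, (n, n))   # g : U -> U
H = rng.integers(-2, 3, (m, n))   # h : U -> V
K = rng.integers(-2, 3, (p, m))   # k : V -> W

rank = np.linalg.matrix_rank
def nullity(A):                   # nullity = dim(domain) - rank
    return A.shape[1] - rank(A)

assert rank(F + G) <= rank(F) + rank(G)                               # (a)
assert rank(K @ H) <= min(rank(K), rank(H))                           # (b)
assert nullity(K @ H) <= nullity(H) + nullity(K)                      # (c)
assert rank(F) + rank(G) - n <= rank(F @ G) <= min(rank(F), rank(G))  # (d)
assert nullity(H @ G) >= max(nullity(G), nullity(H))                  # (e)
print("all five inequalities hold for this sample")
```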

Re: linear algebra problem set 4

okie. thanks for taking a crack at it. i really appreciate it a lot. i'll review this and i hope i get it. thanks again. GOD bless.

Re: linear algebra problem set 4

sir, hope you don't mind. i don't have any idea how to answer the other items. would they go the same way as 2.a?

Re: linear algebra problem set 4

It would be better for **you** to at least **try** to do the problems yourself and then show us what you have done. That way we can specifically address the points where you have difficulty. In particular, what are the **definitions** of all of these terms: 'linear transformation', 'rank', 'nullity', etc.

Re: linear algebra problem set 4

ok, sorry for that. i am really having a hard time understanding the concepts that are new to me: vector spaces, linear transformations, rank, nullity, etc., which we weren't able to tackle in my bachelor's degree. i'm reading some topics about them, but still i'm having difficulties learning them. this is why i can't answer or even start the proofs, which is one big dilemma for me.

anyway, here's my proof in number 1, the way i understood the response here and some reading materials i gathered from the internet:

Let *Y* be a basis for *W*. *Y* is a linearly independent subset of *V*, which can be extended to a basis *X* for *V*.

Fact: <*Y*> = W, *Y* is linearly independent. => dim W = |*Y*| = m

WTS: if dim *V/W* = n, then dim *V* = m+n, i.e., dim *V* = dim *W* + dim *V/W*.

First, show that *A* = {a+W|a element of *X/Y*} is a basis for *V/W*.

That is, (i) *A* is linearly independent and (ii) <*A*> = *V/W*

For (i), let a_{1},...,a_{n} be distinct elements of *X/Y* and suppose r_{1},...,r_{n} are elements of *F* such that

r_{1} (a_{1}+*W*) + ... + r_{n} (a_{n}+*W*) = 0 in *V/W*. But, 0_{V/W} = 0_{V} + *W*, so

so r_{1} (a_{1}+*W*) + ... + r_{n} (a_{n}+*W*) = 0_{V} + *W*

(r_{1} a_{1} + ... + r_{n} a_{n}) + *W* = 0_{V} + W

Hence, (r_{1} a_{1} + ... + r_{n} a_{n}) is an element of *W*.

Since *Y* is a basis for *W*, r_{1} a_{1} + ... + r_{n} a_{n} = s_{1} y_{1} + ... + s_{m} y_{m} for some s_{1},...,s_{m} elements of *F*.

r_{1} a_{1} + ... + r_{n} a_{n} - s_{1} y_{1} - ... - s_{m} y_{m} = 0

Since *X* is a basis for *V*, *X* is linearly independent, and the a_{i} and y_{j} are distinct elements of *X*; hence r_{1} = ... = r_{n} = s_{1} = ... = s_{m} = 0. In particular, r_{1} = ... = r_{n} = 0.

Thus, A is linearly independent.

For (ii), let *a+W* be an element of *V/W*.

Since *X* is a basis for *V*, <*X*>=*V*. For *a* in *V*, there exist some r_{1},...,r_{n},s_{1},...,s_{m} in *F* such that

*a* = r_{1} x_{1} + ... + r_{n} x_{n} + s_{1} y_{1} + ... + s_{m} y_{m}

*a+W* = (r_{1} x_{1} + ... + r_{n} x_{n} + s_{1} y_{1} + ... + s_{m} y_{m}) + *W*

= (r_{1} x_{1} + ... + r_{n} x_{n}) + (s_{1} y_{1} + ... + s_{m} y_{m}) + *W*

But s_{1} y_{1} + ... + s_{m} y_{m} is an element of <*Y*> = *W*, so (s_{1} y_{1} + ... + s_{m} y_{m}) + *W* = *W*. Hence

*a+W* = (r_{1} x_{1} + ... + r_{n} x_{n}) + *W*

= r_{1} (x_{1}+*W*) + ... + r_{n} (x_{n}+*W*), so *a+W* is an element of <*A*>.

Thus, <*A*> = *V/W*.

From here, dim *V/W* = n

(by the way, our professor doesn't want to assume that dim *V* , dim *W* and dim *V/W* are finite. so I cannot do any transposition in the theorem.)

Since *X* is the disjoint union of *Y* and *X/Y*, dim *V* = |*X*| = |*Y*| + |*X/Y*| = m+n. So, dim *W* + dim *V/W* = dim *V*.

as for item number 2, sadly, i might not be able to answer it since i don't have time to finish absorbing the concepts it covers. so i just hope i have answered item 1 correctly.

Re: linear algebra problem set 4

you can adapt your proof easily to the infinite-dimensional case (i would use the notation X\Y instead of X/Y: Y is not a subspace of X, so what is meant is a set difference, not a quotient). all you really need to do is show that the set:

B' = {a + W: a is in X\Y} is a basis for V/W. you don't need to say that dim(W) = m, or dim(V/W) = n. any vector in W is a FINITE linear combination of elements in Y.

linear independence of B' is shown by taking FINITE linear combinations of elements of B'.

what your proof then shows is:

dim(W) + dim(V/W) = |Y| + |B'| = |Y| + |X\Y| = |Y U (X\Y)| (since Y and X\Y are disjoint)

= |X| = dim(V) (and no finite-dimensionality is assumed).

***********

basically, the whole idea of a linear mapping is it preserves "linearity". so it takes linear combinations in one space, to the SAME linear combinations in another space. in the target (image) space, however, we may get linear combinations that "are the same". in other words, the image of a basis set, may not be a basis of the image, because some "collapsing" occurs. the vectors that "collapse" (down to 0) are the vectors in the null space of a linear mapping, and the dimension of this null space is a measure of how much shrinkage occurs.

here is a concrete example:

let T:R^{3}→R^{2} be the mapping T((x,y,z)) = (x,y).

first, we verify that T is linear:

now (1,0,0), (0,1,0) and (0,0,1) is a basis for R^{3}. so T is linear if T((a,b,c)) = T(a(1,0,0) + b(0,1,0) + c(0,0,1)) = a(T(1,0,0)) + b(T(0,1,0)) + c(T(0,0,1)).

T(1,0,0) = (1,0)

T(0,1,0) = (0,1)

T(0,0,1) = (0,0), so a(T(1,0,0)) + b(T(0,1,0)) + c(T(0,0,1)) = a(1,0) + b(0,1) + c(0,0) = (a,0) + (0,b) + (0,0) = (a,b), so T is linear.

note that the set {(1,0),(0,1),(0,0)} is not a basis for im(T) = R^{2}, since any set containing the zero vector (0,0) is linearly dependent.

in fact, it is not hard to see that the null space of T = {z(0,0,1) : z in R}. so if two vectors in R^{3} only differ in their 3rd coordinate, T maps them both to the same vector in R^{2}. in other words, even though T is linear, it doesn't preserve linear independence, it "loses one dimension" (the entire z-axis "shrinks to 0").
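here is the same example in matrix form (a small numpy check of the rank, the nullity, and the "collapsing" of vectors that differ only in z):

```python
import numpy as np

# the projection T(x,y,z) = (x,y) as a 2x3 matrix
T = np.array([[1., 0., 0.],
              [0., 1., 0.]])

rank = np.linalg.matrix_rank(T)          # dim(im T) = 2
nullity = T.shape[1] - rank              # dim(ker T) = 1 (the z-axis)
print(rank, nullity)                     # rank-nullity: 2 + 1 = 3 = dim R^3

# two vectors differing only in their 3rd coordinate collapse
# to the same image
u, v = np.array([2., 5., -1.]), np.array([2., 5., 7.])
print(np.array_equal(T @ u, T @ v))      # True
```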

the null space of a linear transformation is always a subspace of its domain. conversely, if we have a subspace U of a vector space V, we can form the linear transformation T: V → V/U given by:

T(v) = v + U

in which case the null space of T is precisely U.

so back to the example above:

we have a natural identification (isomorphism) of R^{2} with R^{3}/ker(T) given by:

(x,y) ↔ (x,y,z) + ker(T)

that is we can write (x,y,z) = (x,y,0) + (0,0,z), and T(0,0,z) = (0,0) (T only cares about the first two coordinates, the last one might as well be 0).

what does this quotient space R^{3}/ker(T) look like? it looks like a bunch of parallel lines (like a stack of straws all bundled together), all parallel to the z-axis (the null space of T *is* the z-axis). to specify "which" straw (line) a point (x,y,z) is on, we look at (x,y) in the xy-plane, for there is a unique line parallel to the z-axis going through (x,y,0), namely:

L = (x,y,0) + t(0,0,1).

********

all of this can be summarized in different ways: the usual way is called the rank-nullity theorem (very important):

if T:V→W is a linear transformation, then

rank(T) + nullity(T) = dim(V)

but an equivalent way to say this is:

if U is a subspace of V, then V ≅ U ⊕ V/U (use the map T:V→V/U, given above).

in particular, if rank(T) = dim(V), then nullity(T) = 0, in which case T preserves linear independence. it turns out that this is the same as saying T is injective.

if rank(T) = dim(W), then T preserves spanning (T takes a basis to a spanning set). it turns out that this is the same thing as saying T is surjective.

you should keep this in mind: linear independence corresponds to injective, spanning corresponds to surjective. the first terms apply to VECTORS, the second terms apply to linear transformations. the concept of basis is what connects the two: we can describe what a linear transformation does, by looking at basis vectors.
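two tiny matrix examples (made up purely for illustration) of this dichotomy:

```python
import numpy as np

rank = np.linalg.matrix_rank

A = np.array([[1., 0.],            # T : R^2 -> R^3
              [0., 1.],
              [1., 1.]])
# rank(A) = 2 = dim(domain), so nullity = 0: A is injective,
# and it preserves linear independence
print(A.shape[1] - rank(A))        # nullity = 0

B = np.array([[1., 0., 2.],        # S : R^3 -> R^2
              [0., 1., 3.]])
# rank(B) = 2 = dim(codomain): B is surjective,
# and the images of a basis of R^3 span R^2
print(rank(B) == B.shape[0])       # True
```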

Re: linear algebra problem set 4

thanks for the assistance. i really appreciate it a lot. :) i'll just have to read some materials for now. i got sick for a while.