first of all, the tensor product "⊗" is NOT a direct product. the elementary tensor u⊗v does NOT live in UxV, it lives in U⊗V, which is actually formed this way:

you take UxV as a SET (yes, it's a big set), and you consider every element to be a basis element (yes, it's a big basis), and form every possible (finite) linear combination (over the field F) of these pairs (u,v). this gives you "the free vector space over UxV", F(UxV).

for example, an element of F(RxR) looks like this: 2(1,0) + 3(2,-2) + 4(3,-2). we can't simplify this any further.
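a minimal sketch of this in code (the names `free_add`, `elem` are my own, nothing standard): a formal linear combination is just a map from basis elements (here, pairs) to coefficients.

```python
# A sketch of the free vector space F(R x R): a formal linear
# combination is a dict mapping basis elements (pairs) to coefficients.

def free_add(x, y):
    """Add two formal linear combinations (dicts: pair -> coefficient)."""
    out = dict(x)
    for basis, coeff in y.items():
        out[basis] = out.get(basis, 0) + coeff
    return {b: c for b, c in out.items() if c != 0}

# the element 2(1,0) + 3(2,-2) + 4(3,-2):
elem = {(1, 0): 2, (2, -2): 3, (3, -2): 4}
# (2,-2) and (3,-2) are DIFFERENT basis elements, so nothing collapses:
print(len(elem))  # 3 independent terms
```

note that addition only ever combines coefficients of the *same* pair, which is exactly why the example above can't be simplified.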

but we don't actually want this vector space, it's "too big". so we're going to form a quotient space, by modding out a subspace, W. we're going to specify W by specifying basis elements, which are all elements of the form:

(u,v+v') - (u,v) - (u,v')

(u+u',v) - (u,v) - (u',v)

(cu,v) - c(u,v)

(u,cv) - c(u,v)

"modding these out" effectively sets all of these to 0 in the quotient space F(UxV)/W. an element (u,v) + W is written u⊗v.

then the above rules become:

u⊗(v+v') = u⊗v + u⊗v'

(u+u')⊗v = u⊗v + u'⊗v

c(u⊗v) = (cu)⊗v = u⊗(cv)....in other words, ⊗ is bilinear on UxV.
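for finite-dimensional spaces with chosen bases, u⊗v can be realized concretely as the Kronecker product, so these three rules can actually be checked numerically (a sketch, using NumPy's `np.kron`; the particular vectors are arbitrary):

```python
import numpy as np

# With bases fixed, u (x) v becomes the Kronecker product np.kron(u, v).
u, up = np.array([1., 2.]), np.array([3., -1.])
v, vp = np.array([0., 5., 1.]), np.array([2., 2., -4.])
c = 7.0

# u (x) (v + v') = u (x) v + u (x) v'
assert np.allclose(np.kron(u, v + vp), np.kron(u, v) + np.kron(u, vp))
# (u + u') (x) v = u (x) v + u' (x) v
assert np.allclose(np.kron(u + up, v), np.kron(u, v) + np.kron(up, v))
# c(u (x) v) = (cu) (x) v = u (x) (cv)
assert np.allclose(c * np.kron(u, v), np.kron(c * u, v))
assert np.allclose(c * np.kron(u, v), np.kron(u, c * v))
```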

so let's look at 2(1,0) + 3(2,-2) + 4(3,-2) as it occurs in the tensor product R⊗R:

it becomes 2(1⊗0) + 3(2⊗-2) + 4(3⊗-2) = 2⊗0 + 6⊗-2 + 12⊗-2 (taking c(u⊗v) = (cu)⊗v for all 3 terms)

= 2⊗0 + (6+12)⊗-2 = 2⊗0 + 18⊗-2 = 2⊗(9*0) + 18⊗-2 = 18⊗0 + 18⊗-2 = 18⊗-2 = -2(18⊗1) = -36(1⊗1).

in fact, for any two real numbers a,b, it is easy to see that (a⊗b) = ab(1⊗1). now {1} is a basis for R, so {1⊗1} is a basis for R⊗R, which is thus a 1-dimensional vector space over R, and thus isomorphic to R.
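since a⊗b corresponds to the number ab under this identification, the long rewrite above can be double-checked by plain arithmetic (`tensor_RR` is just a name I'm using for the identification a⊗b ↦ ab):

```python
# In R (x) R, every elementary tensor a (x) b is ab(1 (x) 1), so we can
# track each element by its coefficient of the basis element 1 (x) 1.

def tensor_RR(a, b):
    """a (x) b in R (x) R, as a multiple of 1 (x) 1."""
    return a * b

total = 2 * tensor_RR(1, 0) + 3 * tensor_RR(2, -2) + 4 * tensor_RR(3, -2)
print(total)  # -36, matching -36(1 (x) 1)
```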

that is: R⊗R ≅ R (as vector spaces). but note that we can do something in the vector space R⊗R we can't do ordinarily in a vector space: we can "multiply vectors". in general U⊗V can be thought of as the target of "the generic (universal) bilinear map on UxV" (if U = V = F, this becomes ordinary field multiplication). if U has a basis {u_{1},...,u_{m}} and V has a basis {v_{1},...,v_{n}} then U⊗V has the basis {u_{i}⊗v_{j}}, thus U⊗V has dimension mn. in the case where U = V* (the dual space of V), writing v* for the functional "dot with v", it is natural to think of v*⊗v' as the nxn rank-one (outer product) matrix v'v^{T}, the matrix sending x to v*(x)v'.
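both claims (dimension mn, and the outer-product picture for V*⊗V) are easy to check numerically; a sketch with NumPy, where the vectors are arbitrary choices of mine:

```python
import numpy as np

# If U has basis {u_1,...,u_m} and V has basis {v_1,...,v_n}, the Kronecker
# products of basis vectors give mn independent elements u_i (x) v_j:
m, n = 2, 3
basis = [np.kron(np.eye(m)[i], np.eye(n)[j]) for i in range(m) for j in range(n)]
assert np.linalg.matrix_rank(np.array(basis)) == m * n  # dimension mn

# For U = V*: v* (x) v' is the rank-one outer-product matrix v'v^T,
# which sends x to v*(x) v':
v = np.array([1., 2., 3.])    # representing the functional v* = <v, .>
vp = np.array([4., 0., -1.])
M = np.outer(vp, v)           # n x n matrix v'v^T
x = np.array([1., 1., 1.])
assert np.allclose(M @ x, (v @ x) * vp)
```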

in a naive way, U⊗V can be thought of as "multiplying two vector spaces together".

*****************

the direct sum is an entirely different sort of animal. the only similarity is that we are building a bigger space out of U and V. for the *internal* direct sum, it is important that U and V be disjoint (except for 0, which lies in every vector space) for this to work: if U and V have non-trivial intersection, and lie in some "bigger space" W, we only get U+V (because u+v may equal u'+v' even if u ≠ u' and v ≠ v'). note that dim(U⊕V) = dim(U) + dim(V). so this is sort of a way to "add two vector spaces together".
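for column vectors the (external) direct sum is nothing more than concatenation, which makes the dimension formula obvious; a minimal sketch:

```python
import numpy as np

# The external direct sum U (+) V consists of pairs (u, v); for column
# vectors this is just concatenation, so dim(U (+) V) = dim U + dim V.
u = np.array([1., 2.])        # u in U = R^2
v = np.array([3., 4., 5.])    # v in V = R^3
uv = np.concatenate([u, v])   # (u, v) in U (+) V = R^5
assert uv.shape[0] == u.shape[0] + v.shape[0]
```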

you might find it instructive to prove that:

U⊗(V⊕W) ≅ (U⊗V)⊕(U⊗W)
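this isomorphism can also be *seen* numerically: with Kronecker products as the tensor product and concatenation as the direct sum, reshaping u⊗(v,w) into an array and splitting its columns recovers u⊗v and u⊗w (a sketch under those identifications; the vectors are mine):

```python
import numpy as np

# A numerical illustration of U (x) (V (+) W) ≅ (U (x) V) (+) (U (x) W):
u = np.array([1., 2.])                 # U = R^2
v = np.array([3., 4., 5.])             # V = R^3
w = np.array([6., 7.])                 # W = R^2
m, n, p = len(u), len(v), len(w)

lhs = np.kron(u, np.concatenate([v, w])).reshape(m, n + p)
# splitting the columns recovers the two summands:
assert np.allclose(lhs[:, :n].ravel(), np.kron(u, v))
assert np.allclose(lhs[:, n:].ravel(), np.kron(u, w))
# and the dimensions agree: m(n+p) = mn + mp
assert m * (n + p) == m * n + m * p
```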

for a finite number of "terms" (only taking the direct sum of finitely many vector spaces) the direct sum *is* the direct product on the underlying (abelian) groups. for "infinite copies" of a vector space, there is a slight wrinkle:

the direct sum consists of all (v_{i}) (where i can belong to any indexing set I) where all but finitely many v_{i}= 0.

the direct product consists of all (v_{i}), where no limitation is placed on the v_{i}.

the difference is analogous to the distinction between polynomials (which form a direct sum), and power series (which form a direct product). the vector space of all real power series can be identified with R^{N}, the space of all functions from the natural numbers to the reals, whereas the polynomials are (can be identified with) the subspace of all such functions with finite support.
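the polynomial/power-series distinction is easy to model: finite support means a finite dict of coefficients suffices, while a power series needs a genuine function on all of N (the names here are my own illustration):

```python
# Polynomials: functions N -> R with FINITE support (a direct sum).
poly = {0: 1.0, 3: -2.0}           # 1 - 2x^3: finitely many nonzero coeffs

# Power series: arbitrary functions N -> R (a direct product).
def geometric(n):
    """Coefficient of x^n in 1/(1-x): ALL of them are nonzero."""
    return 1.0

support = sorted(k for k, c in poly.items() if c != 0)
print(len(support))  # 2 -- finite, so poly really is a polynomial
```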