Math Help - Direct Products/Direct Sums

  1. #1
    Newbie
    Joined
    Mar 2012
    From
    Dresden
    Posts
    24
    Thanks
    1

    Direct Products/Direct Sums

Hi people,
I am struggling with the concepts mentioned in the title of this thread.

I thought I knew what direct sums and direct products are, but situations keep coming up where I am confused about how these concepts are used. Let me start with direct sums.

If one has two subspaces, say  V_1 and  V_2 , of a vector space V, then one can form the direct sum  V_1 \oplus V_2 , which is a larger subspace of V, containing all sums of vectors from the two subspaces. This is completely clear to me.
However, in many cases one does not a priori have a "big" vector space V in which the subspaces are embedded. One only has two vector spaces W and U, say, possibly of different dimensions, and then forms the space  W \oplus U . Here is my question: what does that mean then? Do I have to think of a "big" vector space V in which these two are embedded beforehand? But then there would be more than one way of combining the two vector spaces W and U by direct sum (with more or fewer "zeros" in the vectors, if one thinks of it that way).
In an infinite-dimensional Hilbert space  \mathcal{H} , which is actually more abstract, the concept is, curiously, perfectly clear to me again.

Now for direct products. I always thought of direct products as some sort of tensor product, that is, if  v \in V and  u \in U, then
 v \otimes u \in V \times U . One can represent this as a tuple (v,u) (in which the order matters). One can also define some sort of operation. Let's consider two groups  G_1 and  G_2 . Then one might form the direct product  G_1 \otimes G_2 with
a direct product group operation defined by  (g_1,g_2) \cdot (h_1,h_2) := (g_1 h_1, g_2 h_2) with  g_i, h_i \in G_i.

Applying this to vector spaces, for example, one might take the operation between g and h to be g + h.
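To make sure I am at least being consistent about this componentwise operation, here is a tiny sketch of my own (the names are made up, nothing standard):

[code]
# componentwise operation on pairs: (g1, g2) * (h1, h2) = (g1 h1, g2 h2)
def pair_op(op1, op2):
    """Return the componentwise operation built from op1 (on G1) and op2 (on G2)."""
    return lambda g, h: (op1(g[0], h[0]), op2(g[1], h[1]))

# for vector spaces (abelian groups under +), both component operations are addition:
add = lambda a, b: a + b
vec_op = pair_op(add, add)

print(vec_op((1, 2), (3, 4)))  # (4, 6)
[/code]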
However, there is once again a point which confuses me: in quantum mechanics, one considers states of a system, say |n> and |m>, living in a Hilbert space  \mathcal{H}. One can form the product  |n> \otimes |m> \in \mathcal{H} \otimes \mathcal{H} , or written shortly,  |n>|m>  (yes, one does such things in quantum mechanics).
But then, sometimes one treats  |n>|m> as an ordinary product of functions (states) in the Hilbert space, AND AT THE SAME TIME as a direct product, living in the product Hilbert space!
One possible explanation for me would be that one can once again regard the states as a tuple (|n>, |m>), equipped with the multiplication of functions. But firstly this would be arbitrary, and secondly, the question would arise as to what the difference between the Cartesian product  "\times" and the direct product  "\otimes" would then be.

I'd be thankful if someone could clarify the above concepts for me and maybe give a reference where this is explained adequately.

    Thanks

  2. #2
    MHF Contributor

    Joined
    Mar 2011
    From
    Tejas
    Posts
    3,391
    Thanks
    758

    Re: Direct Products/Direct Sums

    first of all, the tensor product "⊗" is NOT a direct product. the elementary tensor u⊗v does NOT live in UxV, it lives in U⊗V, which is actually formed this way:

you take UxV as a SET (yes, it's a big set), and you consider every pair (u,v) to be a basis element (yes, it's a big basis), and form every possible finite linear combination (over the field F) of these pairs. this gives you "the free vector space over UxV", F(UxV).

    for example, an element of F(RxR) looks like this: 2(1,0) + 3(2,-2) + 4(3,-2). we can't simplify this any further.
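a throwaway sketch of mine, not part of the construction: one convenient way to encode such a formal sum in code is a finite dict from basis pairs to coefficients:

[code]
# the element 2(1,0) + 3(2,-2) + 4(3,-2) of F(R x R) as a dict:
element = {(1, 0): 2.0, (2, -2): 3.0, (3, -2): 4.0}

def free_add(x, y):
    """add two formal linear combinations coefficient-by-coefficient."""
    out = dict(x)
    for pair, coeff in y.items():
        out[pair] = out.get(pair, 0.0) + coeff
    return out

# distinct pairs never merge -- (1,0) and (2,0) are different basis elements,
# which is exactly why the element above can't be simplified any further.
print(free_add(element, {(1, 0): 1.0}))  # {(1, 0): 3.0, (2, -2): 3.0, (3, -2): 4.0}
[/code]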

but we don't actually want this vector space, it's "too big". so we're going to form a quotient space, by modding out a subspace W. we specify W by a spanning set, consisting of all elements of the form:

    (u,v+v') - (u,v) - (u,v')
    (u+u',v) - (u,v) - (u',v)
    (cu,v) - c(u,v)
    (u,cv) - c(u,v)

    "modding these out" effectively sets all of these to 0 in the quotient space F(UxV)/W. an element (u,v) + W is written u⊗v.

    then the above rules become:

    u⊗(v+v') = u⊗v + u⊗v'
    (u+u')⊗v = u⊗v + u'⊗v
c(u⊗v) = (cu)⊗v = u⊗(cv) ... in other words, ⊗ is bilinear on UxV.
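if you want to see these rules numerically: the kronecker product np.kron is one concrete model of ⊗ on coordinate vectors (just an illustration of mine, not the construction itself):

[code]
import numpy as np

u, u2 = np.array([1., 2.]), np.array([0., 5.])
v, v2 = np.array([3., -1., 4.]), np.array([2., 2., 0.])
c = 7.0

assert np.allclose(np.kron(u, v + v2), np.kron(u, v) + np.kron(u, v2))  # u⊗(v+v')
assert np.allclose(np.kron(u + u2, v), np.kron(u, v) + np.kron(u2, v))  # (u+u')⊗v
assert np.allclose(c * np.kron(u, v), np.kron(c * u, v))                # (cu)⊗v
assert np.allclose(c * np.kron(u, v), np.kron(u, c * v))                # u⊗(cv)
print("bilinearity checks pass")
[/code]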

    so let's look at 2(1,0) + 3(2,-2) + 4(3,-2) as it occurs in the tensor product R⊗R:

it becomes 2(1⊗0) + 3(2⊗-2) + 4(3⊗-2) = 2⊗0 + 6⊗-2 + 12⊗-2 (taking c(u⊗v) = (cu)⊗v for all 3 terms)

= 2⊗0 + (6+12)⊗-2 = 2⊗0 + 18⊗-2 = 2⊗(9*0) + 18⊗-2 = 18⊗0 + 18⊗-2 = 18⊗(0 + -2) = 18⊗-2 = -2(18⊗1) = -36(1⊗1).
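quick sanity check (mine, not in the original derivation): every term c(a⊗b) collapses to (c*a*b)(1⊗1) by the rules above, so plain arithmetic should reproduce the coefficient -36:

[code]
# (coefficient, a, b) for each term of 2(1⊗0) + 3(2⊗-2) + 4(3⊗-2):
terms = [(2, 1, 0), (3, 2, -2), (4, 3, -2)]
print(sum(c * a * b for c, a, b in terms))  # -36, matching -36(1⊗1)
[/code]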

    in fact, for any two real numbers a,b, it is easy to see that (a⊗b) = ab(1⊗1). now {1} is a basis for R, so {1⊗1} is a basis for R⊗R, which is thus a 1-dimensional vector space over R, and thus isomorphic to R.

that is: R⊗R ≅ R (as vector spaces). but note that we can do something in the vector space R⊗R we can't do ordinarily in a vector space: we can "multiply vectors". in general, U⊗V can be thought of as the home of "a generic bilinear map on UxV" (if U = V = F, this becomes ordinary field multiplication). if U has a basis {u1,...,um} and V has a basis {v1,...,vn}, then U⊗V has the basis {ui⊗vj}, thus U⊗V has dimension mn. in the case where U = V* (the dual space of V), it is natural to think of v*⊗v' as the rank-one nxn matrix v'v^T (an outer product).
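here is that dimension count, and the outer-product picture, made concrete in coordinates (again just my sketch):

[code]
import numpy as np

m, n = 3, 4
e, f = np.eye(m), np.eye(n)  # standard bases of R^m and R^n

# the m*n vectors e_i ⊗ f_j form the standard basis of R^(m*n):
basis = [np.kron(e[i], f[j]) for i in range(m) for j in range(n)]
print(len(basis), basis[0].size)  # 12 12

# the U = V* picture: v*⊗v' acts as the rank-one matrix v'v^T:
v, vp = np.array([1., 2.]), np.array([3., 4.])
print(np.outer(vp, v))  # 2x2 outer product, sending w to (v . w) v'
[/code]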

    in a naive way, U⊗V can be thought of as "multiplying two vector spaces together".

    *****************

the direct sum is an entirely different sort of animal. the only similarity is that we are building a bigger space out of U and V. if U and V are not given inside some ambient space, we form the *external* direct sum: the set of pairs (u,v) with componentwise addition and scaling -- no "big" space has to be chosen beforehand. if U and V DO lie in some "bigger space" W, it is important that they be disjoint (except for 0, which is in any vector space) for the internal sum to be direct: if U and V have non-trivial intersection, we only get U+V (because u+v may equal u'+v' even if u ≠ u' and v ≠ v'). note that dim(U⊕V) = dim(U) + dim(V). so this is sort of a way to "add two vector spaces together".
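in coordinates, the external direct sum is literally concatenation, which makes the dimension count obvious (sketch of mine):

[code]
import numpy as np

def direct_sum(u, v):
    """external direct sum in coordinates: just concatenate."""
    return np.concatenate([u, v])

u = np.array([1., 2.])        # dim U = 2
v = np.array([3., 4., 5.])    # dim V = 3
print(direct_sum(u, v))       # [1. 2. 3. 4. 5.]
print(direct_sum(u, v).size)  # 5 = dim U + dim V
[/code]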

    you might find it instructive to prove that:

    U⊗(V⊕W) ≅ (U⊗V)⊕(U⊗W)
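a coordinate computation (my sketch, not a proof) at least makes the two sides visibly match:

[code]
import numpy as np

u = np.array([1., 2.])          # dim U = 2
v = np.array([3., -1., 4.])     # dim V = 3
w = np.array([5., 6.])          # dim W = 2

left  = np.kron(u, np.concatenate([v, w]))              # U⊗(V⊕W)
right = np.concatenate([np.kron(u, v), np.kron(u, w)])  # (U⊗V)⊕(U⊗W)

# both sides have dimension 2*(3+2) = 2*3 + 2*2 = 10, and their entries
# agree up to a fixed permutation of coordinates -- that permutation is
# exactly the isomorphism above.
print(left.size, right.size)          # 10 10
print(sorted(left) == sorted(right))  # True
[/code]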

for a finite number of "terms" (only taking the direct sum of finitely many vector spaces) the direct sum *is* the direct product on the underlying (abelian) groups. for infinitely many copies of a vector space, there is a slight wrinkle:

the direct sum consists of all tuples (vi) (where i ranges over any indexing set I) in which all but finitely many vi = 0.

the direct product consists of all tuples (vi), where no restriction is placed on the vi.

the difference is analogous to the distinction between polynomials (which form a direct sum) and power series (which form a direct product). the vector space of all real power series can be identified with R^N, the space of all functions from the natural numbers to the reals, whereas the polynomials are (can be identified with) the subspace of all such functions with finite support.
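the finite-support distinction is easy to express in code, too (one last sketch of mine):

[code]
# direct sum = finite support: a polynomial keeps only finitely many
# nonzero coefficients, here as a dict {degree: coefficient}.
poly = {0: 1.0, 3: -2.0}                 # 1 - 2x^3

# direct product = no restriction: a power series is ANY function N -> R.
geometric = lambda n: 1.0                # 1 + x + x^2 + ... = 1/(1-x)

# every polynomial is a power series with finite support:
poly_as_series = lambda p: (lambda n: p.get(n, 0.0))

s = poly_as_series(poly)
print([s(n) for n in range(5)])          # [1.0, 0.0, 0.0, -2.0, 0.0]
print([geometric(n) for n in range(5)])  # [1.0, 1.0, 1.0, 1.0, 1.0]
[/code]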
