Infinite Dimensional Cross Product

It has been proved that the cross product of vectors is properly defined only in 3D space, and also in 7D and 8D spaces as specific cases. Besides, there are some other multiplication schemes for vectors, such as the wedge product, the Ersatz cross product, etc., that are not quite the cross product (they do not satisfy the conditions required in 3D space). On the other hand, in the Dirac bra-ket notation for quantum mechanical vectors, operators are introduced as the outer (cross) product of a pair of vectors. But it is not clear whether this definition of cross product is the same as the familiar cross product of vectors, with its known required conditions, or whether it is a specific case like the others. And in principle, does there exist a definition of the "cross product" of two vectors in an infinite dimensional space that satisfies the same conditions as in 3D space? I will gratefully appreciate it if anyone could guide me.

Re: Infinite Dimensional Cross Product

If you are looking at R^n, you would need the product of all the vector lengths to be finite, or more precisely, the sum of the logarithms of the lengths to be finite.

For this you would need a condition stronger than little l squared (l^2), since you would need only finitely many vectors to have length greater than 1 and infinitely many to have length 1 or less.

Re: Infinite Dimensional Cross Product

We know that the inner product of vectors is satisfactorily defined in infinite dimensional Hilbert space, in integral form. But what about the "cross product"? Does there exist a definition of the "cross product" of two vectors in infinite dimensional space, particularly in integral form, that satisfies the same conditions as in 3D space?

Re: Infinite Dimensional Cross Product

in general, the cross product is a product of n-1 vectors in n-dimensional space. this is in stark contrast to the inner product which is a product of 2 vectors in n-dimensional space that yields a scalar (increasing the dimensionality doesn't change the basic configuration). for this reason it is often maintained that the cross product can only be defined for n = 3 (at least as a binary operation).
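a quick numerical illustration of mine (not from the post) of the two defining properties of the binary cross product in n = 3: the result is orthogonal to both factors, and its length obeys the Lagrange identity |u x v|^2 = |u|^2 |v|^2 - <u,v>^2.

```python
import numpy as np

# random real 3-vectors; seed chosen arbitrarily for reproducibility
rng = np.random.default_rng(0)
u, v = rng.standard_normal(3), rng.standard_normal(3)
w = np.cross(u, v)

assert np.isclose(np.dot(w, u), 0.0)   # w is orthogonal to u
assert np.isclose(np.dot(w, v), 0.0)   # w is orthogonal to v

# Lagrange identity: |u x v|^2 = |u|^2 |v|^2 - <u,v>^2
lhs = np.dot(w, w)
rhs = np.dot(u, u) * np.dot(v, v) - np.dot(u, v) ** 2
assert np.isclose(lhs, rhs)
```

these two properties are exactly what fails to have a binary solution in dimensions other than 3 (and, in the relaxed sense mentioned below, 7).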

it is possible to define an exterior product of vectors:

u∧v

which is a map from VxV ---> Λ^2(V).

it turns out that

dim(Λ^2(V)) = n(n-1)/2

if dim(V) = n. if n = 3, then n-1 = 2, the 2's cancel, and we get a vector space of the same dimension as V, that is: 3.

that is, we can regard u∧v as an element of F^{3}, using the basis correspondence:

e2∧e3 <--> e1, e3∧e1 <--> e2, e1∧e2 <--> e3

and with this identification of Λ^2(F^{3}) with F^{3} we get the usual cross-product.
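a sketch of that identification in code (my own illustration, under the assumption that u∧v is represented by the antisymmetric matrix uv^{T} - vu^{T}): for n = 3 that matrix has 3 = n(n-1)/2 independent entries, and reading them off via the basis correspondence recovers the cross product.

```python
import numpy as np

rng = np.random.default_rng(1)
u, v = rng.standard_normal(3), rng.standard_normal(3)

# matrix representation of u wedge v: antisymmetric, so only the
# entries above the diagonal are independent (3 of them when n = 3)
A = np.outer(u, v) - np.outer(v, u)

# basis correspondence: e2^e3 <-> e1, e3^e1 <-> e2, e1^e2 <-> e3
wedge_as_vector = np.array([A[1, 2], A[2, 0], A[0, 1]])

assert np.allclose(wedge_as_vector, np.cross(u, v))
```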

for an infinite-dimensional space, it's not exactly clear how to generalize this, as the definition of the cross-product relies on unique arithmetical properties of 3.

if one relaxes the conditions for a cross-product somewhat, one can obtain a quasi-cross product by considering the non-real part of the product of octonions (hence the 7-dimensional version), but the dimensionality of division rings over R is severely limited. one can also obtain a cross product of 3 vectors in 8 dimensions, which is a special case (and involves more than 2 factors).

in short, the possibilities for cross-products are rather limited.


Re: Infinite Dimensional Cross Product

Thanks for your excellent explanation. In the quantum mechanical discussion of Hilbert space (J. J. Sakurai, Modern Quantum Mechanics, 2nd ed., p. 15) and also some other mathematical references, the operator that, for the ket and bra of:

Attachment 26659

is defined as:

Attachment 26660

is called the "outer product". It is not clear to me how this definition relates to the other familiar outer or cross product definitions in n-dimensional spaces.

Re: Infinite Dimensional Cross Product

regard u,v as nx1 real column vectors. then

<u,v> = u^{T}v (this is a matrix product of size (1xn)x(nx1) = 1x1, a scalar).

we can also form the matrix product:

uv^{T}, which is an nxn matrix. this is the "outer product".

if we regard u,v as complex nx1 matrices, then we have to define the inner product as:

<u,v> = u^{H}v, where u^{H} is the conjugate-transpose. we then also have the outer product of uv^{H}.
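the two matrix products above, checked numerically (an illustration of mine, not code from the thread): u^{H}v is a scalar, uv^{H} is an n x n matrix.

```python
import numpy as np

rng = np.random.default_rng(2)
u = rng.standard_normal(3) + 1j * rng.standard_normal(3)
v = rng.standard_normal(3) + 1j * rng.standard_normal(3)

inner = np.conj(u) @ v            # u^H v: (1xn)(nx1), a scalar
outer = np.outer(u, np.conj(v))   # u v^H: (nx1)(1xn), an n x n matrix

# numpy's vdot conjugates its first argument, so it computes u^H v too
assert np.isclose(inner, np.vdot(u, v))
assert outer.shape == (3, 3)
```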

now in an arbitrary complex inner-product space (where we no longer have the ability to use matrices as representations of vectors and linear maps for the infinite-dimensional case, in general), we want some way to generalize this.

matrices naturally generalize to linear transformations, and linearity can be checked directly on the values, without producing a basis:

L(au+bv) = a(L(u)) + b(L(v)) for all u,v in V, is the linearity condition.

and nx1 column matrices can just be vectors, one does not need a basis to verify the axioms of a vector space.

the question arises, what corresponds to the "transpose" or "conjugate-transpose" of a vector now?

note that considered as a FUNCTION, a 1xn row-vector spits out a number when multiplied by an nx1 matrix.

since matrix multiplication is linear, we can regard the transpose (or conjugate-transpose in the complex case) of a vector as a function V--->F.

namely u^{T}(v) = u^{T}v (or u^{H}v). in this case, we call u^{T} (or u^{H}) the DUAL vector to u, and denote it like so: u*.

we then (in the finite-dimensional case) get a vector space isomorphic to V, V*, the dual space of linear functionals on V.

sometimes you see this written like so:

u* = <u,__> which means that u*(v) = <u,v>.

given an inner product <_,_> it can be shown that in the finite-dimensional case, EVERY linear functional in V* arises in such a way.

so an inner product <u,v> can be seen as a combination of the linear functional u* = <u,_> and the vector v.

this gives us a way to regard the (conjugate-) transpose of a vector: as an element of the dual space.
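one way to picture this in code (my own sketch): instead of a conjugate-transposed row, the dual vector u* is literally a function V ---> F, and its linearity can be checked directly on values.

```python
import numpy as np

rng = np.random.default_rng(3)
u = rng.standard_normal(3) + 1j * rng.standard_normal(3)
v = rng.standard_normal(3) + 1j * rng.standard_normal(3)
w = rng.standard_normal(3) + 1j * rng.standard_normal(3)

def dual(u):
    """Return the linear functional u* with u*(v) = <u,v>."""
    # np.vdot conjugates its first argument, matching u^H v
    return lambda x: np.vdot(u, x)

u_star = dual(u)
assert np.isclose(u_star(v), np.vdot(u, v))

# linearity: u*(av + bw) = a u*(v) + b u*(w)
a, b = 2.0 + 1j, -0.5
assert np.isclose(u_star(a * v + b * w), a * u_star(v) + b * u_star(w))
```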

in physics (such as in the study of quantum mechanics), instead of using the notation u* = <u,_>, they represent it like so: u* = <u|, or a "bra".

similarly, the vectors bras act upon are called "kets" and notated like so: v = |v>.

when a bra acts upon a ket, you get a scalar:

<u|v> = <u,v> (the inner product).

so in the infinite-dimensional case, instead of writing u^{H}, we write <u|, which is the bra that corresponds to the ket |u>.

it turns out that for infinite-dimensional V, V* can be lots bigger than V, but the CONTINUOUS elements of V* are isomorphic to V (actually for complex vector spaces it's an anti-isomorphism, because scalar multiples are turned into conjugate scalar multiples, but that's not all that important).

now we're in a position to say what |u><v| means. this represents a linear operator: V-->V defined by:

(|u><v|)(w) = |u><v|w> = (<v|w>)|u> = <v,w>u

that is, the linear operator (transformation) that maps the ket (vector) |w> to the ket (vector) |u><v|w> (the vector u multiplied by the inner product of v and w).

that is, the outer product is a mapping VxV*--->Hom(V,V), whereas the inner product is a mapping V*xV-->F.
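a numerical check (my illustration) that the operator |u><v| acts as described: feeding it the ket |w> returns <v,w>|u>.

```python
import numpy as np

rng = np.random.default_rng(4)
u = rng.standard_normal(3) + 1j * rng.standard_normal(3)
v = rng.standard_normal(3) + 1j * rng.standard_normal(3)
w = rng.standard_normal(3) + 1j * rng.standard_normal(3)

# the matrix of |u><v| in the finite-dimensional picture is u v^H
ket_bra = np.outer(u, np.conj(v))

# |u><v|w> = <v,w> u, with <v,w> = np.vdot(v, w)
assert np.allclose(ket_bra @ w, np.vdot(v, w) * u)
```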

*********************

with regards to your earlier questions about integrals and inner products, the elements of the space of continuous (this is important, for technical reasons) linear functionals on the vector space of square-integrable functions all look like this:

L_{g}(f) = ∫ g*(x) f(x) dx, for some fixed square-integrable function g (where g* is the complex conjugate of g).

in other words L_{g} is a "bra" (feed it a "ket", a function f in this case, you get a number).
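a numerical sketch of such a functional (mine, not from the post), on functions over [0, 1], with the integral approximated on a grid by the trapezoidal rule; the choice g = cos is just an assumed example.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 10001)

def trapezoid(y, x):
    """Composite trapezoidal rule for samples y on grid x."""
    return np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0

def L(g):
    """The 'bra' L_g: feed it a ket (a function f), get a number."""
    return lambda f: trapezoid(np.conj(g(x)) * f(x), x)

bra = L(np.cos)          # g(x) = cos(x), chosen purely for illustration
value = bra(np.sin)      # approximates the integral of cos(x)sin(x) on [0,1]

# exact value: integral of cos(x)sin(x) dx = sin^2(1)/2
assert np.isclose(value, 0.5 * np.sin(1.0) ** 2, atol=1e-6)
```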


Re: Infinite Dimensional Cross Product

I’m not sure whether my understanding is correct or not:

The outer product |u><v| is defined (only?) by its operation on |w> as |u><v|w>, and there is no direct relationship between it and the “cross product” of vectors.

Because you kindly regarded to my previous question, I allow myself to remind one of your phrases, that is not so clear for me:

“it is not true that all infinite-dimensional vectors are sums of the form:

Attachment 26674

some infinite-dimensional vector spaces are of uncountable dimension, and such a series "doesn't give enough terms". for example, the real numbers are a vector space over the rationals, but there is no countable set of (Q-linearly independent) real numbers that will serve to describe all reals as Q-linear combinations of them.”

Now, when we define, in general:

Attachment 26675

We in fact assumed that the set of vector components is equipotent to the natural number set N. So the set of terms u_k v_k to be summed for calculating the inner product of the vectors is equipotent to N, too. But when we introduce the inner product in the integral form of:

Attachment 26676

the variable x is assumed to range continuously from a to b, thereby covering the uncountably many real numbers in the range [a,b]. Now, it seems that more terms are summed in the latter expression than in the former! (Such an ambiguity can be found wherever an infinite-element summation is transformed into an integral.) Am I mistaken? Or do we need a more suitable notation for the definition of the vectors, emphasizing the continuity property of their components? It is one of my old questions!
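A small numerical illustration (my own, under the assumption that the functions are sampled on a grid over [0, 1]) of the two expressions being compared: a countable componentwise sum, weighted by the grid spacing, approximates the integral inner product as the grid refines, though the two live on different spaces.

```python
import numpy as np

def grid_inner(f, g, n):
    """Countable sum sum_k f(x_k) g(x_k) * dx on an n-point grid over [0, 1]."""
    x, dx = np.linspace(0.0, 1.0, n, retstep=True)
    return np.sum(f(x) * g(x)) * dx

# exact value of the integral of cos(x)sin(x) over [0, 1]
exact = 0.5 * np.sin(1.0) ** 2

coarse = grid_inner(np.cos, np.sin, 100)
fine = grid_inner(np.cos, np.sin, 100000)

# refining the grid brings the countable sum closer to the integral
assert abs(fine - exact) < abs(coarse - exact)
assert np.isclose(fine, exact, atol=1e-3)
```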

Re: Infinite Dimensional Cross Product

there is a certain "dynamic tension" you get when dealing with infinite-dimensional vector spaces.

on the one hand, every vector is a finite linear combination of basis elements. on the other, the set of basis elements itself may be uncountable. so you can use a "per-vector" index which is countable (indeed, even finite), but not a "per-space" index (if that makes sense to you).

but, i hesitate to regard an integral as a natural generalization of finite sums (in the sense of inner products). an inner product is just a bilinear (or, in the complex case, sesquilinear) function from VxV to F with certain properties. there are more possibilities than just sums or integrals.