1. Hilbert A-module

Suppose X is a Hilbert A-module. I would like to show that for each $\displaystyle x\in X$ there is a unique $\displaystyle y\in X$ such that $\displaystyle x=y\cdot\langle y,y\rangle$

The plan is to use adjointable operators: for each $\displaystyle x\in X$ define
$\displaystyle D_x:X\rightarrow A,D_x(y):=\langle x,y\rangle$
$\displaystyle L_x:A\rightarrow X,L_x(a)=x\cdot a$

where we take $\displaystyle A_A$ to be a right Hilbert A-module with the inner product defined by $\displaystyle \langle a,b\rangle:=a^*b$

I have shown that the operators $\displaystyle D_x,L_x$ are adjoints of each other.
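For completeness, the adjointness is a one-line computation (using the standard convention that the inner product is conjugate-linear in the first variable, so that $\displaystyle \langle x\cdot a,z\rangle=a^*\langle x,z\rangle$):

$\displaystyle \langle L_x(a),z\rangle_X=\langle x\cdot a,z\rangle=a^*\langle x,z\rangle=\langle a,\langle x,z\rangle\rangle_{A_A}=\langle a,D_x(z)\rangle_{A_A}$

so $\displaystyle (L_x)^*=D_x$.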

2. Originally Posted by Mauritzvdworm
Suppose X is a Hilbert A-module. I would like to show that for each $\displaystyle x\in X$ there is a unique $\displaystyle y\in X$ such that $\displaystyle x=y\cdot\langle y,y\rangle$

The plan is to use adjointable operators: for each $\displaystyle x\in X$ define
$\displaystyle D_x:X\rightarrow A,D_x(y):=\langle x,y\rangle$
$\displaystyle L_x:A\rightarrow X,L_x(a)=x\cdot a$

where we take $\displaystyle A_A$ to be a right Hilbert A-module with the inner product defined by $\displaystyle \langle a,b\rangle:=a^*b$

I have shown that the operators $\displaystyle D_x,L_x$ are adjoints of each other.
It's true that the operators $\displaystyle D_x,L_x$ are adjoints of each other, but I don't see how to use that to find y, given x.

To me this looks a bit similar to the polar decomposition of an element of a von Neumann algebra, as done in Gert Pedersen's book C*-algebras and their automorphism groups. In fact, if $\displaystyle y = x\cdot\langle x,x\rangle^{-1/3}$ then $\displaystyle x=y\cdot\langle y,y\rangle$. Of course, that construction only works if $\displaystyle \langle x,x\rangle$ is invertible. But you can avoid that snag as follows. First note that we may assume that A is unital (if not, then adjoin an identity e to it and extend the action of A on X to an action of the unitised algebra on X). Now define a sequence of elements $\displaystyle y_n = x\cdot\bigl(\frac1ne + \langle x,x\rangle^{1/3}\bigr)^{-1}$ and prove (as in Pedersen's Proposition 2.2.9) that $\displaystyle (y_n)$ is Cauchy.
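As a sanity check, the formula $\displaystyle y = x\cdot\langle x,x\rangle^{-1/3}$ can be tested numerically in the simplest concrete model, $\displaystyle X = A = M_n(\mathbb{C})$ with $\displaystyle \langle a,b\rangle = a^*b$ (a sketch with numpy; the helper `frac_power` is my own, not part of the argument above):

```python
import numpy as np

def frac_power(p, exponent):
    """Fractional power of a positive-definite matrix via eigendecomposition."""
    w, u = np.linalg.eigh(p)
    return (u * w**exponent) @ u.conj().T

rng = np.random.default_rng(0)
n = 4
# x in X = M_n(C), viewed as a right Hilbert M_n(C)-module with <a,b> = a* b
x = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

inner = x.conj().T @ x            # <x,x> = x* x, positive and (generically) invertible
y = x @ frac_power(inner, -1/3)   # y = x . <x,x>^{-1/3}

# <y,y> = <x,x>^{1/3}, so y . <y,y> recovers x
assert np.allclose(y @ (y.conj().T @ y), x)
```

Here $\displaystyle \langle x,x\rangle$ is almost surely invertible for a random matrix, so the snag mentioned above does not arise in this test.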

I don't see how to prove the uniqueness part of the problem. If $\displaystyle y\cdot\langle y,y\rangle = z\cdot\langle z,z\rangle$ then $\displaystyle \bigl\langle y\cdot\langle y,y\rangle,y\cdot\langle y,y\rangle \bigr\rangle = \bigl\langle z\cdot\langle z,z\rangle,z\cdot\langle z,z\rangle\bigr\rangle$, from which you get $\displaystyle \langle y,y\rangle^3 = \langle z,z\rangle^3$ and hence $\displaystyle \langle y,y\rangle = \langle z,z\rangle$. But that's as far as I can get.
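The first step in that computation uses the module properties of the inner product together with the fact that $\displaystyle \langle y,y\rangle$ is self-adjoint:

$\displaystyle \bigl\langle y\cdot\langle y,y\rangle,\,y\cdot\langle y,y\rangle\bigr\rangle=\langle y,y\rangle^*\,\langle y,y\rangle\,\langle y,y\rangle=\langle y,y\rangle^3,$

and the last step uses the uniqueness of positive cube roots of a positive element of a C*-algebra.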

3. I think I have found a way to handle the problem, but it is somewhat involved. Here I will give the idea.

Let us define the following operators
$\displaystyle T\in \mathcal{K}(X,A_A)$
$\displaystyle S\in\mathcal{K}(A_A,X)$
$\displaystyle R\in\mathcal{K}(X)$ and let $\displaystyle a\in A_A$
We can now use these to define an operator on the direct sum Hilbert A-module $\displaystyle A\oplus X$ in the following way
$\displaystyle \left( \begin{array}{cc} a & T \\ S & R \end{array} \right) \left( \begin{array}{c} b\\ x \end{array} \right)=\left( \begin{array}{c} ab+Tx\\ Sb+Rx \end{array}\right)$

This operator is clearly adjointable (its adjoint is obtained by transposing the matrix and taking the adjoint of each entry). We can also show that the operator defined above is in $\displaystyle \mathcal{K}(A\oplus X)$

This can be done using embeddings of the form
$\displaystyle a\mapsto \left(\begin{array}{cc}a & 0 \\ 0 & 0 \end{array} \right)$
$\displaystyle T\mapsto \left(\begin{array}{cc}0 & T \\ 0 & 0 \end{array} \right)$
and similarly for the others.

Now for each $\displaystyle x\in X$, $\displaystyle \left(\begin{array}{cc}0 & D_x \\ L_x & 0 \end{array} \right)$ is a self-adjoint element of $\displaystyle \mathcal{K}(A\oplus X)$ which anticommutes with $\displaystyle \left(\begin{array}{cc}1 & 0 \\ 0 & -1 \end{array} \right)$

We now exploit the functional calculus with the odd function $\displaystyle f(x)=x^{\frac{1}{3}}$ and find that $\displaystyle f\left(\begin{array}{cc}0 & D_x \\ L_x & 0 \end{array} \right)\in\mathcal{K}(A\oplus X)$; since $f$ is odd, this element again anticommutes with $\displaystyle \left(\begin{array}{cc}1 & 0 \\ 0 & -1 \end{array} \right)$
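That oddness of $f$ is what preserves the anticommutation: if $G$ denotes the grading and $GTG=-T$, then $Gf(T)G=f(GTG)=f(-T)=-f(T)$. This can be checked numerically in a finite-dimensional model (a sketch with numpy; the random block `d` is just a stand-in for the off-diagonal operators):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
d = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# Self-adjoint off-diagonal block matrix [[0, d], [d*, 0]] and the grading diag(I, -I)
Z = np.zeros((n, n))
T = np.block([[Z, d], [d.conj().T, Z]])
G = np.block([[np.eye(n), Z], [Z, -np.eye(n)]])

def odd_cbrt(t):
    """Apply the odd function f(x) = x^(1/3) to a self-adjoint matrix."""
    w, u = np.linalg.eigh(t)
    return (u * np.cbrt(w)) @ u.conj().T

fT = odd_cbrt(T)
assert np.allclose(G @ T + T @ G, 0)      # T anticommutes with the grading
assert np.allclose(G @ fT + fT @ G, 0)    # so does f(T), because f is odd
assert np.allclose(fT @ fT @ fT, T)       # and f(T)^3 = T, as needed below
```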

We can also show that every self-adjoint element of $\displaystyle \mathcal{K}(A\oplus X)$ which anticommutes with $\displaystyle \left(\begin{array}{cc}1 & 0 \\ 0 & -1 \end{array} \right)$ is of the form $\displaystyle \left(\begin{array}{cc}0 & D_x \\ L_x & 0 \end{array} \right)$
(The proof of this is also somewhat involved and for practical reasons I will omit it here)

Now we have
$\displaystyle \left(\begin{array}{cc}0 & D_x \\ L_x & 0 \end{array} \right)=\left(\begin{array}{cc}0 & D_y \\ L_y & 0 \end{array} \right)^3$ for some $\displaystyle y\in X$

Now we multiply out the matrices and compare the bottom-left corners, which yields
$\displaystyle L_x=L_yD_yL_y$ then we have
$\displaystyle L_x(a)=L_yD_yL_y(a)=L_yD_y(y\cdot a)=L_y(\langle y,y\rangle \cdot a)=y\cdot\langle y,y\rangle \cdot a$

so finally we get $\displaystyle x=y\cdot\langle y,y\rangle$
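For the record, the matrix multiplication referred to above works out as

$\displaystyle \left(\begin{array}{cc}0 & D_y \\ L_y & 0\end{array}\right)^2=\left(\begin{array}{cc}D_yL_y & 0 \\ 0 & L_yD_y\end{array}\right),\qquad \left(\begin{array}{cc}0 & D_y \\ L_y & 0\end{array}\right)^3=\left(\begin{array}{cc}0 & D_yL_yD_y \\ L_yD_yL_y & 0\end{array}\right),$

so the bottom-left corner of the cube is indeed $\displaystyle L_yD_yL_y$ (and the top-right corner gives the consistent equation $\displaystyle D_x=D_yL_yD_y$).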

4. That's a very nice argument. I haven't checked every detail of it, but it looks convincing. Certainly the cube root function has to lie at the heart of the proof.