$\displaystyle \mathcal{L}_+ = \ker(A^*-i) = [\text{ran}(A+i)]^\perp$
$\displaystyle \mathcal{L}_- = \ker(A^*+i) = [\text{ran}(A-i)]^\perp$
$\displaystyle \mathcal{K}_+ = \{f\oplus if : f\in\mathcal{L}_+\}$
$\displaystyle \mathcal{K}_- = \{g\oplus(-ig) : g\in\mathcal{L}_-\}$
Show that $\displaystyle \text{gra}(A)$, $\displaystyle \mathcal{K}_+$ and $\displaystyle \mathcal{K}_-$ are pairwise orthogonal.
How do I do this? Please help
This is a problem about extensions of a symmetric unbounded operator A on some Hilbert space H, and it's not hard provided that you understand all the definitions.
For a start, A has a domain D(A) (which is a dense subspace of H). It has an adjoint A* with domain D(A*), such that $\displaystyle \langle Af,g\rangle = \langle f,A^*g\rangle$ for all f in D(A) and g in D(A*). To say that A is symmetric means that $\displaystyle D(A)\subseteq D(A^*)$ and A*(f)=A(f) for all f in D(A).
The notation gra(A) means the graph of A, which is the set $\displaystyle \{f\oplus Af \in H\oplus H : f\in D(A)\}$.
The spaces $\displaystyle \mathcal{L}_\pm$ are defined by $\displaystyle \mathcal{L}_\pm = \{f\in D(A^*) : A^*f = \pm if\}$. The kernel of an adjoint operator is always the orthogonal complement of the range of the operator, so $\displaystyle \mathcal{L}_\pm = \text{ker}(A^*\mp i) = (\text{ran}(A\pm i))^\perp$ (because $\displaystyle -iI$ is the adjoint of $\displaystyle iI$).
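To spell that out for $\displaystyle \mathcal{L}_+$ (the $\displaystyle \mathcal{L}_-$ case is the same with the signs flipped): if $\displaystyle g\perp\text{ran}(A+i)$, then for every f in D(A)

$\displaystyle 0 = \langle (A+i)f, g\rangle = \langle Af,g\rangle + i\langle f,g\rangle, \quad\text{so}\quad \langle Af,g\rangle = -i\langle f,g\rangle = \langle f, ig\rangle.$

By the definition of the adjoint, this says exactly that $\displaystyle g\in D(A^*)$ and $\displaystyle A^*g = ig$, i.e. $\displaystyle g\in\ker(A^*-i)$; note that membership in $\displaystyle D(A^*)$ comes for free. Conversely, if $\displaystyle A^*g=ig$ then running the same computation backwards gives $\displaystyle g\perp\text{ran}(A+i)$.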
The spaces $\displaystyle \mathcal{K}_\pm = \{f\oplus(\pm if) : f\in\mathcal{L}_\pm\}$ are by definition contained in the graph of A*. To see that they are orthogonal to the graph of A, let $\displaystyle f\oplus Af \in \text{gra}(A)$ and $\displaystyle g\oplus(ig)\in\mathcal{K}_+$. Then
$\displaystyle \langle f\oplus Af, g\oplus ig\rangle = \langle f,g\rangle + \langle Af,ig\rangle$ (definition of inner product in direct sum)
$\displaystyle {\color{white}\langle f\oplus Af, g\oplus ig\rangle} = \langle f,g\rangle - i\langle Af,g\rangle$ (conjugate-linear property of inner product)
$\displaystyle {\color{white}\langle f\oplus Af, g\oplus ig\rangle} = -i\langle (A+i)f,g\rangle$
$\displaystyle {\color{white}\langle f\oplus Af, g\oplus ig\rangle} = 0$ (because $\displaystyle g\in(\text{ran}(A+i))^\perp$).
A similar argument shows that gra(A) and $\displaystyle \mathcal{K}_-$ are orthogonal.
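Written out the same way, with $\displaystyle g\oplus(-ig)\in\mathcal{K}_-$:

$\displaystyle \langle f\oplus Af, g\oplus(-ig)\rangle = \langle f,g\rangle + \langle Af,-ig\rangle = \langle f,g\rangle + i\langle Af,g\rangle = i\langle (A-i)f,g\rangle = 0,$

because $\displaystyle g\in(\text{ran}(A-i))^\perp$.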
The fact that $\displaystyle \mathcal{K}_+$ and $\displaystyle \mathcal{K}_-$ are orthogonal is more or less immediate, because if $\displaystyle f\oplus(if)\in\mathcal{K}_+$ and $\displaystyle g\oplus(-ig)\in\mathcal{K}_-$ then $\displaystyle \langle f\oplus if,g\oplus(-ig) \rangle = \langle f,g\rangle + \langle if,-ig\rangle = \langle f,g\rangle - \langle f,g\rangle = 0.$
gra(A) is contained in gra(A*) because A is symmetric. (If you're unsure about that, go back and check the definition of a symmetric operator.)
$\displaystyle \mathcal{K}_+$ and $\displaystyle \mathcal{K}_-$ are contained in gra(A*) by the way in which they are defined. (If you're unsure about that, go back and check the definitions of $\displaystyle \mathcal{K}_{\pm}$.)
The sum is a direct one because we have seen that these three spaces are all orthogonal to each other.
This is proved in Lemma 2.13 of Conway's A Course in Functional Analysis, Chapter X. The idea is that if $\displaystyle \text{gra}(A) \oplus \mathcal{K}_+ \oplus \mathcal{K}_-$ is not the whole of gra(A*) then there will be a nonzero vector in its orthogonal complement. An element of gra(A*) is of the form $\displaystyle h\oplus A^*(h)$ for some h in the domain of A*. So you take a vector h in D(A*) such that $\displaystyle h\oplus A^*(h)$ is in that orthogonal complement, and you need to show that h=0. Go slowly and carefully through the proof of Lemma 2.13, and you'll see how it works.
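To get you started on that: if $\displaystyle h\oplus A^*h$ is orthogonal to gra(A), then for every f in D(A)

$\displaystyle 0 = \langle f\oplus Af,\, h\oplus A^*h\rangle = \langle f,h\rangle + \langle Af, A^*h\rangle, \quad\text{so}\quad \langle Af, A^*h\rangle = \langle f, -h\rangle.$

By the definition of the adjoint, this means $\displaystyle A^*h\in D(A^*)$ and $\displaystyle A^*(A^*h) = -h$. That equation is what drives the rest of the proof of the lemma.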
Example 2.21:
Let A and $\displaystyle \mathcal{D} $ be as in Example 1.11 ($\displaystyle \mathcal{D} = $ all absolutely continuous functions $\displaystyle f:[0,1]\rightarrow C $ with $\displaystyle f' \in L^{2}(0,1) $ and $\displaystyle f(0)=f(1)=0 $), so A is symmetric (how does this imply that?). The operator B of Example 1.12 ($\displaystyle B \in \mathcal{C}(L^{2}(0,1)) $; it's not clear to me what he means by $\displaystyle \mathcal{C} $ here) is a self-adjoint extension of A. Let us determine all self-adjoint extensions of A. To do this it is necessary to determine $\displaystyle \mathcal{L}_\pm $. Now $\displaystyle f \in \mathcal{L}_\pm$ if and only if $\displaystyle f \in \text{dom}\, A^*$ and $\displaystyle \pm if=A^*f=if'$, so $\displaystyle \mathcal{L}_\pm = \{\alpha e^{\pm x} : \alpha \in C \} $. Hence $\displaystyle n_\pm =1 $. Also, the isomorphisms of $\displaystyle \mathcal{L}_+ $ onto $\displaystyle \mathcal{L}_- $ are all of the form $\displaystyle W_\lambda e^{x} = \lambda e^{-x}$ where $\displaystyle |\lambda| = e$.
If $\displaystyle |\lambda|=e $, let $\displaystyle \mathcal{D}_\lambda \equiv \{f+ \alpha e^{x} + \lambda \alpha e^{-x} : \alpha \in C, f \in \mathcal{D}\}$ and define
$\displaystyle A_\lambda(f+\alpha e^{x} + \lambda \alpha e^{-x})= if' + \alpha i e^{x} - i\lambda \alpha e^{-x}$,
if $\displaystyle f \in \mathcal{D}, \alpha \in C$.
What is the significance of this example? What was Conway trying to illustrate?
To say that A is symmetric means that $\displaystyle \langle Af,g\rangle = \langle f,Ag\rangle$ for all functions f,g in the domain of A (which should consist of all absolutely continuous functions on [0,1] whose derivatives are in $\displaystyle L^2[0,1]$ and which vanish at 0 and 1). The operator A is defined by Af=if'.
The inner product is given by $\displaystyle \langle Af,g\rangle = \int_0^1\!\!if'(t)\overline{g(t)}\,dt$. To see that this is equal to $\displaystyle \langle f,Ag\rangle$, you need to integrate by parts. I'll leave you to do that. Study the calculation very carefully, and you will see that only f (not g) needs to satisfy the condition f(0)=f(1)=0. In fact, the adjoint of A is the operator A* given by the same formula A*g=ig' but on a larger domain, namely the set of all absolutely continuous functions on [0,1] whose derivatives are in $\displaystyle L^2[0,1]$ (but which need not vanish at the endpoints).
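If you get stuck, the skeleton of the calculation is

$\displaystyle \langle Af,g\rangle = \int_0^1 if'\,\overline{g}\,dt = i\Big[f\overline{g}\Big]_0^1 - \int_0^1 if\,\overline{g'}\,dt = i\big(f(1)\overline{g(1)}-f(0)\overline{g(0)}\big) + \int_0^1 f\,\overline{ig'}\,dt,$

using $\displaystyle -i\overline{g'} = \overline{ig'}$. The last integral is $\displaystyle \langle f,Ag\rangle$, and the boundary term vanishes as soon as f(0)=f(1)=0, regardless of what g does at the endpoints. That is why only f needs the boundary condition.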
The fundamental idea of von Neumann's theorem is that if A is symmetric (so that A* is an extension of A) then as you increase the domain of A, the domain of its adjoint decreases. If you're lucky, there is an intermediate domain which is the same for the operator and its adjoint. That operator is then selfadjoint.
In the case of the operator Af=if', that point occurs when the domain is the set of all absolutely continuous functions f on [0,1] whose derivatives are in $\displaystyle L^2[0,1]$ and which satisfy f(0)=f(1). You can check that if f and g both satisfy that condition then the integration by parts calculation works. This selfadjoint extension of A is what Conway calls B in his Example 1.12.
He is showing how von Neumann's theorem applies to this example. You get a selfadjoint extension of A by increasing its domain, or equivalently by increasing its graph through incorporating part of the graph of A*. It turns out that the way to do this is to add to the graph of A a subspace of $\displaystyle \mathcal{K}_+\oplus\mathcal{K}_-$ which in turn is defined in terms of an isometric map from a subspace of $\displaystyle \mathcal{L}_+$ to a subspace of $\displaystyle \mathcal{L}_-$. If this map takes the whole of $\displaystyle \mathcal{L}_+$ to the whole of $\displaystyle \mathcal{L}_-$ then the extension will be selfadjoint. The condition for that to be possible is that $\displaystyle \mathcal{L}_+$ and $\displaystyle \mathcal{L}_-$ have the same dimension.
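In the example you quoted, you can see where the condition $\displaystyle |\lambda| = e$ comes from by computing the norms (my own check, not in Conway's text):

$\displaystyle \|e^{x}\|^2 = \int_0^1 e^{2t}\,dt = \frac{e^2-1}{2}, \qquad \|e^{-x}\|^2 = \int_0^1 e^{-2t}\,dt = \frac{1-e^{-2}}{2} = \frac{e^2-1}{2e^2}.$

So $\displaystyle \|\lambda e^{-x}\| = \|e^{x}\|$ forces $\displaystyle |\lambda|^2 = e^2$, i.e. $\displaystyle |\lambda| = e$: the maps $\displaystyle W_\lambda$ with $\displaystyle |\lambda|=e$ are exactly the isometries of $\displaystyle \mathcal{L}_+$ onto $\displaystyle \mathcal{L}_-$.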
I see, von Neumann's theorem is not stated in Conway or in Wikipedia.
It says:
Let A be a symmetric operator and suppose that there exists a conjugation C with C: D(A)->D(A) and AC = CA. Then A has equal deficiency indices and therefore has self-adjoint extensions.
Do you know of a more precise way of writing that theorem?