The sine and cosine functions are mutually orthogonal, since for any given frequency $\omega$, the functions $\sin(\omega t)$ and $\cos(\omega t)$ are precisely 90 degrees out of phase with one another.
This is all that is meant.
Hi! In the Fourier transform, the basis functions are $e^{i\omega t}$, where $\omega$ is unique for each basis function. But what does it mean that two basis functions are orthogonal? The ordinary integral for the inner product

$$\int_{-\infty}^{\infty} e^{i\omega_1 t}\,\overline{e^{i\omega_2 t}}\,dt = \int_{-\infty}^{\infty} e^{i(\omega_1 - \omega_2)t}\,dt$$

cannot be used here to indicate orthogonality, because the integral would not converge. So what does orthogonality in this case really mean? I have looked in a few articles on Wikipedia, but it is not mentioned anywhere that this integral will actually not converge (at least not in the normal sense) for two functions of this kind. In the Fourier transform article, orthogonality between two different basis functions is not even mentioned.
What do you mean, that a basis function is orthogonal to itself? At least that is not what I meant. No, two basis functions $e^{i\omega_1 t}$ and $e^{i\omega_2 t}$ are orthogonal if and only if

$$\int_{-\infty}^{\infty} e^{i\omega_1 t}\,\overline{e^{i\omega_2 t}}\,dt = 0$$

for $\omega_1 \neq \omega_2$. Why it is like this I do not know, but there has to be some definition of orthogonality between basis functions?
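To see concretely why that integral can't serve as an inner product, here's a quick numerical sketch (the frequencies and truncation lengths are my own arbitrary choices): truncate the integral to $[-T, T]$ and watch what happens as $T$ grows.

```python
import numpy as np

def inner_product(w1, w2, T, n=200001):
    # Trapezoid-rule approximation of the truncated inner product
    # integral from -T to T of e^{i w1 t} * conj(e^{i w2 t}) dt
    t = np.linspace(-T, T, n)
    vals = np.exp(1j * (w1 - w2) * t)
    dt = t[1] - t[0]
    return (vals.sum() - 0.5 * (vals[0] + vals[-1])) * dt

# Equal frequencies: the truncated integral is exactly 2T, so it diverges.
for T in (10, 100, 1000):
    print(T, inner_product(1.0, 1.0, T).real)

# Different frequencies: the integral oscillates in T forever instead of
# settling on a limit, so the improper integral does not converge at all.
for T in (10.0, 10.5, 11.0, 11.5):
    print(T, inner_product(1.0, 2.0, T).real)
```

For $\omega_1 \neq \omega_2$ the truncated integral equals $2\sin((\omega_1 - \omega_2)T)/(\omega_1 - \omega_2)$, which keeps oscillating as $T \to \infty$; for $\omega_1 = \omega_2$ it grows without bound.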
There's a very interesting, relevant page in Griffiths's Introduction to Quantum Mechanics: page 59. He states that

$$\int_{-\infty}^{\infty} e^{i(k - k')x}\,dx = 2\pi\,\delta(k - k').$$

And then he says, "Comment: this formula gives any respectable mathematician apoplexy. Although the integral is clearly infinite when $k = k'$, it doesn't converge (to zero or anything else) when $k \neq k'$, since the integrand oscillates forever. There are ways to patch it up..."
Then, on page 102, he's talking about the basis functions you're talking about, which "satisfy a kind of orthogonality condition":

$$\langle f_{k'} \mid f_k \rangle = \delta(k - k').$$
There's a footnote:
"That's right: We're going to use, as bases, sets of functions none of which is actually in the space! They may not be normalizable, but they are complete, and that's all we need."
To sum up: the Delta function (really distribution), in this context, is the orthogonality condition. It turns out that if a set of functions can produce the Delta function under the inner product, then they are complete. This is, again, the sort of thing the physicists love, and the mathematicians hate. Let the games begin!
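To make the delta-function statement a bit more tangible, here's a numerical sketch (the Gaussian test function and cutoffs are my own arbitrary choices). The truncated integral $\int_{-T}^{T} e^{i(k-k')x}\,dx = 2\sin(uT)/u$, with $u = k - k'$, has no pointwise limit, but smeared against a test function it does converge, to $2\pi\,\varphi(0)$, exactly as the delta distribution says it should.

```python
import numpy as np

def smeared_kernel(T, phi, u_max=50.0, n=400001):
    # Integral of [2 sin(uT)/u] * phi(u) du: the finite-T version of the
    # divergent integral, tested against a test function phi instead of
    # being evaluated pointwise.
    u = np.linspace(-u_max, u_max, n)
    kernel = 2.0 * T * np.sinc(u * T / np.pi)   # = 2 sin(uT)/u, finite at u = 0
    vals = kernel * phi(u)
    du = u[1] - u[0]
    return (vals.sum() - 0.5 * (vals[0] + vals[-1])) * du

phi = lambda u: np.exp(-u**2)   # Gaussian test function with phi(0) = 1
for T in (1, 5, 25):
    print(T, smeared_kernel(T, phi))   # approaches 2*pi*phi(0) ~ 6.2832
```

As $T$ grows, the result homes in on $2\pi \approx 6.2832$, which is the distributional statement $\int e^{iux}\,dx = 2\pi\,\delta(u)$ in action.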
Assume a knowledge of Fourier series. Note that $e^{i\omega} = \cos\omega + i\sin\omega$. Formally:
$e^{im\pi x/T}$ is orthogonal over $-T \le x \le T$:

$$\int_{-T}^{T} e^{im\pi x/T}\,e^{-in\pi x/T}\,dx = \begin{cases} 0, & m \neq n, \\ 2T, & m = n. \end{cases}$$

Let $T \to \infty$, $m\pi/T \to \omega$, $m = 1, 2, 3, \ldots$

Then $e^{i\omega x}$ is orthogonal over $-\infty < x < \infty$. Note that $T$ shows up in the transition to the Fourier transform as you go from a periodic to an aperiodic function.
Can you show directly that $\int_{-\infty}^{\infty} e^{i\omega_1 x}\,e^{-i\omega_2 x}\,dx = 0$ for $\omega_1 \neq \omega_2$, and $= \infty$ for $\omega_1 = \omega_2$? You can if you define $\omega = n\pi/T$, $T \to \infty$, $n = 1, 2, 3, \ldots$, which really just duplicates the above.
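A quick numerical check of the finite-interval orthogonality relation above (the values of $m$, $n$, and $T$ are picked arbitrarily):

```python
import numpy as np

def finite_inner(m, n, T, samples=200001):
    # Trapezoid-rule approximation of the integral from -T to T of
    # e^{i m pi x / T} * e^{-i n pi x / T} dx
    x = np.linspace(-T, T, samples)
    vals = np.exp(1j * np.pi * (m - n) * x / T)
    dx = x[1] - x[0]
    return (vals.sum() - 0.5 * (vals[0] + vals[-1])) * dx

T = 3.0
print(abs(finite_inner(2, 5, T)))   # ~ 0 for m != n
print(finite_inner(4, 4, T).real)   # = 2T = 6.0 for m = n
```

The $m \neq n$ case vanishes because the integrand completes a whole number of periods over $[-T, T]$; the $m = n$ case is just the length of the interval, $2T$.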
Hartlw: Maybe it is something like what you wrote. I read a bit in my Fourier analysis book yesterday and noticed that orthogonality was defined for the basis functions in the Fourier series (discrete spectra), but I couldn't find any definition for the Fourier transform, which comes a bit later in the book (it doesn't have any index). Maybe I'm just bad at searching. Anyway, defining orthogonality for the basis functions of the Fourier transform alone should be pretty straightforward; I guess it's possible to simply define orthogonality as a property that is true when the two basis functions have different angular frequencies and false otherwise.
In the case of arbitrary functions it may become a bit more difficult, on the other hand. But maybe it's possible to define orthogonality for arbitrary functions like this:

Two functions $f$ and $g$ are orthogonal if and only if at least one of the two functions is square integrable and

$$\int_{-\infty}^{\infty} f(t)\,\overline{g(t)}\,dt = 0,$$

or neither of the functions is square integrable and the limit

$$\lim_{a \to \infty} \int_{-\infty}^{\infty} f(t)\,\overline{g(t)}\,\varphi(t/a)\,dt$$

exists and equals zero. $\varphi$ here is a test function of some appropriate kind centered at the origin, possibly a bump function. I'm not sure about this definition, though, but it's worth considering as a possibility.
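As a rough numerical test of this proposed definition (using a Gaussian $\varphi(u) = e^{-u^2}$ in place of a genuine bump function, purely for convenience):

```python
import numpy as np

def mollified_inner(w1, w2, a, n=400001):
    # Integral of e^{i w1 t} * conj(e^{i w2 t}) * phi(t/a) dt, with a
    # Gaussian test function phi(u) = exp(-u^2) standing in for the bump.
    # The grid extends to 10*a, far past where the Gaussian has died off.
    t = np.linspace(-10.0 * a, 10.0 * a, n)
    vals = np.exp(1j * (w1 - w2) * t) * np.exp(-(t / a) ** 2)
    dt = t[1] - t[0]
    return (vals.sum() - 0.5 * (vals[0] + vals[-1])) * dt

for a in (1.0, 5.0, 25.0):
    print(a, abs(mollified_inner(1.0, 2.0, a)))   # tends to 0: "orthogonal"
    print(a, abs(mollified_inner(1.0, 1.0, a)))   # grows like a*sqrt(pi)
```

With this mollifier the smeared inner product of $e^{i\omega_1 t}$ and $e^{i\omega_2 t}$ works out to $a\sqrt{\pi}\,e^{-a^2(\omega_1 - \omega_2)^2/4}$, which vanishes in the limit exactly when $\omega_1 \neq \omega_2$ and diverges when $\omega_1 = \omega_2$, so the definition at least behaves as intended for these basis functions.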
Ackbeet: Yeah, I guess it can be defined roughly like that. The method does, however, have the flaw the author of your book mentioned. I'm interested in the way he patched it up...
It might very well be true what you wrote about physicists and mathematicians. One might think that I, as a physicist, would have no problem with this? But oh yes, I do. ^^
He doesn't say. He just says it can be done.
I myself tend to think more like a mathematician than a physicist, but I also can see where they're coming from. It's why I love being in the middle.
In the quote above I corrected a critical typo in my original post: the $\infty$ in the first integral should be $T$.
I too tried to find something on the orthogonality of $e^{i\omega t}$ over infinity. When I couldn't, my curiosity was aroused and I made the above formal attempt using the definition of orthogonality and an improper integral (calculate the integral and then let the limit approach $\infty$).
The only problem I see is the definition of $\omega$:

Let $T \to \infty$, $m\pi/T \to \omega$, $m = 1, 2, 3, \ldots$,

which also shows up in the derivation of the Fourier transform. If you use that definition, can you, for example, calculate the derivative of $e^{i\omega t}$ with respect to $\omega$? From the formal definition of a derivative, it looks like you could. But the question of whether $\omega$ is a legitimate variable in the sense of functions defined on closed (compact) sets is something that requires some deliberation. I haven't seen the question addressed in derivations of the Fourier transform.
Orthogonality is a formal text-book definition. The concept probably had its origin in the derivation of Fourier series by expressing any periodic function as a sine and cosine series, and then multiplying by a given sine or cosine term and integrating over a period, and everything drops out except a particular coefficient in the sine and cosine series.
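The multiply-and-integrate trick is easy to demonstrate numerically; here is a small sketch with a made-up signal (the coefficients 3 and 0.5 are arbitrary choices of mine):

```python
import numpy as np

# Toy demonstration of extracting Fourier coefficients by multiplying
# by one basis function and integrating over a period.
x = np.linspace(-np.pi, np.pi, 200001)
dx = x[1] - x[0]
f = 3.0 * np.sin(2 * x) + 0.5 * np.cos(5 * x)

def trap(vals):
    # trapezoid rule over one full period [-pi, pi]
    return (vals.sum() - 0.5 * (vals[0] + vals[-1])) * dx

# Multiplying by sin(2x) or cos(5x) and integrating kills every other term,
# leaving only the matching coefficient (times pi).
b2 = trap(f * np.sin(2 * x)) / np.pi
a5 = trap(f * np.cos(5 * x)) / np.pi
print(b2, a5)   # recovers 3.0 and 0.5
```

Everything drops out except the coefficient of the term you multiplied by, exactly as described above.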
In the Fourier transform, you first multiply the assumed expansion of a periodic function by $e^{im\pi x/T}$, integrate over the period, then divide by $2T$, which then shows up on the function side, and then take the limit as $T \to \infty$, so that you never actually run into $T \to \infty$ in the orthogonality condition, which is probably why the textbooks don't give a definition of orthogonality over infinity. Doing it this way you don't have to deal with $T \to \infty$ when $\omega_1 = \omega_2$, which I also had a problem with. This only makes sense in conjunction with the standard textbook derivation of the Fourier transform. I'm sure you can google it.
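A sketch of that limiting process (the Gaussian test signal and the value of $T$ are my own choices): the scaled series coefficient $2T c_m = \int_{-T}^{T} f(x)\,e^{-im\pi x/T}\,dx$, evaluated at $\omega = m\pi/T$, should approach the Fourier transform $F(\omega)$. For $f(x) = e^{-x^2}$ the transform is known in closed form, $F(\omega) = \sqrt{\pi}\,e^{-\omega^2/4}$, which gives something to compare against.

```python
import numpy as np

def scaled_coeff(f, m, T, n=200001):
    # 2T * c_m = integral from -T to T of f(x) e^{-i m pi x / T} dx,
    # approximated by the trapezoid rule.
    x = np.linspace(-T, T, n)
    dx = x[1] - x[0]
    vals = f(x) * np.exp(-1j * m * np.pi * x / T)
    return (vals.sum() - 0.5 * (vals[0] + vals[-1])) * dx

f = lambda x: np.exp(-x**2)        # made-up test signal
T = 20.0
m = int(round(1.0 * T / np.pi))    # pick m so that w = m*pi/T is near 1
w = m * np.pi / T

# Scaled series coefficient vs. the closed-form Fourier transform F(w):
print(scaled_coeff(f, m, T).real, np.sqrt(np.pi) * np.exp(-w**2 / 4))
```

At this modest $T$ the two values already agree closely, because $e^{-x^2}$ has essentially died off inside the period; this is the "periodic to aperiodic" transition in miniature.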