Hey Laotzu.
We can do definite integrals in the same way.
You should take a look at Hilbert space theory as well as l^2 and L^2 spaces to get more of an idea.
As can easily be found in the literature, indefinite integrals are treated as inner products of infinite dimensional vectors. But what about definite integrals? When we deal with definite integrals, I think, we would previously have to define the associated vectors in such a way that the integration limits can be applied to them. However, the vectors x in such infinite dimensional spaces are always shown as a summation of x_{k}a_{k} for k = 1 to infinity. In other words, when we regard a given definite integral (with limits a, b) as an inner product of infinite dimensional vectors, how can we define the associated vectors? I would appreciate any help.
Thanks. But I have no problem with definite integrals. What I want to know is the form of the two vectors (in an infinite dimensional space) that are multiplied to produce a certain definite integral. How can they be written so that they respect the integration limits?
i'm not sure what you're asking here.
usually, a function space (that is, a vector space whose vectors are functions) assumes a common domain of definition for the functions in question.
that is, one speaks of V = {f: D-->F, f in some family of functions} where F is the underlying field of V. to study inner products, F is usually taken to be either R or C.
since we are talking about using integrals as inner products, a common restriction is that f be square-integrable on D (which is typically a subset (often compact) of R^{n} or C^{n}). this is enough to ensure the inner product exists.
"integrable" here is rather vague; different definitions of the integral exist (riemann, lebesgue, haar, darboux, etc.)
in this case that D is unspecified, often the entire space R^{n} or C^{n} is intended as the domain of the functions (this is the case for polynomial spaces, for example).
the usual "multiplication" of the functions is given "point-wise":
(fg)(x) = f(x)g(x) <---the RHS is the multiplication of the field F.
it is not true that all infinite-dimensional vectors are sums of the form:

x = x_{1}a_{1} + x_{2}a_{2} + x_{3}a_{3} + ...
some infinite-dimensional vector spaces are of uncountable dimension, and such a series "doesn't give enough terms". for example, the real numbers are a vector space over the rationals, but there is no countable set of (Q-linearly independent) real numbers that will serve to describe all reals as Q-linear combinations of them.
i think there is some confusion in what you are asking because, in general, integrals are used to define (certain) inner products, inner products are not used to define integrals.
Dear Deveno, thanks for your useful help.
I will try to clarify what I'm looking for. When we define vectors in an infinite dimensional space, we put no limit on the number of their components (if we are allowed to use "component" here). So the vector length would be calculated accounting for all of its infinitely many components. On the other hand, when we treat a definite integral with limits a and b as an inner product of two infinite dimensional vectors, I think, we limit the number of components of the vectors, thereby limiting their length. In fact, the lengths of the vectors are now calculated by integration from a to b. Now, if this is true, how can we show or apply this limitation in the definition (or expression) of the vectors, before forming the inner product? In other words, how do changes of the integration limits a and b reflect in the initial form of the multiplied vectors? Or are the integration limits artificially added, without any particular meaning for the initial vectors?
well, the "first step" in extending the ordinary concept of "length" to infinite-dimensional vector spaces would be to extend the FINITE sum:

|x| = √(x_{1}^{2} + x_{2}^{2} + ... + x_{n}^{2})
of a finite number of coordinates to an infinite sum:

|x| = √(x_{1}^{2} + x_{2}^{2} + x_{3}^{2} + ...)
but this then raises questions of "convergence" (if we are dealing with "square-summable sequences" we're good, but this won't work for an arbitrary sequence).
with an arbitrary function, let's say a function f:R-->R, it's no longer clear at first how we should define the "length" of it.
but what comes to our aid is the notion of linear functional....that is linear functions L:V--->F. the inner product
<u,v> can be thought of as a linear functional <u,_>: v---><u,v>.
so for a space V of functions (let's say real-valued functions, just to be specific), what we want is a linear functional
L:V-->R. it turns out that for vector spaces comprised of (integrable) functions, a definite integral is just such a functional; we have:
L(f) = ∫_{a}^{b} f(x) dx
L is clearly linear:

L(f+g) = ∫_{a}^{b} (f(x)+g(x)) dx = ∫_{a}^{b} f(x) dx + ∫_{a}^{b} g(x) dx = L(f) + L(g)
L(cf) = ∫_{a}^{b} cf(x) dx = c∫_{a}^{b} f(x) dx = cL(f)
it is important that this be a definite or improper integral, we need to have it spit out a NUMBER, not another function.
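a quick numerical sketch of this (the quadrature helper `L` below is my own name for it, and the midpoint rule is just one of many ways to approximate a definite integral) — evaluating L on sums and scalar multiples of functions shows both the "spits out a number" part and the linearity:

```python
from math import sin

def L(f, a=0.0, b=1.0, n=100_000):
    """approximate the definite integral of f over [a, b] by the midpoint rule."""
    h = (b - a) / n
    return sum(f(a + h/2 + k*h) for k in range(n)) * h

f = lambda x: x**2
g = lambda x: sin(x)

# L sends each function to a NUMBER...
print(L(f))  # ≈ 1/3

# ...and it is linear: L(f + g) = L(f) + L(g), L(3f) = 3 L(f)
print(abs(L(lambda x: f(x) + g(x)) - (L(f) + L(g))) < 1e-9)  # True
print(abs(L(lambda x: 3*f(x)) - 3*L(f)) < 1e-9)              # True
```

(an indefinite integral could not be checked this way: it returns a function-plus-constant, not a number.)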
if we have an orthonormal basis w.r.t. a certain inner product, say, {u_{1},...,u_{n}} in the finite case, then if
x = x_{1}u_{1}+...+x_{n}u_{n}
then orthonormality lets us express the "coordinates" in terms of the inner product:
x_{j} = <x,u_{j}>.
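in numbers, a small R^3 sketch of this (the particular orthonormal basis here is my own choice; any orthonormal basis works the same way):

```python
# an orthonormal basis of R^3 (each has unit length, pairwise dot products are 0)
u1 = (1.0, 0.0, 0.0)
u2 = (0.0, 0.6, 0.8)
u3 = (0.0, -0.8, 0.6)

def dot(u, v):
    return sum(a*b for a, b in zip(u, v))

x = (2.0, -1.0, 3.0)

# coordinates via the inner product: x_j = <x, u_j>
coords = [dot(x, u) for u in (u1, u2, u3)]

# reconstruction: x = x_1 u_1 + x_2 u_2 + x_3 u_3
x_rebuilt = tuple(sum(c * u[i] for c, u in zip(coords, (u1, u2, u3)))
                  for i in range(3))
print(all(abs(a - b) < 1e-12 for a, b in zip(x, x_rebuilt)))  # True
```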
this is precisely what is done with fourier analysis. here the functions are only defined on (-π,π) (or some other interval of length 2π)
and the basis used is {1, cos(x), sin(x), cos(2x), sin(2x), cos(3x), sin(3x), ...} (technically 1/√2 should be used instead of 1, but square roots are annoying)
and the inner product used is:

<f,g> = (1/π)∫_{-π}^{π} f(x)g(x) dx
the "coordinates" of a function f are then given by its fourier coefficients, each of which is a coefficient of a term in the following series:

f(x) = a_{0}/2 + a_{1}cos(x) + b_{1}sin(x) + a_{2}cos(2x) + b_{2}sin(2x) + ...

where:

a_{n} = (1/π)∫_{-π}^{π} f(x)cos(nx) dx,  b_{n} = (1/π)∫_{-π}^{π} f(x)sin(nx) dx
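as a numerical check of the standard coefficient formulas a_{n} = (1/π)∫_{-π}^{π} f(x)cos(nx) dx and b_{n} = (1/π)∫_{-π}^{π} f(x)sin(nx) dx (the helper below, with its midpoint-rule quadrature, is my own sketch), take f(x) = x, whose fourier series is known to be 2(sin(x) - sin(2x)/2 + sin(3x)/3 - ...):

```python
from math import sin, cos, pi

def coeff(f, trig, k, n=200_000):
    """(1/pi) * integral over (-pi, pi) of f(x)*trig(k x), midpoint rule."""
    h = 2 * pi / n
    return sum(f(-pi + h/2 + j*h) * trig(k * (-pi + h/2 + j*h))
               for j in range(n)) * h / pi

f = lambda x: x  # an odd function on (-pi, pi)

b1 = coeff(f, sin, 1)  # ≈ 2
b2 = coeff(f, sin, 2)  # ≈ -1
a1 = coeff(f, cos, 1)  # ≈ 0, since x*cos(x) is odd
print(b1, b2, a1)
```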
the limits of integration here have nothing to do with the "infiniteness" of the dimension of the square-integrable functions on [-π,π]. the limits of integration are usually chosen for intervals [a,b] on which we want to know something, and can be imposed rather arbitrarily (often we are interested in what happens in a certain time interval, say from t = 0 to t = 1, so we might choose the interval [0,1] over which to integrate).
that is, the passage from definite--->indefinite integral has nothing to do with the passage from finite-dimensional spaces--->infinite-dimensional spaces.
indefinite integrals (anti-derivatives, or primitives) are an entirely different kind of animal than definite integrals, which are linear functionals. indefinite integrals are more like linear OPERATORS (except they yield an equivalence class of functions, f(x)+C, instead of any one particular function). this is why differential equations need "boundary conditions" (initial value specifications) to be fully solved for a specific situation.
put another way, the differential operator, D (which IS a linear operator) is not injective, so to specify an inverse image (something in D^{-1}(f(x)), we need to provide more information (to determine C).
Dear Deveno. It gives me pleasure to see your deep understanding of the subject. I'm not a mathematician and I hope my non-professional questions do not trouble you. It is said in references that the angle between two infinite dimensional vectors u(x) and v(x) can be calculated based on their inner product as:
This expression can be rewritten in familiar form:
So far, so good. Since I have not found the definite version of this formula anywhere, I'm dubious about:
If it's wrong, I have nothing to say; otherwise, I have to return to my first question. Because here the lengths of the vectors are clearly defined with respect to the integration limits, which in turn affect the angle between them. Now we can ask: how can infinite dimensional vectors be limited? What do the limits a and b mean with respect to the initial definition of the vectors u(x) and v(x)? Is it true that the dimensions of the vectors are infinite but limited?!
actually the correct formula is:

cos(θ) = <u,v>/(|u||v|)
which, when a definite integral is used as the inner product, becomes:

cos(θ) = (∫_{a}^{b} u(x)v(x) dx) / (√(∫_{a}^{b} u(x)^{2} dx) √(∫_{a}^{b} v(x)^{2} dx))
it is important that limits of integration be used.
to take a simple example, over the real numbers, the angle between the polynomials f(x) = 1 and g(x) = x would be (if there were no limits of integration):

cos(θ) = (x^{2}/2 + C_{1}) / (√(x + C_{2}) √(x^{3}/3 + C_{3}))
which isn't a number, but a (somewhat ugly) function of x (and we don't even know which one since there are 3 unspecified parameters).
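with limits, say [a,b] = [0,1], the same pair gives an honest number: <1,x> = 1/2, |1| = 1, |x| = 1/√3, so cos(θ) = √3/2, i.e. θ = 30 degrees. a numerical sketch (the `inner` helper and its midpoint-rule quadrature are my own):

```python
from math import acos, degrees, sqrt

def inner(f, g, a, b, n=100_000):
    """<f, g> = integral from a to b of f(x)*g(x), midpoint rule."""
    h = (b - a) / n
    return sum(f(a + h/2 + k*h) * g(a + h/2 + k*h) for k in range(n)) * h

one = lambda x: 1.0
ident = lambda x: x

cos_theta = inner(one, ident, 0, 1) / (
    sqrt(inner(one, one, 0, 1)) * sqrt(inner(ident, ident, 0, 1)))
print(degrees(acos(cos_theta)))  # ≈ 30.0
```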
in a more general sense, the limits a and b in the definite integral have more to do with "the part of the functions' domain we are interested in". it is unreasonable to expect that functions that are orthogonal when we integrate from -1 to 1 will still be orthogonal when we integrate from 2 to 3. in fact, it is common to integrate with a "weighting function" w(x), which has the effect of "stretching" the underlying space and thereby "skews" the angles between vectors.
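for instance, with the same kind of midpoint-rule quadrature as a helper (names mine): 1 and x are orthogonal over [-1,1], since ∫_{-1}^{1} x dx = 0, but over [2,3] their inner product is ∫_{2}^{3} x dx = 5/2:

```python
def inner(f, g, a, b, n=100_000):
    """<f, g> = integral from a to b of f(x)*g(x), midpoint rule."""
    h = (b - a) / n
    return sum(f(a + h/2 + k*h) * g(a + h/2 + k*h) for k in range(n)) * h

one = lambda x: 1.0
ident = lambda x: x

print(abs(inner(one, ident, -1, 1)) < 1e-9)  # True: orthogonal on [-1, 1]
print(inner(one, ident, 2, 3))               # ≈ 2.5: not orthogonal on [2, 3]
```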
but to answer your question more simply:
the limits of integration usually means we are in a subspace of C[a,b], the space of all continuous functions f:[a,b]-->R (although it is possible to make a "larger space" out of "integrable functions", for example, step-functions, which are clearly NOT continuous). which functions one is going to allow to a large extent depends on which flavor of integral you're using.
the infinite-dimensional-ness of a function space has to do with how many "basis functions" we need to specify to get "coordinates". the simplest case is polynomial functions, where the functions:

f_{k}(x) = x^{k}, for k = 0, 1, 2, ...
form a countable basis. that is, any polynomial can be regarded as a finite sequence of coefficients (but we need an infinite basis, because we might have polynomials of arbitrarily high degree).
since polynomials are "defined everywhere", any closed interval of R could be used as a domain of definition. limiting the domain of definition does NOT change the dimension.