# Every nonzero vector space can be viewed as a space of functions

• Jul 22nd 2010, 01:31 PM
buri
Every nonzero vector space can be viewed as a space of functions
The problem statement, all variables and given/known data

Let V be a nonzero vector space over a field F, and suppose that S is a basis for V. Let C(S,F) denote the vector space of all functions f ∈ Ω(S,F) (i.e., the set of functions from S to the field F) such that f(s) = 0 for all but a finite number of vectors s in S. Let Ψ: C(S,F) → V be defined by Ψ(f) = 0 if f is the zero function, and Ψ(f) = Σ_{s ∈ S, f(s) ≠ 0} f(s)s otherwise. Prove that Ψ is an isomorphism.

The attempt at a solution

Okay, I've already proved that Ψ is linear and that it is 1-1, but I'm having trouble proving that it is onto. Here's what I've done:

I'd like to show that for any v ∈ V there is an f ∈ C(S,F) such that Ψ(f) = v. So v = a1s1 + a2s2 + ... where S = {s1, s2, ...} is a basis for V (I haven't been told whether V is finite- or infinite-dimensional). However, Ψ(f), when f is nonzero, is a linear combination of a FINITE number of elements of the basis. I do realize we could write Ψ(f) = f(s1)s1 + f(s2)s2 + ..., where some of these terms will be zero, since f is nonzero at only a finite number of them. So if v = a1s1 + a2s2 + ... + ansn, then I could define f by f(si) = ai and I would be done. But the problem is that I can't see and can't show that Ψ(f) could ever equal a vector v which is a linear combination of an infinite number of elements of the basis.

Any help? This isn't homework, I'm taking a look at linear algebra on my own this summer. Thanks a lot!
• Jul 22nd 2010, 02:20 PM
tonio
Quote:

Originally Posted by buri
[...] But the problem is that I can't see and can't show that Ψ(f) could ever equal a vector v which is a linear combination of an infinite number of elements of the basis.

There is no such thing as a "linear combination of an infinite number of elements of the basis": by definition, if S is a basis of V and S is infinite, then any element of V is a (unique) linear combination of a FINITE number of elements of S.
This, I think, solves the problem.

Tonio
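A concrete way to see tonio's point is to model an element of C(S,F) as a finite-support dictionary and Ψ as the corresponding finite sum. This is just an illustrative sketch (the names `psi` and `preimage` and the choice V = R³ are mine, not from the problem): because every dict has only finitely many keys, the "zero at all but finitely many s" condition is automatic, and surjectivity amounts to reading coordinates off the basis expansion.

```python
# Sketch: V = R^3, with S the standard basis labeled s1, s2, s3.
# An f in C(S, F) is modeled as a dict {basis_label: scalar};
# a dict's support is finite by construction, mirroring the
# "f(s) = 0 for all but finitely many s" condition.

S = {"s1": (1.0, 0.0, 0.0), "s2": (0.0, 1.0, 0.0), "s3": (0.0, 0.0, 1.0)}

def psi(f):
    """Psi(f) = sum over s with f(s) != 0 of f(s) * s."""
    v = [0.0, 0.0, 0.0]
    for label, coeff in f.items():
        if coeff != 0:
            v = [vi + coeff * si for vi, si in zip(v, S[label])]
    return tuple(v)

def preimage(v):
    """Surjectivity: read the coordinates of v off its basis expansion."""
    return {"s1": v[0], "s2": v[1], "s3": v[2]}

v = (2.0, -1.0, 3.0)
f = preimage(v)
print(psi(f))  # → (2.0, -1.0, 3.0), recovering v
```

The same recipe works for any basis, finite or infinite: since every v is by definition a finite linear combination of basis vectors, its coordinate function always has finite support.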
• Jul 22nd 2010, 02:33 PM
buri
You're right. I'm following an outline for a course I'll be taking in the fall, and it skips the section on "Maximal Linearly Independent Subsets," where the reader is asked to prove exactly what you've just stated. So I hadn't seen it; I guess this problem was connected to that section. Thanks a lot for the help!!
• Jul 22nd 2010, 06:14 PM
buri
Quote:

Originally Posted by tonio
by definition, if S is a basis of V and S is infinite, then any element in V is a (unique) lin. comb. of a FINITE number of elements of S.
Tonio

Sorry I'm coming back to this, but I don't see intuitively how this can be true. For example, let's say I take the vector space of infinite tuples, so x = (x1, x2, ...). How is it that I can write this as a linear combination of a FINITE number of elements of S (a basis of this vector space)? It just seems that I'd require an infinite number of elements of S to do so. Can you explain this to me?
• Jul 23rd 2010, 02:01 AM
Ackbeet
In infinite-dimensional vector spaces, there can be functions in the space which require an infinite sum of vectors in the basis. Just think about periodic functions on the interval $\displaystyle [0,2\pi],$ with the Fourier sines and cosines as a basis set. If I take a function as simple as the triangle function, I'm going to need an infinite sum of sines and cosines in order to write the triangle function as a linear combination of sines and cosines. If you consider the continuous functions on the interval in question, this is a vector space.
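A quick numerical sketch of why a truncated Fourier sum never suffices (using |x| on [-π, π] rather than the triangle wave on [0, 2π] — the phenomenon is the same, and this example's coefficients are classical): its Fourier series is |x| = π/2 − (4/π) Σ_{n odd} cos(nx)/n², so every finite truncation leaves a strictly positive error, and infinitely many basis functions are genuinely needed.

```python
import math

# Fourier series of f(x) = |x| on [-pi, pi]:
#   |x| = pi/2 - (4/pi) * sum over odd n of cos(n x) / n^2.
# Every partial sum misses f(0) = 0 by a positive amount that only
# vanishes in the limit -- unlike a (finite) basis expansion.

def partial_sum(x, N):
    s = math.pi / 2
    for n in range(1, N + 1, 2):  # odd n only
        s -= (4 / math.pi) * math.cos(n * x) / n**2
    return s

for N in (1, 5, 50, 500):
    err = abs(partial_sum(0.0, N) - 0.0)  # f(0) = |0| = 0
    print(N, err)  # error shrinks with N but is never exactly 0
```

This is exactly the "infinite sum against a basis" sense that analysts and physicists use, as opposed to the finite linear combinations in the algebraic definition under discussion.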
• Jul 23rd 2010, 02:18 AM
HallsofIvy
Quote:

Originally Posted by buri
Sorry I'm coming back to this, but I don't see intuitively how this can be true. For example, lets say I take the vector space of infinite-tuples so x = (x1, x2, ...). How is it that I can write this as a linear combination of FINITE number of elements of S (a basis of this vector space)? It just seems that I'd require an infinite number of elements of S to do so. Can you explain this to me?

Look carefully at what "this" refers to above. The way you use "this", it refers to the vector space, not to a single vector. There exists an infinite set of such vectors, a basis, such that any member of the space, that is, any single vector, can be written as a linear combination of a finite number of them. That is, as tonio said, part of the definition of "basis".
• Jul 23rd 2010, 02:20 AM
HallsofIvy
Quote:

Originally Posted by Ackbeet
In infinite-dimensional vector spaces, there can be functions in the space which require an infinite sum of vectors in the basis. [...] I'm going to need an infinite sum of sines and cosines in order to write the triangle function as a linear combination of sines and cosines.

Ackbeet, that's a different kind of basis, an orthonormal (Schauder-type) basis, where infinite sums are allowed. The algebraic notion here is the Hamel basis: in any vector space, even an infinite-dimensional one, there must exist a collection of vectors such that every vector can be written as a linear combination of a finite number of them.
• Jul 23rd 2010, 02:26 AM
Ackbeet
Ah. I see it's been a little too long since I've studied this stuff. I get tangled up in careful definitions, I suppose. The wiki on Linear Combinations certainly agrees with you.
• Jul 23rd 2010, 05:10 AM
Ackbeet
I've also been hobnobbing with physicists too long. (Wink) Here's a quote from Griffiths's Introduction to Quantum Mechanics, 1st Ed., p. 102, which I thought you'd find cute. He's talking about the eigenfunctions of the $\displaystyle id/dx$ operator, and the $\displaystyle x$ (multiplication by $\displaystyle x$) operator.

Quote:

Thus

$\displaystyle \displaystyle{f_{\lambda}(x)=\frac{1}{\sqrt{2\pi}} \,e^{-i\lambda x},\;\text{with}\;\langle f_{\lambda}|f_{\mu}\rangle=\delta(\lambda-\mu),}$

are the "normalized" eigenfunctions of $\displaystyle i\hat{D}$, and

$\displaystyle g_{\lambda}(x)=\delta(x-\lambda),\;\text{with}\;\langle g_{\lambda}|g_{\mu}\rangle=\delta(\lambda-\mu),$

are the "normalized" eigenfunctions of $\displaystyle \hat{x}$.
At this point he has footnote 21:

Quote:

We are engaged here in a dangerous stretching of the rules, pioneered by Dirac (who had a kind of inspired confidence that he could get away with it) and disparaged by von Neumann (who was more sensitive to mathematical niceties), in their rival classics... Dirac notation invites us to apply the language and methods of linear algebra to functions that lie in the "almost normalizable" suburbs of Hilbert space. It turns out to be powerful and effective beyond any reasonable expectation.
Then back to the main text:

Quote:

What if we use the "normalized" eigenfunctions of $\displaystyle i\hat{D}$ and $\displaystyle \hat{x}$ as bases for $\displaystyle L_{2}$?
Then footnote 22:

Quote:

That's right: We're going to use, as bases, sets of functions none of which is actually in the space! They may not be normalizable, but they are complete, and that's all we need.
So you can see here that some physicists don't give a rip about the distinction between a Hamel basis and an orthonormal (Schauder-type) basis. All they care about is whether they can write a given function as a sum (possibly infinite) of coefficients against a set of "simpler" or "easier to work with" functions, often the eigenfunctions of a Hermitian operator.

And, if you think about it, in applied mathematics I'm not sure I can think up an example where you'd actually work with a bona fide Hamel basis of an infinite-dimensional space. In any example I can think of (which is certainly not all examples), you'd go for an orthonormal basis and infinite sums instead.

Is it true that every vector space has a "basis" as you've described? That would have to be a matter of definition: what about my example? Would you not call it a vector space, then? If you look at the axioms of a vector space, I think my function example would fit the bill.
• Jul 23rd 2010, 06:48 AM
buri
Quote:

Originally Posted by HallsofIvy
[...] There exist an infinite number of such vectors, a basis, such that any member of the space, that is any vector, can be written as a linear combination of a finite number of them. That is, as tonio said, part of the definition of "basis".

Okay, maybe I'm just getting too wrapped up in all this, since it's true by the definition of a basis.

I KNOW that such a basis exists, by the maximal principle, but intuitively I don't see how. How could I possibly write an infinite tuple as a linear combination of a finite number of elements of the basis?

Is there an infinite-dimensional vector space for which we know a basis explicitly? Maybe seeing an example would help me understand this better.

EDIT
I'm considering the vector space of infinite tuples, x = (x1, x2, ...). By the result above, this can be written as a finite linear combination of vectors in S (if S is a basis of V), so we'd have something like x = a1s1 + a2s2 + ... + ansn. I guess my confusion arises because I tend to think of s1, s2, ..., sn as the "standard basis" (i.e., (1,0,0,...), (0,1,0,...), and so on), for which I would need an infinite linear combination to get x. BUT this "standard basis" isn't a basis by the definition. So a basis that does work maybe has elements like s1 = (1, 0, 3/4, 2, ...), with many nonzero entries, so that each entry of x is "taken care of". Maybe that way only a finite number would be needed.

See what I mean? Seriously, this is freaky stuff lol
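The intuition in the EDIT above can be checked mechanically. A sketch (the helper names `e` and `combo` are mine, purely illustrative): any finite linear combination of the "standard" unit sequences e_i has finite support, so a sequence like (1, 1, 1, ...) can never be such a combination, which is exactly why the e_i fail to be a basis of the full sequence space, and why a true basis must contain far stranger vectors.

```python
# Represent a sequence with finitely many nonzero entries as a dict
# {index: value}; the standard unit sequence e_i is {i: 1.0}.

def e(i):
    """Unit sequence e_i, nonzero only at position i."""
    return {i: 1.0}

def combo(terms):
    """Finite linear combination sum_j c_j * v_j, as a finite-support dict."""
    out = {}
    for coeff, vec in terms:
        for idx, val in vec.items():
            out[idx] = out.get(idx, 0.0) + coeff * val
    return {k: v for k, v in out.items() if v != 0.0}

x = combo([(2.0, e(0)), (-1.0, e(3)), (0.5, e(7))])
print(sorted(x))  # support is finite: [0, 3, 7]
# The all-ones sequence (1, 1, 1, ...) has infinite support, so it can
# never arise from finitely many e(i) -- the e(i) span only the
# finite-support sequences, not the whole space.
```

So the e_i are a (Hamel) basis of the subspace of finitely-supported sequences, which is precisely the space C(S,F) from the original problem, but not of the full space of infinite tuples.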
• Jul 23rd 2010, 08:28 AM
buri
Quote:

Originally Posted by Ackbeet
Is it true that every vector space has a "basis" as you've described? That would have to be a matter of definition: what about my example? Would you not call it a vector space, then? If you look at the axioms of a vector space, I think my function example would fit the bill.

Yes, EVERY vector space has a basis (in the sense described in my original post), but the one you've described is the "wrong" kind (i.e., one that doesn't fit the definition I've given). A basis in my sense DOES exist as a consequence of the maximal principle (or, equivalently, Zorn's Lemma/the Axiom of Choice); we just don't know what it is explicitly.
• Jul 23rd 2010, 08:36 AM
Ackbeet
So what if you don't believe in the Axiom of Choice? (Evilgrin)
• Jul 23rd 2010, 09:14 AM
buri
Quote:

Originally Posted by Ackbeet
So what if you don't believe in the Axiom of Choice? (Evilgrin)

Then I wouldn't know what to say...lol
• Jul 23rd 2010, 09:46 AM
Ackbeet
Well, some mathematicians don't believe in the AoC. My real analysis professor in graduate school didn't.

I have to admit that I'm not terribly interested in the differences between the kinds of bases right now, though I'm not saying they're unimportant.

I'm not competent to answer your OP, I'm afraid. If you still have questions, see if someone else can help.
• Jul 23rd 2010, 10:52 AM
buri
Thanks anyways though! I've learned a lot nonetheless!