Maybe you can search for 'Hilbert spaces'.
I first introduce the vector along the lines 'something with magnitude and direction'. Later on the definition of a vector becomes generic - 'an element of a vector space'.
Euclidean spaces (n=2 and n=3) are something we can all visualize. However, when describing other vector spaces, such as the set of polynomials or the set of continuous functions, all this stuff becomes abstract, and many students find it severely boring. Are there other vector spaces which students will find attractive? Are there any real-life examples of vector spaces which would be a good hook?
Thanks in advance for any replies.
I think it depends: if your students are studying mathematics and they find getting into abstract matters "boring", then they may have to rethink whether what they're studying is the best thing for them. Now, if your students are not mathematics (or at least science) students, say in economics, then they may not need that material anyway.
Nemesis
one way in which people can "relate" to linear algebra is through geometry. choosing a basis, in this view, amounts to "picking a coordinate system", so that we can be specific about "where" in the space we are (relative to "the origin"). so the Euclidean spaces R^2 and R^3 are useful vector spaces for (hopefully) obvious reasons.
but linear algebra has so many more applications than just geometry. for example, you could have a system which depends linearly on n parameters, none of which affects the others (this is another "physical" interpretation of what "linear independence" means). so now we have a "state space", where a "point" represents not an actual spot in space, but some configuration of our system. i'll give a simple example:
a farmer has cows and ducks. he counts heads and legs and finds he has 7 heads and 18 legs. how many of each does he have? now, it's easy to solve this "puzzle" without all of the machinery of linear algebra, but the point is, the solution (2 cows, 5 ducks) does not mean the same thing as the point (2,5) in the plane. it is an abstract point in the "state space" of all possible linear combinations of cows and ducks. now, "cows" and "ducks" could actually be anything (including some of mathematics' favorite animals, functions), and the linear independence of "cows" and "ducks" boils down to the fact that if you buy another cow, it doesn't affect the number of ducks you have.
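to make the head/leg puzzle concrete, here is a minimal python sketch of the underlying 2x2 linear system (the elimination step is my own choice of solution method, not the only one):

```python
# the system: cows + ducks = 7 (heads), 4*cows + 2*ducks = 18 (legs)
heads, legs = 7, 18

# eliminate "ducks": subtracting 2*(heads equation) from the legs
# equation gives 4c + 2d - 2(c + d) = 2c, so 2*cows = legs - 2*heads
cows = (legs - 2 * heads) // 2
ducks = heads - cows

print(cows, ducks)  # 2 5
```

the point (cows, ducks) = (2, 5) is a point in the "state space" of herds, not a location in the plane.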
in EXACTLY the same way, changing one coefficient of a polynomial doesn't change any of the other terms: the different powers of x are linearly independent. and here's the really good part: in the real world, a lot of the problems we want to solve AREN'T linear. the way a system behaves can be quite complicated. but a lot of interesting systems are "locally" linear (this is true of systems governed by differentiable functions). for example, if you want to approximate a surface near a point (like a torus, or doughnut), the tangent plane is a good approximation (as long as you don't go too far). wait a minute...a plane; could that have something to do with these "linear algebra" things? indeed it does, and if you give that tangent plane a "basis" (coordinate system), you can describe it by two slope lines (the partial derivatives) passing through the tangent (or "touching") point.
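the "locally linear" idea can be sketched in a few lines of python; the surface f and the base point (a, b) below are my own arbitrary choices, just to illustrate how the tangent plane behaves:

```python
import math

# a differentiable surface z = f(x, y); near a point it looks like
# its tangent plane (a 2-dimensional vector space, shifted to the point)
def f(x, y):
    return math.sin(x) * math.cos(y)

a, b = 0.5, 0.3
fx = math.cos(a) * math.cos(b)    # partial derivative in x at (a, b)
fy = -math.sin(a) * math.sin(b)   # partial derivative in y at (a, b)

def tangent_plane(x, y):
    # the two "slope" directions through (a, b) are the basis of the plane
    return f(a, b) + fx * (x - a) + fy * (y - b)

# close to (a, b) the approximation is good; farther away it degrades
print(abs(f(0.51, 0.31) - tangent_plane(0.51, 0.31)))  # very small
print(abs(f(1.5, 1.3) - tangent_plane(1.5, 1.3)))      # much larger
```

the error grows roughly quadratically with the distance from (a, b), which is exactly the "as long as you don't go too far" caveat above.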
another way in which vector spaces arise in algebra is by considering extensions of a field. for example, {1,i} forms a basis of the complex numbers as a vector space over the real numbers. this allows us to visualize complex numbers as lying in the Euclidean plane (we take z = a+bi, and think of it as the point (a,b) in the plane). the same thing happens whenever we have some field E that contains a smaller field F: we can ask how much "bigger" E is (we call this number the dimension of E over F).
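a quick python sketch of the correspondence z = a+bi <-> (a,b); the particular z and w below are arbitrary, the point is that complex addition and real scaling match the componentwise operations on pairs:

```python
# the complex numbers are a 2-dimensional real vector space with basis {1, i}
def to_pair(z):
    # z = a + bi  corresponds to the point (a, b) in the plane
    return (z.real, z.imag)

z, w = 3 + 4j, 1 - 2j

# vector addition in C matches componentwise addition of pairs
assert to_pair(z + w) == (to_pair(z)[0] + to_pair(w)[0],
                          to_pair(z)[1] + to_pair(w)[1])

# scaling by a REAL number matches componentwise scaling
assert to_pair(2.5 * z) == (2.5 * to_pair(z)[0], 2.5 * to_pair(z)[1])

print(to_pair(z + w))  # (4.0, 2.0)
```

note that multiplying two complex numbers together uses more than the vector space structure; as a vector space over R, we only get to add and to scale by reals.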
on a more general level: with any kind of mathematical "thingy", the general "thingy" is hard to describe. it doesn't really make that much sense to say "what" a vector really is, because so many different kinds of disparate things might be vectors. what one can say, and what is far more profitable to say, is how a vector "behaves". that is what the vector space axioms are for. the "reason" one studies the "abstract" form of vectors, is so you don't have to keep re-inventing the wheel every time you come across a new collection of things that satisfies the axioms. if something (like, for example, the rank-nullity theorem) is true for every (in this case finite-dimensional) vector space, then as soon as you've verified your current objects of study ARE a (finite-dimensional) vector space, boom! you can use the rank-nullity theorem. it saves time.