Linear Algebra - Basis and Span

What is the difference between a **basis** and a group of vectors which **span** the space? The two properties for a basis are for the vectors to be linearly independent and to span the space. Given a family of vectors, let's say their linear combinations generate the whole space; then they are said to span the space - so then they must be linearly independent. A trivial example is the unit vectors. I've read numerous times that they're a spanning set, and on the other hand that they're a basis?

Re: Linear Algebra - Basis and Span

A basis IS a spanning set....but NOT vice-versa. A spanning set does not need to be linearly independent; independence is a kind of "minimality" condition. For example, if you add another vector to a basis, it will no longer be a basis, but it will STILL be a spanning set.

A spanning set NEED NOT BE linearly independent....and a linearly independent set need not be a spanning set. A basis is BOTH. Bases are "just the right size": big enough to span the space (spanning), and small enough not to have any redundancy (linear independence). A basis is special: its size captures the DIMENSION of the space, which is an invariant property of a vector space (it never changes...even though you CAN change from one basis for a space to a different basis).

Spanning is easy to describe: a set spans a vector space if and only if every vector in the space can be written as a linear combination of the spanning vectors. Linear independence is a bit more technical: a set of vectors is linearly independent if and only if the only way to write the 0-vector as a linear combination of them is the trivial one (every coefficient is 0).

For example: the set {(1,0,1), (0,1,0), (3,2,3)} in R^{3} is NOT linearly independent. Why? Because:

(3)(1,0,1) + (2)(0,1,0) + (-1)(3,2,3) = (3,0,3) + (0,2,0) - (3,2,3) = (0,0,0) is a non-trivial linear combination that sums to the 0-vector.
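Just to sanity-check that arithmetic, here is a tiny Python sketch (the helper name `lin_comb` is mine, not from the post):

```python
def lin_comb(coeffs, vectors):
    """Componentwise sum of coeff * vector over all coefficient/vector pairs."""
    dim = len(vectors[0])
    return tuple(sum(c * v[i] for c, v in zip(coeffs, vectors)) for i in range(dim))

# The non-trivial combination from above really does hit the zero vector:
print(lin_comb([3, 2, -1], [(1, 0, 1), (0, 1, 0), (3, 2, 3)]))  # (0, 0, 0)
```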

Given a spanning set, it can be "trimmed down" to a basis by eliminating vectors that are linear combinations of the others (if there are any). Given a linearly independent set, it can be extended to a basis (if it isn't one already) by adding vectors outside the current span. A basis cannot be added to or subtracted from without "breaking" the basis property.

R^{3} has, not surprisingly, 3 dimensions (or "degrees of freedom"...in everyday language we often describe these as: left/right (often the x-axis), in/out (the y-axis) and up/down (the z-axis)). Any basis for R^{3} has to have exactly 3 elements, no more, no less.

It is pretty obvious the set {(1,0,0),(0,1,0),(0,0,1)} is a spanning set (or GENERATING set) for R^{3}: any vector (x,y,z) can be written as the linear combination x(1,0,0) + y(0,1,0) + z(0,0,1) = (x,0,0) + (0,y,0) + (0,0,z) = (x,y,z). Showing it is linearly independent takes slightly more work:

Suppose a(1,0,0) + b(0,1,0) + c(0,0,1) = (a,b,c) = (0,0,0). Looking at the first coordinate, we see only (1,0,0) contributes to it, so we must have a = 0. Similarly, we must have b = c = 0, as well, so the only way we can sum these up to get the 0-vector is if a,b, and c are ALL 0. Thus it is linearly independent.

The following would be a good exercise for you. Which of the following are linearly independent sets, which span R^{3}, and which are bases?

1. S = {(1,0,0), (0,1,1)}

2. S = {(1,0,0), (1,0,1), (1,1,0)}

3. S = {(2,1,1), (2,-1,1), (0,2,0)}

4. S = {(1,0,1), (0,1,1), (-1,-1,0)}

5. S = {(1,1,0), (2,0,4), (3,-2,1), (0,0,5)}
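If you want to check your answers afterwards, a small exact-arithmetic rank computation settles all three questions at once: a set of k vectors in R^{3} is independent iff its rank is k, spans iff its rank is 3, and is a basis iff both. This is a sketch of my own (the names `rank` and `classify` are not from the post), demonstrated on the earlier dependent example rather than the exercises, to avoid spoilers:

```python
from fractions import Fraction

def rank(vectors):
    """Row-reduce with exact fractions and count the pivot rows."""
    rows = [[Fraction(x) for x in v] for v in vectors]
    if not rows:
        return 0
    r = 0
    for col in range(len(rows[0])):
        # Find a row at or below position r with a non-zero entry in this column.
        pivot = next((i for i in range(r, len(rows)) if rows[i][col] != 0), None)
        if pivot is None:
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        # Clear this column in every other row.
        for i in range(len(rows)):
            if i != r and rows[i][col] != 0:
                f = rows[i][col] / rows[r][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

def classify(vectors, dim=3):
    """Report independence / spanning / basis for vectors in R^dim."""
    r = rank(vectors)
    return {"independent": r == len(vectors), "spans": r == dim,
            "basis": r == len(vectors) == dim}

print(classify([(1, 0, 1), (0, 1, 0), (3, 2, 3)]))
# {'independent': False, 'spans': False, 'basis': False}
```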

Re: Linear Algebra - Basis and Span

Quote:

Originally Posted by

**astartleddeer** What is the difference between a **basis** and a group of vectors which **span** the space? The two properties for a basis are for the vectors to be linearly independent and to span the space. Given a family of vectors, let's say their linear combinations generate the whole space; then they are said to span the space - so then they must be linearly independent.

There is your difficulty then. A spanning set does NOT have to be linearly independent. For example, the **entire** set of all vectors in a vector space trivially spans the space - but is certainly NOT "linearly independent". I have no idea where you got that idea.

Quote:

A trivial example is the unit vectors. I've read numerous times they're a spanning set, and on the other hand they're a basis?

And I suspect you are using the term "unit vectors" incorrectly here. There exist an infinite number of unit vectors in, say, R^{2}, but a basis for R^{2} must have exactly two vectors. The unit vectors <1, 0> and <0, 1> form a basis. The unit vectors <1, 0> and <-1, 0> do not - they do not even span the space. The vectors <1, 0>, <0, 1> and <sqrt(2)/2, sqrt(2)/2> are all unit vectors and span the space, but do not form a basis because they are NOT "linearly independent".

In fact, "span" and "linearly independent" are, in a sense, "opposites". If you take a sufficiently large collection of vectors - for example, as above, the set of **all** vectors in the space - it will span the space but **won't** be independent. On the other hand, a set containing a single (non-zero) vector is **always** "independent" but generally won't span the space. If you have a set of vectors that spans the space but is NOT independent, you can always "discard" a redundant vector (one that is a linear combination of the others) and still have a set that spans the space. If you have a set of vectors that is independent but does not span the space, you can always **add** a vector outside their span and still have a set that is independent.

If you start with a set of vectors that spans the space but is NOT independent, you can keep discarding redundant vectors, always having a spanning set, until the remaining vectors **are** independent, and so form a basis. If you start with a set that is independent but does not span the space, you can keep adding new vectors, always having an independent set, until they **do** span the space, and so form a basis.

A vector space is said to be "finite dimensional" if and only if there exists a finite set of vectors that spans it. (There exist infinite dimensional spaces, but they are dealt with in "Functional Analysis", not "Linear Algebra". Linear algebra is, effectively, the "theory of finite dimensional vector spaces".) If you perform the above "operation" on such a set, discarding vectors until you have independent vectors, or if you start with a single vector, adding vectors until you get a set that spans the space, you get, remarkably, the same "size" set. A fundamental theorem about finite dimensional vector spaces, proved in any Linear Algebra textbook, is that all sets of vectors that are both independent and spanning contain the same number of vectors - the **dimension** of the space.

That is, any basis of a (finite dimensional) vector space has three properties:

1) It spans the space.

2) The vectors in the basis are linearly independent.

3) The number of vectors in the basis is equal to the dimension of the space.

Further, any **two** of those imply the third!

So, for example, having shown that the vectors <1, 0, 0>, <0, 1, 0>, <0, 0, 1> are both independent and span R^{3}, it follows that R^{3} is "three dimensional" and so **any** basis for R^{3} contains exactly three vectors. Further, if a set of vectors spans R^{3} **and** contains three vectors, they must be independent. But just "spans" alone is not enough to say that they are independent. For example, {<1, 0, 0>, <0, 1, 0>, <0, 0, 1>, <1, 1, 1>} spans R^{3}. Since any vector in R^{3} can be written as a linear combination of the first three, any vector certainly can be written as a linear combination of all four (with the coefficient of the last equal to 0). But since there are more than three vectors in the set they **cannot** be "independent".

And that can be shown directly from the definition of "linearly independent": (1)<1, 0, 0>+ (1)<0, 1, 0>+ (1)<0, 0, 1>+ (-1)<1, 1, 1>= <0, 0, 0>.

Re: Linear Algebra - Basis and Span

Ok, I'm going to print this off now and then go away and set up examples of my own.

Thank you for a first-class response.