Linear independence question Euclidean 3-space

Getting quite confused with how to solve this one,

I have 3 vectors v1 = (1, 1, 1), v2 = (1, m, 1) and v3 = (1, 1, n)

I'm asked to find the values of m and n that make v1, v2 and v3 linearly independent - any help would be appreciated

I want to know if I was on the right track - I was thinking of putting in a value which isn't a multiple of 1, e.g. pi,

or making sure that the n in v3 = (1, 1, n) is not a linear combination of the m in v2

i.e. taking v1 = (1, 1, 1), v2 = (1, 2, 1) and v3 = (1, 1, 5), for example - would that work?
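One quick way to check a concrete choice like this (a sketch, using numpy; the values m = 2, n = 5 are the ones from the example above): three vectors in R^3 are linearly independent exactly when the matrix with them as rows has a nonzero determinant.

```python
import numpy as np

# Matrix with v1, v2, v3 as rows, using the example values m = 2, n = 5.
A = np.array([[1, 1, 1],
              [1, 2, 1],   # v2 with m = 2
              [1, 1, 5]])  # v3 with n = 5

# A nonzero determinant means the only solution of a*v1 + b*v2 + c*v3 = 0
# is a = b = c = 0, i.e. the vectors are linearly independent.
print(np.linalg.det(A))  # nonzero, so this particular choice works
```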

I have the correct answer at the back of the book but would like to know the steps!

Re: Linear independence question Euclidean 3-space

Well, the **definition** of "linearly independent" is that no linear combination, au+ bv+ cw, is equal to 0 except the trivial one a= b= c= 0. So look at a(1, 1, 1)+ b(1, m, 1)+ c(1, 1, n)= (0, 0, 0).

That is, of course, equivalent to a+ b+ c= 0, a+ mb+ c= 0, a+ b+ nc= 0. Solve those three equations for a, b, and c. The solution, of course, will depend on m and n. Find the values of m and n for which the obvious a= b= c= 0 is NOT the only solution (probably because they make a denominator 0); for every other choice of m and n the vectors are linearly independent.
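The elimination described above can also be carried out symbolically; here is a sketch using sympy (the symbol names are my own):

```python
import sympy as sp

m, n = sp.symbols('m n')

# Matrix with v1, v2, v3 as rows. The homogeneous system
# a*v1 + b*v2 + c*v3 = 0 has only the trivial solution
# exactly when this determinant is nonzero.
A = sp.Matrix([[1, 1, 1],
               [1, m, 1],
               [1, 1, n]])

det = sp.factor(A.det())
print(det)  # factors as (m - 1)*(n - 1)
```

So the determinant vanishes exactly when m = 1 or n = 1; for every other choice of m and n the three vectors are linearly independent, which matches the example (1, 1, 1), (1, 2, 1), (1, 1, 5) from the question.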

Re: Linear independence question Euclidean 3-space

I'm just learning these proofs in Apostol. Is there a greater significance to linear independence, and to the 0 vector being spanned uniquely in the trivial way rather than non-trivially? The span L(S) is the set of all linear combinations of a set S of vectors - in general, can a vector X in the span be produced by different linear combinations, or only in a unique way? I'm working through both Apostol and Strang at the same time.

also, if you have cv+dw with c+d=1, how does the condition c+d=1 lead to all such linear combinations lying on a line?

Re: Linear independence question Euclidean 3-space

"Linear independence" and being able to write the 0 vector in a unique way mean that we can write **any** vector in a unique way. To see that, imagine that we can write a vector v in two different ways: $\displaystyle v= a_1v_1+ a_2v_2+ \cdots+ a_nv_n$ and $\displaystyle v= b_1v_1+ b_2v_2+ \cdots+ b_nv_n$, where at least **some** of the "b"s are different from the corresponding "a"s. Subtracting the two equations, $\displaystyle 0= (a_1- b_1)v_1+ (a_2- b_2)v_2+ \cdots+ (a_n- b_n)v_n$, where at least some of the $\displaystyle a_i- b_i$ are not 0. Of course, the 0 vector can also be written $\displaystyle 0v_1+ 0v_2+ \cdots+ 0v_n$, so if there is one vector that can be written in more than one way as a linear combination of these vectors, then the 0 vector can be written in more than one way.

If we add to this the requirement that the set **span** the vector space, that **every** vector can be written as a linear combination of the vectors in the set (so we have a **basis**), we have the nice property that **every** vector can be written **uniquely** as a combination of the vectors in the set.
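That subtraction argument can be illustrated numerically; a sketch with numpy (the vectors w1, w2, w3 are my own example of a deliberately dependent set):

```python
import numpy as np

# A dependent set in R^3: w3 = w1 + w2.
w1 = np.array([1.0, 0.0, 0.0])
w2 = np.array([0.0, 1.0, 0.0])
w3 = w1 + w2

# Two different coefficient triples (a, b, c) producing the same vector:
v_first  = 1*w1 + 1*w2 + 0*w3   # coefficients (1, 1, 0)
v_second = 0*w1 + 0*w2 + 1*w3   # coefficients (0, 0, 1)

# Subtracting the two representations gives a NONTRIVIAL
# representation of the zero vector, with coefficients (1, 1, -1):
zero = (1-0)*w1 + (1-0)*w2 + (0-1)*w3
print(v_first, v_second, zero)
```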

Quote:

also if you have cv+dw and c+d=1 how does this lead to all linear combinations being on a line based on c+d=1?

You seem to have left "linear algebra" now. Imagine v and w drawn as "vectors" with tails at 0. Their tips are two points and so determine a straight line. If we draw cv+ dw, with c+ d= 1, as a "vector" with its tail at 0, its tip will lie on that line.
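A quick numerical illustration of that picture (a sketch; the particular v and w are arbitrary choices of mine): writing d = 1 - c, the tip of cv + dw is w + c(v - w), a point on the line through the tips of v and w.

```python
import numpy as np

v = np.array([2.0, 0.0])
w = np.array([0.0, 3.0])

# For several values of c (with d = 1 - c), check that the tip of
# c*v + d*w lies on the line through the tips of v and w: the
# 2D cross product of (p - w) with (v - w) must be zero.
for c in (-0.5, 0.0, 0.3, 1.0, 2.0):
    p = c*v + (1 - c)*w
    cross = (p - w)[0]*(v - w)[1] - (p - w)[1]*(v - w)[0]
    print(c, p, cross)  # cross is 0 for every c
```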

Re: Linear independence question Euclidean 3-space

thanks very much! I'm having terrible health problems...

so if one vector A can be written in 2 different ways as a linear combination, then the set is linearly dependent and 0 isn't spanned uniquely? I can remember the proofs from Apostol.

so if every vector in the space can be written as a linear combination of these vectors, then by definition they are basis vectors?

say every vector in R4 can be written in terms of a basis. Since the basis vectors, multiplied by scalars, could be shorter or longer than length 1 (e.g. i, j, k, l in R4), there isn't a unique basis?
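On the non-uniqueness point, a small sketch (in R2 for brevity; the two bases and the vector x are my own example): the same vector has different coordinates relative to different bases, so a space has many bases.

```python
import numpy as np

x = np.array([3.0, 1.0])

# Two different bases of R^2, stored as matrix columns:
B1 = np.array([[1.0, 0.0],
               [0.0, 1.0]])      # standard basis e1, e2
B2 = np.array([[1.0, 1.0],
               [1.0, -1.0]])     # columns (1, 1) and (1, -1)

# Coordinates of the same x relative to each basis:
c1 = np.linalg.solve(B1, x)  # coordinates in the standard basis
c2 = np.linalg.solve(B2, x)  # different coordinates, same vector
print(c1, c2)
```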

a vector subspace only has to satisfy the 2 properties of being closed under addition and scalar multiplication, and if you relax the conditions you move into abstract algebra, where you can prove things like the fundamental theorem of algebra? Valenzia seems to be a better, more abstract text than Strang; I guess the Springer texts are usually better

is there a deeper theorem or problem technique coming from the properties you stated -

1. a vector can be spanned in different ways, i.e. the (ai - bi) are not all 0: linear dependence? doesn't linear dependence mean that at least 2 vectors in the set are parallel, i.e. collinear?

2. the set spans the whole vector space and is a basis

thanks very much again for your wonderful insight and time

Re: Linear independence question Euclidean 3-space

wow, there is a purity and a fundamentalness to bases - they are like the quarks of the vector space!

let me define a differential basis vector dV, where the unit vector=|1| is the integral from 0 to infinity of dV

Re: Linear independence question Euclidean 3-space

in Gauss's law, don't you have to take the differential surface elements small enough that the E vector is constant over each one, so you can factor out |E| and simplify the calculation?