Hi guys, I have these two vectors : v1 = (1,0,1) ; v2 = (1,1,-1).
Now I have to find a basis of R³ which contains v1 and v2.
How can I do that??
regards
you can add a canonical vector of R³ (one of (1,0,0), (0,1,0), or (0,0,1)) and you'll get (well, not yet, you need to verify first) a basis.
since you then have three vectors, you only need to prove that those vectors are linearly independent. (you can prove it by just computing the determinant of the matrix you get by stacking these vectors as rows.)
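As a quick sanity check, here is a sketch of that determinant test in Python with NumPy (the choice of e1 = (1,0,0) as the added canonical vector is just one of the three options):

```python
import numpy as np

# v1, v2 from the question, plus the canonical vector e1 = (1, 0, 0)
v1 = [1, 0, 1]
v2 = [1, 1, -1]
e1 = [1, 0, 0]

# stack the three vectors as rows and take the determinant
M = np.array([v1, v2, e1])
det = np.linalg.det(M)

print(det)  # -1.0 (up to floating-point error): nonzero, so {v1, v2, e1} is a basis of R^3
```

Any nonzero value (here -1) means the three vectors are linearly independent.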
i'm sorry, i forgot to say that!
when you studied linear systems of equations: to analyze them, if the system is a square one and its determinant is not zero, then the system has a unique solution.
thus, if that determinant is not zero, then those vectors are linearly independent.
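To illustrate the square-system case, here is a small sketch (the right-hand side b is made up for the example): since the coefficient determinant is nonzero, `np.linalg.solve` returns the one and only solution.

```python
import numpy as np

# a square system A x = b whose coefficient determinant is nonzero
A = np.array([[1.0, 0.0,  1.0],
              [1.0, 1.0, -1.0],
              [1.0, 0.0,  0.0]])
b = np.array([1.0, 2.0, 3.0])   # example right-hand side

assert abs(np.linalg.det(A)) > 1e-12   # det != 0, so a unique solution exists
x = np.linalg.solve(A, b)
print(x)   # the unique solution: [ 3. -3. -2.]
```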
thanks! and how many solutions do we have if the determinant is zero? infinite solutions or no solutions?
Another thing: if the system is not square, how can you check linear independence and the number of solutions?
And, when you compute the number of solutions through the determinant, do you compute the determinant on the coefficient matrix or on the augmented matrix?
What I don't understand is, how can I transform in echelon form a matrix like this:
1 0
2 2
1 3
Since we have 3 rows but 2 columns, how is it possible to get a pivot in each row? Unless one row is zero, of course. Perhaps a zero row is the only possibility?
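That intuition is right: with more rows than columns, row reduction must produce at least one zero row. A quick sketch using SymPy's `Matrix.rref` on the matrix above:

```python
import sympy

# the 3x2 matrix from the question
M = sympy.Matrix([[1, 0],
                  [2, 2],
                  [1, 3]])

# rref() returns the reduced row echelon form and the pivot column indices
R, pivots = M.rref()
print(R)        # Matrix([[1, 0], [0, 1], [0, 0]]) -- the third row is zero
print(pivots)   # (0, 1): only two pivots, one per column
```

There can be at most one pivot per column, so a 3x2 matrix has at most 2 pivots and the remaining row reduces to zero.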
I'm sorry, where did you get those? The original question was about v1 = (1,0,1) ; v2 = (1,1,-1). Those two clearly are independent, so you just need a third vector that is independent of those two. Krizalid suggested using any one of the three "canonical" basis vectors for R³: (1, 0, 0), (0, 1, 0), or (0, 0, 1). Any one of those, if it is independent of (1, 0, 1) and (1, 1, -1), will give a basis for R³ together with those two. Let's check (1, 0, 0). You could write either

1  0  1
1  1 -1
1  0  0

using the three vectors as rows, or

1  1  1
0  1  0
1 -1  0

using them as columns. In either case, if the determinant is not 0, or the matrix row reduces to a matrix not having any "0" rows, then they are independent and form a basis for R³.
Personally, I prefer to use the basic definition of "independent". Suppose a(1, 0, 1) + b(1, 1, -1) + c(1, 0, 0) = (0, 0, 0). Combining the left side, we have (a + b + c, b, a - b) = (0, 0, 0) and must have a + b + c = 0, b = 0, a - b = 0. Of course, the second equation says "b = 0". Putting that into the third equation, a - b = a = 0. Putting both a = 0 and b = 0 into the first equation, a + b + c = c = 0. The only way a linear combination of those three vectors can be equal to 0 is if the coefficients, a, b, and c, are all 0. That is the definition of "independent", so these three vectors are independent and form a basis for R³.
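The same check can be phrased as a rank computation: writing the three vectors as the columns of a matrix, the equation a·v1 + b·v2 + c·e1 = 0 has only the trivial solution exactly when the matrix has full rank. A minimal sketch with NumPy:

```python
import numpy as np

# columns are (1,0,1), (1,1,-1), (1,0,0); the system is A @ (a, b, c) = 0
A = np.array([[1,  1, 1],
              [0,  1, 0],
              [1, -1, 0]])

rank = np.linalg.matrix_rank(A)
print(rank)   # 3: full rank, so a = b = c = 0 is the only solution
```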
Oh yes, don't worry, that matrix above was just an example. Basically, from the first question I was curious to know more about the rules behind solving a system of linear equations through determinants.
I know the method you are talking about, and it's fine; I just wanted to know, in addition to the first question, how to compute the number of solutions of a linear system through a determinant.
Since a determinant different from zero tells us that the vectors are independent, and in the case of linear systems that there is a unique solution, it seemed natural to ask how many solutions a determinant equal to zero implies (or a negative one, for that matter).
Anyway, thanks dude, and if anyone knows the answer to the little question about determinant, please tell me
if it's zero, the system could have no solution or infinitely many solutions (and note a negative determinant is still nonzero, so it gives a unique solution just the same); but what really matters here is the linear independence.
for a non-square system, row-reduce the matrix: the rank of the coefficient matrix must equal the rank of the augmented matrix for the system to be consistent, and both must equal the number of variables for the solution to be unique.
you can't get the solutions themselves by using determinants, but by row reduction you can.
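The rank comparison above can be sketched in a few lines of NumPy (the helper name `count_solutions` is made up for this example; the rule it applies is the standard rank criterion):

```python
import numpy as np

def count_solutions(A, b):
    """Classify A x = b by comparing the rank of A with the rank of [A | b]."""
    aug = np.column_stack([A, b])
    rA = np.linalg.matrix_rank(A)
    rAug = np.linalg.matrix_rank(aug)
    if rA < rAug:
        return "none"              # inconsistent: b adds a new pivot
    if rA == A.shape[1]:
        return "unique"            # rank equals the number of variables
    return "infinitely many"       # consistent, but with free variables

A = np.array([[1.0, 1.0],
              [2.0, 2.0]])         # singular: det = 0
print(count_solutions(A, np.array([1.0, 2.0])))   # infinitely many
print(count_solutions(A, np.array([1.0, 3.0])))   # none
```

This shows concretely why a zero determinant alone can't decide between "no solution" and "infinitely many": both cases above have the same singular coefficient matrix, and only the right-hand side distinguishes them.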