I would use Kramer's rule here. Kramer's Rule tells you that with the solution vector x = (x_1, ..., x_n) you have x_k = det(A_k)/det(A).
If now det(A) = 0 and yet det(A_k) ≠ 0, what does that tell you?
A is a square matrix of order n and b is a vector in R^n. For every k = 1, 2, ..., n, A_k is the matrix obtained from A by replacing the k-th column with the vector b. Given: det(A) = 0.
Prove that if det(A_k) ≠ 0 for some k, then there is no solution to Ax = b.
This is what the question means if it was unclear:
I simply replace the first column of A with vector b: A_1 = ( b  a_2  ...  a_n ).
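As a quick numerical sanity check of the claim (the matrices here are my own illustration, not from the problem), NumPy can confirm that det(A) = 0 together with det(A_1) ≠ 0 really does come with an unsolvable system:

```python
import numpy as np

# A has identical rows, so det(A) = 0; b is chosen so that replacing
# column 1 of A with b gives an invertible A_1.
A = np.array([[1.0, 1.0],
              [1.0, 1.0]])
b = np.array([1.0, 2.0])

A1 = A.copy()
A1[:, 0] = b                 # A_1 = [[1, 1], [2, 1]], det(A_1) = -1

print(np.linalg.det(A))      # ~0: A is singular
print(np.linalg.det(A1))     # ~-1: A_1 is invertible

# The system reads x + y = 1 and x + y = 2, which is inconsistent:
aug = np.column_stack([A, b])
print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(aug))   # 1 2
```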
Now that it's all cleared up, how do I solve this? It has something to do with det(A) = 0, which makes the matrix singular... and most likely it has something to do with all those properties of an invertible matrix.
And the question is telling us that there is only a solution to Ax=b if A_1 is singular like A! Why is that so? Why can't A_1 be invertible?
But I just can't put it together. Can someone please help?
I have a question again. I believe your equation is not correct:

x_k = det(A_k)/det(A)

Kramers law states that you can only write that if A is invertible (det(A) ≠ 0), and since in our case A is singular you are not allowed to write

x_k = det(A_k)/det(A).

But if we were assuming that A is invertible, then we could say

x = A^(-1)b,

and since A is invertible you would get a solution every time.
So now I feel I am stuck again.
Hmm. I think you're right. I missed that detail. Still, I bet you could use the derivation of Cramer's Rule (I mis-spelled it Kramer) to get the proof you need. The proof of Cramer's Rule, I'm betting, has a construction you could use to prove your point. For example, instead of having the denominator det(A) be zero, and having a problem there, do all your constructions with that determinant on the same side of the equation as x_k. Perhaps you can get a contradiction or something.
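The hint above can be written out as a short derivation (a sketch; it relies on the identity from the derivation of Cramer's Rule, which holds for any square A and any solution x of Ax = b, with no division by det(A)):

```latex
\begin{align*}
  &\text{If } Ax = b, \text{ then } b = \textstyle\sum_j x_j a_j,
   \text{ so replacing column } k \text{ of } A \text{ by } b \text{ gives}\\
  &\qquad \det(A_k)
   = \textstyle\sum_j x_j \det(a_1, \dots, \underbrace{a_j}_{\text{col } k}, \dots, a_n)
   = x_k \det(A),\\
  &\text{since every term with } j \neq k \text{ has a repeated column.}\\
  &\text{Now } \det(A) = 0 \text{ forces } \det(A_k) = x_k \cdot 0 = 0,
   \text{ contradicting } \det(A_k) \neq 0.\\
  &\text{Hence } Ax = b \text{ has no solution.}
\end{align*}
```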
I thought about the question some more and I don't think it has to do with Cramer's rule. I think it has to do with this:

det(A) = 0 and det(A_1) ≠ 0, and there are no solutions for Ax = b

det(A) = 0 and det(A_1) = 0, and there are solutions for Ax = b

**there are also cases where both det(A) = 0 and det(A_1) = 0 and there is also no solution.
So, i feel like I'm getting closer...just not close enough. Any input would be great.
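Concrete instances of the three kinds of cases are easy to build; the matrices below are my own illustrations, with column 1 as the replaced column:

```python
import numpy as np

def dets_and_solvable(A, b):
    """Return det(A), det(A_1), and whether Ax = b has a solution."""
    A1 = A.copy()
    A1[:, 0] = b             # A_1: column 1 of A replaced by b
    solvable = (np.linalg.matrix_rank(A)
                == np.linalg.matrix_rank(np.column_stack([A, b])))
    return np.linalg.det(A), np.linalg.det(A1), solvable

# Case 1: det(A) = 0, det(A_1) != 0  ->  no solution (the theorem's case)
print(dets_and_solvable(np.array([[1.0, 2.0], [2.0, 4.0]]),
                        np.array([1.0, 1.0])))

# Case 2: det(A) = 0, det(A_1) = 0   ->  solutions exist (b equals column 1)
print(dets_and_solvable(np.array([[1.0, 2.0], [2.0, 4.0]]),
                        np.array([1.0, 2.0])))

# Case 3: det(A) = 0, det(A_1) = 0   ->  and still no solution
A3 = np.array([[1.0, 0.0, 0.0],
               [0.0, 1.0, 1.0],
               [0.0, 0.0, 0.0]])   # columns 2 and 3 are equal
print(dets_and_solvable(A3, np.array([0.0, 0.0, 1.0])))
```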
Well, that second example shows that the theorem you are trying to prove is not true! I recommend you go back and re-read the original problem. If Ax = b is a solvable matrix equation (system of equations) in which none of the solutions is 0, then neither |A| nor |A_k|, for any k, is 0, yet there are solutions.
I don't understand what you just wrote. The second problem doesn't prove or disprove anything. I was just showing different variations of problems where I change columns by a vector. I was just throwing out ideas which I thought were in the right direction, hoping someone could finish the idea for me.

And what do you mean by reread the original problem?
To disprove the theorem, you would have to exhibit a case where det(A) = 0, det(A_k) ≠ 0, and yet there is a solution to Ax = b. jayshizwiz has not provided such an example. In his first example, both the hypothesis and the conclusion of the theorem are satisfied. In his second example, the hypothesis of the theorem is not satisfied, because det(A_1) = 0.
I need to think about this a bit more. The answer certainly isn't popping out at me.
I'm thinking along the lines of using linear independence (abbr. LI) and linear dependence (abbr. LD). Let a_j be the j-th column of matrix A. Since det(A_1) ≠ 0, we know that columns 2 through n are LI, since {b, a_2, ..., a_n} is LI. Since det(A) = 0, and columns 2 through n are LI, it follows that a_1 can be written as a linear combination of columns 2 through n, since {a_1, a_2, ..., a_n} is LD. We'll say that

a_1 = c_2 a_2 + c_3 a_3 + ... + c_n a_n,

where c_2, ..., c_n are scalars.
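This LI/LD step can be checked numerically; in this sketch (my own numbers, not from the thread) the coefficients c_j are recovered by least squares:

```python
import numpy as np

# Columns a_2, a_3 are LI; a_1 is deliberately a combination of them,
# so det(A) = 0; b is chosen so that {b, a_2, a_3} is LI, i.e. det(A_1) != 0.
a2 = np.array([1.0, 1.0, 0.0])
a3 = np.array([0.0, 1.0, 1.0])
a1 = 4 * a2 - 2 * a3                     # dependent column: a_1 in span{a_2, a_3}
b  = np.array([1.0, 0.0, 0.0])

A1 = np.column_stack([b, a2, a3])
print(np.linalg.det(A1))                 # nonzero: hypothesis det(A_1) != 0 holds

# Recover the coefficients c_j; the fit is exact because a_1 really
# does lie in span{a_2, a_3}.
c, res, rank, _ = np.linalg.lstsq(np.column_stack([a2, a3]), a1, rcond=None)
print(c)                                 # [ 4. -2.]
```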
I'm thinking about elementary column operations. You might be able to manipulate those to get what you need.
Hopefully post more later.
So, if we take the augmented matrix

( a_1  a_2  ...  a_n | b )

we can perform the combination of elementary column operations as follows:

C_1 → C_1 − c_2 C_2 − c_3 C_3 − ... − c_n C_n,

which, since a_1 = c_2 a_2 + ... + c_n a_n, turns the first column into the zero column:

( 0  a_2  ...  a_n | b ).
There are two questions I have in my mind:
1. What does this sequence of elementary column operations do to the array of variables x_1, x_2, ..., x_n, as well as to b?
2. Can we use this notion to show that the system has no solution?
Elementary column operations change the underlying system in a predictable way. For example, suppose I have the system

x_1 + 2x_2 = 5
3x_1 + 4x_2 = 11.

The solution is (x_1, x_2) = (1, 2).

I perform the elementary column operation C_1 → C_1 + C_2. The system becomes

3x_1' + 2x_2' = 5
7x_1' + 4x_2' = 11,

with solution (x_1', x_2') = (1, 1). But now x_2' = x_2 − x_1, which is precisely the column operation I performed, but with the indices switched. I think you could prove that, in general, if I perform the elementary column operation C_i → C_i + αC_j, then with the new solution vector x', I can say that

x_j' = x_j − αx_i,

and all the other entries of x' are unchanged.
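The general rule can be checked numerically; this sketch (my own, using a random invertible A) verifies that the column operation C_i → C_i + αC_j changes only entry j of the solution:

```python
import numpy as np

# Performing C_i -> C_i + alpha*C_j on A multiplies A on the right by
# E = I + alpha * e_j e_i^T, so the new solution is x' = E^{-1} x,
# which changes only entry j: x'_j = x_j - alpha * x_i.
rng = np.random.default_rng(0)
n, i, j, alpha = 4, 0, 2, 3.0           # columns 1 and 3 (0-based indices)

A = rng.standard_normal((n, n))         # invertible with probability 1
x = rng.standard_normal(n)
b = A @ x                               # build a consistent system Ax = b

E = np.eye(n)
E[j, i] = alpha                         # E = I + alpha * e_j e_i^T
A_new = A @ E                           # column op: C_i -> C_i + alpha*C_j

x_new = np.linalg.solve(A_new, b)       # solution of the transformed system

expected = x.copy()
expected[j] = x[j] - alpha * x[i]       # the "indices switched" rule
print(np.allclose(x_new, expected))     # True
```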
I think your proof could go something like this:
We can achieve the equation

( a_1  a_2  ...  a_n | b ) → ( 0  a_2  ...  a_n | b )

as a series of elementary column operations (subtracting c_j times column j from column 1, for j = 2, ..., n), which will, incidentally, leave the value of det(A) unchanged. Since the remainder of the columns (2 through n) are linearly independent, we can row reduce the RHS of this equation such that the last row has all zeros in the non-augmented matrix. You would then need to show that there is a corresponding nonzero entry in the vector b for that row, thus showing that there is no solution.
Does this make sense?
You can do this by sleight-of-hand: construct the matrix

M = ( a_2  a_3  ...  a_n  b ),

that is, A_1 with the column b moved from the first position to the last.

You know this matrix is invertible, because its determinant is (−1)^(n−1) times the determinant of A_1 (you'd have flipped two adjacent columns n − 1 times, which flips the sign of the determinant n − 1 times).

Because this matrix is invertible, you can perform elementary row operations on it such that the matrix is upper triangular with 1's on the main diagonal. Now you perform an inverse sleight-of-hand and put this upper triangular matrix back into the augmented form ( 0  a_2  ...  a_n | b ): the same row operations turn its last row into ( 0  0  ...  0 | 1 ), i.e. the equation 0 = 1.
You now have the result desired.
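Putting the whole argument together on a small concrete instance (my own numbers) shows the mechanism end to end:

```python
import numpy as np

# Columns 2..n are LI, a_1 = 2*a_2 + 3*a_3, and b lies outside
# span{a_2, a_3}, so det(A) = 0 while det(A_1) != 0.
a2 = np.array([1.0, 0.0, 0.0])
a3 = np.array([0.0, 1.0, 0.0])
a1 = 2 * a2 + 3 * a3
b  = np.array([0.0, 0.0, 1.0])

A  = np.column_stack([a1, a2, a3])
A1 = np.column_stack([b,  a2, a3])
assert abs(np.linalg.det(A)) < 1e-12     # A is singular
assert abs(np.linalg.det(A1)) > 1e-12    # A_1 is invertible

# Column operation C_1 -> C_1 - 2*C_2 - 3*C_3 zeroes out the first
# column of the augmented matrix [A | b]:
aug = np.column_stack([A, b])
aug[:, 0] -= 2 * aug[:, 1] + 3 * aug[:, 2]
print(aug)

# Row reduction would now expose an equation of the form 0 = 1:
# rank(A) = 2 < 3 = rank([A | b]), so Ax = b has no solution.
print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(aug))   # 2 3
```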