# Math Help - Question regarding determinants and switching columns of square matrices

1. ## Question regarding determinants and switching columns of square matrices

$A$ is a square matrix of order $n$ and $b$ is a vector in $R^n$. For every $k=1,2,\dots,n$, $A_k$ is the matrix obtained from $A$ by replacing the $k$-th column with the vector $b$. We are given $\det A = 0$.

Prove that if $\det A_1 \neq 0$, then $Ax=b$ has no solution.

This is what the question means if it was unclear:

A = $\left(\begin{array}{cccc}\alpha_{11} & \alpha_{12} & \dots & \alpha_{1n}\\ \alpha_{21} & \alpha_{22} & \dots & \alpha_{2n}\\ \vdots & \vdots & \ddots & \vdots \\ \alpha_{n1} & \alpha_{n2} & \dots & \alpha_{nn} \end{array}\right)$

b = $\left(\begin{array}{c} \beta_1 \\ \beta_2 \\ \vdots \\ \beta_n \end{array}\right)$

$A_1 = \left(\begin{array}{cccc}\beta_1 & \alpha_{12} & \dots & \alpha_{1n}\\ \beta_2 & \alpha_{22} & \dots & \alpha_{2n}\\ \vdots & \vdots & \ddots & \vdots \\ \beta_n & \alpha_{n2} & \dots & \alpha_{nn} \end{array}\right) \rightarrow$ I simply replace the first column of A with vector b
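As a concrete sanity check (not part of the original question), here is a minimal NumPy sketch of the construction; `replace_column` is a hypothetical helper name, and the numbers are borrowed from the $3\times 3$ example further down the thread:

```python
import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [2.0, 0.0, 4.0],
              [3.0, 6.0, 0.0]])
b = np.array([2.0, 1.0, 5.0])

def replace_column(A, b, k):
    """Return a copy of A with column k (1-based, as in the post) replaced by b."""
    Ak = A.copy()
    Ak[:, k - 1] = b
    return Ak

A1 = replace_column(A, b, 1)
print(A1[:, 0])  # the first column is now b: [2. 1. 5.]
```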

Now that it's all cleared up, how do I solve this? It has something to do with $\det A=0$, which makes the matrix singular... and most likely it has something to do with all those properties of an invertible matrix.

And the question is telling us that $Ax=b$ can have a solution only if $A_1$ is singular like $A$! Why is that so? Why can't $A_1$ be invertible?

Thanks!

2. I would use Kramer's rule here. Kramer's Rule tells you that with the vector $x=\langle x_{1},x_{2},\dots\rangle,$ you have

$\displaystyle{x_{1}=\frac{\det(A_{1})}{\det(A)}}.$

If now $\det(A)=0$ and yet $\det(A_{1})\not=0,$ what does that tell you?
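To see the rule in action where it does apply, here is a hedged NumPy sketch (the `cramer` helper is my own, not standard NumPy; the $2\times 2$ system is the one used later in this thread):

```python
import numpy as np

A = np.array([[2.0, 3.0],
              [5.0, 7.0]])
b = np.array([11.0, 13.0])

def cramer(A, b):
    """Solve Ax = b by Cramer's rule; valid only when det(A) != 0."""
    d = np.linalg.det(A)
    x = np.empty(len(b))
    for k in range(len(b)):
        Ak = A.copy()
        Ak[:, k] = b          # replace column k by b
        x[k] = np.linalg.det(Ak) / d
    return x

x = cramer(A, b)  # x is approximately [-38., 29.]
```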

3. I can't believe how simple that was! Thanks so much.

4. You're welcome. Have a good one!

5. Originally Posted by Ackbeet
I would use Kramer's rule here. Kramer's Rule tells you that with the vector $x=\langle x_{1},x_{2},\dots\rangle,$ you have

$\displaystyle{x_{1}=\frac{\det(A_{1})}{\det(A)}}.$

If now $\det(A)=0$ and yet $\det(A_{1})\not=0,$ what does that tell you?

I have a question again. I believe your equation is not correct: $\displaystyle{x_{1}=\frac{\det(A_{1})}{\det(A)}}.$

Cramer's rule only applies when $A$ is invertible, and since in our case $A$ is singular, you are not allowed to write $\displaystyle{x_{1}=\frac{\det(A_{1})}{\det(A)}}.$

But if we assume that $A_1$ is invertible, then we can say
$\displaystyle{x_{11}=\frac{\det(A_{11})}{\det(A_1)}},$

and since $A_{1}$ is invertible, you get a solution every time.

So now I feel I am stuck again.

6. Hmm. I think you're right. I missed that detail. Still, I bet you could use the derivation of Cramer's Rule (I mis-spelled it Kramer) to get the proof you need. The proof of Cramer's Rule, I'm betting, has a construction you could use to prove your point. For example, instead of having the denominator being zero, and having a problem there, do all your constructions with that determinant on the same side of the equation as $x_{1}.$ Perhaps you can get a contradiction or something.

7. Originally Posted by Ackbeet
Hmm. I think you're right. I missed that detail. Still, I bet you could use the derivation of Cramer's Rule (I mis-spelled it Kramer) to get the proof you need. The proof of Cramer's Rule, I'm betting, has a construction you could use to prove your point. For example, instead of having the denominator being zero, and having a problem there, do all your constructions with that determinant on the same side of the equation as $x_{1}.$ Perhaps you can get a contradiction or something.
Ya, that's what I'm thinking. So I'll try it out. But if anyone else has an idea, I'd greatly appreciate it!!

8. I thought about the question some more and I don't think it has to do with Cramer's rule. I think it has to with this:

$A = \left(\begin{array}{ccc}1&2&0\\2&0&4\\3&6&0\end{array}\right)$
$b = \left(\begin{array}{c}2\\1\\5\end{array}\right)$
$A_1 = \left(\begin{array}{ccc}2&2&0\\1&0&4\\5&6&0\end{array}\right)$
$|A_1| \neq 0$ , $|A|=0$
and there are no solutions for $Ax=b$

however, if

$A = \left(\begin{array}{ccc}8&6&2\\0&1&1\\4&3&1\end{array}\right)$
$b = \left(\begin{array}{c}4\\3\\2\end{array}\right)$
$A_1 = \left(\begin{array}{ccc}4&6&2\\3&1&1\\2&3&1\end{array}\right)$
$|A_1|=0$ , $|A|=0$
and there are solutions for $Ax=b$

**There are also cases where both $|A|=0$ and $|A_1|=0$ and there is still no solution.
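Both examples can be checked mechanically. The sketch below is my own (the `check` helper is hypothetical); it uses NumPy's rank test for consistency of $Ax=b$ and reproduces the determinant and solvability claims above:

```python
import numpy as np

def check(A, b):
    """Report (det A ~ 0?, det A_1 ~ 0?, is Ax=b solvable?)."""
    A1 = A.copy()
    A1[:, 0] = b                                  # replace the first column by b
    detA_zero = abs(np.linalg.det(A)) < 1e-9
    detA1_zero = abs(np.linalg.det(A1)) < 1e-9
    aug = np.column_stack([A, b])                 # augmented matrix [A | b]
    solvable = np.linalg.matrix_rank(aug) == np.linalg.matrix_rank(A)
    return detA_zero, detA1_zero, solvable

# Example 1: |A| = 0, |A_1| != 0, and Ax=b has no solution
print(check(np.array([[1., 2, 0], [2, 0, 4], [3, 6, 0]]), np.array([2., 1, 5])))
# -> (True, False, False)

# Example 2: |A| = 0, |A_1| = 0, and Ax=b has solutions
print(check(np.array([[8., 6, 2], [0, 1, 1], [4, 3, 1]]), np.array([4., 3, 2])))
# -> (True, True, True)
```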

So, I feel like I'm getting closer... just not close enough. Any input would be great.
Thanks!!!

9. Well, that second example shows that the theorem you are trying to prove is not true! I recommend you go back and re-read the original problem. If $Ax=b$ is a solvable matrix equation (a system of equations) in which none of the solutions is 0, then neither $|A|$ nor $|A_k|$, for any $k$, is 0, yet there are solutions.

10. I don't understand what you just wrote. The second example doesn't prove or disprove anything. I was just showing different variations of problems where I replace a column with a vector, throwing out ideas which I thought were in the right direction and hoping someone could finish the idea for me.

And what do you mean by re-read the original problem?

11. HallsofIvy,

To disprove the theorem, you would have to exhibit a case where $\det(A)=0$, $\det(A_{1})\not=0$, and yet there is a solution to $Ax=b.$ jayshizwiz has not provided such an example. In his first example, both the hypothesis and the conclusion of the theorem are satisfied. In his second example, the hypothesis of the theorem is not satisfied, because $\det(A_{1})=0$.

jayshizwiz,

12. I'm thinking along the lines of using linear independence (abbr. LI) and linear dependence (abbr. LD). Let $\alpha_{j}$ be the $j$th column of matrix $A$. Since $\det(A_{1})\not=0$, we know that columns 2 through n are LI, since $\{b,\alpha_{2},\alpha_{3},\dots,\alpha_{n}\}$ is LI. Since $\det(A)=0$, and columns 2 through n are LI, it follows that $\alpha_{1}$ can be written as a linear combination of columns 2 through n, since $\{\alpha_{j}\}$ is LD. We'll say that

$\displaystyle{\alpha_{1}=\sum_{j=2}^{n}c_{j}\alpha_{j}},$ where the $c_{j}$ are scalars.
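The coefficients $c_{j}$ can also be found numerically. A small sketch of my own, reusing the singular matrix from the first example earlier in the thread, recovers them with `numpy.linalg.lstsq`:

```python
import numpy as np

# Columns 2..n are independent, so expressing column 1 in terms of them
# is an exact least-squares problem.
A = np.array([[1.0, 2.0, 0.0],
              [2.0, 0.0, 4.0],
              [3.0, 6.0, 0.0]])   # the singular matrix from the earlier example
c, residuals, rank, _ = np.linalg.lstsq(A[:, 1:], A[:, 0], rcond=None)
print(np.round(c, 6))  # [0.5 0.5]  ->  alpha_1 = 0.5*alpha_2 + 0.5*alpha_3
```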

I'm thinking about elementary column operations. You might be able to manipulate those to get what you need.

Hopefully I'll post more later.

13. So, if we take the augmented matrix

$\left[\begin{matrix}\alpha_{1} &\alpha_{2} &\alpha_{3} &\dots &\alpha_{n}\end{matrix}\,\middle|\,\begin{matrix}b\end{matrix}\right],$

we can perform the combination of elementary column operations as follows:

$\displaystyle{\left[\begin{matrix}\left(\alpha_{1}-\sum_{j=2}^{n}c_{j}\alpha_{j}\right) &\alpha_{2} &\alpha_{3} &\dots &\alpha_{n}\end{matrix}\,\middle|\,\begin{matrix}b\end{matrix}\right]=
\left[\begin{matrix}0 &\alpha_{2} &\alpha_{3} &\dots &\alpha_{n}\end{matrix}\,\middle|\,\begin{matrix}b\end{matrix}\right].}$

There are two questions I have in my mind:

1. What does this sequence of elementary column operations do to the array of variables $\{x_{j}\}$, as well as $b$?
2. Can we use this notion to show that the system has no solution?

14. Elementary column operations change the underlying system in a predictable way. For example, suppose I have the system

$\begin{bmatrix}2 &3\\5 &7\end{bmatrix}\begin{bmatrix}x_{1}\\x_{2}\end{bmatrix}=\begin{bmatrix}11\\13\end{bmatrix}.$

The solution is $x_{1}=-38, x_{2}=29.$

I perform the elementary column operation $C_{1}-2C_{2}\to C_{1}.$ The system becomes

$\begin{bmatrix}-4 &3\\-9 &7\end{bmatrix}\begin{bmatrix}x_{1}\\x_{2}\end{bmatrix}=\begin{bmatrix}11\\13\end{bmatrix},$

with solution $x_{1}=-38, x_{2}=-47$. But now $29=-47-2(-38),$ which is precisely the column operation I performed, but with the indices switched. I think you could prove that, in general, if I perform the elementary column operation $\alpha_{m}+c\alpha_{n}\to\alpha_{m}$, then with the new solution vector $x'$, I can say that $x_{n}=x_{n}'+cx_{m}'.$
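The claimed relation is easy to check numerically. A hedged NumPy sketch of the same $2\times2$ example:

```python
import numpy as np

A = np.array([[2.0, 3.0],
              [5.0, 7.0]])
b = np.array([11.0, 13.0])

x = np.linalg.solve(A, b)          # original solution: [-38., 29.]

# Elementary column operation C1 - 2*C2 -> C1 (m=1, n=2, c=-2 in the notation above)
B = A.copy()
B[:, 0] = A[:, 0] - 2.0 * A[:, 1]
xp = np.linalg.solve(B, b)         # new solution x': [-38., -47.]

# The claimed relation x_n = x_n' + c * x_m':
print(np.isclose(x[1], xp[1] + (-2.0) * xp[0]))  # True
```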

I think your proof could go something like this:

We can achieve the equation

$\displaystyle{\left[\begin{matrix}\left(\alpha_{1}-\sum_{j=2}^{n}c_{j}\alpha_{j}\right) &\alpha_{2} &\alpha_{3} &\dots &\alpha_{n}\end{matrix}\,\middle|\,\begin{matrix}b\end{matrix}\right]=
\left[\begin{matrix}0 &\alpha_{2} &\alpha_{3} &\dots &\alpha_{n}\end{matrix}\,\middle|\,\begin{matrix}b\end{matrix}\right]}$

as a series of elementary column operations which will, incidentally, leave the value of $x_{1}$ unchanged. Since the remainder of the columns (2 through n) are linearly independent, we can row reduce the RHS of this equation such that the last row has all zeros in the non-augmented matrix. You would then need to show that there is a corresponding nonzero entry in the $b$ vector for that row, thus showing that there is no solution.

Does this make sense?

15. You would then need to show that there is a corresponding nonzero entry in the $b$ vector for that row, thus showing that there is no solution.
You can do this by sleight-of-hand: construct the matrix

$\begin{bmatrix}\alpha_{2} &\alpha_{3} &\dots &\alpha_{n} &b\end{bmatrix}.$

You know this matrix is invertible, because its determinant is $\pm 1$ times the determinant of $A_{1}$ (you'd have flipped two columns $n-1$ times, which flips the sign of the determinant $n-1$ times).
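A quick numerical check of that sign argument, using the $3\times3$ $A_1$ from the earlier example (the column-cycling code is my own illustration):

```python
import numpy as np

A1 = np.array([[2.0, 2.0, 0.0],
               [1.0, 0.0, 4.0],
               [5.0, 6.0, 0.0]])   # b sits in column 1
n = A1.shape[1]

# Move b from the first column to the last: n-1 adjacent swaps,
# so the determinant picks up a factor (-1)^(n-1).
M = np.column_stack([A1[:, 1:], A1[:, 0]])
print(np.isclose(np.linalg.det(M), (-1) ** (n - 1) * np.linalg.det(A1)))  # True
```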

Because this matrix is invertible, you can perform elementary row operations on it such that the matrix is upper triangular with 1's on the main diagonal. Now you perform an inverse sleight-of-hand and put this upper triangular matrix back into

$\left[\begin{matrix}0 &\alpha_{2} &\alpha_{3} &\dots &\alpha_{n}\end{matrix}\,\middle|\,\begin{matrix}b\end{matrix}\right].$

You now have the result desired.