
Math Help - Question regarding determinants and switching columns of square matrices

  1. #1
    Member
    Joined
    Mar 2010
    Posts
    175

    Question regarding determinants and switching columns of square matrices

    A is an n \times n square matrix and b is a vector in R^n. For every k=1,2,...,n, A_k is the matrix obtained from A by replacing the k-th column with the vector b. We are given that \det A=0.

    Prove that if det A_1 \neq 0 then there is no solution to Ax=b.


    This is what the question means if it was unclear:

    A = \left(\begin{array}{cccc}\alpha_{11} & \alpha_{12} & ... & \alpha_{1n}\\ \alpha_{21} & \alpha_{22} & ... & \alpha_{2n}\\ . & . & . & . \\ \alpha_{n1} & \alpha_{n2} & ... & \alpha_{nn} \end{array}\right)

    b = \left(\begin{array}{cccc} \beta_1 \\ \beta_2 \\ . \\ \beta_n \end{array}\right)

    A_1 = \left(\begin{array}{cccc}\beta_1 & \alpha_{12} & ... & \alpha_{1n}\\ \beta_2 & \alpha_{22} & ... & \alpha_{2n}\\ . & . & . & . \\ \beta_n & \alpha_{n2} & ... & \alpha_{nn} \end{array}\right) \rightarrow I simply replace the first column of A with vector b
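    (In code, by the way, this construction is just a column assignment. A minimal NumPy sketch, where the particular numbers and the helper name replace_column are only illustrations:)

    import numpy as np

    A = np.array([[1., 2., 0.],
                  [2., 0., 4.],
                  [3., 6., 0.]])
    b = np.array([2., 1., 5.])

    def replace_column(A, b, k):
        """Return A_k: a copy of A with column k (1-indexed) replaced by b."""
        Ak = A.copy()
        Ak[:, k - 1] = b
        return Ak

    A1 = replace_column(A, b, 1)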

    Now that it's all cleared up, how do I solve this? It has something to do with \det A=0, which makes the matrix singular... and most likely it also has something to do with all those properties of an invertible matrix.

    And the question is telling us that there is only a solution to Ax=b if A_1 is singular, like A! Why is that so? Why can't A_1 be invertible?

    But I just can't put it together. Can someone please help?

    Thanks!

  2. #2
    A Plied Mathematician
    Joined
    Jun 2010
    From
    CT, USA
    Posts
    6,318
    Thanks
    4
    Awards
    2
    I would use Kramer's rule here. Kramer's Rule tells you that with the vector x=\langle x_{1},x_{2},\dots\rangle, you have

    \displaystyle{x_{1}=\frac{\det(A_{1})}{\det(A)}}.

    If now \det(A)=0 and yet \det(A_{1})\not=0, what does that tell you?
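    For a concrete check on an invertible system, here is a small NumPy sketch (the 2 \times 2 system is purely an illustration):

    import numpy as np

    # An invertible 2x2 system, purely for illustration.
    A = np.array([[2., 3.],
                  [5., 7.]])
    b = np.array([11., 13.])

    # x_k = det(A_k)/det(A), where A_k has column k replaced by b.
    x = np.empty(2)
    for k in range(2):
        Ak = A.copy()
        Ak[:, k] = b
        x[k] = np.linalg.det(Ak) / np.linalg.det(A)

    print(x)                      # [-38.  29.]
    print(np.linalg.solve(A, b))  # same answer, as a cross-check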

  3. #3
    Member
    Joined
    Mar 2010
    Posts
    175
    Quote Originally Posted by Ackbeet View Post
    I would use Kramer's rule here. Kramer's Rule tells you that with the vector x=\langle x_{1},x_{2},\dots\rangle, you have

    \displaystyle{x_{1}=\frac{\det(A_{1})}{\det(A)}}.

    If now \det(A)=0 and yet \det(A_{1})\not=0, what does that tell you?
    I can't believe how simple that was! Thanks so much.

  4. #4
    A Plied Mathematician
    Joined
    Jun 2010
    From
    CT, USA
    Posts
    6,318
    Thanks
    4
    Awards
    2
    You're welcome. Have a good one!

  5. #5
    Member
    Joined
    Mar 2010
    Posts
    175
    Quote Originally Posted by Ackbeet View Post
    I would use Kramer's rule here. Kramer's Rule tells you that with the vector x=\langle x_{1},x_{2},\dots\rangle, you have

    \displaystyle{x_{1}=\frac{\det(A_{1})}{\det(A)}}.

    If now \det(A)=0 and yet \det(A_{1})\not=0, what does that tell you?

    I have a question again. I believe your equation is not correct: \displaystyle{x_{1}=\frac{\det(A_{1})}{\det(A)}}.

    Kramer's rule states that you can only do that if the matrix A is invertible, and since in our case A is singular you are not allowed to write \displaystyle{x_{1}=\frac{\det(A_{1})}{\det(A)}}.

    but if we are assuming that A_1 is invertible then we can say
    \displaystyle{x_{11}=\frac{\det(A_{11})}{\det(A_{1})}}.

    and since A_{1} is invertible you get a solution every time.

    So now I feel I am stuck again.

  6. #6
    A Plied Mathematician
    Joined
    Jun 2010
    From
    CT, USA
    Posts
    6,318
    Thanks
    4
    Awards
    2
    Hmm. I think you're right. I missed that detail. Still, I bet you could use the derivation of Cramer's Rule (I mis-spelled it Kramer) to get the proof you need. The proof of Cramer's Rule, I'm betting, has a construction you could use to prove your point. For example, instead of having the denominator being zero, and having a problem there, do all your constructions with that determinant on the same side of the equation as x_{1}. Perhaps you can get a contradiction or something.
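    One way to read that hint (a sketch of mine, not a finished proof): the derivation of Cramer's Rule actually establishes the division-free identity

    \det(A)\,x_{k}=\det(A_{k}) for each k, whenever Ax=b,

    because b=\sum_{j}x_{j}\alpha_{j}, so expanding \det(A_{1})=\det(b,\alpha_{2},\dots,\alpha_{n}) by linearity in the first column kills every term with a repeated column and leaves x_{1}\det(A). With \det(A)=0 and \det(A_{1})\not=0, any solution x would force 0=\det(A_{1})\not=0, a contradiction.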

  7. #7
    Member
    Joined
    Mar 2010
    Posts
    175
    Quote Originally Posted by Ackbeet View Post
    Hmm. I think you're right. I missed that detail. Still, I bet you could use the derivation of Cramer's Rule (I mis-spelled it Kramer) to get the proof you need. The proof of Cramer's Rule, I'm betting, has a construction you could use to prove your point. For example, instead of having the denominator being zero, and having a problem there, do all your constructions with that determinant on the same side of the equation as x_{1}. Perhaps you can get a contradiction or something.
    Ya, that's what I'm thinking. So I'll try it out. But if anyone else has an idea, I'd greatly appreciate it!!

  8. #8
    Member
    Joined
    Mar 2010
    Posts
    175
    I thought about the question some more, and I don't think it has to do with Cramer's rule. I think it has to do with this:

    A = \left(\begin{array}{ccc}1&2&0\\2&0&4\\3&6&0\end{array}\right)
    b = \left(\begin{array}{ccc}2\\1\\5\end{array}\right)
    A_1 = \left(\begin{array}{ccc}2&2&0\\1&0&4\\5&6&0\end{array}\right)
    |A_1| \neq 0 , |A|=0
    and there are no solutions for Ax=b

    however, if


    A = \left(\begin{array}{ccc}8&6&2\\0&1&1\\4&3&1\end{array}\right)
    b = \left(\begin{array}{ccc}4\\3\\2\end{array}\right)
    A_1 = \left(\begin{array}{ccc}4&6&2\\3&1&1\\2&3&1\end{array}\right)
    |A_1|=0 , |A|=0
    and there are solutions for Ax=b

    **There are also cases where both |A|=0 and |A_1|=0 and there is still no solution.
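
    If it helps, both examples can be checked by machine. A quick NumPy sketch (the rank comparison is the standard solvability test for linear systems):

    import numpy as np

    def check(A, b):
        A1 = A.copy()
        A1[:, 0] = b                   # replace the first column with b
        aug = np.column_stack([A, b])  # augmented matrix [A | b]
        solvable = np.linalg.matrix_rank(A) == np.linalg.matrix_rank(aug)
        print(np.linalg.det(A), np.linalg.det(A1), solvable)

    # First example: |A| = 0, |A_1| != 0, no solution.
    check(np.array([[1., 2., 0.], [2., 0., 4.], [3., 6., 0.]]),
          np.array([2., 1., 5.]))
    # Second example: |A| = |A_1| = 0, solutions exist.
    check(np.array([[8., 6., 2.], [0., 1., 1.], [4., 3., 1.]]),
          np.array([4., 3., 2.]))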

    So, I feel like I'm getting closer... just not close enough. Any input would be great.
    Thanks!!!

  9. #9
    MHF Contributor

    Joined
    Apr 2005
    Posts
    14,973
    Thanks
    1121
    Well, that second example shows that the theorem you are trying to prove is not true! I recommend you go back and re-read the original problem. If Ax=b is a solvable matrix equation (system of equations) in which none of the solutions is 0, then neither |A| nor |A_k|, for any k, is 0, yet there are solutions.

  10. #10
    Member
    Joined
    Mar 2010
    Posts
    175
    I don't understand what you just wrote. The second example doesn't prove or disprove anything. I was just showing different variations of problems where I replace a column with a vector. I was just throwing out ideas which I thought were in the right direction, hoping someone could finish the idea for me.

    And what do you mean by reread the original problem??

  11. #11
    A Plied Mathematician
    Joined
    Jun 2010
    From
    CT, USA
    Posts
    6,318
    Thanks
    4
    Awards
    2
    HallsofIvy,

    To disprove the theorem, you would have to exhibit a case where \det(A)=0, \det(A_{1})\not=0, and yet there is a solution to Ax=b. jayshizwiz has not provided such an example. In his first example, both the hypothesis and the conclusion of the theorem are satisfied. In his second example, the hypothesis of the theorem is not satisfied, because \det(A_{1})=0.

    jayshizwiz,

    I need to think about this a bit more. The answer certainly isn't popping out at me.

  12. #12
    A Plied Mathematician
    Joined
    Jun 2010
    From
    CT, USA
    Posts
    6,318
    Thanks
    4
    Awards
    2
    I'm thinking along the lines of using linear independence (abbr. LI) and linear dependence (abbr. LD). Let \alpha_{j} be the jth column of matrix A. Since \det(A_{1})\not=0, we know that columns 2 through n are LI, since \{b,\alpha_{2},\alpha_{3},\dots,\alpha_{n}\} is LI. Since \det(A)=0, and columns 2 through n are LI, it follows that \alpha_{1} can be written as a linear combination of columns 2 through n, since \{\alpha_{j}\} is LD. We'll say that

    \displaystyle{\alpha_{1}=\sum_{j=2}^{n}c_{j}\alpha_{j}}, where the c_{j} are scalars.
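
    For intuition, the c_{j} can be found numerically. A sketch using the first example from post #8 (np.linalg.lstsq returns an exact combination here because one exists):

    import numpy as np

    # First example from post #8: det(A) = 0, columns 2..n independent.
    A = np.array([[1., 2., 0.],
                  [2., 0., 4.],
                  [3., 6., 0.]])
    alpha1, rest = A[:, 0], A[:, 1:]

    # Least-squares solve of rest @ c = alpha1; a zero residual confirms
    # that alpha_1 really is a combination of the remaining columns.
    c, *_ = np.linalg.lstsq(rest, alpha1, rcond=None)
    print(c)                              # [0.5 0.5]
    print(np.allclose(rest @ c, alpha1))  # True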

    I'm thinking about elementary column operations. You might be able to manipulate those to get what you need.

    Hopefully I'll post more later.

  13. #13
    A Plied Mathematician
    Joined
    Jun 2010
    From
    CT, USA
    Posts
    6,318
    Thanks
    4
    Awards
    2
    So, if we take the augmented matrix

    \left[\begin{matrix}\alpha_{1} &\alpha_{2} &\alpha_{3} &\dots &\alpha_{n}\end{matrix}\left|\begin{matrix}b\end{matrix}\right],

    we can perform the combination of elementary column operations as follows:

    \displaystyle{\left[\begin{matrix}\left(\alpha_{1}-\sum_{j=2}^{n}c_{j}\alpha_{j}\right) &\alpha_{2} &\alpha_{3} &\dots &\alpha_{n}\end{matrix}\left|\begin{matrix}b\end{matrix}\right]
    = \left[\begin{matrix}0 &\alpha_{2} &\alpha_{3} &\dots &\alpha_{n}\end{matrix}\left|\begin{matrix}b\end{matrix}\right].}
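
    Numerically, that combination of column operations really does zero out the first column. A sketch, again with the first example from post #8 and the c_{j} found above:

    import numpy as np

    A = np.array([[1., 2., 0.],
                  [2., 0., 4.],
                  [3., 6., 0.]])
    c = np.array([0.5, 0.5])  # alpha_1 = 0.5*alpha_2 + 0.5*alpha_3

    M = A.copy()
    M[:, 0] -= M[:, 1:] @ c   # the operation C1 - sum_j c_j*Cj -> C1
    print(M[:, 0])            # [0. 0. 0.]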

    There are two questions I have in my mind:

    1. What does this sequence of elementary column operations do to the array of variables \{x_{j}\}, as well as b?
    2. Can we use this notion to show that the system has no solution?

  14. #14
    A Plied Mathematician
    Joined
    Jun 2010
    From
    CT, USA
    Posts
    6,318
    Thanks
    4
    Awards
    2
    Elementary column operations change the underlying system in a predictable way. For example, suppose I have the system

    \begin{bmatrix}2 &3\\5 &7\end{bmatrix}\begin{bmatrix}x_{1}\\x_{2}\end{bmatrix}=\begin{bmatrix}11\\13\end{bmatrix}.

    The solution is x_{1}=-38, x_{2}=29.

    I perform the elementary column operation C_{1}-2C_{2}\to C_{1}. The system becomes

    \begin{bmatrix}-4 &3\\-9 &7\end{bmatrix}\begin{bmatrix}x_{1}\\x_{2}\end{bmatrix}=\begin{bmatrix}11\\13\end{bmatrix},

    with solution x_{1}=-38, x_{2}=-47. But now 29=-47-2(-38), which is precisely the column operation I performed, but with the indices switched. I think you could prove that, in general, if I perform the elementary column operation \alpha_{m}+c\alpha_{n}\to\alpha_{m}, then with the new solution vector x', I can say that x_{n}=x_{n}'+cx_{m}'.
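
    A quick numerical check of this example and of the claimed index-switching relation:

    import numpy as np

    A = np.array([[2., 3.], [5., 7.]])
    b = np.array([11., 13.])
    x = np.linalg.solve(A, b)
    print(x)                   # [-38.  29.]

    Ap = A.copy()
    Ap[:, 0] -= 2 * Ap[:, 1]   # column operation C1 - 2*C2 -> C1
    xp = np.linalg.solve(Ap, b)
    print(xp)                  # [-38. -47.]

    # x_2 = x_2' + c*x_1' with c = -2, i.e. 29 = -47 - 2*(-38):
    print(np.isclose(x[1], xp[1] - 2 * xp[0]))  # True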

    I think your proof could go something like this:

    We can achieve the equation

    \displaystyle{\left[\begin{matrix}\left(\alpha_{1}-\sum_{j=2}^{n}c_{j}\alpha_{j}\right) &\alpha_{2} &\alpha_{3} &\dots &\alpha_{n}\end{matrix}\left|\begin{matrix}b\end{matrix}\right]
    = \left[\begin{matrix}0 &\alpha_{2} &\alpha_{3} &\dots &\alpha_{n}\end{matrix}\left|\begin{matrix}b\end{matrix}\right]}

    as a series of elementary column operations which will, incidentally, leave the value of x_{1} unchanged. Since the remainder of the columns (2 through n) are linearly independent, we can row reduce the RHS of this equation such that the last row has all zeros in the non-augmented matrix. You would then need to show that there is a corresponding nonzero entry in the b vector for that row, thus showing that there is no solution.

    Does this make sense?

  15. #15
    A Plied Mathematician
    Joined
    Jun 2010
    From
    CT, USA
    Posts
    6,318
    Thanks
    4
    Awards
    2
    You would then need to show that there is a corresponding nonzero entry in the b vector for that row, thus showing that there is no solution.
    You can do this by sleight-of-hand: construct the matrix

    \begin{bmatrix}\alpha_{2} &\alpha_{3} &\dots &\alpha_{n} &b\end{bmatrix}.

    You know this matrix is invertible, because its determinant is \pm 1 times the determinant of A_{1} (you'd have flipped two columns n-1 times, which flips the sign of the determinant n-1 times).
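
    The sign bookkeeping is easy to sanity-check numerically. With n=3 (the first example from post #8), n-1=2 swaps leave the determinant unchanged:

    import numpy as np

    A = np.array([[1., 2., 0.],
                  [2., 0., 4.],
                  [3., 6., 0.]])
    b = np.array([2., 1., 5.])
    n = 3

    A1 = A.copy()
    A1[:, 0] = b                                # A_1 = [b, a2, a3]
    B = np.column_stack([A[:, 1], A[:, 2], b])  # [a2, a3, b]

    print(np.linalg.det(A1))                   # -8.0 (up to rounding)
    print((-1) ** (n - 1) * np.linalg.det(B))  # -8.0 as well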

    Because this matrix is invertible, you can perform elementary row operations on it such that the matrix is upper triangular with 1's on the main diagonal. Now you perform an inverse sleight-of-hand and put this upper triangular matrix back into

    \left[\begin{matrix}0 &\alpha_{2} &\alpha_{3} &\dots &\alpha_{n}\end{matrix}\left|\begin{matrix}b\end{matrix}\right].

    You now have the result desired.
