
Thread: Positive semidefinite matrix

  1. #1
    Newbie
    Joined
    Feb 2010
    Posts
    4

    Positive semidefinite matrix

    Let A and B be square matrices with A positive semidefinite. Prove that $\displaystyle A^2B = BA^2 \Longleftrightarrow AB = BA$

    The $\displaystyle \Leftarrow$ is easy:

    $\displaystyle A^2B = A(AB) = A(BA) = (AB)A = (BA)A = BA^2 $

    but I'm having problems with the $\displaystyle \Rightarrow$.

  2. #2
    MHF Contributor
    Opalg's Avatar
    Joined
    Aug 2007
    From
    Leeds, UK
    Posts
    4,041
    Thanks
    10
    Quote Originally Posted by NeTTuR View Post
    Let A and B be square matrices with A positive semidefinite. Prove that $\displaystyle A^2B = BA^2 \Longleftrightarrow AB = BA$

    The $\displaystyle \Leftarrow$ is easy:

    $\displaystyle A^2B = A(AB) = A(BA) = (AB)A = (BA)A = BA^2 $

    but I'm having problems with the $\displaystyle \Rightarrow$.
    Outline solution (see if you can fill in the details):

    Let $\displaystyle \lambda_1,\ldots,\lambda_n$ be the eigenvalues of A, and let p(x) be a polynomial such that $\displaystyle p(\lambda_i^2) = \lambda_i$ for $\displaystyle 1\leqslant i\leqslant n$. Then $\displaystyle p(A^2) = A$. But B obviously commutes with $\displaystyle p(A^2)$. So B commutes with A.
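    As a numerical sanity check of this outline (my own sketch, not part of the original post; all names below are my choices), one can build a positive semidefinite A with known eigenvalues, solve for a polynomial p with $\displaystyle p(\lambda_i^2) = \lambda_i$, and verify that $\displaystyle p(A^2) = A$:

    ```python
    import numpy as np

    # Construct a positive semidefinite A = Q diag(lam) Q^T with known eigenvalues,
    # including 0 to show that semidefinite (not just definite) is enough.
    rng = np.random.default_rng(0)
    Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))   # random orthogonal Q
    lam = np.array([0.0, 1.0, 2.0, 3.0])               # non-negative eigenvalues
    A = Q @ np.diag(lam) @ Q.T

    # Coefficients of p with p(lambda_i^2) = lambda_i, via a Vandermonde system.
    # This is well-defined because the lambda_i are non-negative, so distinct
    # eigenvalues have distinct squares.
    nodes = lam**2
    coef = np.linalg.solve(np.vander(nodes, increasing=True), lam)

    # Evaluate p at the matrix A^2: p(A^2) = sum_k coef[k] * (A^2)^k
    A2 = A @ A
    pA2 = sum(c * np.linalg.matrix_power(A2, k) for k, c in enumerate(coef))
    print(np.allclose(pA2, A))   # True: p(A^2) reproduces A
    ```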

  3. #3
    Newbie
    Joined
    Feb 2010
    Posts
    4
    I was thinking something like this:

    $\displaystyle A=A^* \Rightarrow A = U D U^*$, where D is a diagonal matrix containing the eigenvalues of A and U is unitary.

    $\displaystyle A^2B = BA^2 \Leftrightarrow UD^2U^*B=BUD^2U^* \Leftrightarrow D^2U^*BU = U^*BUD^2$

    $\displaystyle U^*BU = C \Rightarrow D^2C = CD^2$

    $\displaystyle D^2C =
    \begin{pmatrix}
    \lambda_1^2 & 0 & \cdots & 0 \\
    \vdots & \vdots & \ddots & \vdots \\
    0 & 0 & \cdots & \lambda_n^2
    \end{pmatrix}
    \begin{pmatrix}
    c_{1,1} & c_{1,2} & \cdots & c_{1,n} \\
    \vdots & \vdots & \ddots & \vdots \\
    c_{n,1} & c_{n,2} & \cdots & c_{n,n}
    \end{pmatrix}
    = \begin{pmatrix}
    \lambda_1^2 c_{1,1} & \lambda_1^2 c_{1,2} & \cdots & \lambda_1^2 c_{1,n} \\
    \vdots & \vdots & \ddots & \vdots \\
    \lambda_n^2 c_{n,1} & \lambda_n^2 c_{n,2} & \cdots & \lambda_n^2 c_{n,n}
    \end{pmatrix}$

    $\displaystyle CD^2 =
    \begin{pmatrix}
    c_{1,1} & c_{1,2} & \cdots & c_{1,n} \\
    \vdots & \vdots & \ddots & \vdots \\
    c_{n,1} & c_{n,2} & \cdots & c_{n,n}
    \end{pmatrix}
    \begin{pmatrix}
    \lambda_1^2 & 0 & \cdots & 0 \\
    \vdots & \vdots & \ddots & \vdots \\
    0 & 0 & \cdots & \lambda_n^2
    \end{pmatrix}
    = \begin{pmatrix}
    \lambda_1^2 c_{1,1} & \lambda_2^2 c_{1,2} & \cdots & \lambda_n^2 c_{1,n} \\
    \vdots & \vdots & \ddots & \vdots \\
    \lambda_1^2 c_{n,1} & \lambda_2^2 c_{n,2} & \cdots & \lambda_n^2 c_{n,n}
    \end{pmatrix}$
    This gives $\displaystyle \lambda_1^2 = \lambda_2^2 = \cdots = \lambda_n^2$, where $\displaystyle \lambda_i$ is an eigenvalue of A. If I take the square root of these, then $\displaystyle D$ and $\displaystyle C$ will commute.
    So for a Hermitian matrix, there will be a positive semidefinite matrix that commutes with $\displaystyle B$ and whose square is $\displaystyle A^2$. Now because $\displaystyle A$, and therefore also $\displaystyle A^2$, is positive semidefinite, there exists a unique positive semidefinite matrix whose square is $\displaystyle A^2$, and this matrix is of course A.

    Does this look right?
    Last edited by NeTTuR; Feb 14th 2010 at 01:25 PM.

  4. #4
    MHF Contributor
    Opalg's Avatar
    Joined
    Aug 2007
    From
    Leeds, UK
    Posts
    4,041
    Thanks
    10
    Quote Originally Posted by NeTTuR View Post
    I was thinking something like this:

    $\displaystyle A=A^* \Rightarrow A = U D U^*$, where D is a diagonal matrix containing the eigenvalues of A and U is unitary.

    $\displaystyle A^2B = BA^2 \Leftrightarrow UD^2U^*B=BUD^2U^* \Leftrightarrow D^2U^*BU = U^*BUD^2$

    $\displaystyle U^*BU = C \Rightarrow D^2C = CD^2$

    $\displaystyle D^2C =
    \begin{pmatrix}
    \lambda_1^2 & 0 & \cdots & 0 \\
    \vdots & \vdots & \ddots & \vdots \\
    0 & 0 & \cdots & \lambda_n^2
    \end{pmatrix}
    \begin{pmatrix}
    c_{1,1} & c_{1,2} & \cdots & c_{1,n} \\
    \vdots & \vdots & \ddots & \vdots \\
    c_{n,1} & c_{n,2} & \cdots & c_{n,n}
    \end{pmatrix}
    = \begin{pmatrix}
    \lambda_1^2 c_{1,1} & \lambda_1^2 c_{1,2} & \cdots & \lambda_1^2 c_{1,n} \\
    \vdots & \vdots & \ddots & \vdots \\
    \lambda_n^2 c_{n,1} & \lambda_n^2 c_{n,2} & \cdots & \lambda_n^2 c_{n,n}
    \end{pmatrix}$

    $\displaystyle CD^2 =
    \begin{pmatrix}
    c_{1,1} & c_{1,2} & \cdots & c_{1,n} \\
    \vdots & \vdots & \ddots & \vdots \\
    c_{n,1} & c_{n,2} & \cdots & c_{n,n}
    \end{pmatrix}
    \begin{pmatrix}
    \lambda_1^2 & 0 & \cdots & 0 \\
    \vdots & \vdots & \ddots & \vdots \\
    0 & 0 & \cdots & \lambda_n^2
    \end{pmatrix}
    = \begin{pmatrix}
    \lambda_1^2 c_{1,1} & \lambda_2^2 c_{1,2} & \cdots & \lambda_n^2 c_{1,n} \\
    \vdots & \vdots & \ddots & \vdots \\
    \lambda_1^2 c_{n,1} & \lambda_2^2 c_{n,2} & \cdots & \lambda_n^2 c_{n,n}
    \end{pmatrix}$ Correct as far as here.
    This gives $\displaystyle \lambda_1^2 = \lambda_2^2 = \cdots = \lambda_n^2$, where $\displaystyle \lambda_i$ is an eigenvalue of A. No. If all the eigenvalues of A were equal, then A would have to be a multiple of the identity matrix, and that need not be the case. What you can deduce from equations like $\displaystyle \color{red}\lambda_i^2 c_{i,j} = \lambda_j^2 c_{i,j}$ (where $\displaystyle \color{red}i\ne j$) is that either $\displaystyle \color{red}\lambda_i = \lambda_j$ or $\displaystyle \color{red}c_{i,j} = 0$.
    If I take the square root of these, then $\displaystyle D$ and $\displaystyle C$ will commute.
    So for a Hermitian matrix, there will be a positive semidefinite matrix that commutes with $\displaystyle B$ and whose square is $\displaystyle A^2$. Now because $\displaystyle A$, and therefore also $\displaystyle A^2$, is positive semidefinite, there exists a unique positive semidefinite matrix whose square is $\displaystyle A^2$, and this matrix is of course A. Yes, it's true that a positive semidefinite matrix has a unique positive semidefinite square root.
    So if all the eigenvalues are different then all the off-diagonal elements of C must be zero. If some of them are repeated then C can have some nonzero off-diagonal elements.

    Notice that if p(x) is a polynomial then $\displaystyle p(D^2)$ is a diagonal matrix whose diagonal elements are $\displaystyle p(\lambda_1^2),\ldots,p(\lambda_n^2)$. If p(x) is as in my previous comment then it follows that $\displaystyle p(D^2) = D$. Since $\displaystyle A = UDU^*$, it follows that $\displaystyle p(A^2) = A$. I think you'll find that is a much simpler way to show that AB = BA, rather than trying to work with the matrix C.
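    The remark about repeated eigenvalues can be made concrete numerically (a sketch of my own; the specific matrices are my choices). With a repeated eigenvalue, $\displaystyle C = U^*BU$ can have a nonzero off-diagonal block and B still commutes with A:

    ```python
    import numpy as np

    # A has the repeated eigenvalue 1, so C may be a full block on that eigenspace.
    rng = np.random.default_rng(1)
    U, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # random orthogonal U
    D = np.diag([1.0, 1.0, 2.0])                       # lambda_1 = lambda_2
    A = U @ D @ U.T

    # C is block diagonal with respect to the eigenspaces of A: a full 2x2 block
    # for the repeated eigenvalue 1, and a 1x1 block for the eigenvalue 2.
    C = np.array([[1.0, 2.0, 0.0],
                  [2.0, 3.0, 0.0],
                  [0.0, 0.0, 5.0]])
    B = U @ C @ U.T

    print(np.allclose(A @ A @ B, B @ A @ A))   # True: B commutes with A^2
    print(np.allclose(A @ B, B @ A))           # True: and with A itself
    ```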

  5. #5
    Newbie
    Joined
    Feb 2010
    Posts
    4
    Thank you so much for your time. I really appreciate it.

    I do not, however, fully understand your proof yet. I haven't seen the theorem you're using that lets you say that $\displaystyle p(\lambda_i^2) = \lambda_i \Rightarrow p(A^2) = A$, or why B should obviously commute with the polynomial expression. It's important because there is a follow-up question saying: "What can you say if A is only assumed to be Hermitian?". Is the extra assumption that A is positive semidefinite unnecessarily strong?

  6. #6
    MHF Contributor
    Opalg's Avatar
    Joined
    Aug 2007
    From
    Leeds, UK
    Posts
    4,041
    Thanks
    10
    Quote Originally Posted by NeTTuR View Post
    Thank you so much for your time. I really appreciate it.

    I do not, however, fully understand your proof yet. I haven't seen the theorem you're using that lets you say that $\displaystyle p(\lambda_i^2) = \lambda_i \Rightarrow p(A^2) = A$, or why B should obviously commute with the polynomial expression.
    First fact: If $\displaystyle p(x) = a_kx^k + \ldots + a_1x + a_0$ is a polynomial, and A is an $\displaystyle n\times n$ matrix, then p(A) denotes the matrix $\displaystyle a_kA^k + \ldots + a_1A + a_0I$, where I is the $\displaystyle n\times n$ identity matrix. If B is a matrix that commutes with A, then B commutes with all the powers of A (easy proof by induction), and of course B commutes with the identity matrix. It follows that B commutes with p(A).

    Second fact: If D is a diagonal matrix, with diagonal entries $\displaystyle d_1,\ldots,d_n$, then each power $\displaystyle D^k$ of D is also diagonal, with entries $\displaystyle d_1^k,\ldots,d_n^k$ (again, an easy proof by induction). It follows that p(D) is diagonal, with entries $\displaystyle p(d_1),\ldots,p(d_n)$.

    I hope those facts will help to make sense of the claims that I made in previous comments.
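    Both facts are easy to check numerically; here is a quick sketch of my own (the particular polynomial and matrices are arbitrary choices):

    ```python
    import numpy as np

    def p(M):
        # p(x) = 2x^2 - 3x + 1, applied to a matrix (I stands in for the constant term)
        I = np.eye(M.shape[0])
        return 2 * (M @ M) - 3 * M + I

    rng = np.random.default_rng(2)
    A = rng.standard_normal((3, 3))
    B = 4 * (A @ A) + 2 * A + np.eye(3)   # a polynomial in A, so it commutes with A

    # First fact: since AB = BA, B also commutes with p(A)
    print(np.allclose(A @ B, B @ A))          # True
    print(np.allclose(p(A) @ B, B @ p(A)))    # True

    # Second fact: p of a diagonal matrix is diagonal with entries p(d_i)
    d = np.array([1.0, 2.0, 3.0])
    print(np.allclose(p(np.diag(d)), np.diag(2 * d**2 - 3 * d + 1)))   # True
    ```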

    Quote Originally Posted by NeTTuR View Post
    It's important because there is a follow up question saying: "What can you say if A is only assumed to be Hermitian?". Is the extra assumption that A is positive semidefinite unnecessarily strong?
    In fact, the result fails if A is only assumed to be Hermitian. For an example, look at the $\displaystyle 2\times 2$ matrix $\displaystyle A = \begin{bmatrix}0&1\\1&0\end{bmatrix}$. This is Hermitian, and its square is the identity matrix. So any $\displaystyle 2\times 2$ matrix B will commute with $\displaystyle A^2$. But you should be able to find a matrix B that does not commute with A.
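    To make the counterexample fully concrete (my own choice of B, e.g. $\displaystyle B = \begin{bmatrix}1&0\\0&0\end{bmatrix}$):

    ```python
    import numpy as np

    # A is Hermitian with A^2 = I, so every B commutes with A^2;
    # but this B does not commute with A itself.
    A = np.array([[0.0, 1.0],
                  [1.0, 0.0]])
    B = np.array([[1.0, 0.0],
                  [0.0, 0.0]])

    print(np.allclose(A @ A, np.eye(2)))        # True: A^2 = I
    print(np.allclose(A @ A @ B, B @ A @ A))    # True: B commutes with A^2
    print(np.allclose(A @ B, B @ A))            # False: but not with A
    ```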

  7. #7
    Newbie
    Joined
    Feb 2010
    Posts
    4
    But where in the proof does the assumption that A is positive semidefinite come in?

    Never mind. You should never do math when you are tired. Of course a polynomial with $\displaystyle p(\lambda_i^2) = \lambda_i$ is only guaranteed to exist if all the eigenvalues are non-negative. Otherwise the polynomial might have to take two different values at the same point $\displaystyle \lambda_i^2$. The eigenvalues in your example $\displaystyle A = \begin{pmatrix}0 & 1 \\ 1 & 0 \end{pmatrix}$ are $\displaystyle \lambda = \pm 1$, which would require both $\displaystyle p(1) = 1$ and $\displaystyle p(1) = -1$.

    Thank you so much for your help.
    Last edited by NeTTuR; Feb 15th 2010 at 09:19 PM.
