Math Help - Matrix Banach Space

1. Matrix Banach Space

Let M_n(R) be the n x n matrices over the reals R. Define a norm || || on M_n(R) by ||A|| = the sum of the absolute values of all the entries of A. Further define a new norm || ||* by ||A||* = sup{||AX||/||X|| : X != 0}.
Show that

1. M_n(R) under || ||* is complete.
2. If ||A||<1, then I-A is nonsingular, where I is the identity matrix.
3. The set of nonsingular matrices in M_n(R) is open.
4. Find ||B||*, where B is 2x2 and b_11=1, b_12=2, b_21=3, b_22=4.

There is a series of over 10 questions on the norm || ||. I've solved most of them, but I've been stuck on (have no clue for) the ones above for a week.

I'd appreciate any hints.

2. Originally Posted by nubmathie
Let M_n(R) be the n x n matrices over the reals R. Define a norm || || on M_n(R) by ||A|| = the sum of the absolute values of all the entries of A. Further define a new norm || ||* by ||A||* = sup{||AX||/||X|| : X != 0}.
Show that

1. M_n(R) under || ||* is complete.
All finite-dimensional normed vector spaces over R are complete.

Originally Posted by nubmathie
2. If ||A||<1, then I-A is nonsingular, where I is the identity matrix.
I think the proof for this needs to come in two stages. (i) show that $\|A\|^*\leqslant\|A\|$; (ii) show that if $\|A\|^*<1$ then $I-A$ is nonsingular.

I don't like that asterisk notation for the operator norm, so I'll write $\|A\|_{\text{op}}$ instead of $\|A\|^*$; and I'll use $\|A\|_{\Sigma}$ for the sum of the absolute values of all the entries of A.

To show (i), use the fact that $\|A\|_{\text{op}}^2 = \|A^{\textsc t}A\|_{\text{op}}$ ( $A^{\textsc t}$ is the transpose of A), and check that $\|A^{\textsc t}A\|_{\Sigma}\leqslant \|A\|_{\Sigma}^2$. Then $A^{\textsc t}A$ is a positive semidefinite matrix, so its operator norm is equal to its largest eigenvalue. This is less than or equal to the sum of the eigenvalues (all of which are nonnegative), which in turn is the trace, i.e. the sum of the diagonal elements, and therefore at most the sum of the absolute values of all the entries.

Putting all that together, you see that $\|A\|_{\text{op}}^2 = \|A^{\textsc t}A\|_{\text{op}} \leqslant \|A^{\textsc t}A\|_{\Sigma}\leqslant \|A\|_{\Sigma}^2$.

For (ii), use the Neumann series $(I-A)^{-1} = I+A+A^2+A^3+\ldots$, which converges if $\|A\|_{\text{op}}<1$.
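As a quick numerical sanity check (a sketch assuming NumPy, with an arbitrarily chosen example matrix A satisfying $\|A\|_{\Sigma}<1$ — not part of the proof), the partial sums of the Neumann series do converge to $(I-A)^{-1}$:

```python
import numpy as np

# Hypothetical example with Sigma-norm 0.1+0.2+0.3+0.2 = 0.8 < 1.
A = np.array([[0.1, 0.2],
              [0.3, 0.2]])
assert np.abs(A).sum() < 1

# Partial sums of the Neumann series I + A + A^2 + A^3 + ...
S = np.zeros_like(A)
term = np.eye(2)
for _ in range(200):
    S = S + term
    term = term @ A

# The series should converge to (I - A)^{-1}.
inv = np.linalg.inv(np.eye(2) - A)
print(np.allclose(S, inv))   # True
```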

Originally Posted by nubmathie
3. The set of nonsingular matrices in M_n(R) is open.
Suppose that A is invertible and that $\|B-A\|_{\text{op}}<\|A^{-1}\|_{\text{op}}^{-1}$. If $C=B-A$ then $B = A+C = A(I+A^{-1}C)$. This is the product of two invertible matrices (because $\|A^{-1}C\|_{\text{op}}<1$) and is therefore invertible. So any matrix sufficiently close to A is invertible, and that shows that the set of invertible matrices is open.
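A numerical illustration of this openness argument (a sketch assuming NumPy; the Euclidean operator norm is used here purely for convenience, since the argument works for any submultiplicative norm): every perturbation C with $\|C\|_{\text{op}}<\|A^{-1}\|_{\text{op}}^{-1}$ leaves A + C invertible.

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])          # invertible, det A = 6
Ainv = np.linalg.inv(A)

# Euclidean operator norm = largest singular value.
op = lambda M: np.linalg.norm(M, 2)

radius = 1.0 / op(Ainv)             # matrices within this distance of A stay invertible
for _ in range(100):
    C = rng.standard_normal((2, 2))
    C *= 0.9 * radius / op(C)       # scale so ||C||_op < ||A^{-1}||_op^{-1}
    B = A + C                       # then ||A^{-1}C||_op < 1, so B is invertible
    assert abs(np.linalg.det(B)) > 1e-6
```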
Originally Posted by nubmathie
4. Find ||B||*, where B is 2x2 and b_11=1, b_12=2, b_21=3, b_22=4.
$\|B\|_{\text{op}}^2$ is the larger of the two eigenvalues of $B^{\textsc t}B$, which you can easily calculate. I get $\|B\|_{\text{op}} = \sqrt{15 + \sqrt{221}}$.
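This eigenvalue computation is easy to confirm numerically (a sketch assuming NumPy): $B^{\textsc t}B = \begin{bmatrix}10&14\\14&20\end{bmatrix}$, whose larger eigenvalue is $15+\sqrt{221}$.

```python
import numpy as np

B = np.array([[1.0, 2.0],
              [3.0, 4.0]])
eigs = np.linalg.eigvalsh(B.T @ B)          # eigenvalues of B^T B
op_norm = np.sqrt(eigs.max())               # largest singular value of B
print(op_norm)                               # ≈ 5.465
print(np.sqrt(15 + np.sqrt(221)))            # same value
```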

3. This was very helpful. But notice that in my definition of || ||_op, X refers to a matrix, so || ||_op is not exactly the usual operator norm. So I'm not sure if some of the results you've used for the operator norm still hold for my || ||_op norm. Further, there are a few "advanced" results that I'm not supposed to use in my proof. So for each of the 4 questions:

1. Is this a consequence of the fact that all norms on R^n are equivalent? (I'm not supposed to use the theorem you stated.)

2. I've shown part i) in an earlier problem, but I don't see how i) contributes to part ii).

3. Many thanks for this one.

4. I believe this is a known result for the usual operator norm. I'm not sure if it holds for my || ||_op.

4. Originally Posted by nubmathie
This was very helpful. But notice that in my definition of || ||_op, X refers to a matrix, so || ||_op is not exactly the usual operator norm.
Oh, I see. I was assuming that X was a column vector in R^n (with the euclidean norm). Your norm is an operator norm, but as you say it is not the usual operator norm. It is the norm that M_n(R) has as an algebra of operators acting on M_n(R) itself (with the Σ-norm).

Originally Posted by nubmathie
So I'm not sure if some of the results you've used for the operator norm still hold for my || ||_op norm.
Nor am I! I was using results for operator norms over Hilbert spaces, whereas your norm is an operator norm over a Banach space.

Originally Posted by nubmathie
1. Is this a consequence of the fact that all norms on R^n are equivalent?
Yes (because M_n(R) can be regarded as R^(n^2)).

Originally Posted by nubmathie
2. I've shown part i) in an earlier problem, but I don't see how i) contributes to part ii).
My proof of 2. was unnecessarily complicated. The essence is to use the completeness of M_n(R) in the $\|\,.\,\|_{\Sigma}$ norm to deduce that the series $I+A+A^2+A^3+\ldots$ converges. For this, you need to show that $\|A^k\|_{\Sigma}\leqslant\|A\|_{\Sigma}^k$ (for k=1,2,3,...). This follows from the fact that $\|\,.\,\|_{\Sigma}$ is an algebra norm (in other words $\|AB\|_{\Sigma} \leqslant \|A\|_{\Sigma}\|B\|_{\Sigma}$), which you can prove by direct computation without involving any other norm.
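The algebra-norm inequality $\|AB\|_{\Sigma}\leqslant\|A\|_{\Sigma}\|B\|_{\Sigma}$ can be spot-checked on random matrices (a sketch assuming NumPy; of course this is no substitute for the direct computation):

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = lambda M: np.abs(M).sum()   # the Sigma-norm: sum of |entries|

for _ in range(1000):
    A = rng.standard_normal((3, 3))
    B = rng.standard_normal((3, 3))
    assert sigma(A @ B) <= sigma(A) * sigma(B) + 1e-9
```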

Originally Posted by nubmathie
4. I believe this is a known result for the usual operator norm. I'm not sure if it holds for my || ||_op.
It certainly doesn't. What you want to do is to maximise the Σ-norm of BX subject to the Σ-norm of X being at most 1. If $X = \begin{bmatrix}w&x\\y&z\end{bmatrix}$ then $BX = \begin{bmatrix}1&2\\3&4\end{bmatrix} \begin{bmatrix}w&x\\y&z\end{bmatrix} = \begin{bmatrix}w+2y&x+2z\\3w+4y&3x+4z\end{bmatrix}$. So you want to maximise $|w+2y| + |x+2z| + |3w+4y| + |3x+4z|$ subject to $|w|+|x|+|y|+|z|\leqslant1$. That looks quite unpleasant. But suppose we cheat a bit and assume that w, x, y and z are all positive. Then the problem becomes: maximise 4w+4x+6y+6z subject to w+x+y+z=1, and the maximum is easily seen to be 6. I would guess that this is the correct answer to the problem, but I don't offhand see how to justify that (without making the cheating assumption).
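The guess of 6 holds up under random search (a sketch assuming NumPy; the matrix X attaining the value 6 below is just one convenient choice):

```python
import numpy as np

rng = np.random.default_rng(2)
sigma = lambda M: np.abs(M).sum()
B = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# Random points of the unit Sigma-sphere never exceed 6 ...
best = 0.0
for _ in range(20000):
    X = rng.standard_normal((2, 2))
    X /= sigma(X)
    best = max(best, sigma(B @ X))
assert best <= 6 + 1e-9

# ... and X = E_21 (a single 1 in the bottom-left entry) attains it:
X = np.array([[0.0, 0.0],
              [1.0, 0.0]])
print(sigma(B @ X))   # 6.0
```

One way to justify the answer without the cheating assumption: $X\mapsto\|BX\|_{\Sigma}$ is convex, so its maximum over the unit $\Sigma$-ball is attained at an extreme point of the ball, that is, at a matrix $\pm E_{ij}$ with a single entry $\pm1$; and $\|BE_{ij}\|_{\Sigma}$ is the absolute sum of the i-th column of B, namely 4 or 6.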

5. Originally Posted by Opalg
It certainly doesn't. What you want to do is to maximise the Σ-norm of BX subject to the Σ-norm of X being at most 1. If $X = \begin{bmatrix}w&x\\y&z\end{bmatrix}$ then $BX = \begin{bmatrix}1&2\\3&4\end{bmatrix} \begin{bmatrix}w&x\\y&z\end{bmatrix} = \begin{bmatrix}w+2y&x+2z\\3w+4y&3x+4z\end{bmatrix}$. So you want to maximise $|w+2y| + |x+2z| + |3w+4y| + |3x+4z|$ subject to $|w|+|x|+|y|+|z|\leqslant1$. That looks quite unpleasant. But suppose we cheat a bit and assume that w, x, y and z are all positive. Then the problem becomes: maximise 4w+4x+6y+6z subject to w+x+y+z=1, and the maximum is easily seen to be 6. I would guess that this is the correct answer to the problem, but I don't offhand see how to justify that (without making the cheating assumption).
Why are you using the constraint that |w|+|x|+|y|+|z|<=1 ?

6. Originally Posted by nubmathie
Why are you using the constraint that |w|+|x|+|y|+|z|<=1 ?
By definition, $\|B\|_{\text{op}} = \sup\{\|BX\|_{\Sigma}/\|X\|_{\Sigma}:X\ne0\}$. By linearity, this is the same as $\sup\{\|BX\|_{\Sigma}:\|X\|_{\Sigma}\leqslant1\}$. (You can even replace " $\leqslant1$" by "=1" in that last expression.) And $\|X\|_{\Sigma} = |w|+|x|+|y|+|z|$.

7. I have one more question. How would you show that

||AB||_op <= ||A||_op x ||B||_op

Many thanks.

8. Originally Posted by nubmathie
I have one more question. How would you show that

||AB||_op <= ||A||_op x ||B||_op
First note that $\|AX\|_{\Sigma}\leqslant\|A\|_{\text{op}}\|X\|_{\Sigma}$ (from the definition of the op-norm). Then $\|ABX\|_{\Sigma}\leqslant\|A\|_{\text{op}}\|BX\|_{\Sigma} \leqslant\|A\|_{\text{op}}\|B\|_{\text{op}}\|X\|_{\Sigma}$. Now take the sup over $\|X\|_{\Sigma}\leqslant1$ to get $\|AB\|_{\text{op}}\leqslant\|A\|_{\text{op}}\|B\|_{\text{op}}$.
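This submultiplicativity can also be spot-checked numerically. The sketch below assumes NumPy, and assumes (an extra observation, from the extreme points $\pm E_{ij}$ of the unit $\Sigma$-ball, stated here as an assumption) that the $\Sigma$-induced op-norm equals the maximum absolute column sum of the matrix:

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed closed form for the Sigma-induced operator norm:
# the maximum over columns of the sum of absolute entries.
opn = lambda M: np.abs(M).sum(axis=0).max()

for _ in range(1000):
    A = rng.standard_normal((3, 3))
    B = rng.standard_normal((3, 3))
    assert opn(A @ B) <= opn(A) * opn(B) + 1e-9
```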