Proof about linear transformations

Hello everyone. I've been stuck with this for a while...

Let V be a finite-dimensional linear space and T:V->V a linear transformation. Show that T = rI, where r is some real number and I is the identity transformation, if and only if T∘S = S∘T for every linear transformation S:V->V.

I could really use some help with the non-trivial implication :)

Re: Proof about linear transformations

Hi,

Can you prove that the only n by n matrices that commute with every n by n matrix are the scalar matrices? (A matrix is scalar iff all non-diagonal elements are 0 and all elements on the main diagonal are the same.) Do you see that this answers your problem?

If you need help proving the matrix result, post again to this thread.
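Not a substitute for a proof, but the matrix claim is easy to sanity-check numerically. A minimal numpy sketch (the function name is mine; the E_ij below are the standard matrix units, which span all n by n matrices, so commuting with every E_ij is the same as commuting with every matrix):

```python
import numpy as np

def commutes_with_all_basis_matrices(T):
    """Check T @ E_ij == E_ij @ T for every matrix unit E_ij.

    Since the E_ij span the space of n x n matrices, commuting with
    all of them is the same as commuting with every n x n matrix.
    """
    n = T.shape[0]
    for i in range(n):
        for j in range(n):
            E = np.zeros((n, n))
            E[i, j] = 1.0
            if not np.allclose(T @ E, E @ T):
                return False
    return True

# A scalar matrix commutes with everything...
assert commutes_with_all_basis_matrices(3.0 * np.eye(4))
# ...while a non-scalar diagonal matrix already fails (try E_12).
assert not commutes_with_all_basis_matrices(np.diag([1.0, 2.0, 3.0, 4.0]))
```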

Re: Proof about linear transformations

I do see it, but I can't seem to prove it.

Re: Proof about linear transformations

Quote:

Originally Posted by

**johng** Hi,

Can you prove that the only n by n matrices that commute with every n by n matrix are the scalar matrices?

If you need help proving the matrix result, post again to this thread.

I can't. I need help. Sufficiency is easy: if T = rI, then ST = TS for all S. I don't see necessity.

I tried the following approach to the OP: if TS = ST for all S, let S be non-singular. Then S^{-1}TS = T. Stuck there. T = rI obviously works, but I couldn't show those are the only solutions (necessity).

Re: Proof about linear transformations

I just saw this approach on a different forum: take v ≠ 0 in R^n and extend it to a basis B = {v, b_2, b_3, ..., b_n}, then define a matrix A such that Av = v and Ab_k = 0 for k ≥ 2. My only question is: how can we know this matrix exists for any basis? If it exists, then TAv = ATv yields Tv = A(Tv), meaning Tv = rv, so every v in R^n is an eigenvector of T. Finally they proceed to prove that all the eigenvectors have the same eigenvalue.
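On the existence question: a linear map may be prescribed arbitrarily on a basis, so in coordinates one can take A = P E11 P^{-1}, where P has the basis vectors as columns and E11 is the matrix unit with a 1 in the (1,1) spot. A small numpy sketch of this construction (variable names mine, random completion of the basis is my choice):

```python
import numpy as np

rng = np.random.default_rng(0)

# Any nonzero v extends to a basis B = {v, b2, ..., bn}; a random
# completion works with probability 1 (we check invertibility of P).
n = 4
v = rng.standard_normal(n)
P = np.column_stack([v] + [rng.standard_normal(n) for _ in range(n - 1)])
assert abs(np.linalg.det(P)) > 1e-9  # the columns really form a basis

# Define A on the basis by A v = v, A b_k = 0, i.e. A = P E11 P^{-1}.
E11 = np.zeros((n, n))
E11[0, 0] = 1.0
A = P @ E11 @ np.linalg.inv(P)

assert np.allclose(A @ v, v)              # A v = v
for k in range(1, n):
    assert np.allclose(A @ P[:, k], 0.0)  # A b_k = 0
```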

Re: Proof about linear transformations

Let's try this in a different way: we'll use induction on dim(V). We will suppose we have chosen an appropriate basis for V, and identified Hom_{F}(V,V) with Mat_{nxn}(F) using this basis, for each n, so we can talk about matrices, instead of abstract linear transformations.

The 1-dimensional case (base case) is easy: EVERY 1x1 matrix is a scalar times 1, the 1x1 identity matrix, and matrix multiplication is just the field multiplication, and all 1x1 matrices commute with each other.

Now suppose that we have ST = TS for all nxn matrices S, if and only if T = rI_{n}.

Now it should be clear that if T = rI_{n+1}, that ST = TS, for every (n+1)x(n+1) matrix S, so we only need to show for our inductive step that if T DOES commute with every matrix S, T = rI_{n+1}.

To do this, we will write (using block matrix multiplication):

S = (S_{1}, S_{2}; S_{3}, S_{4}), T = (T_{1}, T_{2}; T_{3}, T_{4})

where S_{1},T_{1} are nxn matrices, S_{2},T_{2} are nx1 matrices, S_{3},T_{3} are 1xn matrices, and S_{4},T_{4} are just the (n+1),(n+1)-entries of S and T, respectively.

Now, we have:

ST = (S_{1}T_{1} + S_{2}T_{3}, S_{1}T_{2} + S_{2}T_{4}; S_{3}T_{1} + S_{4}T_{3}, S_{3}T_{2} + S_{4}T_{4})

while:

TS = (T_{1}S_{1} + T_{2}S_{3}, T_{1}S_{2} + T_{2}S_{4}; T_{3}S_{1} + T_{4}S_{3}, T_{3}S_{2} + T_{4}S_{4})

Our goal is to first prove that if T ≠ rI_{n+1}, we can find SOME matrix S that doesn't commute with T.

Let's look at T_{2}, first. If T_{2} is not all 0's, let t_{k(n+1)} be the first non-zero entry (these are all in the last column of T, so the "k" is really all we're interested in).

For our "S", we'll use E_{(n+1)k}, which has all 0's except for the n+1,k-entry which is 1, in other words:

S_{1} = 0, S_{2} = 0, S_{4} = 0, and S_{3} = e_{k}^{T} = (0,0,...,1,...,0) (a row vector with 1 in the k-th place). This gives us:

ST = (0, 0; S_{3}T_{1}, S_{3}T_{2}) while TS = (T_{2}S_{3}, 0; T_{4}S_{3}, 0).

We need to show these 2 matrices are not equal. Note that in ST, the 1x1 block in the lower right is S_{3}T_{2}, which is just the dot product of e_{k} and T_{2}; this returns the k-th coordinate of T_{2}, which is t_{k(n+1)} ≠ 0.

However, in TS, this block is 0, so these two matrices CANNOT be equal (we don't even need to look at the other parts).

So if T is to commute with EVERY S (including the particular matrix E_{(n+1)k}), we MUST have that the T_{2} block is all zero.

Hopefully, you can guess what is coming next: if T_{3} is not all 0's, let t_{(n+1)k} be the first non-zero entry. This time, we'll use the "S" matrix S = E_{k(n+1)}, so that:

S_{1} = 0, S_{3} = 0, S_{4} = 0, and S_{2} = e_{k} (as a column vector). Here, we have:

ST = (S_{2}T_{3}, S_{2}T_{4}; 0, 0) while TS = (0, T_{1}S_{2}; 0, T_{3}S_{2}).

Again, we only have to look at the lower right corner, in ST, this block is 0; while in TS it is T_{3}e_{k}, which again returns the k-th entry of T_{3} = t_{(n+1)k} ≠ 0.

So, here, again, we find that if T is to commute with EVERY S, then T_{3} = 0.

This means that T is of the form:

T = (T_{1}, 0; 0, T_{4}),

and we have simpler forms for ST and TS:

ST = (S_{1}T_{1}, S_{2}T_{4}; S_{3}T_{1}, S_{4}T_{4}) while TS = (T_{1}S_{1}, T_{1}S_{2}; T_{4}S_{3}, T_{4}S_{4}).

Now, comparing these 2 matrices (in particular at the nxn block in the upper left), we see that for them to be equal, we must have: S_{1}T_{1} = T_{1}S_{1}.

For this to be true of ANY nxn matrix S_{1}, we must have T_{1} = rI_{n}, by our induction hypothesis. So we can simplify even further:

T = (rI_{n}, 0; 0, T_{4}).

T_{4} is just a single number, let's call it r'. Writing ST and TS with what we now know, we have:

ST = (rS_{1}, r'S_{2}; rS_{3}, r'S_{4}) while TS = (rS_{1}, rS_{2}; r'S_{3}, r'S_{4}).

Comparing the upper-right blocks together, and the lower-left blocks together, we see that for these two matrices to be equal we need r = r', thus:

T = rI_{n+1}, quod erat demonstrandum.
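The finished result can also be sanity-checked numerically: for any non-scalar T some matrix unit E_{ij} fails to commute with it, while a scalar matrix commutes with all of them. A small numpy sketch (function names mine):

```python
import numpy as np

def is_scalar_matrix(T, tol=1e-9):
    """True iff T = r*I for some scalar r."""
    return np.allclose(T, T[0, 0] * np.eye(T.shape[0]), atol=tol)

def find_noncommuting_unit(T):
    """Return some matrix unit E_ij with T @ E != E @ T, or None."""
    n = T.shape[0]
    for i in range(n):
        for j in range(n):
            E = np.zeros((n, n))
            E[i, j] = 1.0
            if not np.allclose(T @ E, E @ T):
                return E
    return None

rng = np.random.default_rng(1)
for _ in range(100):
    T = rng.standard_normal((4, 4))  # non-scalar with probability 1
    assert find_noncommuting_unit(T) is not None

# ...whereas a scalar matrix commutes with every unit:
assert find_noncommuting_unit(2.5 * np.eye(4)) is None
assert is_scalar_matrix(2.5 * np.eye(4))
```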

Re: Proof about linear transformations

Masterly proof Deveno. Thank you so much.

Re: Proof about linear transformations

Deveno has a good start. As a matter of interest, let's pick up from:

TS = (T1S1+T2S3, T1S2+T2S4; T3S1+T4S3, T3S2+T4S4)

ST = (S1T1+S2T3, S1T2+S2T4; S3T1+S4T3, S3T2+S4T4)

("reply with quote" didn't work)

Continuing on a different track as a matter of interest:

Let S2 = S3 = 0.

Then:

ST = (S1T1, S1T2; S4T3, S4T4)

TS = (T1S1, T2S4; T3S1, T4S4)

S1=0, S4≠0 -> T3=0

S4=0, S1 invertible -> T2=0

Then the upper-left blocks give S1T1 = T1S1 for all S1, so T1 = rI_n by the induction hypothesis: T is diagonal with n r’s and one r’.

To force r’=r, use S with first row all 1’s and everything else 0.

Therefore T = rI for (n+1)x(n+1).

Re: Proof about linear transformations

Actually, still using Deveno’s notation, it’s easier to start with:

S=(S1,0;0,1) T= (T1,T2;T3,T4)

Then from ST=TS you get:

S1T2=T2, & S1=0 -> T2=0

T3S1=T3, & S1=0 -> T3=0

Then the upper-left blocks give S1T1 = T1S1 for all S1, so T1 = rI_n by the induction hypothesis: T is diagonal with n r’s and one r’.

To force r’=r, Use S with first row all 1’s and everything else 0.

Therefore T = rI for (n+1)x(n+1).
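This block computation is easy to verify numerically for a concrete size. A minimal numpy sketch with n = 3 (block assembly via np.block; variable names match the post):

```python
import numpy as np

n = 3
rng = np.random.default_rng(2)
T1 = rng.standard_normal((n, n))
T2 = rng.standard_normal((n, 1))
T3 = rng.standard_normal((1, n))
T4 = rng.standard_normal((1, 1))
T = np.block([[T1, T2], [T3, T4]])

# S = (S1, 0; 0, 1) as in the post.
S1 = rng.standard_normal((n, n))
S = np.block([[S1, np.zeros((n, 1))],
              [np.zeros((1, n)), np.ones((1, 1))]])

ST, TS = S @ T, T @ S
# Upper-right blocks: S1 T2 in ST, T2 in TS -> S1 T2 = T2 forces T2 = 0.
assert np.allclose(ST[:n, n:], S1 @ T2)
assert np.allclose(TS[:n, n:], T2)
# Lower-left blocks: T3 in ST, T3 S1 in TS -> T3 S1 = T3 forces T3 = 0.
assert np.allclose(ST[n:, :n], T3)
assert np.allclose(TS[n:, :n], T3 @ S1)
```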

Re: Proof about linear transformations

Quote:

Originally Posted by

**Hartlw** Actually, still using Deveno’s notation, it’s easier to start with:

S=(S1,0;0,1) T= (T1,T2;T3,T4)

Then from ST=TS you get:

S1T2=T2, & S1=0 -> T2=0

T3S1=T3, & S1=0 -> T3=0

Then the upper-left blocks give S1T1 = T1S1 for all S1, so T1 = rI_n by the induction hypothesis: T is diagonal with n r’s and one r’.

To force r’=r, Use S with first row all 1’s and everything else 0.

Therefore T = rI for (n+1)x(n+1).

It took me a while to get what you were saying, but, yes: if we choose our "S" to be:

S = (S_{1}, 0; 0, 1)

then:

ST = (S_{1}T_{1}, S_{1}T_{2}; T_{3}, T_{4}) while TS = (T_{1}S_{1}, T_{2}; T_{3}S_{1}, T_{4}),

which gives T_{2} = 0, T_{3} = 0 and S_{1}T_{1} = T_{1}S_{1} with much less work. So that makes for a substantial improvement.

I think the matrix:

S = (0, 0; e_{1}^{T}, 0)

works just as well to show r = r', but starting from:

"T_{4} is just a single number..."

it's clear that the off-diagonal (block) elements only differ by the scalar in front, so the two scalars r,r' must be the same:

If (for example) rS_{2} = r'S_{2}, then rS_{2} - r'S_{2} = 0, thus (r - r')S_{2} = 0.

Since we can choose S_{2} freely, we can choose it to be non-zero, which then forces r - r' = 0 (for any vector space V, with v in V (and kxm matrices do form a vector space for any k,m), if av = 0 with v non-zero, then a = 0).

That said, there is nothing wrong with picking the S you prefer, it clearly works quite well.

*******

(Project Crazy Project has a nice proof as well, but it's a little "subscript-heavy" if you know what I mean)

Re: Proof about linear transformations

Deveno, you write (in quotes):

“it's clear that the off-diagonal (block) elements only differ by the scalar in front, so the two scalars r,r' must be the same:” Not clear to me at all.

“If (for example) rS2 = r'S2, then rS2 - r'S2 = 0, thus (r - r')S2 = 0.” Where does this come from?

Instead of my previous proof of r=r’, you might prefer:

S = (I, 1; 0, 0), T = (rI, 0; 0, r’), where the “1” is a non-zero nx1 column (e.g. all 1’s), which gives from ST = TS:

r = r’

EDIT:

In __very, very__ abbreviated notation (S1=nxn and the others have to satisfy requirements of partitioning), you could also use:

S=(S1,1;0,0), T=(r,0;0,r’), ST=TS -> r=r'
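This r = r' step can likewise be checked numerically. A minimal numpy sketch (reading the "1" block as the all-ones column, which is my assumption):

```python
import numpy as np

n = 3
r, rp = 2.0, 5.0  # deliberately distinct, to see the products differ

# T = (rI, 0; 0, r') after the earlier steps of the proof.
T = np.block([[r * np.eye(n), np.zeros((n, 1))],
              [np.zeros((1, n)), rp * np.ones((1, 1))]])
# S = (I, 1; 0, 0), with "1" read as the all-ones n x 1 column.
S = np.block([[np.eye(n), np.ones((n, 1))],
              [np.zeros((1, n)), np.zeros((1, 1))]])

ST, TS = S @ T, T @ S
# Upper-right blocks: r' * ones in ST vs r * ones in TS, so
# ST = TS would force r = r'. With r != r' the products differ:
assert not np.allclose(ST, TS)
assert np.allclose(ST[:n, n:], rp * np.ones((n, 1)))
assert np.allclose(TS[:n, n:], r * np.ones((n, 1)))
```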

Re: Proof about linear transformations

Quote:

Originally Posted by

**Deveno** It took me a while to get what you were saying, but, yes: if we choose our "S" to be:

S = (S_{1}, 0; 0, 1)

then:

ST = (S_{1}T_{1}, S_{1}T_{2}; T_{3}, T_{4}) while TS = (T_{1}S_{1}, T_{2}; T_{3}S_{1}, T_{4}),

which gives T_{2} = 0, T_{3} = 0 and S_{1}T_{1} = T_{1}S_{1} with much less work. So that makes for a substantial improvement.

I think the matrix:

S = (0, 0; e_{1}^{T}, 0)

works just as well to show r = r', but starting from:

That's it. Perfect. End of proof, end of story. What's the point of the rest?

Because I didn't understand e_{1}^{T}, which is (1,0,0...), 1xn, I simply duplicated your r=r' proof without knowing it. Sorry.

Re: Proof about linear transformations

Just as a matter of consistency of style with the rest of the proof:

S = (S1, 0; 0, 1), T = (T1, 0; 0, T4) gives T1 = rI_n; an S with a non-zero off-diagonal block is still needed to get T4 = r.