Linear transformation and kernel

Hi everybody

We have a linear transformation f: R^{2x2} --> R, with respect to the standard basis in R^{2x2} and the standard basis in R, and a transformation matrix F = (1,1,1,0).

How do I find the kernel of f?

Normally I would solve f(x) = 0:

(1,1,1,0|0)

but that doesn't make sense?

Re: Linear transformation and kernel

sure it does. suppose x + y + z + 0w = 0.

well surely you can see that "w" can be anything. also, we can pick two of {x,y,z} to be anything we please, but then the third choice is forced upon us: if we specify y and z, then x = -y - z.

so the kernel of f is the matrices (in the standard basis) of the form:

[-y-z  y]
[  z   w]
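this parametrization is easy to sanity-check numerically; here is a quick sketch (numpy, my addition, not part of the original post), treating each kernel matrix as its coordinate 4-vector:

```python
import numpy as np

# the transformation matrix of f with respect to the standard bases
F = np.array([[1, 1, 1, 0]])

# build kernel vectors (-y-z, y, z, w) for a few random choices of y, z, w
rng = np.random.default_rng(0)
checks = []
for _ in range(5):
    y, z, w = rng.integers(-10, 10, size=3)
    v = np.array([-y - z, y, z, w])
    checks.append((F @ v).item() == 0)  # f maps each such vector to 0
print(all(checks))
```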

let's pick three arbitrary numbers for y, z and w: how about 7, 32, and -5? then the first entry is forced: x = -y - z = -39. so we're asserting that the matrix:

M = [-39   7]
    [ 32  -5]

is in the kernel of f.

let's write M in the standard basis for R^{2x2}:

M = -39 [1 0] + 7 [0 1] + 32 [0 0] - 5 [0 0]
        [0 0]     [0 0]      [1 0]     [0 1]

so M = -39E_{1} + 7E_{2} + 32E_{3} - 5E_{4}.

in the standard basis, this is the (column) vector (-39,7,32,-5)^{T}. so the standard basis representation of f(M) is:

F(-39,7,32,-5)^{T} = (1,1,1,0)(-39,7,32,-5)^{T} = (-39 + 7 + 32 + 0) = (0)

(and 1x1 matrices are just scalars, so f(M) = 0).
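the arithmetic for this particular M can be checked directly (a small numpy sketch, my addition):

```python
import numpy as np

F = np.array([[1, 1, 1, 0]])
M_coords = np.array([-39, 7, 32, -5])  # coordinates of M in the standard basis

# f(M) in the standard basis of R is the 1x1 product F * M_coords
result = (F @ M_coords).item()
print(result)  # -39 + 7 + 32 + 0 = 0
```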

it should be obvious that the kernel of f has dimension 3 (because its image has dimension 1), here is one basis:

[-1 1]   [-1 0]   [0 0]
[ 0 0]   [ 1 0]   [0 1]

(the "trick" here is that 2x2 real MATRICES can be thought of as "4-vectors" (elements of R^{4}): just line up the rows "end-to-end" to make one long column, matching the coordinate order used above).
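one way to verify that three matrices really form a basis of ker(f): flatten each one row-by-row into a 4-vector, check it lands in the null space of F, and check linear independence. a sketch (numpy, my addition; the basis is the natural one obtained by setting y, z, w to the unit values in turn):

```python
import numpy as np

F = np.array([[1, 1, 1, 0]])

# candidate basis of ker(f), each 2x2 matrix flattened row-by-row to a 4-vector
B = np.array([
    [-1, 1, 0, 0],   # [[-1, 1], [0, 0]]
    [-1, 0, 1, 0],   # [[-1, 0], [1, 0]]    
    [ 0, 0, 0, 1],   # [[ 0, 0], [0, 1]]
])

in_kernel = bool(np.all(F @ B.T == 0))            # each candidate maps to 0
independent = int(np.linalg.matrix_rank(B)) == 3  # and they are independent
print(in_kernel, independent)
```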

Re: Linear transformation and kernel

This might be a stupid question, but I don't understand how you get the first matrix?

I understand that x = -t_{1} - t_{2}, and I thought that was the kernel of f?

Re: Linear transformation and kernel

the 4 "coordinates" of a 2x2 matrix in the standard basis are the "entries" of that matrix in its usual "block form".

what the matrix F tells us is:

"we take the first "coordinate" of a matrix A (its 1,1-entry), and the second "coordinate" (its 1,2-entry), and the third "coordinate" (its 2,1-entry) and add them"

(we ignore the "fourth coordinate" (the 2,2-entry), because it gets multiplied by 0).

so ker(f) consists of those matrices whose coordinate vectors (in the standard basis) lie in the null space of the matrix F.

when one has "free variables" (or parameters), it is customary to assign them to the non-pivot columns of the matrix for f, namely F.

i could write an element of null(F) as (x,y,z,w), or if you prefer, (x_{1},x_{2},x_{3},x_{4}).

we know that x_{1} + x_{2} + x_{3} = 0.

for example, the 4-vector (1,0,-1,300) works. it really does not matter if we write:

-1 = -1 - 0 (fix the first two coordinates, and derive the third)

0 = -1 - (-1) (fix the first and third, and derive the second), or:

1 = -0 - (-1) (fix the second and third, and derive the first).
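the example vector (1,0,-1,300) is easy to verify (my check, not in the original post):

```python
import numpy as np

F = np.array([1, 1, 1, 0])
v = np.array([1, 0, -1, 300])
total = int(F @ v)  # x1 + x2 + x3 + 0*x4 = 1 + 0 + (-1) = 0
print(total)
```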

i'll show you how this works:

prove that a vector of the form (u,v,-u-v,w) is of the form (-y-z,y,z,w), and vice versa:

set u = -y-z, and v = y.

then -u-v = -(-y-z) - y = (y+z) - y = z.

set y = v, and z = -u-v.

then -y - z = -v - (-u-v) = -v + (u+v) = u.
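the substitutions above can be exercised numerically; a sketch (the random u, v, w are my choices):

```python
import numpy as np

rng = np.random.default_rng(1)
agree = True
for _ in range(5):
    u, v, w = (int(t) for t in rng.integers(-10, 10, size=3))
    # rewrite (u, v, -u-v, w) in the (-y-z, y, z, w) form via y = v, z = -u-v
    y, z = v, -u - v
    agree &= (-y - z, y, z, w) == (u, v, -u - v, w)
print(agree)
```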

a similar proof shows that any vector of the form (a,-a-b,b,w) matches either of the two forms above. they all specify the same set of 4-vectors.

but remember, coordinates alone do not tell us which "vector" we have (unless our 4-vector is in R^{4}, and the "standard basis" is being implicitly used).

that is why i specified what the 4 "basis matrices" E_{1},E_{2},E_{3} and E_{4} ARE.

the matrix i give is simply:

(-y-z)E_{1} + yE_{2} + zE_{3} + wE_{4} in "block form".
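assembling that linear combination explicitly, with the thread's example values y = 7, z = 32, w = -5 (a numpy sketch, my addition):

```python
import numpy as np

# the four standard basis matrices of R^{2x2}
E1 = np.array([[1, 0], [0, 0]])
E2 = np.array([[0, 1], [0, 0]])
E3 = np.array([[0, 0], [1, 0]])
E4 = np.array([[0, 0], [0, 1]])

y, z, w = 7, 32, -5
M = (-y - z) * E1 + y * E2 + z * E3 + w * E4  # the combination, in block form
print(M)
```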
