You solve MX = 0, where 0 is the null vector, M is the matrix, and X = (x1, x2, x3, x4). You will have 4 equations.
How does one find the kernel of a linear transformation? I have used row reduction to find the determinant of the following matrix, but do not know what it means to find the "kernel" of a linear transformation.
The given matrix is:
-1 2 -2 4
4 2 2 1
0 0 0 -1
5 0 1 -1
Any help is highly appreciated.
in your case you are looking for vectors (x_{1},x_{2},x_{3},x_{4}) with Ax = 0, where A is your matrix. this is exactly the same as solving the 4 linear equations:
-x_{1}+2x_{2}-2x_{3}+4x_{4} = 0
4x_{1}+2x_{2}+2x_{3}+x_{4} = 0
-x_{4} = 0
5x_{1}+x_{3}-x_{4} = 0
row-reducing the matrix will give you the same set of solutions as the original system (row-reduction just formalizes the process of "elimination and back-substitution").
the solutions to this system of linear equations ARE the null space of the matrix of the system (because these are homogeneous linear equations (which is a fancy way of saying: "all 0's on one side")).
the kernel of a linear transformation is the set of vectors in the null space of the matrix for that linear transformation.
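as a concrete check, the system above can be row-reduced with exact rational arithmetic. here's a minimal sketch in Python (plain standard library; the `rref` helper is just an illustration, not anything from the thread):

```python
from fractions import Fraction

# Coefficient matrix of the homogeneous system above (exact arithmetic).
M = [[Fraction(v) for v in row] for row in
     [[-1, 2, -2, 4],
      [ 4, 2,  2, 1],
      [ 0, 0,  0, -1],
      [ 5, 0,  1, -1]]]

def rref(A):
    """Row-reduce A to reduced row echelon form in place and return it."""
    rows, cols = len(A), len(A[0])
    pivot_row = 0
    for col in range(cols):
        # find a row at or below pivot_row with a nonzero entry in this column
        pr = next((r for r in range(pivot_row, rows) if A[r][col] != 0), None)
        if pr is None:
            continue
        A[pivot_row], A[pr] = A[pr], A[pivot_row]
        # scale so the pivot is 1
        piv = A[pivot_row][col]
        A[pivot_row] = [x / piv for x in A[pivot_row]]
        # eliminate this column from every other row
        for r in range(rows):
            if r != pivot_row and A[r][col] != 0:
                factor = A[r][col]
                A[r] = [a - factor * b for a, b in zip(A[r], A[pivot_row])]
        pivot_row += 1
        if pivot_row == rows:
            break
    return A

R = rref(M)
for row in R:
    print([int(x) for x in row])
# the RREF comes out as the 4x4 identity, so the only solution
# of Mx = 0 is x = 0: the kernel is trivial.
```

since every column ends up with a pivot, there are no free variables, which is exactly the "if you get the identity, your kernel is trivial" situation described below.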
generally speaking, the more equations you have, the more constraints you are putting on the vectors that can possibly satisfy them. in a loose sort of way, this tends to make the kernel smaller; sometimes you can tell a little bit about your solutions just from the size of the matrix.

the most interesting case is a "square" matrix: our hope is that we have "a perfect fit", just one solution. but matrices can be "defective" (the technical term is "singular"), and zero out more than just the 0-vector. in this case, we say the kernel is non-trivial (there is some non-zero vector x with Ax = 0). if the ONLY element of the kernel is {0} (the 0-vector), then we say the matrix is of full rank (and if it's square it's invertible: invertibility can be thought of as a kind of "faithfulness". the matrix may "scramble" the vectors, but we don't lose any information: given Ax = b we can recover x as A^{-1}b, so multiplication by an invertible matrix is "undo-able").
but if a matrix is singular, the kernel gets bigger: if we have x ≠ 0 with Ax = 0, then for any vector x', A(x' + x) = A(x'), so A must "collapse" the original space into some "smaller" space. the size (dimension) of the kernel measures how extreme this collapsing is. for example, the matrix consisting of all 0's has EVERYTHING in the domain in its kernel, so you can think of the 0-matrix as "shrinking everything down to the origin".
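to make the "collapsing" concrete, here's a tiny sketch with a deliberately singular 2x2 matrix (the matrix and vectors here are made-up illustrations, not part of the original problem):

```python
# A singular matrix "collapses" the plane onto a line.
A = [[1, 2],
     [2, 4]]   # second row is twice the first, so det(A) = 0

def apply(A, x):
    """Multiply matrix A by column vector x."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

# (2, -1) is a nonzero kernel vector: A applied to it gives (0, 0).
print(apply(A, [2, -1]))         # → [0, 0]

# Adding a kernel vector to any input doesn't change the output,
# so A cannot be "undone" -- information is lost.
print(apply(A, [3, 5]))          # → [13, 26]
print(apply(A, [3 + 2, 5 - 1]))  # → [13, 26] again
```

two different inputs land on the same output, which is exactly why a singular matrix has no inverse.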
while all of this might sound rather imposing and complicated, it's really nothing more than the same solving of linear equations you do in high-school (but often with a LOT more equations). above are 4 equations. solve them for x_{1},x_{2},x_{3} and x_{4}. row-reduction will work: if you get the identity, your kernel is trivial.
In other words, your linear transformation is invertible and maps every vector in R^4 to a single vector in R^4. The only vector that is mapped to (0, 0, 0, 0) is (0, 0, 0, 0) itself.
You said in your first post that you had found the determinant of the matrix. The fact that the determinant was non-zero tells you that the linear transformation is invertible.
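For what it's worth, you can double-check the determinant with a short sketch (cofactor expansion along the first row, exact integer arithmetic; the `det` helper is just an illustration):

```python
def det(A):
    """Determinant by cofactor expansion along the first row."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # minor: delete row 0 and column j
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det(minor)
    return total

M = [[-1, 2, -2, 4],
     [ 4, 2,  2, 1],
     [ 0, 0,  0, -1],
     [ 5, 0,  1, -1]]

print(det(M))  # → 30, which is non-zero, so ker(M) = {0}
```

A non-zero determinant and a trivial kernel are two sides of the same coin: both say the matrix has full rank.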