# Thread: How to determine if a transformation is invertible

1. ## How to determine if a transformation is invertible

Is there a general process to use to determine if a transformation has an inverse (and not necessarily to FIND the inverse, just to determine if it is indeed invertible)?

The only thing my book seems to say is that T is invertible if and only if [T] with respect to $\beta$ and $\gamma$ is invertible. But then, how do you determine whether that matrix is invertible?

Is there any other way to determine this? Thanks for any help!

2. Oops, I think I just came across a lemma that states that for a transformation to be invertible, the vector spaces have to be of the same dimension? For example, if T:V ---> W, then T is invertible if and only if dim(V) = dim(W). Is this true?

EDIT: Does this also mean that if dim(V) = dim(W) then T is invertible? Sorry, my book words it differently than above and it's confusing me! Any help is appreciated!

3. A linear transformation $T:V\rightarrow W$ is injective iff $\mbox{dim }T(V) = \mbox{dim } V$, or equivalently iff $\mbox{ker } T = \{0\}$. When $\dim V = \dim W$ is finite, injectivity is the same as invertibility.

The dimensions of V and W tell you very little about whether T is invertible. T can't be invertible if $\dim V > \dim W$, but in all other cases you need more information about T.

Think about your question "Does this also mean that if dim(V) = dim(W) then T is invertible?". T could be many things. What if T takes all of V to 0?
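To make that concrete, here is a small numpy sketch (the two maps are made-up examples, not from the thread): the zero map on R^3 has dim(V) = dim(W) yet rank 0, while the identity has full rank, i.e. trivial kernel.

```python
import numpy as np

# Hypothetical example: V = W = R^3, so dim V = dim W, yet the
# zero map T(v) = 0 is not invertible -- its kernel is all of V.
T_zero = np.zeros((3, 3))

# An invertible map on R^3 for comparison (the identity).
T_id = np.eye(3)

# rank(T) = dim T(V); T is injective iff rank(T) = dim V,
# i.e. iff ker T = {0}.
print(np.linalg.matrix_rank(T_zero))  # 0 -> kernel is all of R^3
print(np.linalg.matrix_rank(T_id))    # 3 -> trivial kernel
```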

4. A linear transformation is invertible if and only if its matrix has a non-zero determinant. It is surely easier to calculate the determinant than the inverse, so this is a sensible thing to do. The determinant is the measure of the transformed unit "hypercube", so it is non-zero if and only if the kernel is trivial.
On an even more practical level, the distinction between "invertible" and "non-invertible" is not as clear as it is in theory. If you calculate the determinant of a matrix and get an answer very close to 0 but not exactly 0, you may well not be sure whether the answer is genuinely non-zero or merely an artifact of round-off errors in the calculation or in the entries of the matrix itself.
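To illustrate the round-off point, here is a minimal numpy sketch (the matrix is made up for the example): the determinant of a nearly singular matrix comes out as a tiny non-zero number, and the condition number is a more trustworthy diagnostic than the raw determinant.

```python
import numpy as np

# A nearly singular 2x2 matrix: the second row is almost
# exactly twice the first.
eps = 1e-13
A = np.array([[1.0, 2.0],
              [2.0, 4.0 + eps]])

det = np.linalg.det(A)
print(det)  # a tiny number: is it "really" zero, or just round-off?

# The condition number indicates how close A is to singular,
# independently of the overall scale of the entries.
print(np.linalg.cond(A))  # huge -> effectively non-invertible in floating point
```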

5. Originally Posted by alunw
A linear transformation is invertible if and only if its matrix has a non-zero determinant. It is surely easier to calculate the determinant than the inverse, so this is a sensible thing to do. The determinant is the measure of the transformed unit "hypercube", so it is non-zero if and only if the kernel is trivial.
On an even more practical level, the distinction between "invertible" and "non-invertible" is not as clear as it is in theory. If you calculate the determinant of a matrix and get an answer very close to 0 but not exactly 0, you may well not be sure whether the answer is genuinely non-zero or merely an artifact of round-off errors in the calculation or in the entries of the matrix itself.
Not all linear transformations can be represented by a square matrix, so you can't always evaluate the determinant. This is only possible when the domain and image spaces have the same (finite) dimension. For infinite-dimensional spaces this approach also fails.

Moreover finding the matrix of a transformation and then calculating its determinant is a rather cumbersome process; typically it is much easier to determine the kernel directly.

The distinction between "invertible" and "non-invertible" is quite clear. I am not sure why you are talking about round-off errors; this is completely irrelevant.
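For example, one can compute the kernel directly with sympy (the matrix is a made-up example, and the tool choice is mine, not the poster's):

```python
from sympy import Matrix

# Hypothetical map T: R^3 -> R^3. The third row is the sum of the
# first two, so the kernel is nontrivial and T is not invertible.
A = Matrix([[1, 1, 0],
            [0, 1, 1],
            [1, 2, 1]])

ker = A.nullspace()  # exact basis of ker T, as column vectors
print(ker)           # one basis vector -> nontrivial kernel
print(A.det())       # 0, consistent with the nontrivial kernel
```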

6. If a linear transformation isn't represented by a square matrix then it certainly is not invertible in the sense of there being an inverse defined on the whole of the co-domain. Of course, if it maps into a space of higher dimension, the restriction to the subspace spanned by the image might be invertible, but that is not the same transformation.

You can use determinants even on non square matrices. For example a linear transformation given by a 10*12 matrix would have rank 10 if and only if you could find a 10*10 minor with a non-zero determinant. But this is not very useful in practice.
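A small sketch of that minor test (numpy, with a brute-force search over minors; a made-up 2*3 matrix stands in for the 10*12 one to keep it readable):

```python
import numpy as np
from itertools import combinations

# A 2x3 matrix, row-vector convention as in the post: R^2 -> R^3.
A = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 3.0]])

# Full rank 2 iff some 2x2 minor has non-zero determinant.
has_nonzero_minor = any(
    abs(np.linalg.det(A[:, cols])) > 1e-12
    for cols in combinations(range(A.shape[1]), 2)
)
print(has_nonzero_minor)         # True
print(np.linalg.matrix_rank(A))  # 2 -- the practical way to check rank
```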

I talked about the distinction between being invertible in principle and in practice because it is very important to the people who need to solve large linear systems. See for example Chapter 2 of the "Numerical Recipes" books; I found a link to a PDF of the relevant section: http://www.mathstat.uohyd.ernet.in/m...icalc/c2-0.pdf. Or see the Wikipedia article kernel (matrix).

I find it really annoying when mathematicians ignore practical issues. Many of the computational methods I was taught at school and university are very bad numerically. I'm now a computer programmer, and I have many times had to fix problems caused by a programmer doing some calculation the way it is taught at school instead of the numerically sound way. Even the usual formula for solving a quadratic equation should come with a very big health warning.

I don't really understand what you mean when you say finding the matrix of a transformation is a cumbersome process. Usually one knows the matrix to begin with. If a transformation is specified by a set of point pairs then it is easy in principle to find the matrix, as long as both spaces have the same dimension; if they don't, it would be awkward. But I don't know how you would go about calculating the kernel of a transformation whose matrix you didn't know.

I am not suggesting you should calculate the determinant in the naive way. For a large matrix one should do LU decomposition, and you would have to worry about numeric overflow as well. But for real-world geometry, where you are likely to be dealing with a matrix that is no more than 4*4, determinants are very useful. For example, a computer program that wanted to test whether four points were coplanar might well evaluate the 4*4 determinant that gives the volume of the tetrahedron defined by the points.
Wikipedia says that even for 4*4 matrices I should do LU decomposition to calculate the determinant. But I doubt many programs would bother to do that in practice, especially as these days computers can perform 4*4 matrix calculations very efficiently.
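A sketch of the coplanarity test described above (numpy; the points are made-up examples, and `coplanar` is a hypothetical helper name):

```python
import numpy as np

def coplanar(p0, p1, p2, p3, tol=1e-9):
    """Four points are coplanar iff the tetrahedron they span has zero
    volume, i.e. the 4x4 determinant below vanishes (up to round-off)."""
    M = np.array([
        [*p0, 1.0],
        [*p1, 1.0],
        [*p2, 1.0],
        [*p3, 1.0],
    ])
    return abs(np.linalg.det(M)) < tol  # det = 6 * signed volume

print(coplanar((0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)))  # True: all in z=0
print(coplanar((0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)))  # False: a real tetrahedron
```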

I don't remember anything about transformations between infinite dimensional spaces though I remember having a whole term of lectures on infinite dimensional spaces. I wish instead I'd had a term of lectures on geometry of the spaces I've needed to know about ever since.

7. Originally Posted by alunw
If a linear transformation isn't represented by a square matrix then it certainly is not invertible in the sense of there being an inverse defined on the whole of the co-domain.
Huh?

Note that there are left inverses and right inverses. A right inverse of $f: A \rightarrow B$ is a function $g$ such that $f \circ g = 1_B$; a left inverse is such that $g \circ f = 1_A$.

$f$ is injective iff it has a left inverse.
$f$ is surjective iff it has a right inverse.
$f$ is bijective iff it has both a left inverse and a right inverse.

Injections are invertible on the left; they're just not necessarily invertible on both sides.
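A concrete numpy illustration (the 3*2 matrix is a made-up injective map R^2 -> R^3, written in column-vector convention): the Moore-Penrose pseudoinverse supplies one left inverse, while no right inverse exists.

```python
import numpy as np

# An injective map R^2 -> R^3: full column rank, so a left inverse exists.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])

# The pseudoinverse of a full-column-rank matrix is a left inverse.
L = np.linalg.pinv(A)
print(np.allclose(L @ A, np.eye(2)))  # True: L is a left inverse
print(np.allclose(A @ L, np.eye(3)))  # False: A @ L is only a projection,
                                      # so A has no right inverse
```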

8. You are of course correct that there are such things as left and right inverses. I'm accustomed to the word invertible being used to describe a transformation that has both a left and right inverse. That's how the word is used when discussing monoids for example.
I'm also used to using the word transformation to mean a mapping from a space to itself rather than a mapping between different spaces.

From now on I'm only thinking about spaces of finite dimension.

If a transformation goes from a space A into a space B of higher dimension it can't possibly be a surjection. But if it is an injection then it is a bijection between A and the subspace C of B spanned by the images of any basis of A, and C has the same dimension as A. So the restriction to the range can be described by an invertible square matrix. Now you can extend that inverse mapping to the whole of B, to get a "left inverse", by completing a basis of C to a basis of B. But you can freely choose the images in A of the basis vectors of B that are not in C, so these left inverses are not unique. So in my last post I should have said "If a linear transformation isn't represented by a square matrix then it certainly is not invertible in the sense of there being a UNIQUE inverse defined on the whole of the co-domain".

The original poster wanted to know how to test if a transformation was invertible. You have suggested checking that the transformation has a trivial kernel. How would you do this in practice? For the sake of argument let's say A is $R^{10}$ and B is $R^{12}$ and the transformation is specified as a 10*12 matrix.
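One practical answer, sketched with numpy (a random matrix stands in for the hypothetical 10*12 example; row-vector convention as in the post): check that the rank is 10, or equivalently that the smallest singular value is well away from zero.

```python
import numpy as np

rng = np.random.default_rng(0)

# Row-vector convention, as in the post: a 1x10 row vector x times a
# 10x12 matrix gives a row vector in R^12, so this represents T: R^10 -> R^12.
A = rng.standard_normal((10, 12))

# T has trivial kernel iff the matrix has full rank 10. In floating point,
# judge this from the smallest singular value rather than an exact-zero test.
s = np.linalg.svd(A, compute_uv=False)
print(np.linalg.matrix_rank(A))  # 10 for this (generic) random matrix
print(s[-1] > 1e-10)             # True -> ker T = {0}, so T is injective
```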

