# Thread: Inverse of a matrix in GL(r,Zn)

1. ## Inverse of a matrix in GL(r,Zn)

I want to find the inverse of a matrix in GL(r, Zn). (As you know, r is the size of the matrix and GL is the general linear group of invertible matrices.) The entries are all taken modulo the product of two primes, say 7 and 11, so the modulus is 77. Please explain with a simple 2 x 2 example how to find the inverse. Thanks in advance...

2. If you know how to find a matrix inverse using Gaussian elimination, this is going to be a piece of cake! Over $\mathbb{Z}_n$ you perform Gaussian elimination on the matrix, but instead of dividing you multiply by the inverse of the element in question. After each calculation you reduce modulo the modulus.

Let A be your matrix with entries in $\mathbb{Z}_5$:

$\left[
\begin{array}{cc}

4 & 1 \\
3 & 3

\end{array}\right]
$

The determinant of this matrix is just $4\cdot 3 - 1\cdot 3 = 12 - 3 = 9 \equiv 4 \pmod{5}$ and is thus non-zero. Our matrix A is thus invertible.

Set it up as for Gaussian elimination

$\left[
\begin{array}{cc|cc}

4 & 1 & 1 & 0\\
3 & 3 & 0 & 1

\end{array}\right]
$

The inverse of 4 in $\mathbb{Z}_5$ is 4 itself and the inverse of 3 is 2, so we can multiply the first and second rows by these respectively to get

$\left[
\begin{array}{cc|cc}

4\cdot 4 & 4\cdot 1 & 4\cdot 1 & 4\cdot 0\\
2\cdot 3 & 2\cdot 3 & 2\cdot 0 & 2\cdot 1

\end{array}\right] \equiv \left[
\begin{array}{cc|cc}

1 & 4 & 4 & 0\\
1 & 1 & 0 & 2

\end{array}\right] \pmod 5
$

Add -1 times the first row to the second to get

$$\equiv \left[ \begin{array}{cc|cc} 1 & 4 & 4 & 0\\ 0 & -3 & -4 & 2 \end{array}\right] \equiv \left[ \begin{array}{cc|cc} 1 & 4 & 4 & 0\\ 0 & 2 & 1 & 2 \end{array}\right]\pmod 5$$

From here on, multiply the second row by the inverse of 2, which is 3, and then subtract 4 times the second row from the first:

$$\equiv \left[ \begin{array}{cc|cc} 1 & 4 & 4 & 0\\ 0 & 1 & 3 & 1 \end{array}\right] \equiv \left[ \begin{array}{cc|cc} 1 & 0 & 4-4.3 & -4.1\\ 0 & 1 & 3 & 1 \end{array}\right] \equiv \left[ \begin{array}{cc|cc} 1 & 0 & 2 & 1\\ 0 & 1 & 3 & 1 \end{array}\right]\pmod 5$$

EDIT: Let's perform a check!

$
\left[
\begin{array}{cc}

4 & 1\\
3 & 3

\end{array}\right]\left[
\begin{array}{cc}

2 & 1\\
3 & 1

\end{array}\right] = \left[
\begin{array}{cc}

4\cdot 2+1\cdot 3 & 4\cdot 1+1\cdot 1\\
3\cdot 2+3\cdot 3 & 3\cdot 1+3\cdot 1

\end{array}\right] = \left[
\begin{array}{cc}

11 & 5\\
15 & 6

\end{array}\right] \equiv \left[
\begin{array}{cc}

1 & 0\\
0 & 1

\end{array}\right] \pmod{5}

$
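
The elimination above can be sketched in a few lines of Python. This is an illustrative helper, not code from the thread (the name `mat_inv_mod` is mine); it assumes the modulus is prime, so every nonzero pivot is invertible, and uses `pow(x, -1, p)` (Python 3.8+) for modular inverses in place of division:

```python
def mat_inv_mod(a, p):
    """Invert a square matrix over Z_p (p prime) by Gaussian elimination.

    Instead of dividing by a pivot, multiply by its modular inverse,
    exactly as in the worked example above.
    """
    n = len(a)
    # Augment [A | I], reducing modulo p throughout.
    aug = [[x % p for x in row] + [int(i == j) for j in range(n)]
           for i, row in enumerate(a)]
    for col in range(n):
        # Find a row with a nonzero (hence invertible) pivot in this column.
        pivot = next(r for r in range(col, n) if aug[r][col] % p != 0)
        aug[col], aug[pivot] = aug[pivot], aug[col]
        # Scale the pivot row by the inverse of the pivot element.
        inv = pow(aug[col][col], -1, p)  # modular inverse, Python 3.8+
        aug[col] = [(x * inv) % p for x in aug[col]]
        # Eliminate this column's entry in every other row.
        for r in range(n):
            if r != col and aug[r][col]:
                f = aug[r][col]
                aug[r] = [(x - f * y) % p for x, y in zip(aug[r], aug[col])]
    # The right half of the augmented matrix is now the inverse.
    return [row[n:] for row in aug]

print(mat_inv_mod([[4, 1], [3, 3]], 5))  # [[2, 1], [3, 1]]
```

Running it on the example matrix mod 5 reproduces the inverse found by hand.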

3. Originally Posted by Vlasev
If you know how to find a matrix inverse using Gaussian elimination, this is going to be a piece of cake! [...]

Thank you, Sir... it's very kind of you...

4. Or you could do it the way you would for an ordinary matrix, using the determinant; you just need to work out 1 over the determinant modulo $n$.

For example, if $\frac{1}{\det(A)} = \frac{1}{2}$ and $n = 7$, then you just need to find the inverse of $2$ mod $7$, which is $4$. Therefore $\frac{1}{\det(A)}$ is $4$.

So, to work out the inverse of $\left(\begin{array}{cc} 4&1\\3&3\end{array}\right)$ one simply calculates the determinant, which is $12-3=9$, and $\frac{1}{9} = \frac{1}{4} = 4$ modulo $5$.

Thus, our inverse is

$4\cdot \left(\begin{array}{cc} 3&-1\\-3&4\end{array}\right)$
$=4\cdot \left(\begin{array}{cc} 3&4\\2&4\end{array}\right)$
$=\left(\begin{array}{cc} 12&16\\8&16\end{array}\right)$
$=\left(\begin{array}{cc} 2&1\\3&1\end{array}\right)$

as required.
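
For the 2 x 2 case, this determinant-and-adjugate method translates directly into code. A minimal Python sketch (the name `inv2x2_mod` is mine); it works for any modulus $n$, since `pow(det, -1, n)` raises `ValueError` whenever the determinant is not invertible mod $n$:

```python
def inv2x2_mod(m, n):
    """Inverse of a 2x2 matrix over Z_n via the adjugate formula.

    A^{-1} = det(A)^{-1} * adj(A), where adj([[a,b],[c,d]]) = [[d,-b],[-c,a]].
    Raises ValueError if det(A) is not invertible mod n.
    """
    (a, b), (c, d) = m
    det = (a * d - b * c) % n
    det_inv = pow(det, -1, n)  # ValueError unless gcd(det, n) == 1
    adj = [[d, -b], [-c, a]]
    return [[(det_inv * x) % n for x in row] for row in adj]

print(inv2x2_mod([[4, 1], [3, 3]], 5))  # [[2, 1], [3, 1]]
```

On the running example this gives the same inverse as the hand calculation.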

5. Swlabr, your method works great if you have already calculated the determinant of the matrix: you find its inverse as you said, multiply all the entries, and reduce mod $n$. However, if you need to start from scratch on a bigger matrix, the method I showed will be the fastest to do by hand.

6. Originally Posted by Vlasev
Swlabr, your method works great if you have already calculated the determinant for the matrix [...]
I'm not sure I agree - Gaussian elimination is awful to do, especially on larger matrices, and mistakes can creep in easily. Anyway, you should calculate the determinant to begin with for either method!

7. Originally Posted by Swlabr
Or, you could just do it the way you do for a normal matrix, using the determinant, but you need to work out 1 over the determinant modulo n. [...]
One more doubt... The entries of the example matrix are less than 5, and since your modulus 5 is prime, every nonzero entry is relatively prime to it. What if I take a modulus of, say, 33? Do the entries of the matrix need to be relatively prime to the modulus? I think so; for an inverse to exist in the integers mod $n$ we need $\gcd(a, n) = 1$. Is that the case for GL(r, Zn)?

8. What do you mean? The element $11$ in $\mathbb{Z}_{33}$ doesn't have an inverse, so we have linear transformations over a ring, not a field. There will be some matrices without inverses here which would have inverses over $\mathbb{R}$. For example,

$\left(\begin{array}{cc}11 & 0\\ 0 & 3 \end{array}\right)$.

Both methods are equally valid. It just depends whether you think one is faster than the other...

9. jsr, the entries themselves being coprime to the modulus is neither necessary nor sufficient, as Swlabr's example shows. What matters is the determinant: the matrix has an inverse mod $n$ exactly when $\gcd(\det(A), n) = 1$, i.e. when the determinant is invertible in $\mathbb{Z}_n$. The actual Gaussian elimination will certainly work if every pivot you encounter is an invertible element; if not, things are subtler.
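
The criterion is easy to check mechanically: a matrix over $\mathbb{Z}_n$ is invertible exactly when $\gcd(\det(A), n) = 1$. A small Python sketch (the helper name `is_invertible_mod` is mine), reusing the $\mathbb{Z}_{33}$ example from the previous post:

```python
from math import gcd

def is_invertible_mod(m, n):
    """A 2x2 matrix over Z_n is invertible iff gcd(det(m), n) == 1."""
    (a, b), (c, d) = m
    det = (a * d - b * c) % n
    return gcd(det, n) == 1

print(is_invertible_mod([[11, 0], [0, 3]], 33))  # False: det = 33 ≡ 0 (mod 33)
print(is_invertible_mod([[4, 1], [3, 3]], 5))    # True:  det = 9, gcd(9, 5) = 1
print(is_invertible_mod([[4, 1], [3, 3]], 33))   # False: det = 9, gcd(9, 33) = 3
```

Note that the thread's example matrix is invertible mod 5 but not mod 33, even though its determinant 9 is nonzero in both rings.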

10. Originally Posted by Vlasev
jsr, the elements being non coprime doesn't guarantee an inverse, as Swlabr showed. [...]
Your inverse must be unique, because it is an inverse (if $ab=cb=1$ then you can cancel the $b$'s to get that $a=c$). Of course, because you are working over a ring you may have left and right inverses; these too are always unique (by the same logic), and if you have both a left and a right inverse then they must be the same element, as $a_l=a_laa_r=a_r$.
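
The cancellation argument can also be spot-checked numerically: for the running example, the inverse found earlier works from both sides. A quick sketch (the helper name `mat_mul_mod` is mine):

```python
def mat_mul_mod(a, b, n):
    """Multiply two 2x2 matrices over Z_n."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) % n for j in range(2)]
            for i in range(2)]

A = [[4, 1], [3, 3]]
B = [[2, 1], [3, 1]]  # the inverse computed earlier in the thread
I = [[1, 0], [0, 1]]
print(mat_mul_mod(A, B, 5) == I)  # True: B is a right inverse of A
print(mat_mul_mod(B, A, 5) == I)  # True: ...and also a left inverse
```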