I am struggling also with this problem, any help would be great.
Use determinants and co-factors to find the inverse of the following matrix:
1 3 2
1 2 3
0 1 1
Greetings:
P.Hckr is right on the money. Your text/teacher may refer to the performance of steps #1-3 as "constructing the adjoint of the given matrix, M" (i.e., adj[M]).
If you are familiar with the above noted terminology, i.e., the 'adjoint' of a matrix, then you may be working with the following definition of "matrix inverse", i.e., [M]^-1:
Def(matrix inverse): Let [M] denote an nXn square matrix.
The inverse of [M], denoted [M]^-1, is given by: [M]^-1 = (1/det[M])*adj[M], where adj[M] is the nXn matrix whose elements are determined by applying the 'sign law' to each co-factor of [M]-transpose.
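In symbols (using the same LaTeX the later post in this thread uses), that definition reads:

$\displaystyle [M]^{-1}=\frac{1}{\det[M]}\,adj[M]$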
Read on and I shall walk you through...
We are given: [M] = [1 3 2_1 2 3_0 1 1] (note the underscores serve to distinguish the 3 separate rows within the given 3X3 matrix, [M]).
M^t, i.e., [M]-transpose, is determined by interchanging the rows and columns of [M]. That is,
M^t = [1 1 0_ 3 2 1_2 3 1]. The transpose can also be viewed as the 3X3 matrix, each element of which is determined by reflecting each element in [M] about the main diagonal.
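If you like to double-check by machine, here is a minimal Python sketch of the transpose step (plain lists, variable names my own):

```python
# Transpose of M: swap rows and columns (reflect about the main diagonal).
M = [[1, 3, 2],
     [1, 2, 3],
     [0, 1, 1]]

Mt = [list(row) for row in zip(*M)]
print(Mt)  # [[1, 1, 0], [3, 2, 1], [2, 3, 1]]
```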
Albeit tedious indeed, we must now replace each element in [M]^t with its respective cofactor (strictly speaking, its minor; the signs are attached later in the 'sign law' step). Beginning with [M]^t(1,1) (row 1, column 1 of [M]^t), we cross out, entirely, row 1 and column 1. This leaves us with the 2X2 matrix [2 1_3 1]. The desired cofactor is the determinant of this 2X2 matrix which, as it happens, is 2*1 - 3*1 = -1.
As another example, let's find the cofactor of [M]^t(3,2). Crossing out row 3 and column 2 in [M]^t leaves the 2X2 matrix [1 0_3 1]. And the determinant of this matrix is 1*1-3*0 = 1. So the element in position [M]^t(3,2), currently 3, is replaced with 1.
The 3X3 matrix of cofactors should be as follows upon completing the process: [-1 1 5_1 1 1_1 1 -1]
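That crossing-out process is easy to script as a check. A minimal Python sketch (plain lists, helper names my own) that reproduces the matrix above:

```python
# Replace each entry of M-transpose with the determinant of the 2x2 matrix
# left after deleting that entry's row and column (its minor).
Mt = [[1, 1, 0],
      [3, 2, 1],
      [2, 3, 1]]

def det2(m):
    # determinant of a 2x2 matrix
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def minor(m, i, j):
    # delete row i and column j, then take the 2x2 determinant
    sub = [[m[r][c] for c in range(3) if c != j] for r in range(3) if r != i]
    return det2(sub)

minors = [[minor(Mt, i, j) for j in range(3)] for i in range(3)]
print(minors)  # [[-1, 1, 5], [1, 1, 1], [1, 1, -1]]
```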
Now, we apply the sign law to this matrix. To this end, we alternate the signs in positions (1,2), (2,1), (2,3) and (3,2). The end result is said to be the adjoint of [M], denoted adj(M). Hence,
adj(M) = [-1 -1 5_-1 1 -1_1 -1 -1].
Finally, referring back to the definition of [M]^-1, we multiply adj(M) by the scalar 1/det[M]. The determinant is -2 (be sure you can find this on your own). Therefore,
[M]^-1 = (-1/2)*[-1 -1 5_-1 1 -1_1 -1 -1] or,
[M]^-1 = [0.5 0.5 -2.5_0.5 -0.5 0.5_-0.5 0.5 0.5].
As is the case within any 'mathematical system', the 'product' of an element with its multiplicative inverse must yield the multiplicative identity. In matrix arithmetic, if the elements are of dimension nXn, then the multiplicative identity is the nXn matrix with 1 in each position along the main diagonal, and zeroes elsewhere. These things being said, I recommend, as a check, that you multiply [M] with [M]^-1 as determined above. If the result is other than the multiplicative identity matrix, then an error has occurred somewhere along the line. At such juncture, the troubleshooting process shall be left for your pleasure, entirely.
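The recommended check is quick to carry out in Python. A minimal sketch (using the standard fractions module so the arithmetic is exact; the variable names are my own) that multiplies [M] by the inverse found above and compares against the identity:

```python
from fractions import Fraction

M = [[1, 3, 2], [1, 2, 3], [0, 1, 1]]
Minv = [[Fraction(1, 2), Fraction(1, 2), Fraction(-5, 2)],
        [Fraction(1, 2), Fraction(-1, 2), Fraction(1, 2)],
        [Fraction(-1, 2), Fraction(1, 2), Fraction(1, 2)]]

# Multiply M by the candidate inverse; the result should be the 3x3 identity.
product = [[sum(M[i][k] * Minv[k][j] for k in range(3)) for j in range(3)]
           for i in range(3)]
identity = [[Fraction(int(i == j)) for j in range(3)] for i in range(3)]
print(product == identity)  # True
```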
Regards,
Rich B.
3) Refill the transpose with its cofactors.
Okey-doke, I'm gonna baby-step. I hope you appreciate this.
Personally, I think this is a rather tedious and obsolescent way of going about finding the inverse of a matrix.
Oh well, enough pontificating, let's find the inverse.
First, find the det of your matrix by cofactors. Always try to choose the column or row that has the most 0's. That means you'll have less work to do.
In this case, we'll expand along the bottom row, since it has a zero.
Using the entries of the bottom row, 0,1,1, take your finger, pencil, cross-tie, whatever you have to cover up the column and row which has the 0 in it. That would be the 1st column and 3rd row. What you have exposed is a 'submatrix' of which you will find the determinant.
$\displaystyle \begin{vmatrix}1&3&2\\1&2&3\\0&1&1\end{vmatrix}$
Here's the submatrix you'll have after covering the 3rd row and 1st column:
$\displaystyle \begin{vmatrix}3&2\\2&3\end{vmatrix}$
Now, cover up the row and column that has the 1 in it. Row 3 and column 2.
That leaves this part exposed:
$\displaystyle \begin{vmatrix}1&2\\1&3\end{vmatrix}$
Now, cover the row and column which have the last 1 in it. Cover the 3rd row and 3rd column. You'll have:
$\displaystyle \begin{vmatrix}1&3\\1&2\end{vmatrix}$
See? I hope so. It's not difficult, just tedious.
Your mission is to traverse this winding road by avoiding all arithmetic errors along the way.
So, here's what we have by putting it together:
Don't forget to alternate signs. Plus minus plus minus.....
$\displaystyle (0)\begin{vmatrix}3&2\\2&3\end{vmatrix}-(1)\begin{vmatrix}1&2\\1&3\end{vmatrix}+(1)\begin{vmatrix}1&3\\1&2\end{vmatrix}$
I hope you can do det of 2 by 2 matrices.
Anyway, you wind up with -2 for an answer.
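The expansion can also be scripted as a quick check. A small Python sketch of cofactor expansion (generic recursion; it expands along the first row rather than the bottom row, but any row gives the same value; function names are my own):

```python
# Determinant by cofactor expansion along the first row.
def det(m):
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for j, entry in enumerate(m[0]):
        # delete row 0 and column j to get the submatrix
        sub = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * entry * det(sub)  # alternate signs: + - + ...
    return total

print(det([[1, 3, 2], [1, 2, 3], [0, 1, 1]]))  # -2
```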
Take the reciprocal of this and set it aside: $\displaystyle \Large\frac{1}{-2}$
Now, to find the adjoint:
You do the same thing by covering up row 1, column 1 and taking the determinant of what's left exposed; cover up row 1 column 2 and take the det of what's left exposed; cover up row 1, column 3 and take the det of what's exposed; cover the 2nd row, 1st column and take det of what's left, etc, etc. After going through all those steps you end up with the following matrix of cofactors:
$\displaystyle \begin{bmatrix}-1&-1&1\\-1&1&-1\\5&-1&-1\end{bmatrix}$
Take the transpose:
$\displaystyle \begin{bmatrix}-1&-1&5\\-1&1&-1\\1&-1&-1\end{bmatrix}$
Now, take the reciprocal of the det we set aside, 1/-2, and multiply it by the adjoint (the transpose we just took):
$\displaystyle \frac{-1}{2}\begin{bmatrix}-1&-1&5\\-1&1&-1\\1&-1&-1\end{bmatrix}$
You get:
*$\displaystyle \boxed{\Large\begin{bmatrix}\frac{1}{2}&\frac{1}{2}&\frac{-5}{2}\\\frac{1}{2}&\frac{-1}{2}&\frac{1}{2}\\\frac{-1}{2}&\frac{1}{2}&\frac{1}{2}\end{bmatrix}}$
Voila! There's the inverse.
*Hey Soroban, that box thing is the bomb.
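For anyone who wants to automate the whole routine, here is a small Python sketch of it in one place (cofactors, transpose to get the adjoint, then scale by 1/det; exact fractions via the standard fractions module; names are my own):

```python
from fractions import Fraction

M = [[1, 3, 2], [1, 2, 3], [0, 1, 1]]

def det2(m):
    # determinant of a 2x2 matrix
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def cofactor(m, i, j):
    # delete row i and column j, take the 2x2 det, and attach the sign
    sub = [[m[r][c] for c in range(3) if c != j] for r in range(3) if r != i]
    return (-1) ** (i + j) * det2(sub)

cof = [[cofactor(M, i, j) for j in range(3)] for i in range(3)]
adj = [list(col) for col in zip(*cof)]   # adjoint = transpose of the cofactors
d = Fraction(-2)                         # det(M), found above
inv = [[c / d for c in row] for row in adj]
print([[str(x) for x in row] for row in inv])
# [['1/2', '1/2', '-5/2'], ['1/2', '-1/2', '1/2'], ['-1/2', '1/2', '1/2']]
```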