# [SOLVED] Finding inverse matrix

• Jan 15th 2007, 11:04 PM
newbie66
[SOLVED] Finding inverse matrix
Hi, I am trying to find the inverse of the following matrix:
[ 0 1 0 0
 -1 0 1 0
  0 -1 0 1
  0 0 -1 0]
I got every cofactor right except one, a34.
Below is my procedure for finding the cofactor of a34; please tell me where I made the mistake:
(-1)^(3+4) * [ 0 1 0
              -1 0 0
               0 -1 1 ]
This becomes,
[ 0 1 0
-1 0 0
0 -1 1]
Now I try to find the determinant by expanding along the first row, second column:

(-1)^(2+1) * [ -1 0
                0 1 ]
The answer I got is 1.
However, the calculator says the correct answer is -1.
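For reference, the arithmetic can be checked numerically. This is a sketch using numpy: the minor below is the 3x3 matrix written out above, and the point to watch is that the sign factor (-1)^(3+4) must multiply its determinant.

```python
import numpy as np

# The 3x3 minor written out above.
minor = np.array([[ 0,  1,  0],
                  [-1,  0,  0],
                  [ 0, -1,  1]], dtype=float)

det_minor = np.linalg.det(minor)        # determinant of the minor alone
cofactor = (-1) ** (3 + 4) * det_minor  # cofactor = sign factor times that determinant

print(round(det_minor), round(cofactor))  # prints: 1 -1
```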
• Jan 16th 2007, 12:39 AM
CaptainBlack
Quote:

Originally Posted by newbie66
Hi, I am trying to find the inverse of the following matrix [...] please tell me where I made the mistake.

Do you have to invert the matrix this way? It seems much easier to use
Gaussian elimination.

RonL
• Jan 16th 2007, 02:05 AM
AfterShock
Quote:

Originally Posted by newbie66
Hi, I am trying to find the inverse of the following matrix [...] please tell me where I made the mistake.

Augment the matrix whose inverse you want with the identity matrix and row-reduce. You'll end up with a pivot in every column/row of the coefficient matrix, and the inverse in the augmented columns.
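That procedure can be sketched in code. This is a minimal Gauss-Jordan reduction of the augmented matrix [A | I] with partial pivoting; `invert_via_augmentation` is an illustrative name, not a library routine.

```python
import numpy as np

def invert_via_augmentation(A):
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])  # augment A with the identity
    for col in range(n):
        # Partial pivoting: bring the largest entry in this column to the pivot row.
        pivot = col + np.argmax(np.abs(M[col:, col]))
        M[[col, pivot]] = M[[pivot, col]]
        M[col] /= M[col, col]                    # scale the pivot row so the pivot is 1
        for row in range(n):
            if row != col:
                M[row] -= M[row, col] * M[col]   # clear the rest of the column
    return M[:, n:]                              # the augmented half is now A^(-1)

# The matrix from the question.
A = np.array([[ 0,  1,  0,  0],
              [-1,  0,  1,  0],
              [ 0, -1,  0,  1],
              [ 0,  0, -1,  0]], dtype=float)

A_inv = invert_via_augmentation(A)
print(A_inv)
```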
• Jan 16th 2007, 06:30 AM
ThePerfectHacker
Quote:

Originally Posted by newbie66
Hi, I am trying to find the inverse of the following matrix:
[0 1 0 0
-1 0 1 0
0 -1 0 1
0 0 -1 0]

What I do not understand is why you are using the adjoint matrix. People love that formula for some reason, but the algorithm is much, much slower than Gauss-Jordan elimination in general. Always try elimination first.
It is really easy here:
$
\left[
\begin{array}{cccccccc}
0&1&0&0&1&0&0&0\\
-1&0&1&0&0&1&0&0\\
0&-1&0&1&0&0&1&0\\
0&0&-1&0&0&0&0&1
\end{array}
\right]$

Interchange the second and first rows.
$
\left[
\begin{array}{cccccccc}
-1&0&1&0&0&1&0&0\\
0&1&0&0&1&0&0&0\\
0&-1&0&1&0&0&1&0\\
0&0&-1&0&0&0&0&1
\end{array}
\right]$

Add the fourth row to the first and the second row to the third.

$
\left[
\begin{array}{cccccccc}
-1&0&0&0&0&1&0&1\\
0&1&0&0&1&0&0&0\\
0&0&0&1&1&0&1&0\\
0&0&-1&0&0&0&0&1
\end{array}
\right]$

Interchange third and fourth rows.
$
\left[
\begin{array}{cccccccc}
-1&0&0&0&0&1&0&1\\
0&1&0&0&1&0&0&0\\
0&0&-1&0&0&0&0&1\\
0&0&0&1&1&0&1&0
\end{array}
\right]$

Change the signs of the first and third rows.
$
\left[
\begin{array}{cccccccc}
1&0&0&0&0&-1&0&-1\\
0&1&0&0&1&0&0&0\\
0&0&1&0&0&0&0&-1\\
0&0&0&1&1&0&1&0
\end{array}
\right]$

Thus,
$A^{-1}= \left[ \begin{array}{cccc}0&-1&0&-1\\1&0&0&0\\0&0&0&-1\\1&0&1&0\end{array} \right]$
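The result is easy to verify by multiplying back (a quick numpy check):

```python
import numpy as np

A = np.array([[ 0,  1,  0,  0],
              [-1,  0,  1,  0],
              [ 0, -1,  0,  1],
              [ 0,  0, -1,  0]])

A_inv = np.array([[0, -1, 0, -1],
                  [1,  0, 0,  0],
                  [0,  0, 0, -1],
                  [1,  0, 1,  0]])

# A times its claimed inverse should give the 4x4 identity.
print((A @ A_inv == np.eye(4)).all())  # True
```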
• Jan 16th 2007, 08:12 AM
CaptainBlack
Quote:

Originally Posted by ThePerfectHacker
What I do not understand is why you are using the adjoint matrix. [...] Always try elimination first.

And in this case Gaussian elimination does not even involve any horrendous arithmetic.

(I have this on a scrap of paper, but didn't fancy typing all those augmented matrices :eek: )

RonL
• Jan 16th 2007, 10:35 AM
ThePerfectHacker
Quote:

Originally Posted by CaptainBlack
And in this case Gaussian elimination does not even involve any horrendous arithmetic.

(I have this on a scrap of paper, but didn't fancy typing all those augmented matrices :eek: )

RonL

As an algorithmist you are probably familiar with this: the algorithm is Gauss-Jordan, not adjoint matrices. The speed (I think) is $O(n^3)$. Adjoints, while elegant, are way beyond that; not sure, maybe $O(n!)$.
• Jan 16th 2007, 11:10 AM
CaptainBlack
Quote:

Originally Posted by ThePerfectHacker
As an algorithmist you are probably familiar with this: the algorithm is Gauss-Jordan, not adjoint matrices. The speed (I think) is $O(n^3)$. Adjoints, while elegant, are way beyond that; not sure, maybe $O(n!)$.

It's a long time since I coded up a matrix inversion routine. I have done
Gauss-Jordan, iterative methods (whose name I forget), LU decomposition
inversion, and Cholesky decomposition. But I don't remember much about them
other than that they work, and that in general the problems I have
are not demanding on matrix inversion performance.

There has been so much work done on them that if you need one you
just pick up an appropriate library. (In fact, on the special-purpose processors
that we use it would be difficult to improve significantly on the vendor-supplied
optimised libraries without spending a very considerable time playing
with low-level code.)

Also I use tools where they are built in these days.

Most matrix inversion routines are $O(n^3)$ and differ mainly in the leading
coefficient. However, it is theoretically $O(n^{\log_2 7})$.

RonL
• Jan 16th 2007, 11:23 AM
ThePerfectHacker
Quote:

Originally Posted by CaptainBlack
iterative methods (whose name I forget),

Gram-Schmitt maybe :confused:
(I can imagine how you can forget such a name. It reminds me of a theorem: the Nullstellensatz (which I always forget how to spell).)
• Jan 16th 2007, 11:30 AM
CaptainBlack
Quote:

Originally Posted by ThePerfectHacker
Gram-Schmitt maybe :confused:
(I can imagine how you can forget such a name. It reminds me of a theorem: the Nullstellensatz (which I always forget how to spell).)

Gram-Schmitt is used to generate an orthogonal (orthonormal) basis from
an arbitrary basis (for inner product spaces). I usually use it for constructing
polynomials orthogonal over slightly odd spaces wrt slightly odd measures,
but not for matrix inversion. (It sounds more complicated than it really is.)

It is one of the things that really impressed me as an undergraduate
for some reason.

RonL
• Jan 16th 2007, 11:49 AM
ThePerfectHacker
Quote:

Originally Posted by CaptainBlack
It is one of the things that really impressed me as an undergraduate
for some reason.

I have noticed that engineering/physics students are for some reason impressed by differential equations. I think it is because their non-math professors say they are used in applications, and hence they want to learn them. And after they learn them they think they know the most complicated math, while in fact all they know is how to use formulas. Maybe your situation was similar.
• Jan 17th 2007, 11:36 AM
AfterShock
Quote:

Originally Posted by CaptainBlack
Gram-Schmitt is used to generate an orthogonal (orthonormal) basis from an arbitrary basis (for inner product spaces). [...]


I'm assuming the "Gram-Schmitt" process is the same as the "Gram-Schmidt" process, which is an algorithm for producing an orthogonal or orthonormal basis for any nonzero subspace of R^n. I was not sure, though, since Google seemed to have a few entries for "Gram-Schmitt".

Given the basis {x_1, ..., x_p} for a subspace W of R^n, let

v_1 = x_1
v_2 = x_2 - [(x_2*v_1)/(v_1*v_1)](v_1)
.
.
.
v_p = x_p - [(x_p*v_1)/(v_1*v_1)](v_1) - [(x_p*v_2)/(v_2*v_2)](v_2) - ... - [(x_p*v_(p-1))/(v_(p-1)*v_(p-1))](v_(p-1))

Then {v_1, ..., v_p} is an orthogonal basis for W (and Span{v_1, ..., v_k} = Span{x_1, ..., x_k} for each k).
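The recursion above translates directly into code. This is a sketch of classical Gram-Schmidt (no normalization), with the basis vectors taken as rows of a numpy array:

```python
import numpy as np

def gram_schmidt(X):
    # Rows of X are the basis vectors x_1, ..., x_p; returns rows v_1, ..., v_p.
    V = []
    for x in X:
        v = x.astype(float)
        for u in V:
            v = v - (x @ u) / (u @ u) * u  # subtract the projection of x onto each earlier v
        V.append(v)
    return np.array(V)

# Illustrative basis for a 2-dimensional subspace of R^3.
X = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0]])
V = gram_schmidt(X)
print(V)            # v_1 = (1, 1, 0), v_2 = (0.5, -0.5, 1)
print(V[0] @ V[1])  # 0.0: the output vectors are orthogonal
```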
• Jan 17th 2007, 11:41 AM
CaptainBlack
Quote:

Originally Posted by AfterShock
I'm assuming the "Gram-Schmitt" process is the same as the "Gram-Schmidt" process, which is an algorithm for producing an orthogonal or orthonormal basis for any nonzero subspace of R^n. I was not sure, though, since Google seemed to have a few entries for "Gram-Schmitt".

Google usually has hits for misspelled words. I didn't check the spelling, so
you are probably right.

It will find an orthonormal basis from a basis for any inner-product space
with a countable basis.

RonL