Hi, I am trying to find the inverse of the following matrix:
[0 1 0 0
-1 0 1 0
0 -1 0 1
0 0 -1 0]
I got every cofactor correct except for one, a34.
Below is my procedure for finding the cofactor of a34; please tell me where I made the mistake.
-1 x (-1)^(3+4) x
[ 0  1  0
 -1  0  0
  0 -1  1 ]
This becomes,
[ 0 1 0
-1 0 0
0 -1 1]
Now I try to find the determinant, expanding along the first row, second column:
(-1)^(2+1) x
[ -1  0
   0  1 ]
The answer I got is 1.
However, using the calculator, the correct answer is -1.
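If it helps, cofactors can be checked numerically. Below is a small sketch assuming Python/NumPy (`cofactor` is my own helper, not a library function). Note that the minor for a34 deletes row 3 and column 4, while the minor for a43 deletes row 4 and column 3 — an easy pair to mix up:

```python
import numpy as np

A = np.array([
    [ 0,  1,  0,  0],
    [-1,  0,  1,  0],
    [ 0, -1,  0,  1],
    [ 0,  0, -1,  0],
], dtype=float)

def cofactor(M, i, j):
    """Cofactor C_ij = (-1)^(i+j) * det(minor), with 0-based i and j."""
    minor = np.delete(np.delete(M, i, axis=0), j, axis=1)
    return (-1) ** (i + j) * np.linalg.det(minor)

print(round(cofactor(A, 2, 3)))  # C_34 in the 1-based notation above -> 1
print(round(cofactor(A, 3, 2)))  # C_43, for comparison -> -1
```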
What I do not understand is why you are using the adjoint matrix. People for some reason love that formula, but the algorithm is much, much slower than Gauss-Jordan elimination in general. Always try elimination first.
It is really easy here:
Interchange the second and first rows.
Add the second row to the third. Add the fourth row to the first.
Interchange the third and fourth rows.
Change signs on the first and third rows.
Thus,
[ 0 -1  0 -1
  1  0  0  0
  0  0  0 -1
  1  0  1  0 ]
is the inverse.
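The row operations above can be replayed on the augmented matrix [A | I]: once the left block becomes I, the right block is A^(-1). A quick sketch in Python/NumPy (my code, just to check the steps):

```python
import numpy as np

A = np.array([
    [ 0,  1,  0,  0],
    [-1,  0,  1,  0],
    [ 0, -1,  0,  1],
    [ 0,  0, -1,  0],
], dtype=float)

# Row-reduce [A | I]; when the left block is I, the right block is A^-1.
M = np.hstack([A, np.eye(4)])

M[[0, 1]] = M[[1, 0]]  # interchange the second and first rows
M[2] += M[1]           # add the second row to the third
M[0] += M[3]           # add the fourth row to the first
M[[2, 3]] = M[[3, 2]]  # interchange the third and fourth rows
M[0] *= -1             # change sign on the first row
M[2] *= -1             # change sign on the third row

A_inv = M[:, 4:]
print(np.allclose(A @ A_inv, np.eye(4)))  # -> True
```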
It's been a long time since I coded up a matrix inversion routine. I have done
Gauss-Jordan, iterative methods (whose name I forget), LU decomposition
inversion, and Cholesky decomposition. But I remember zilch about them
other than that they work, and that in general the problems that I have
are not demanding on matrix inversion performance.
There has been so much work done on them that if you need one you
just pick up an appropriate library. (In fact, on the special-purpose processors
that we use, it would be difficult to improve significantly on the vendor-supplied
optimised libraries without spending a very considerable time playing
with low-level code.)
Also I use tools where they are built in these days.
Most matrix inversion routines are O(n^3) and differ mainly in the leading
coefficient. However, it is theoretically O(n^(log_2 7)) ≈ O(n^2.807), the
Strassen exponent (fast matrix multiplication gives fast inversion).
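The log_2 7 comes from Strassen's trick of doing a 2x2 block multiplication with seven recursive products instead of eight. A toy sketch of Strassen multiplication (my own illustrative code, assuming n is a power of 2; not production quality):

```python
import numpy as np

def strassen(A, B):
    """Strassen multiplication for n x n matrices, n a power of 2 (toy sketch)."""
    n = A.shape[0]
    if n == 1:
        return A * B
    h = n // 2
    a, b = A[:h, :h], A[:h, h:]
    c, d = A[h:, :h], A[h:, h:]
    e, f = B[:h, :h], B[:h, h:]
    g, i = B[h:, :h], B[h:, h:]
    # Seven recursive products instead of eight -> O(n^log2(7)).
    p1 = strassen(a, f - i)
    p2 = strassen(a + b, i)
    p3 = strassen(c + d, e)
    p4 = strassen(d, g - e)
    p5 = strassen(a + d, e + i)
    p6 = strassen(b - d, g + i)
    p7 = strassen(a - c, e + f)
    return np.vstack([
        np.hstack([p5 + p4 - p2 + p6, p1 + p2]),
        np.hstack([p3 + p4, p1 + p5 - p3 - p7]),
    ])

X = np.arange(16, dtype=float).reshape(4, 4)
Y = np.arange(16, 32, dtype=float).reshape(4, 4)
print(np.allclose(strassen(X, Y), X @ Y))  # -> True
```

In practice the crossover point where this beats plain O(n^3) is large, which is another reason to just use a vendor library.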
RonL
Gram-Schmitt is used to generate an orthogonal (orthonormal) basis from
an arbitrary basis (for inner product spaces). I usually use it for constructing
polynomials orthogonal over slightly odd spaces wrt slightly odd measures,
but not matrix inversion. (Sounds more complicated than it is really)
It is one of the things that really impressed me as an undergraduate
for some reason.
RonL
I've realized that engineering/physics students for some reason are impressed by differential equations. I think it is because their non-math professors say they are used in applications, and hence they want to learn them. And after they learn them, they think they know the most complicated math, while in fact all they know is how to use formulas. Maybe your situation was similar.
I'm assuming the "Gram-Schmitt" process is the same as the "Gram-Schmidt" process, which is an algorithm for producing an orthonormal or orthogonal basis for any nonzero subspace of R^n. Although, I was not sure, since Google seemed to have a few entries for "Gram-Schmitt".
Given the basis {x_1, ..., x_p} for subspace W of R^n,
If v_1 = x_1
v_2 = x_2 - [(x_2*v_1)/(v_1*v_1)](v_1)
.
.
.
v_p = x_p - [(x_p*v_1)/(v_1*v_1)](v_1) - [(x_p*v_2)/(v_2*v_2)](v_2) - ... - [(x_p*v_(p-1))/(v_(p-1)*v_(p-1))](v_(p-1))
Then {v_1, ..., v_p} is an orthogonal basis for W (and Span{v_1, ..., v_k} = Span{x_1, ..., x_k} for 1 <= k <= p).
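Written out as code, the recipe above looks like this (a sketch in Python/NumPy; `gram_schmidt` is my own name for it):

```python
import numpy as np

def gram_schmidt(xs):
    """Classical Gram-Schmidt: turn basis rows xs into an orthogonal basis."""
    vs = []
    for x in xs:
        v = np.array(x, dtype=float)
        for u in vs:
            v -= (x @ u) / (u @ u) * u  # subtract the projection of x onto u
        vs.append(v)
    return vs

xs = np.array([[1., 1., 0.], [1., 0., 1.], [0., 1., 1.]])
v1, v2, v3 = gram_schmidt(xs)
print(np.isclose(v1 @ v2, 0), np.isclose(v1 @ v3, 0), np.isclose(v2 @ v3, 0))
```

Dividing each v_k by its norm afterwards turns the orthogonal basis into an orthonormal one.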