# matrices

• Aug 27th 2007, 03:59 AM
ksssudhanva
matrices
Hi,
Please, can anyone give me a proof of Cramer's method?
• Aug 27th 2007, 04:40 AM
TKHunny
Have you tried an arbitrary solution of a 2 variable system? Just set up two equations with ALL arbitrary parameters and solve them using substitution or something else. Then do it again using Cramer's Method.

ax + by = c
dx + ey = f

Solve that every way you can and see what it looks like.
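(Not part of the original thread.) A minimal numeric sketch of that exercise in Python — solve a 2-variable system once by elimination/substitution and once by Cramer's rule, and check that the answers agree. The coefficient values below are arbitrary choices, not from the thread:

```python
# Solve ax + by = c, dx + ey = f two ways and compare.
# The coefficients are arbitrary; any values with a*e - b*d != 0 work.
a, b, c = 2.0, 3.0, 8.0
d, e, f = 1.0, -4.0, -7.0

# Elimination: eliminate x from equation 2, solve for y, back-substitute.
y = (a*f - d*c) / (a*e - d*b)
x = (c - b*y) / a

# Cramer's rule: ratios of determinants.
det = a*e - b*d
x_cramer = (c*e - b*f) / det
y_cramer = (a*f - c*d) / det

print(abs(x - x_cramer) < 1e-12, abs(y - y_cramer) < 1e-12)  # True True
```

Both routes give the same $(x, y)$, which is exactly the comparison TKHunny suggests making.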
• Aug 27th 2007, 09:32 AM
ThePerfectHacker
Quote:

Originally Posted by ksssudhanva
Hi,
Please, can anyone give me a proof of Cramer's method?

$\left\{ \begin{array}{c}a_{11}x+a_{12}y=k_1\\a_{21}x+a_{22}y=k_2 \end{array} \right.$

Multiply the top equation by $a_{21}$ and the bottom by $a_{11}$. Now subtract these equations and solve for $y$. (Really messy.) Then do the same for $x$.
• Aug 28th 2007, 12:56 AM
ksssudhanva
matrices
Thanks
But how can you prove that Cramer's method is correct? You may prove it by solving a system with arbitrary coefficients, but how did Cramer invent this method? (How did he know that the steps we follow in Cramer's method would solve the system?)
• Aug 28th 2007, 04:57 AM
TKHunny
Mr. Cramer probably solved a system of equations with arbitrary coefficients. Give it a try.

There are two ways to know for sure:

1) Ask Mr. Cramer. Too bad he is no longer with us.
2) Find the original text in which it first appears and for which the method was assigned his name.
• Aug 28th 2007, 05:21 AM
Soroban
Hello, ksssudhanva!

Cramer probably got tired of solving every system separately
. . and sought a generalized solution.
You could have done it, too.

We have: . $\begin{array}{cccc}ax + by & = & h & [1]\\ cx + dy & = & k & [2]\end{array}$

$\begin{array}{cccc}\text{Multiply [1] by }d: & adx + bdy & = & dh \\ \text{Multiply [2] by -}b: & \text{-}bcx - bdy & = & \text{-}bk \end{array}$

Add: . $adx - bcx \:=\:dh - bk\quad\Rightarrow\quad(ad-bc)x \:=\:dh-bk\quad\Rightarrow\quad\boxed{x \:=\:\frac{dh-bk}{ad-bc}}$

$\begin{array}{cccc}\text{Multiply [1] by -}c: & \text{-}acx - bcy & = & \text{-}ch \\ \text{Multiply [2] by }a: & acx + ady & = & ak\end{array}$

Add: . $ady - bcy \:=\:ak-ch\quad\Rightarrow\quad(ad-bc)y \:=\:ak-ch\quad\Rightarrow\quad\boxed{ y \:=\:\frac{ak-ch}{ad-bc}}$

The following is my speculation on what happened.

Then he wondered, "How am I going to memorize those formulas?"

He noticed that the denominators are the determinant of the coefficients:
. . . . . . $ad - bc \:=\:\begin{vmatrix}a & b \\ c & d\end{vmatrix}$

Then he did some mental juggling to see that:
. . . . . . $dh - bk \:=\:\begin{vmatrix}{\color{blue}h} & b \\{\color{blue}k} & d\end{vmatrix}$ . and . $ak-ch \:=\:\begin{vmatrix}a & {\color{blue}h} \\ c & {\color{blue}k}\end{vmatrix}$

. . where the constants replace the respective coefficients.
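A small sketch of this pattern (my addition, not Soroban's — the coefficient values are arbitrary, and exact `Fraction` arithmetic is used so the equality checks are exact):

```python
from fractions import Fraction as F

def det2(m):
    """Determinant of a 2x2 matrix [[p, q], [r, s]]: ps - qr."""
    (p, q), (r, s) = m
    return p*s - q*r

# Arbitrary coefficients for ax + by = h, cx + dy = k
a, b, h = F(3), F(5), F(1)
c, d, k = F(2), F(-1), F(9)

D = det2([[a, b], [c, d]])       # ad - bc
x = det2([[h, b], [k, d]]) / D   # constants replace the x-column
y = det2([[a, h], [c, k]]) / D   # constants replace the y-column

# Check that (x, y) really satisfies both equations
print(a*x + b*y == h, c*x + d*y == k)  # True True
```

Replacing a column of coefficients with the constants, as the determinants above show, reproduces exactly the boxed formulas from the elimination.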

Then he said, "Hey, I may be onto something here . . . "
. . and tested this pattern for larger systems
. . and eventually proved the procedure in general.

Then he and his buddies traded high-fives and said, "It's Miller time!"

But, of course, I'm guessing . . .

• Aug 28th 2007, 06:48 AM
ThePerfectHacker
Here is the way to prove the generalized version of Cramer's rule. :eek:

Given,
$A\bold{x}=\bold{b}$,
where $A=\left[ \begin{array}{cccc}a_{11}&a_{12}&...&a_{1n}\\a_{21}&a_{22}&...&a_{2n}\\...&...&...&...\\a_{n1}&a_{n2}&...&a_{nn}\end{array} \right]$ and $\bold{b} = \left[ \begin{array}{c}b_1\\b_2\\...\\b_n\end{array} \right]$.
Assume $\det(A)\not = 0$; then $A$ is invertible.
So, there exists a unique solution given by,
$\bold{x} = A^{-1}\bold{b}=\frac{1}{\det(A)}\cdot \mbox{adj}(A)\bold{b} = \frac{1}{\det(A)}\left[ \begin{array}{cccc}C_{11}&C_{21}&...&C_{n1}\\C_{12}&C_{22}&...&C_{n2}\\...&...&...&...\\C_{1n}&C_{2n}&...&C_{nn}\end{array} \right]\left[ \begin{array}{c}b_1\\b_2\\...\\b_n\end{array} \right]$.
Let me explain what just happened. There is a rule that $A^{-1}$ equals its "adjoint" (adjugate) matrix divided by its determinant, and the adjoint is the transpose of the "cofactor matrix". That is what those $C$'s are: they are cofactors, written with their indices swapped because of the transpose.
Multiply them out,
$\bold{x} = \frac{1}{\det(A)} \left[ \begin{array}{c}b_1C_{11}+b_2C_{21}+...+b_nC_{n1}\\b_1C_{12}+b_2C_{22}+...+b_nC_{n2}\\...\\b_1C_{1n}+b_2C_{2n}+...+b_nC_{nn} \end{array}\right]$.
Now $\bold{x}$ is the solution vector of the system. The $k$-th entry of this vector is:
$x_k = \frac{b_1C_{1k}+b_2C_{2k}+...+b_nC_{nk}}{\det(A)}$.
Now consider the matrix $M_k$ obtained from $A$ by replacing its $k$-th column with $\bold{b}$. What is its determinant? It is exactly the numerator above, because computing $\det(M_k)$ by cofactor expansion along the $k$-th column gives precisely $b_1C_{1k}+b_2C_{2k}+...+b_nC_{nk}$.
Thus,
$\boxed{ x_k = \frac{\det(M_k)}{\det(A)} }$
Q.E.D.
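The general proof above can be sketched directly in Python (my addition, not from the thread): a hypothetical `cramer_solve` helper that computes $x_k = \det(M_k)/\det(A)$, with the determinant evaluated by cofactor expansion just as in the argument. The sample $3\times 3$ system is an arbitrary illustration:

```python
from fractions import Fraction as F

def det(m):
    """Determinant by cofactor expansion along the first row."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for j in range(n):
        # Minor: delete row 0 and column j, with alternating sign (-1)^j.
        minor = [row[:j] + row[j+1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

def cramer_solve(A, b):
    """Solve A x = b via x_k = det(M_k)/det(A),
    where M_k is A with its k-th column replaced by b."""
    dA = det(A)
    if dA == 0:
        raise ValueError("det(A) = 0: Cramer's rule does not apply")
    x = []
    for k in range(len(A)):
        Mk = [row[:k] + [b[i]] + row[k+1:] for i, row in enumerate(A)]
        x.append(F(det(Mk), dA))
    return x

# An arbitrary 3x3 example with solution (2, 3, -1):
A = [[2, 1, -1], [-3, -1, 2], [-2, 1, 2]]
b = [8, -11, -3]
print(cramer_solve(A, b))  # [Fraction(2, 1), Fraction(3, 1), Fraction(-1, 1)]
```

Cofactor expansion makes this $O(n!)$, so it is only a demonstration of the proof's mechanics; in practice one would use Gaussian elimination.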