Hi,
Please, can anyone give me a proof of Cramer's method?
Have you tried an arbitrary solution of a 2 variable system? Just set up two equations with ALL arbitrary parameters and solve them using substitution or something else. Then do it again using Cramer's Method.
ax + by = c
dx + ey = f
Solve that every way you can and see what it looks like.
$\displaystyle \left\{ \begin{array}{cc}a_{11}x+a_{12}y=k_1\\a_{21}x+a_{22}y=k_2 \end{array} \right.$
Multiply the top equation by $\displaystyle a_{21}$ and bottom by $\displaystyle a_{12}$. Now subtract these equations and solve for $\displaystyle y$. (Really messy). Then do the same for $\displaystyle x$.
Thanks
but how can you prove that Cramer's method is correct? You may prove it by taking a system of arbitrary equations. But how did Cramer invent this method? (How did he know that he could solve a system of equations by following the steps we use in Cramer's method?)
Mr. Cramer probably solved a system of equations with arbitrary coefficients. Give it a try.
There are two ways to know for sure:
1) Ask Mr. Cramer. Too bad he is no longer with us.
2) Find the original text in which it first appears and for which the method was assigned his name.
Hello, ksssudhanva!
Cramer probably got tired of solving every system separately
. . and sought a generalized solution.
You could have done it, too.
We have: .$\displaystyle \begin{array}{cccc}ax + by & = & h & [1]\\ cx + dy & = & k & [2]\end{array}$
$\displaystyle \begin{array}{cccc}\text{Multiply [1] by }d: & adx + bdy & = & dh \\
\text{Multiply [2] by -}b: & \text{-}bcx - bdy & = & \text{-}bk \end{array}$
Add: .$\displaystyle adx - bcx \:=\:dh - bk\quad\Rightarrow\quad(ad-bc)x \:=\:dh-bk\quad\Rightarrow\quad\boxed{x \:=\:\frac{dh-bk}{ad-bc}}$
$\displaystyle \begin{array}{cccc}\text{Multiply [1] by -}c: & \text{-}acx - bcy & = & \text{-}ch \\
\text{Multiply [2] by }a: & acx + ady & = & ak\end{array}$
Add: .$\displaystyle ady - bcy \:=\:ak-ch\quad\Rightarrow\quad(ad-bc)y \:=\:ak-ch\quad\Rightarrow\quad\boxed{ y \:=\:\frac{ak-ch}{ad-bc}}$
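You can check these boxed formulas numerically. Here is a minimal sketch in Python using exact rational arithmetic; the coefficient values are arbitrary examples, not from the thread:

```python
from fractions import Fraction as F

# Arbitrary example coefficients for ax + by = h, cx + dy = k
a, b, h = F(2), F(3), F(8)
c, d, k = F(1), F(-4), F(-7)

# The formulas derived above
x = (d*h - b*k) / (a*d - b*c)
y = (a*k - c*h) / (a*d - b*c)

# Substitute back: both original equations should hold exactly
print(a*x + b*y == h, c*x + d*y == k)  # True True
```

Using `Fraction` avoids floating-point round-off, so the substitution check is exact rather than approximate.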
The following is my speculation on what happened.
Then he wondered, "How am I going to memorize those formulas?"
He noticed that the denominators are the determinant of the coefficients:
. . . . . . $\displaystyle ad - bc \:=\:\begin{vmatrix}a & b \\ c & d\end{vmatrix}$
Then he did some mental juggling to see that:
. . . . . . $\displaystyle dh - bk \:=\:\begin{vmatrix}{\color{blue}h} & b \\{\color{blue}k} & d\end{vmatrix}$ . and . $\displaystyle ak-ch \:=\:\begin{vmatrix}a & {\color{blue}h} \\ c & {\color{blue}k}\end{vmatrix}$
. . where the constants replace the respective coefficients.
Then he said, "Hey, I may be onto something here . . . "
. . and tested this pattern for larger systems
. . and eventually proved the procedure in general.
Then he and his buddies traded high-fives and said, "It's Miller time!"
But, of course, I'm guessing . . .
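Whatever the history, the determinant pattern above is easy to test for yourself. A sketch with NumPy (the coefficient values are arbitrary examples), confirming that the numerators equal the determinants of the column-replaced matrices:

```python
import numpy as np

# Arbitrary example coefficients for ax + by = h, cx + dy = k
a, b, h = 2.0, 3.0, 8.0
c, d, k = 1.0, -4.0, -7.0

A  = np.array([[a, b], [c, d]])
Ax = np.array([[h, b], [k, d]])  # first column replaced by the constants
Ay = np.array([[a, h], [c, k]])  # second column replaced by the constants

print(np.isclose(np.linalg.det(Ax), d*h - b*k))  # numerator of x: True
print(np.isclose(np.linalg.det(Ay), a*k - c*h))  # numerator of y: True
```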
Here is the way to prove the generalized version of Cramer's rule.
Given,
$\displaystyle A\bold{x}=\bold{b}$,
Where, $\displaystyle A=\left[ \begin{array}{cccc}a_{11}&a_{12}&...&a_{1n}\\a_{21}&a_{22}&...&a_{2n}\\...&...&...&...\\a_{n1}&a_{n2}&...&a_{nn}\end{array} \right]$ and $\displaystyle \bold{b} = \left[ \begin{array}{c}b_1\\b_2\\...\\b_n\end{array} \right]$.
Assume $\displaystyle \det(A)\not = 0$; then $A$ is invertible.
So, there exists a unique solution given by,
$\displaystyle \bold{x} = A^{-1}\bold{b}=\frac{1}{\det(A)}\cdot \mbox{adj}(A)\bold{b} = \frac{1}{\det(A)}\left[ \begin{array}{cccc}C_{11}&C_{21}&...&C_{n1}\\C_{12}&C_{22}&...&C_{n2}\\...&...&...&...\\C_{1n}&C_{2n}&...&C_{nn}\end{array} \right]\left[ \begin{array}{c}b_1\\b_2\\...\\b_n\end{array} \right]$.
Let me explain what just happened. There is a rule that $\displaystyle A^{-1}$ is equal to its "adjoint" matrix divided by its determinant. The "adjoint" matrix is the transpose of the "cofactor matrix". That is what those $\displaystyle C$'s are: they are cofactors, and their indices appear backwards because of the transpose.
Multiply them out,
$\displaystyle \bold{x} = \frac{1}{\det(A)} \left[ \begin{array}{c}b_1C_{11}+b_2C_{21}+...+b_nC_{n1}\\b_1C_{12}+b_2C_{22}+...+b_nC_{n2}\\...+...+...+...\\b_1C_{1n}+b_2C_{2n}+...+b_nC_{nn} \end{array}\right]$.
Now $\displaystyle \bold{x}$ is the solution vector of the system of equations. The $\displaystyle k$-th entry of this vector is:
$\displaystyle x_k = \frac{b_1C_{1k}+b_2C_{2k}+...+b_nC_{nk}}{\det(A)}$.
Now consider the matrix $\displaystyle M_k$ obtained from $\displaystyle A$ by replacing its $\displaystyle k$-th column with $\displaystyle \bold{b}$. What is the determinant of this matrix? It is exactly the numerator of $\displaystyle x_k$! If we compute the determinant of $\displaystyle M_k$ by cofactor expansion along the $\displaystyle k$-th column, we get precisely $\displaystyle b_1C_{1k}+b_2C_{2k}+...+b_nC_{nk}$, because the cofactors along that column are unchanged from those of $\displaystyle A$.
Thus,
$\displaystyle \boxed{ x_k = \frac{\det(M_k)}{\det(A)} }$
Q.E.D.
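The general formula is also easy to verify numerically. A sketch with NumPy, using a random invertible system as an assumed test case:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))   # random n x n system (invertible w.h.p.)
b = rng.standard_normal(n)

x = np.linalg.solve(A, b)         # reference solution
det_A = np.linalg.det(A)

for k in range(n):
    M_k = A.copy()
    M_k[:, k] = b                 # replace the k-th column with b
    assert np.isclose(np.linalg.det(M_k) / det_A, x[k])

print("Cramer's rule verified for all", n, "components")
```

For large $n$ this is a check, not a practical method: evaluating $n+1$ determinants costs far more than Gaussian elimination.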