For every n×n matrix A,
det(A) = det(A^{T}). Can someone prove this?
What's probably best, if you really want to understand this, is to take a 3x3 matrix and work out its determinant using cofactors, then do a row permutation and repeat.
You might even do it quickly with a 2x2 (it's a ten-second exercise) to see why the sign change comes about.
Once you have a good instinct for why the sign changes, understanding the lingo about permutation indices etc. will make more sense.
I did, and I understand the sign and all, but I think there is something missing. I mean, we have to arrive at
a_{1g(1)}a_{2g(2)}···a_{ng(n)}
What is written above is, I think, the inverse function of what is written in the proof; however, I have to prove that their determinants are equal, don't I?
Assume the Einstein summation convention: sum over repeated indices (e.g. a_{i}b_{i} = a_{1}b_{1} + a_{2}b_{2} + a_{3}b_{3}).
If D =e_{ijk}a_{1i}a_{2j}a_{3k}
D^{T}=e_{ijk}a_{i1}a_{j2}a_{k3}
Every term of D is a term of D^{T}. The catch is (romsek), you have to deal with the sign.
D=e_{123}e_{ijk}a_{1i}a_{2j}a_{3k}
In each term, transpose ijk to 123 and simultaneously apply the same transpositions to the first index triple; then there is no net change in sign:
D=e_{lmn}e_{123}a_{l1}a_{m2}a_{n3}
For example e_{123}e_{312}a_{13}a_{21}a_{32}=e_{231}e_{123}a_{21}a_{32}a_{13}
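Not from the thread, but a quick numeric sanity check of the claim above: the sketch below (function names and the sample matrix are mine) writes out the Einstein summation explicitly, contracting e_{ijk} once against the rows (D) and once against the columns (D^{T}), and confirms the two agree.

```python
def levi_civita(p):
    """e_{ijk}: +1 for an even permutation of (1,2,3), -1 for odd, 0 on a repeat."""
    i, j, k = p
    if len({i, j, k}) < 3:
        return 0
    sign = 1
    idx = [i, j, k]
    # parity via inversion count
    for a in range(3):
        for b in range(a + 1, 3):
            if idx[a] > idx[b]:
                sign = -sign
    return sign

def det3_rows(a):
    """D = e_{ijk} a_{1i} a_{2j} a_{3k}, summation written out."""
    return sum(levi_civita((i, j, k)) * a[0][i-1] * a[1][j-1] * a[2][k-1]
               for i in range(1, 4) for j in range(1, 4) for k in range(1, 4))

def det3_cols(a):
    """D^{T} = e_{ijk} a_{i1} a_{j2} a_{k3}: the same contraction on the columns."""
    return sum(levi_civita((i, j, k)) * a[i-1][0] * a[j-1][1] * a[k-1][2]
               for i in range(1, 4) for j in range(1, 4) for k in range(1, 4))

A = [[2, -1, 3],
     [0,  4, 1],
     [5,  2, -2]]
print(det3_rows(A), det3_cols(A))  # prints -85 -85: the two contractions agree
```

Every nonzero term in one sum shows up in the other with the same sign, which is exactly the transposition-cancellation argument above.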
Let's write out "long-hand" what the definition
det(A) = Σ_{σ ∈ S_{3}} sgn(σ) a_{1σ(1)}a_{2σ(2)}a_{3σ(3)}
really means, for n = 3.
To do this, we are going to have to calculate the various σ in S_{3}. Let's list these, there are 6 = 3! of them:
e, the identity permutation (which sends k to k, for all k = 1,2,3).
(1 2)
(1 3)
(2 3) these are all transpositions, and have negative sgn (they are odd permutations, and thus we take -1 to an odd power).
(1 2 3)
(1 3 2) these are 3-cycles and can be written as the product of 2 transpositions:
(1 2 3) = (1 3)(1 2)
(1 3 2) = (1 2)(1 3) so they have positive sgn.
So our first term will be:
a_{11}a_{22}a_{33}
the next term is:
-a_{12}a_{21}a_{33} (since sgn((1 2)) = -1).
our third term is:
-a_{13}a_{22}a_{31}
the 4th:
-a_{11}a_{23}a_{32}
the 5th:
a_{12}a_{23}a_{31}
and finally:
a_{13}a_{21}a_{32}.
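The six terms above can be generated mechanically. Not part of the original post, but here is a short sketch (names are mine) of the Leibniz sum over S_{n}, which reproduces exactly the sgn-weighted products just listed:

```python
from itertools import permutations

def sgn(p):
    """Parity of a permutation in one-line notation, via inversion count."""
    s = 1
    for a in range(len(p)):
        for b in range(a + 1, len(p)):
            if p[a] > p[b]:
                s = -s
    return s

def det_leibniz(a):
    """det(A) = sum over sigma in S_n of sgn(sigma) * a_{1 sigma(1)} ... a_{n sigma(n)}."""
    n = len(a)
    total = 0
    for sigma in permutations(range(n)):   # 6 permutations for n = 3
        term = sgn(sigma)
        for i in range(n):
            term *= a[i][sigma[i]]
        total += term
    return total

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 10]]
print(det_leibniz(A))  # prints -3
```

For n = 3 the loop visits the identity, the three transpositions, and the two 3-cycles, producing the six terms in the same grouping as above.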
Now suppose we look at A^{T} = (a_{ji}), the transpose of A = (a_{ij}).
The first term remains unchanged by switching the indices.
The second term becomes:
-a_{21}a_{12}a_{33}, which is still the same (since multiplication of scalars is commutative) as the second term for det(A).
Similarly, the third and fourth terms are also unchanged (we may have to shuffle the order in the product, but commutativity lets us do that).
Now let's see what happens to the 5th term, it becomes:
a_{21}a_{32}a_{13} = a_{13}a_{21}a_{32}
which is the 6th term in our expansion of det(A), and similarly the 6th term for det(A^{T}) turns into the 5th term for det(A).
So we get the exact same sum (with the terms in perhaps a slightly different order), thus the determinants are equal.
In particular:
a_{1σ(1)}a_{2σ(2)}···a_{nσ(n)} = a_{σ^{-1}(1)1}a_{σ^{-1}(2)2}···a_{σ^{-1}(n)n},
and sgn(σ) = sgn(σ^{-1}), and the mapping σ → σ^{-1} is a bijection of S_{n} with itself, so summing over the σ^{-1}'s gives the same result as summing over the σ's.
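These two facts are easy to check by brute force. The sketch below (not from the thread; helper names are mine) verifies, over all of S_{4}, that a permutation and its inverse have the same sign, and that inversion permutes S_{4} onto itself:

```python
from itertools import permutations

def sgn(p):
    """Parity via inversion count."""
    s = 1
    for a in range(len(p)):
        for b in range(a + 1, len(p)):
            if p[a] > p[b]:
                s = -s
    return s

def inverse(p):
    """Inverse of a permutation given in one-line notation."""
    q = [0] * len(p)
    for i, v in enumerate(p):
        q[v] = i
    return tuple(q)

all_perms = set(permutations(range(4)))
for sigma in all_perms:
    assert sgn(sigma) == sgn(inverse(sigma))
# sigma -> sigma^{-1} is a bijection: inverting every element just reorders S_4
assert set(map(inverse, all_perms)) == all_perms
print("sgn(sigma) == sgn(sigma^{-1}) for all sigma in S_4")
```

So the sum over σ^{-1} really is the same collection of signed terms as the sum over σ, merely reordered.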
If it's not clear, consider simultaneously changing e_{123}e_{312} to e_{231}e_{123}:
e_{123}e_{312} → e_{213}e_{132} → e_{231}e_{123}
The changes in sign associated with each transposition cancel out.
EDIT: Just in case, e_{ijk}... is +1 for an even permutation and -1 for an odd permutation.
Ah, the infamous Levi-Civita symbol. This is the same as the sgn function when i ≠ j ≠ k ≠ i, and 0 otherwise. Since this is an alternating tensor, it stands to reason that det, being a linear combination of alternating tensors, is likewise an alternating tensor.
In fact, det is the unique alternating multilinear functional such that det(I) = 1. This can be used to give an easy proof that det(AB) = det(A)det(B):
Consider the function B → det(AB), which is easily seen to be an alternating multilinear functional of the columns of B. It follows that:
det(AB) = c·det(B) for some scalar c. Setting B = I, we obtain c = det(A).
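As a numeric illustration (mine, not the poster's), the sketch below checks det(AB) = det(A)det(B) on a small example, using the permutation-sum definition of det throughout:

```python
from itertools import permutations

def det(a):
    """Permutation-sum (Leibniz) determinant, sign via inversion count."""
    n = len(a)
    total = 0
    for sigma in permutations(range(n)):
        s = 1
        for x in range(n):
            for y in range(x + 1, n):
                if sigma[x] > sigma[y]:
                    s = -s
        prod = s
        for i in range(n):
            prod *= a[i][sigma[i]]
        total += prod
    return total

def matmul(a, b):
    """Plain n x n matrix product."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [5, 2]]
print(det(matmul(A, B)), det(A) * det(B))  # prints 10 10
```

Of course a single example proves nothing; the uniqueness argument above is what makes the identity hold for all A and B.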
We can adapt this approach to this problem, as well:
Consider A → det(A^{T}) as a function of the rows of A. This is a multilinear functional of the rows, and from the alternating character of det, we see it is alternating as well.
Hence det(A^{T}) = c(det(A)) for some scalar c.
Taking A = I, we get det(I^{T}) = det(I) = c·det(I), so c = 1, hence det(A^{T}) = det(A).
(Both of these proofs rely on the fact that the space of alternating n-linear functionals on an n-dimensional space has dimension 1, which is not exactly trivial.
In fact, the space of alternating k-linear functionals on an n-dimensional space has dimension "n choose k", which can be seen by counting the basis elements.)
Here we go again. Hang on.
Deveno wrote:
"In fact, det is the unique alternating multilinear functional such that det(I) = 1"
Circular definition. det defined in terms of det. But let's move on.
What is the unique alternating multilinear functional such that "det(I)" =1?
To clarify the axiomatic definition of determinant for those interested (I looked it up because I don't like to be intimidated):
A linear functional on a vector space V over F is a mapping L: V → F which satisfies
L(αx) = αL(x),
L(x + x′) = L(x) + L(x′).
A multilinear functional on a vector space V over F is a mapping L(x_{1}, x_{2}, …, x_{n}): V^{n} → F which satisfies, for each i = 1, 2, …, n:
L(x_{1}, …, αx_{i}, …, x_{n}) = αL(x_{1}, …, x_{i}, …, x_{n}),
L(x_{1}, …, x_{i} + x_{i}′, …, x_{n}) = L(x_{1}, …, x_{i}, …, x_{n}) + L(x_{1}, …, x_{i}′, …, x_{n}).
(Note: the most general definition is for x_{1} ∈ V_{1}, …, x_{n} ∈ V_{n}.)
Axiomatic definition of determinant:
1) det is a multilinear functional of the rows (a_{1}, a_{2}, …, a_{n}) of the matrix A.
2) If two rows of A are identical, det A = 0.
3) det I = 1.
a) Does det A exist?
b) Is it unique?
c) How do you calculate det A?
Theorem: If det satisfies the axioms, then
4) det A = Σ_{π} sgn(π) a_{1π(1)}a_{2π(2)}···a_{nπ(n)},
where the sum is over all permutations π(1), π(2), …, π(n) of 1, 2, …, n, and sgn(π) is the determinant of the permutation matrix of π. Uniqueness has to be proven.
Note the circular definition here: sgn(π) is defined by a determinant, and det is defined by sgn(π). There is a way around this: "determinant" is a separate definition from "det."
On the other hand, you can start with a standard definition of determinant:
det A = e_{ijk…} a_{1i}a_{2j}…,
summed over the n repeated indices i, j, …, where e_{ijk…} is +1 or -1 according as ijk… is an even or odd permutation, i.e. sgn(ijk…). This is exactly the same thing as 4).
Note: Using the axiomatic definition, det(AB) = det(A)det(B) is proven by first showing det(EB) = det(E)det(B), where E is a product of elementary matrices.
A complete development of the axiomatic definition can be found in Schneider and Barker, “Matrices and Linear Algebra.”
CONCLUSION: Why? After the smoke clears, it's just the standard definition.
EDIT: But I did get something out of this, "multi-linear functional," so there is one less thing I can be snowed by, which is why I pass it on as a public service. But the list is endless, and the only motivation in a particular instance like this is annoyance.
EDIT: And I didn't like my nice solution (post 6) to a question by OP buried by an incomplete, confusing, irrelevant, rambling reply right after I posted it.
Lol, you are funny.
The Levi-Civita symbol you are so fond of is itself a multilinear functional, which acts much like a determinant, and the Einstein summation convention you seem to feel so happy about has its origins in tensor analysis, which deals with such things (among others) as multilinear functionals.
The "sign function" sgn is definitely NOT determined by a determinant (it could be, but to avoid circularity, one would then need some alternate definition of determinant). The sign function is determined by the parity (oddness or evenness) of the number of transpositions in a decomposition of a permutation into transpositions (2-cycles). The number of transpositions required is not unique, a 3-cycle might be written as a product of 2 transpositions or 4, or 6, but it cannot be written as the product of an odd number of transpositions.
It can be shown (but I will not) that sgn is the UNIQUE nontrivial group homomorphism from S_{n} to the group {-1, 1} under ordinary multiplication of integers. Proving that sgn is an invariant of a permutation is somewhat of a chore; many proofs invoke the so-called "Vandermonde polynomial":
Δ(x_{1}, …, x_{n}) = Π_{i<j}(x_{j} - x_{i}),
and let S_{n} act on the indices.
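To make the Vandermonde approach concrete, here is a short sketch (mine, not from the thread): permuting the indices of the product Π_{i<j}(x_{j} - x_{i}) flips its sign once per transposition, so the ratio of the permuted product to the original recovers sgn. The code checks this against the inversion-count definition on all of S_{4}:

```python
from itertools import permutations

def vandermonde(xs):
    """Product of (x_j - x_i) over all pairs i < j."""
    prod = 1
    for i in range(len(xs)):
        for j in range(i + 1, len(xs)):
            prod *= xs[j] - xs[i]
    return prod

def sgn_vandermonde(sigma):
    """sgn via the action of sigma on the indices of the Vandermonde polynomial."""
    base = list(range(len(sigma)))          # distinct sample points x_i = i
    permuted = [base[s] for s in sigma]
    # magnitudes are equal, so the quotient is exactly +1 or -1
    return vandermonde(permuted) // vandermonde(base)

def sgn_inversions(sigma):
    """sgn via counting inversions (parity of transpositions)."""
    s = 1
    for a in range(len(sigma)):
        for b in range(a + 1, len(sigma)):
            if sigma[a] > sigma[b]:
                s = -s
    return s

# the two definitions agree on all 24 elements of S_4
for p in permutations(range(4)):
    assert sgn_vandermonde(p) == sgn_inversions(p)
print("Vandermonde sign matches inversion-count sign on S_4")
```

This is why the Vandermonde polynomial shows up in proofs that sgn is well defined: it gives a definition of parity that makes no reference to any particular transposition decomposition.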
As for going further to say my post was: "incomplete, confusing, irrelevant and rambling" let's examine each of those objections in turn:
Incomplete: I stated quite clearly at the beginning of the post that I was only addressing the case n = 3. The size of S_{n} grows quite rapidly as n does: a 4x4 matrix has 24 terms in the sum that defines its determinant, a 5x5 matrix has 120 terms. Moreover, each term is an n-fold product, so a 5x5 expansion involves 480 multiplications plus 119 additions (not counting evaluating the signs of the permutations). Except in very special cases, manually calculating a 5x5 determinant is only for the extremely persistent. I explicitly gave all 6 terms in the first determinant expansion, and only took a slight short-cut in indicating the six terms in the expansion of the determinant of the transpose. I only used notation that the OP has shown previously, from attachments he was familiar with.
Confusing: This is a subjective assessment; if you were confused, it may be understandable, as determinants are kind of complicated.
Irrelevant: I addressed the topic matter of the thread. How else to judge relevance? I try not to stray from discussing the topic at hand with ad hominem attacks. If you believe otherwise, I urge you to report my post(s) to the moderation staff.
Rambling: Perhaps fair, but somewhat at odds with the "incompleteness" claim. I strive to be CLEAR, but what is clear to one's self is not always so to others.
As for "burying your post", the fact that someone else posted after you did is something you have no control over, it's the nature of the beast on any public forum. Unless you have staff privileges on this site, I do not think you are able to close a thread after you post to ensure your post remains "the last word". I don't see these forums as "a competition", rather I see different approaches as "complementary" and I would HOPE the original poster is/was astute enough to take what is useful to him, no matter WHAT the source. There's no "right" way to do mathematics, different methods and approaches have different utilities.
Especially in a subject like Linear Algebra, it is possible to view it on "different levels" ranging from the practical and computational (such as analyzing static forces in a rigid body truss to determine stresses) to the extremely abstract (such as studying linear representations of F[G]-algebras for a group G). I have seen just in this particular forum alone questions ranging from nilpotent linear operators to solving two simple equations in two unknowns with integer coefficients.
My feeling is, the original poster is reading a "medium-level" book on Linear Algebra. I suspect he is Israeli, judging from a book attachment he posted written in Hebrew, and perhaps making what he wants clear is all the more difficult for him if English is not his native tongue. Perhaps because of this, he responds less vocally with approval or confusion to anything that is posted here. My posts are based on a "guess" at his sophistication level. As with any guesswork, I certainly might be wrong.
The definition of det obfuscated by Deveno is nicely summarized in Finkbeiner:
det A = F (A a matrix, F a scalar) is uniquely defined by the conditions:
1) det is linear in the columns.
2) det is 0 if two columns are identical.
3) det I = 1.
The conditions are used to test an existing det function (for example, Schneider with the standard definition of det), or a derivation is attempted.
Finkbeiner attempts the derivation as follows (details omitted):
Let C = AB; then it follows from conditions 1) and 2) that
det C = det B · Σ_{p} ±(a_{p1}a_{p2}…a_{pn}).
If B = I, then C = A and det B = 1 from 3).
The result is a circular definition, because the function det B depends on the function det A, which depends on the function det B.
To get around this problem, and to prove the det function exists, Finkbeiner uses induction and defines the cofactor as the det of an (n-1)×(n-1) matrix, with det(a) = a for a 1×1 matrix. Then
det A = Σ_{i} ± a_{ij}|A_{ij}|.
But that is just a standard definition of determinant. So after all the smoke clears, you are left with the fact that a standard definition of det satisfies conditions 1)-3). But I knew that all along.
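The inductive cofactor definition is easy to state as code. A minimal sketch (mine; Finkbeiner's text is not being quoted here), expanding along the first column with det(a) = a as the base case:

```python
def det_cofactor(a):
    """Expand along the first column: det A = sum_i (-1)^{i+1} a_{i1} |A_{i1}|."""
    n = len(a)
    if n == 1:
        return a[0][0]          # base case: det of a 1x1 matrix is its entry
    total = 0
    for i in range(n):
        # minor A_{i1}: delete row i and column 1
        minor = [row[1:] for k, row in enumerate(a) if k != i]
        total += (-1) ** i * a[i][0] * det_cofactor(minor)
    return total

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 10]]
print(det_cofactor(A))  # prints -3, agreeing with the permutation-sum definition
```

Unwinding the recursion reproduces the n! signed products of the standard definition, which is the point being made above.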
Uniqueness is claimed because detA is defined by its components. How do you know that there isn’t some other function of the components which satisfies 1)-3)?
It turns out that we can form a vector space out of multilinear functions: (F^{n})^{n}-->F.
It also turns out that elements of this vector space that satisfy (2) form a subspace.
If we try to build this subspace up "from the inside out", the construction we come up with closely parallels how we define a determinant with minors and co-factors.
Simple combinatorics tells us that the subspace of alternating k-linear functions (F^{n})^{k} → F satisfying (2) has dimension "n choose k". The basic way of seeing this is with the wedge product, or exterior product, which turns two 1-vectors (ordinary vectors) into a 2-vector (or blade, that is, an oriented plane). Blades can be viewed as "scalable parallelograms", and 3-vectors as "scalable parallelepipeds" (where the 3rd component behaves like the cross-product of the other 2). In this view, the determinant is the "signed volume element" of our 3-vector, and this can be generalized to n dimensions.
As "n choose n" is 1, the subspace of alternating n-linear functions of n n-dimensional vectors has dimension 1, so any such function is completely determined by its value at some element (some collection of n n-dimensional vectors) which doesn't map to 0, which we can, of course, view as an nxn matrix. So to UNIQUELY determine the det function, it suffices to know what det is at some invertible matrix. If we wish to have:
det(AB) = det(A)det(B)
we clearly want the alternating n-linear function on the columns of A such that:
det(A) = det(AI) = det(A)det(I),
which tells us we want the alternating n-linear function on n n-dimensional vectors in F^{n}, such that the det on the collection of the standard ordered basis is 1, that is: det(I) = 1.
It might be instructive to see how this actually plays out in the case n = 2.
We seek a bilinear function F:R^{2}xR^{2}-->R such that:
1. F(u,u) = 0 for any u in R^{2}
2. F((1,0),(0,1)) = 1.
Following standard examples, we will write u = (a,c), v = (b,d) in F(u,v), so that F can be viewed as a function of the matrix:
[a b]
[c d].
As F is bilinear, we can view F(u,v) as the matrix product u^{T}Mv, for some 2x2 matrix M. Writing M =
[m_{1} m_{2}]
[m_{3} m_{4}]
we get:
F((a,c),(b,d)) = (ab)m_{1} + (bc)m_{3} + (ad)m_{2} + (cd)m_{4}.
Using property (1) above, and setting b = a, d = c, we have:
0 = F((a,c),(a,c)) = a^{2}m_{1} + (ac)(m_{2} + m_{3}) + c^{2}m_{4}.
Using property (2) above, we have (setting a = d = 1, b = c = 0):
1 = m_{2}.
Using property (1) for a = 1, c = 0, gives us:
0 = m_{1}.
Using property (1) for a = 0, c = 1, gives us:
0 = m_{4}.
Substituting these values in for F((a,c),(a,c)) we have:
0 = (ac)(1 + m_{3}) , so choosing a,c ≠ 0, we get m_{3} = -1, that is: M =
[ 0 1]
[-1 0], so that:
F((a,c),(b,d)) = ad - bc.
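The worked case n = 2 can be checked directly. A small sketch (mine) of the function just derived, confirming it satisfies both defining properties and reproduces ad - bc:

```python
def F(u, v):
    """F(u,v) = u^T M v with M = [[0, 1], [-1, 0]], i.e. ad - bc for u=(a,c), v=(b,d)."""
    a, c = u
    b, d = v
    return a * d - b * c

# property 1: F(u,u) = 0 (alternating)
assert F((3, 7), (3, 7)) == 0
# property 2: F((1,0),(0,1)) = 1 (normalization, det(I) = 1)
assert F((1, 0), (0, 1)) == 1
# bilinearity spot-check in the first slot
u, u2, v = (1, 2), (3, -1), (4, 5)
assert F((u[0] + u2[0], u[1] + u2[1]), v) == F(u, v) + F(u2, v)
print(F((2, 4), (3, 5)))  # prints -2, i.e. 2*5 - 3*4
```

This is, of course, exactly the familiar 2x2 determinant, recovered from nothing but the alternating, bilinear, and normalization conditions.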