1. Re: Determinant of transposed matrix

Deveno, in an effort to understand your language, I took some notes from Halmos, Finite-Dimensional Vector Spaces, and pass them on as a language exercise:

These are essentially language notes, not math notes.

A linear functional on a vector space V is a scalar-valued function y defined for every vector x with the property that, for scalars a1, a2,
y(a1x1+a2x2) = a1y(x1)+a2y(x2)

The set V’ of all linear functionals on V is a vector space (A set of elements is a vector space if…..). V’ is called the dual space of V.

Def. The annihilator S0 of any subset S of V is the set of all vectors y in V’ st [x,y] = 0 for every x in S, where [x,y] denotes y(x).

Def. Dual Basis: a basis of V’.
Theorem: If X = {x1,…,xn} is a basis of the n-dimensional space V, there is a basis Y = {y1,…,yn} of V’; ie, V’ is also n-dimensional.

Def. Direct Sum U(+)V of U and V (over F): All ordered pairs <x,y>, x in U and y in V.

Def: The Tensor Product U(X)V is the dual of the space of all bilinear forms on U(+)V.
Surprisingly, we can piece it together.
Theorem: W=U(+)V is a vector space.
Def. w(x,y) is the value of w at <x,y>
Def. w is a bilinear form, (or functional) if:
w(a1x1+a2x2,y) = a1w(x1,y)+a2w(x2,y)
w(x,a1y1+a2y2)=a1w(x,y1)+a2w(x,y2)
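As a quick sanity check of the two bilinearity axioms (my illustration, not from Halmos): any 2×2 matrix M gives a bilinear form w(x,y) = xᵀMy, which we can test on sample vectors.

```python
# A sample bilinear form w(x, y) = x^T M y for an arbitrary matrix M
M = [[1, 2], [3, 4]]

def w(x, y):
    return sum(x[i] * M[i][j] * y[j] for i in range(2) for j in range(2))

x1, x2, y = [1, 0], [2, 5], [3, -1]
a1, a2 = 2, -3
lhs = w([a1*x1[k] + a2*x2[k] for k in range(2)], y)
rhs = a1*w(x1, y) + a2*w(x2, y)
assert lhs == rhs   # linearity in the first slot; the second is symmetric
```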

If w(x,y) = a1w1(x,y) + a2w2(x,y) for bilinear forms w1, w2, then w is a bilinear form, so the set of all bilinear forms on W is a vector space.
If z belongs to W, the set of all functions y(z) is the dual of W.

NEXT: Permutations, Multi-Linear forms, Alternating Forms, and Determinants.

Def: A permutation P maps S(n) onto S(n). For example, if P(1)=2, P(2)=3, P(3)=1, then
P(1,2,3)=(P(1),P(2),P(3))=(2,3,1)
P(3,1,2)=(P(3),P(1),P(2))=(1,2,3)
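As a quick check of the example above (note that P(3,1,2) = (P(3),P(1),P(2)) = (1,2,3)), a permutation can be sketched in Python as a dict; this is my illustration, not from Halmos:

```python
# The permutation P with P(1)=2, P(2)=3, P(3)=1, as a dict
P = {1: 2, 2: 3, 3: 1}

def apply(P, seq):
    # P(s1,..,sn) = (P(s1),..,P(sn)), as in the post
    return tuple(P[s] for s in seq)

assert apply(P, (1, 2, 3)) == (2, 3, 1)
assert apply(P, (3, 1, 2)) == (1, 2, 3)

# Composition (QR)(i) = Q(R(i)); the identity E and the inverse P'
Q = {1: 3, 2: 1, 3: 2}                      # this happens to be P's inverse
compose = lambda Q, R: {i: Q[R[i]] for i in R}
E = {1: 1, 2: 2, 3: 3}
assert compose(Q, P) == E and compose(P, Q) == E   # group axioms in action
```

The last two asserts are the identity and inverse laws quoted below from Halmos' notes.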

If Q and R are arbitrary permutations:
(QR)(i) = Q(R(i)); an identity E exists st EP = PE = P;
an inverse P’ exists st P’P = PP’ = E.
The set of permutations forms a group. (Remember, these are from Halmos’ notes, not my knowledge)

The representation of a permutation as a product of transpositions is not unique.

At this point the development goes into the std def of sgn P (Mirsky, Perlis), as the sign of Πi<j (P(j)−P(i)), but in such an obtuse development that I finally gave up. The only point being that any transposition changes the sign of P, so that any product of an even number of transpositions always has a + sign and any product of an odd number always has a − sign.

The whole permutation development was simply an excruciating exposition on the Levi-Civita symbol, eijk…
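For what it's worth, the sign of a permutation and the Levi-Civita symbol can both be sketched in a few lines of Python by counting inversions (my illustration, not Halmos'):

```python
from itertools import combinations

def sgn(p):
    # Sign of a permutation given as a tuple (p(1),..,p(n)):
    # count inversions, ie pairs i<j with p(i) > p(j)
    inv = sum(1 for i, j in combinations(range(len(p)), 2) if p[i] > p[j])
    return -1 if inv % 2 else 1

def levi_civita(*idx):
    # e_{ijk..}: 0 if any index repeats, else the sign of the permutation
    return 0 if len(set(idx)) < len(idx) else sgn(idx)

assert sgn((1, 2, 3)) == 1
assert sgn((2, 1, 3)) == -1        # one transposition flips the sign
assert levi_civita(1, 1, 2) == 0   # repeated index gives 0
```

Counting inversions gives the same parity as any decomposition into transpositions, which is exactly the well-definedness point discussed later in the thread.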

So I looked ahead to Alternating Forms, and determinants, and gave up. For a mathematician this may be mother’s milk; for my limited capacity, it’s like flying to Paris to potty and then not being able to find one.

I must admit, up to permutations it was a somewhat interesting logical succession.

But it’s all yours Deveno.

2. Re: Determinant of transposed matrix

These concepts are useful to "some people", perhaps not to you. Halmos' book on vector spaces is regarded by some as a "classic"; he had a reputation as an excellent text-writer.

Permutations can be expressed various ways: as bijective functions of a finite set, as "rearrangements" of a set of letters, or seating arrangements, etc., as a certain kind of group which acts on a set, as certain sets of matrices (which come about by "jumbling" the columns of the identity matrix). The approach taken can make permutations seem like quite different animals, but there are certain facts about them which are true "no matter how we express them".

If you find Halmos' style of exposition (or mine, for that matter) not to your liking, that is your prerogative. I do note in passing that the Levi-Civita symbol is a kind of shorthand, and it might be an interesting question to ask: short-hand for what, exactly?

3. Re: Determinant of transposed matrix

Deveno,

The Levi-Civita symbol, eijk…, is short-hand for a function of the integers 1,2,…,n, with certain properties:

It is ±1 according as ijk… is an even or odd permutation of 1,…,n (all indices different), and 0 otherwise. But that this sign is well-defined is often not mentioned; you have to prove it.

You would probably prefer: it is a skew-symmetric tensor whose components have absolute value 1. But you still need a prescription for the sign of eijk… for arbitrary ijk…, which isn’t determined by the requirement of anti-symmetry alone. So much for the skew-symmetric tensor.

When it comes to sign, Halmos, after careful and systematic development of elaborate machinery, abandons it all and resorts to a standard method, accompanied by a pg of confusing verbosity. The standard method for uniquely assigning sign (Mirsky, Perlis), when the problem is even admitted, is
sgn P = sgn(eijk…) = sign of Πi<j (P(j)−P(i)),
which is quite straight-forward. It is then not difficult to show from this that a transposition of integers changes the sign.

It then follows that all sequences of transpositions from a to b must be either all even or all odd; ie, the sign of eijk… is uniquely determined by whether the permutation is even or odd under the standard definition.

I note that 2 textbooks I have seen, after elaborate and detailed mathematical development of determinants from a definition of multi-linear forms (which, incidentally, ultimately depends on the standard definition of determinant anyhow), define the sign of a permutation either by graphical means, or as plus if even and minus if odd, ignoring the uniqueness problem.

Appendix: Halmos, and what I think of what you say.

I had no objection to Halmos’ book till I hit the sign of a permutation, but I was amazed by a statement of his that he developed multi-linear algebra in order to define determinants. I already know what determinants are, but if they are used to clarify the abstractions, fine; they didn’t.

Frankly, I was pleasantly surprised that every definition and theorem followed logically from previous ones. Wish I could remember them- my limitation. It all came to a crashing halt when it came to defining the sign of a permutation.

I note that using the language of Halmos does not equate to equality with Halmos.

When I was a maintenance man, one of the tenants, a young lady, recited to the maintenance crew a formula (I believe it was the wave equation). I was kind enough, which I generally am not, not to ask her what it meant nor to point out that I could teach a six-year old to recite the same formula.

I suppose she thought that reciting the formula would justify her circumstances and status, as opposed to ours. That’s why it’s vital not to be caught out. Unfortunately, the same stratagem is not available to me because I have a lousy memory, though I can remember enough to use it occasionally.

My apologies for responding with a diatribe. As the rabbit said when it sat down in the snow, my tale is told.

4. multi-linear functional to determinant

Halmos actually derived the determinant from general principles of multi-linear forms (functionals), without any gaps, in a logical way. In outline:

(x1,…,xn) are vectors belonging to Vn, and:
def: multi-linear (n-linear) form w(x1,…,xn):
w(x1,…,xi+xi’,…,xn) = w(x1,…,xi,…,xn) + w(x1,…,xi’,…,xn)
w(x1,…,αxi,…,xn) = αw(x1,…,xi,…,xn), for all i.

def: w is skew-symmetric if πw = −w for every odd permutation π of the indices (and πw = w for π even).
def: w is alternating if w(x1,…,xn) is zero whenever any two of the x’s are equal.

Theorem: Every alternating multi-linear form is skew-symmetric.

Now let A be a linear transformation on V.

Theorem: The vector space of alternating n-linear forms on Vn is one-dimensional;
ie, to every A there exists a scalar δ(A) st Aw = δ(A)w, where (Aw)(x1,…,xn) = w(Ax1,…,Axn).

1) def: δ(A) = det(A).
then:
2) det(A)·w(x1,…,xn) = w(Ax1,…,Axn)*

If A is the matrix of a linear transformation in some coordinate system (x1,…,xn), and each xi is replaced by aijxj (summation convention), then

3) det A = ΣP (sgn P) a1,P(1) ⋯ an,P(n),

which is the same as the standard def: det A = eijk… a1i a2j …,
where eijk… is plus if ijk… is an even permutation of 12…n and minus for an odd permutation;
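Formula 3), the sum over permutations, can be sketched directly in Python (my illustration; the sample matrix is arbitrary):

```python
from itertools import permutations

def det(A):
    # det A = sum over permutations p of sgn(p) * a_{1,p(1)} ... a_{n,p(n)}
    n = len(A)
    total = 0
    for p in permutations(range(n)):
        # sign via inversion count
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
        term = -1 if inv % 2 else 1
        for i in range(n):
            term *= A[i][p[i]]
        total += term
    return total

A = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]
assert det(A) == -3   # cofactor expansion gives the same value
```

The n! terms make this hopeless for large n, but it matches the definition term for term.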

* The explanation is very difficult to see.
For V2:
w(Ax1,Ax2) = w(a11x1+a12x2, a21x1+a22x2), and after expansion as a multi-linear form,
w(Ax1,Ax2) = a12a21w(x2,x1) + a11a22w(x1,x2),
but w(x2,x1) = −w(x1,x2) (w is skew-symmetric), so that:

w(Ax1,Ax2) = (a11a22 − a12a21)w(x1,x2), and
det A = a11a22 − a12a21
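The 2×2 computation can be checked numerically; here w is the standard alternating bilinear form on V2 and A is an arbitrary matrix (an illustration, not Halmos' text):

```python
# Check w(Ax1, Ax2) = det(A) * w(x1, x2) for a sample A and vectors.
A = [[2, 3], [1, 4]]                      # det A = 2*4 - 3*1 = 5

def mat_vec(A, x):
    return [A[0][0]*x[0] + A[0][1]*x[1],
            A[1][0]*x[0] + A[1][1]*x[1]]

def w(x, y):
    # the standard alternating bilinear form on V2
    return x[0]*y[1] - x[1]*y[0]

x1, x2 = [1, -2], [3, 7]
assert w(mat_vec(A, x1), mat_vec(A, x2)) == 5 * w(x1, x2)
```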

There is one idiosyncrasy buried in Halmos’ general derivation: for a 3d orthonormal coordinate system, eijk is positive for a right-hand coordinate system and negative for a left-hand coordinate system. I don’t know how this plays out for a general orthonormal coordinate system, ie, what’s right-handed and what’s left-handed; as far as I can see, the question is ignored.

The general development is logically structured without gaps, but is very difficult to follow because of the abstraction and having to remember relevant definitions and where they were. Without some sense of what is going on, it’s almost impossible. “Multi-linear forms” in Akivis and Goldberg helps to give an intermediate perspective. Beyond that, unless you can accept and memorize abstract definitions and not worry about what they mean, it’s pretty much inaccessible.

But I must admit that it (multi-linear form (functional) to determinant) could be done without a catch, or reference to the standard def of determinant. Quite impressive. I wonder where he got it from. Authors don’t always reference the work they got their material from. But Halmos references Van der Waerden, which, if you look up the contents on Amazon, has sections on multi-linear forms and determinants. And it’s available (1950) for a quarter (ok, shipping also), so I got it to satisfy my curiosity. (damn, just got confirmation for Vol II but the ad said Vol I).

5. Re: Determinant of transposed matrix

Van der Waerden is widely regarded as "the father of modern algebra". He was a student of perhaps the greatest female mathematician ever, Emmy Noether, who literally revolutionized algebra in the early 20th century.

As far as your specific question as to "which" oriented coordinate systems are positive, and which are negatively oriented, it's largely a matter of convention. Given a hyperplane (a space of n-1 dimensions) in an n-dimensional inner product space, we have two possible choices for a "unit normal" to that hyperplane (for example, in 3 dimensions the unit normal to the xy-plane might be chosen to be (0,0,1) or (0,0,-1), the former leading to the "right-hand rule", the latter to the "left-hand rule").

So basically, we choose det so that:

det((1,0,...0,0),(0,1,...,0),...,(0,0,...,1)) = 1, where we consider the STANDARD ordered basis {(1,0,...0,0),(0,1,...,0),...,(0,0,...,1)} to be "positively oriented". It's just convention to assign (1,0,0) as "the (positive) unit x-axis", (0,1,0) as "the unit y-axis", etc.; there is no inherent GEOMETRIC reason to do so. "x", "y" and "z" are just names; there is an isomorphic vector space that uses these in a "jumbled" order (this is similar to using the convention that "up/right" is positive for 2-dimensional space, or that "counterclockwise" is positive for angular measurements).

The important fact is that we have just two orientations to choose from, and the Levi-Civita symbol tells us whether we keep the same orientation, or reverse it, when changing from one coordinate system to another. Simply put: if we have two orthogonal vectors in an ordered orthonormal basis, and we exchange the order of those two (which is what a transposition does), we get the "mirrored" orientation.

Yes, it IS "abstract" and abstract information is typically of higher "density" than more "down-to-earth" expositions. This is both the strength AND the weakness of an abstract approach. Most "naive" explanations are tied in their terminology and methodology to the application at hand, abstractions are more portable, but at the same time much harder to follow (one can visualize Euclidean space in 2 or 3 dimensions "concretely", but a general n-dimensional space over an arbitrary field is a bit harder to do).

I believe that what is called "multilinear algebra" has its origins in the work Die Lineale Ausdehnungslehre, ein neuer Zweig der Mathematik, by Hermann Grassmann (1844). This work was regarded as "too confusing and unintelligible" by the mathematicians of his day, and the reception it received so dismayed Grassmann that he quit mathematics entirely and devoted the rest of his life to translating ancient texts in Sanskrit. The modern notion of "vector space" was somewhat slow in developing, although individual examples such as the Euclidean plane, the Argand plane, and the quaternions were already in widespread use by the time Peano gave what are essentially the "modern" definitions in 1888.

The corner-stone of linear algebra (as it is commonly learned by mathematicians today) is that for a vector space, knowing what a basis does tells you everything. For finite-dimensional spaces, a basis is a finite set, and we can use combinatorics to study the various possibilities (this is where permutations come in).

The power of the abstract approach shows its utility when one thinks of vector spaces as defined by the axioms, instead of say: "points in space". It can then be seen that we can have "spaces of functions" and apply "geometric" concepts to "analytic problems" (this is what happens with fourier analysis: we choose "trig functions of multiples of a given period" as a basis for a space of periodic functions, the "fourier coefficients" are given by certain integrals which turn out to be inner products). We can then analyze how functions behave in terms of their eigenfunctions under a given linear operator, which can turn a messy problem into an easier one.

Multi-linear forms also provide a succinct language for studying differential geometry, which has its uses in say, analyzing flows on a surface. The determinant of the Jacobian matrix (the linear function corresponding to the tangent plane of the surface at a given point) provides us with a "scaling" of how "far from flat" our surface is. The basic idea is, we can examine what is happening on "curvy things" by looking at them locally where they are "almost linear", and then patch all the local bits together, to find out what is the "net result". Modern 3-D graphics leverages this by "polygonalizing" a 3-D object, and using linear transformations on the polygons (which are now Euclidean objects) to describe movement.

Many people won't "need" the highest levels of abstraction. I can assure you, though, that most professional mathematicians *will*. Your typical 2nd-year college math student would have been an intellectual GIANT in the 17th century, having the same knowledge. Such is the nature of progress.

6. Re: Determinant of transposed matrix

I take back my opinion of Halmos’ development of determinants from alternating multilinear functionals.
As defined by Halmos, an alternating multi-linear functional has all the properties of a determinant and is unique. You can stop right there. A multilinear alternating functional IS a determinant, ie, just another name for it.

So the statement that a multilinear functional is a determinant is correct in the same sense that if I define blahblah to be a determinant then blahblah is a determinant.

So a determinant is what is given by the standard definition, the abstract blahblah notwithstanding:

det A = eijk… a1i a2j ⋯ ank

7. Re: Determinant of transposed matrix

Almost. The function:

F(v1,..,vn) = 2*det(v1,...,vn) is also an alternating multi-linear functional.

So we need to fix the value of F at some linearly independent collection of n-vectors to know "which" alternating multi-linear function we have.

Conventionally, the identity matrix is used, since the standard (ordered) basis is usually a priori known to be linearly independent.

You are correct in one thing, though: any alternating multi-linear functional (n-linear on n-vectors) which returns the value 1 on the standard ordered basis, IS the determinant, no matter how many convoluted "other" formulas we describe it by. So pick the formula you like best, that's perfectly fair.
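A tiny 2×2 sketch of this normalization point (my illustration): F = 2·det is still alternating and multilinear, and only its value on the standard basis separates it from det.

```python
# det and 2*det are both alternating multi-linear; the value on the
# standard ordered basis is what distinguishes them.
def det2(v1, v2):
    return v1[0]*v2[1] - v1[1]*v2[0]

def F(v1, v2):
    return 2 * det2(v1, v2)

e1, e2 = [1, 0], [0, 1]
# F is still alternating ...
assert F([3, 4], [3, 4]) == 0
# ... but only det2 satisfies the normalization on the standard basis
assert det2(e1, e2) == 1 and F(e1, e2) == 2
```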

8. Re: Determinant of transposed matrix

Actually, you don't even need the definition of alternating multilinear functional. It's just a tortuous, obscure way of finding what you already know.

Let x1,x2,x3 be three vectors in V3 with a basis e1,e2,e3. Extension to any dimension is obvious.

A multilinear functional w(x1,x2,x3) is uniquely defined if w(ei,ej,ek) is specified for all i,j,k.

Let w(ei,ej,ek) =ϵijk, the Levi-Civita symbol.

If x1=αiei, x2=βjej, x3=γkek (summation convention),

w(x1,x2,x3) = w(αiei, βjej, γkek) = αiβjγk w(ei,ej,ek) = ϵijk αiβjγk,
which is the determinant of the matrix with rows x1,x2,x3.
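This contraction can be checked directly in Python (an illustration; the product formula used for ϵijk is a standard closed form for 3 indices):

```python
from itertools import product

def eps(i, j, k):
    # Levi-Civita symbol on {1,2,3}: the product is 0, 2, or -2,
    # so integer-dividing by 2 gives 0 or +/-1 exactly
    return (j - i) * (k - j) * (k - i) // 2

def det3(x1, x2, x3):
    # w(x1,x2,x3) = eps_ijk * alpha_i beta_j gamma_k, summed over i,j,k
    return sum(eps(i, j, k) * x1[i-1] * x2[j-1] * x3[k-1]
               for i, j, k in product((1, 2, 3), repeat=3))

assert det3([1, 0, 0], [0, 1, 0], [0, 0, 1]) == 1
assert det3([1, 2, 3], [4, 5, 6], [7, 8, 10]) == -3
```

The result agrees with the determinant of the matrix whose rows are x1, x2, x3.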

So we have defined a particular multi-linear functional to be a determinant. So what?

Halmos notes he only introduced multi-linear functionals in order to derive the determinant function. But to do so, he assumes the determinant function. The whole thing is circular, just like the other “derivations” of determinant that start with the definition of a multi-linear form and rest on the definition of determinant.

Ref: Halmos

9. Re: Determinant of transposed matrix

Originally Posted by davidciprut
For every matrix ordered (n x n)

det(A)=det(At)

Can someone prove this?
So far the OP has not been answered. Post #2 is incorrect, and other posts purporting to prove it are an incoherent collection of abstract terms and unproven assertions.

For a proof see:
Determinant of Transpose (DetA=DetAT)

10. Re: Determinant of transposed matrix

Originally Posted by Hartlw
So far the OP has not been answered. Post #2 is incorrect, and other posts purporting to prove it are an incoherent collection of abstract terms and unproven assertions.

For a proof see:
Determinant of Transpose (DetA=DetAT)
I beg to differ: in post #7, I showed what such a proof entails using the "sgn" definition of determinant. It really makes no difference whether you define "sgn" or the Levi-Civita symbol first; each can be used to define the other.

Personally, I find using "sgn" to prove things about determinants a bit of a bother, the notation is cumbersome, and often hard to follow. In this respect, the Levi-Civita symbol and the summation convention represent a notational improvement.

The definition of the sgn function has nothing to do with determinants, per se. It is a function of permutations. In fact, it is the UNIQUE nontrivial group homomorphism from Sn to {1,-1}, although proving this is harder than it looks. As I have indicated in other posts, the main problem is ensuring that sgn is "well-defined", since a given permutation can have several different decompositions into transpositions.

The key to UNDERSTANDING the proof given on the proof wiki is to see that a sum of the form:

∑ sgn(σ)a1σ(1)...anσ(n)

can be re-arranged, term-by-term as:

∑ sgn(σ-1)aσ-1(1)1...aσ-1(n)n

because sgn(σ) = sgn(σ-1) (if σ can be written as k transpositions, we can write σ-1 as the same k transpositions in reverse order).

As σ ranges over all possible permutations, so does σ-1, so we can write the latter sum as:

∑ sgn(σ-1)aσ-1(1)1...aσ-1(n)n = ∑ sgn(τ)aτ(1)1...aτ(n)n = det(AT).

Explicitly writing this out for n > 3 is prohibitively time-consuming.

Basically the reasoning behind this is that if σ sends j to σ(j) then σ-1 sends σ(j) to j, and j to σ-1(j). Again, for 3 dimensions:

let's say σ = (1 3 2), so the term this occurs in is a13a21a32. sgn(1 3 2) = 1, so this term comes in positive.

In this case σ-1 = (1 2 3). this term is a12a23a31. Note that this is just the term above for the transposed matrix (we have to re-order the factors aij in this term, but that's OK, field elements commute).

The correspondence {1,2,...,n} to {σ(1),σ(2),...,σ(n)} is one-to-one; permutations are by definition bijective maps. So we can carry out this re-arrangement for all terms in the sum, with each "rearranged" term coming in with the same sign as the original, and this process will in both cases account for all the terms possible. The original terms are the determinant; the "rearranged" terms are the determinant of the transpose.
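The whole argument can be verified numerically for a small matrix (my sketch, using the "sgn" sum discussed above):

```python
from itertools import permutations
from math import prod

def sgn(p):
    # sign via inversion count
    n = len(p)
    inv = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
    return -1 if inv % 2 else 1

def leibniz_det(A):
    # sum over sigma of sgn(sigma) * a_{1,sigma(1)} ... a_{n,sigma(n)}
    n = len(A)
    return sum(sgn(p) * prod(A[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

A = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]
At = [list(row) for row in zip(*A)]
# term-by-term rearrangement: sgn(sigma) = sgn(sigma inverse), so
assert leibniz_det(A) == leibniz_det(At) == -3
```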

Now, nowhere did I mention "alternating multilinear functions", all that was used was properties of permutations. Why permutations? Because we have a finite set of possible indices, and we want exactly one index from each row AND each column to multiply entries together.

Granted, someone studying linear algebra may not want to know THIS much group theory. But if you want to use the Levi-Civita symbol to DEFINE determinants (and perhaps to calculate them, as well), you're going to have to EXPLICITLY define how it's calculated. As I see it, you have two choices:

1) Define it as an alternating multi-linear tensor (this is, actually, not all that uncommon) similar in spirit to the "Kronecker delta" (another tensor)
2) Define it in terms of the sgn function (in which case, you have to define...the sgn function, which leads back to permutations).

Saying: "nothing changes for ijk...." doesn't exactly tell you what:

ε15432 is, although it's clear from inspection that it's non-zero. In fact, it's 1, but that's not clear from inspection.
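For the record, ε15432 can be settled by counting inversions of (1,5,4,3,2) (a quick sketch):

```python
from itertools import combinations

p = (1, 5, 4, 3, 2)
# count pairs that are out of order
inversions = sum(1 for i, j in combinations(range(5), 2) if p[i] > p[j])
assert inversions == 6            # an even number of inversions
assert (-1) ** inversions == 1    # so eps_15432 = +1, as claimed
```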
