Frenet Frame: Are these two representations the same?

I have a regular curve, $\displaystyle \underline{a}(s)$, in ℝ^{N} (parameterised by its arc length, $\displaystyle s$). I am trying to define the moving (Frenet) frame of orthonormal vectors $\displaystyle \left\{\underline{u}_1(s),\underline{u}_2(s),\dots , \underline{u}_N(s) \right\}$. However, looking in different books, I find subtly different definitions (both based on Gram-Schmidt orthogonalisation). I believe the two methods (described in full below) are equivalent, essentially because $\displaystyle \underline{u}_{k-1}^{\prime}(s)$ is a linear combination of the derivatives $\displaystyle \underline{a}^{\prime}(s), \underline{a}^{ \prime \prime}(s), \dots, \underline{a}^{(k)}(s)$. However, I would like to be absolutely sure. To sum up, my question is:

Do the following two approaches yield the same result?

$\displaystyle \underline{u}_k(s)=\frac{\underline{a}^{(k)}(s) - \sum\limits_{m=1}^{k-1}\left(\underline{u}_m^T(s)\underline{a}^{(k)}(s) \right) \underline{u}_m(s)}{\left\Vert \underline{a}^{(k)}(s) - \sum\limits_{m=1}^{k-1}\left(\underline{u}_m^T(s)\underline{a}^{(k)}(s) \right) \underline{u}_m(s) \right\Vert}$

... suggested in, for example, [1, p. 13] and [2].

$\displaystyle \underline{u}_k(s)=\frac{\underline{u}_{k-1}^{\prime}(s) - \sum\limits_{m=1}^{k-1}\left(\underline{u}_m^T(s)\underline{u}_{k-1}^{\prime}(s) \right) \underline{u}_m(s)}{\left\Vert \underline{u}_{k-1}^{\prime}(s) - \sum\limits_{m=1}^{k-1}\left(\underline{u}_m^T(s)\underline{u}_{k-1}^{\prime}(s) \right) \underline{u}_m(s) \right\Vert}$

... suggested in, for example, [3, p. 159].

In other words, is the subspace spanned by $\displaystyle \left\{\underline{a}^{\prime}, \underline{a}^{ \prime \prime}, \dots, \underline{a}^{(k)}\right\}$ the same as the subspace spanned by $\displaystyle \left\{\underline{u}_1, \underline{u}_2, \dots, \underline{u}_{k-1}, \underline{u}_{k-1}^{\prime} \right\}$?
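As a numerical sanity check (not a proof), I compared the two constructions for a unit-speed helix in ℝ^3, using analytic derivatives; assuming I have coded both methods correctly, they agree at the sampled point. (The curve, parameters and sample point here are my own choices for illustration.)

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a list of vectors in order (classical Gram-Schmidt)."""
    basis = []
    for v in vectors:
        w = v - sum(np.dot(u, v) * u for u in basis)  # remove projections onto earlier vectors
        basis.append(w / np.linalg.norm(w))
    return basis

# Unit-speed helix: a(s) = (r cos(w s), r sin(w s), h w s), with w = 1/sqrt(r^2 + h^2)
r, h = 2.0, 1.0
w = 1.0 / np.hypot(r, h)
s = 0.7
th = w * s

# Analytic derivatives of a(s)
a1 = np.array([-r*w*np.sin(th),      r*w*np.cos(th),      h*w])   # a'
a2 = np.array([-r*w**2*np.cos(th),  -r*w**2*np.sin(th),   0.0])   # a''
a3 = np.array([ r*w**3*np.sin(th),  -r*w**3*np.cos(th),   0.0])   # a'''

# Method 1: Gram-Schmidt applied to the derivatives of a
u1, u2, u3 = gram_schmidt([a1, a2, a3])

# Method 2: each step orthonormalizes the derivative of the previous frame vector.
# For this helix, u1 = a' (unit speed), so u1' = a''; u2 = (-cos th, -sin th, 0),
# so u2' = (w sin th, -w cos th, 0) analytically.
v1 = a1 / np.linalg.norm(a1)
u1p = a2
v2 = gram_schmidt([v1, u1p])[1]
u2p = np.array([w*np.sin(th), -w*np.cos(th), 0.0])
v3 = gram_schmidt([v1, v2, u2p])[2]

print(np.allclose(u1, v1), np.allclose(u2, v2), np.allclose(u3, v3))
# all three True for this curve
```

Of course, agreement on one example only supports the conjecture; it doesn't settle the general question.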

__References__:

[1] W. Kühnel, "Differential Geometry: Curves - Surfaces - Manifolds".

[2] Wikipedia, "Frenet–Serret formulas".

[3] H. W. Guggenheimer, "Differential Geometry", McGraw Hill (or Dover Edition), 1963 (1977).

Re: Frenet Frame: Are these two representations the same?

Hey weetabixharry.

In the standard orthonormalization procedures, the first vector is important, since it determines how the rest of the orthonormalization process unfolds.

The frames should be the same if (a) the first two orthonormal vectors are the same in both constructions (under Gram-Schmidt, the first vector is always just the first vector in your list, normalized), and (b) the orientation of the two sets of vectors is the same (which means checking whether the determinant of the matrix of all the vectors is positive or negative).

If the above is the case, then you should always generate the exact same orthonormal basis for both sets.

The reason orientation matters is that different directions change the chirality: for example, i × j = k but i × (−j) = −k.

Basically, the orientation property takes care of most of it, but ultimately, to get exactly the same frame, you want the first two orthonormal vectors to be exactly the same; if the orientation is also correct, the rest will take care of itself.
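As a minimal sketch of the orientation check (with hypothetical frames): an orthonormal frame, written as the rows of a matrix, has determinant ±1, and two frames share an orientation exactly when those determinants share a sign.

```python
import numpy as np

def same_orientation(frame_a, frame_b):
    """Two orthonormal frames (given as lists of row vectors) have the same
    orientation iff the determinants of their matrices share a sign."""
    return np.sign(np.linalg.det(np.array(frame_a))) == \
           np.sign(np.linalg.det(np.array(frame_b)))

i, j, k = np.eye(3)  # the standard right-handed basis of R^3

print(same_orientation([i, j, k], [j, i, k]))  # swapping a pair flips chirality -> False
print(same_orientation([i, j, k], [i, j, k]))  # identical frames -> True
```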

Re: Frenet Frame: Are these two representations the same?

Quote:

Originally Posted by **chiro**

The frame should be the same if a) you have the first two orthonormal vectors be the same (as calculated by the Gram-Schmidt and the first vector will always be just the first one in your list that is normalized) and b) the orientation of the set of both vectors is also the same (which means you need to consider the determinant of the matrix of all vectors to see if it's positive or negative).

Thank you so much for your response. What you have said seems consistent with what I have been doing, but I still can't seem to find a series of logical steps that prove the equivalence. In particular, I wonder if you could elaborate on why your point (a) should lead us towards equivalence.

For the types of vectors I'm working with, I have proved equivalence for the first two orthonormal vectors by writing the expressions out long-hand. (Doing this for the third vector was unbelievably cumbersome - it took me about a day.)

Even though I can see that each new derivative of $\displaystyle \underline{u}(s)$ always introduces a term involving the next derivative of $\displaystyle \underline{a}(s)$, I just can't seem to prove that the two methods lead to the same result.

Thanks for your help!

Re: Frenet Frame: Are these two representations the same?

One approach is to show that, for each new subspace you generate, the next vector to be orthonormalized has the same sign of signed distance from the plane spanned by that subspace in both constructions.

So basically, every time you add a vector to the orthonormalized basis, you create a higher-dimensional plane, and the new orthonormal vector is a normal to that plane.

In line with this, you can use the above to show the equivalence in several ways.

Firstly, if the signs of all the distances agree between the pairs, that is one way; if the plane definition at each stage is the same, that is another.

The equation of a plane through the origin (and these are vectors in a vector space, so the planes do pass through the origin) can be written very simply as n . r = 0, where n is the normal to the plane - and this is exactly the normal you are finding at every step.

Here r is a point on the plane, which is just a linear combination of the vectors produced so far by your orthonormalization procedure.

So if your next vector is p, and you can show that n . p has the same sign for both sets, then you're done as well (remember, r is just a linear combination of all the existing orthonormal vectors, so you can simply add up all the ones you have).

So whether you show the signs agree, show the normals are equal, or show the plane equation is the same at each step - these are all different ways of saying the same thing.
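As a concrete sketch of the n . r = 0 condition and the sign test (all vectors here are hypothetical choices for illustration):

```python
import numpy as np

# A subspace spanned by orthonormal u1, u2 in R^3, with unit normal n.
u1 = np.array([1.0, 0.0, 0.0])
u2 = np.array([0.0, 1.0, 0.0])
n  = np.array([0.0, 0.0, 1.0])   # n . r = 0 for every r in span{u1, u2}

r = 2.0*u1 - 3.0*u2              # any point in the plane through the origin
print(np.dot(n, r))              # 0.0: the plane equation holds

# A candidate "next" vector p lies on one side of the plane; if sign(n . p)
# agrees for both constructions, both pick the same normal direction.
p = np.array([0.5, 1.5, 2.0])
print(np.sign(np.dot(n, p)))     # 1.0
```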

Re: Frenet Frame: Are these two representations the same?

Quote:

Originally Posted by **chiro**

if the plane definition at each stage is the same then that is another way.

The equation for a plane can be written down in a very simple form when it goes through the origin (and these are vectors in a vector space so they will go through the origin) in the way that n . r = 0 where n is the normal to the plane (and this is the exact normal that you will be finding at every step of the way).

Does "**n . r**" mean the inner product of **n** and **r**? So, in my case, **n** is the next orthonormal vector (which we want to calculate)... and **r** is any point lying in the space spanned by the previous orthonormal vectors?

If I understand correctly, when calculating the kth orthonormal vector, it must lie in the (N-k+1)-dimensional space which is orthogonal to the existing frame (and be of unit length). This is what **n . r** = 0 does. However, it is not clear to me why it ensures that the two methods are the same. (We can be sure that the next vector will lie in some (N-k+1)-dimensional subspace for both methods... but we want to show that they're exactly the same vector, in the same 1-dimensional space).

Re: Frenet Frame: Are these two representations the same?

Yeah, basically n will be your new normal and r will be any vector in the subspace you have built up in all the previous steps. If you want a unit normal, you can pick any vector in your existing subspace (for example, by adding up all the vectors you've found already), normalize it, then find a vector for n and normalize that too.

The Gram-Schmidt process basically does this kind of thing repeatedly: you have an existing subspace and you want the next vector orthogonal to it. If every vector is linearly independent of the others, such a vector must exist, but you always have two choices: it can be left- or right-handed, and this is expressed in the sign of the dot product.

The removal of all the projections in Gram-Schmidt is equivalent to the above. You can prove it, if you want, by decomposing an arbitrary vector into its projections onto the subspace and onto the complement of the subspace. (Think of how to write a vector as a linear combination of others, v = ax + by + cz + ..., and note that if x, y, z are orthogonal then the projection coefficient is a = <v,x> when x is unit length.)
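A small sketch of that decomposition (the vectors are hypothetical): the Gram-Schmidt residual is exactly the component of v left over after subtracting its projections onto the orthonormal vectors, and it is orthogonal to each of them.

```python
import numpy as np

# Decompose v into its projection onto the orthonormal set {x, y}
# plus the orthogonal complement.
x = np.array([1.0, 0.0, 0.0])
y = np.array([0.0, 1.0, 0.0])
v = np.array([3.0, -2.0, 5.0])

proj = np.dot(v, x)*x + np.dot(v, y)*y   # coefficients <v,x>, <v,y> since x, y are unit length
comp = v - proj                          # this is exactly the Gram-Schmidt residual

print(proj)                              # [ 3. -2.  0.]
print(comp)                              # [0. 0. 5.]
print(np.dot(comp, x), np.dot(comp, y))  # both 0.0: the residual is orthogonal to the subspace
```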