Okay, I hope you will read this whole thing, because I'm trying
to go somewhere useful with it, and was hoping someone might have
answers that could help me.
Regarding 3D animation graphics: I'm not sure if you know
what temporal coherence is, but it's when each frame
transitions smoothly and seamlessly to the next,
without any weird artifacts to distract the eye from
the smoothness.
An interpolated transformation ("tween") is a good
example of this. A good counter-example is when one
uses one of those painterly stroke-based renderers to
create an animation and ends up with
creepy-crawly strokes that resemble what some call a
"shower door" effect. This is of course very
distracting to the viewer, and is often considered
undesirable.
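For concreteness, the tween idea can be sketched in a few lines. This is just a toy Python sketch of mine (the names are hypothetical, not from any animation package) that linearly interpolates matched vertices between two 2D keyframe shapes:

```python
# A minimal sketch of a 2D shape tween: linearly interpolate each
# vertex of a shape between two keyframes. Assumes the two shapes
# already have matching vertex counts and ordering.

def tween(shape_a, shape_b, t):
    """Interpolate between two shapes (lists of (x, y) vertices)
    at parameter t in [0, 1]."""
    if len(shape_a) != len(shape_b):
        raise ValueError("shapes must have the same vertex count")
    return [
        (ax + (bx - ax) * t, ay + (by - ay) * t)
        for (ax, ay), (bx, by) in zip(shape_a, shape_b)
    ]

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
diamond = [(0.5, -0.2), (1.2, 0.5), (0.5, 1.2), (-0.2, 0.5)]
midpoint = tween(square, diamond, 0.5)  # halfway between the two keys
```

Evaluating this at a dense series of t values produces the smooth in-between frames; the hard part in practice is establishing the vertex correspondence, not the interpolation itself.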
3D animation can also be rendered as a sequence of
2D-projected vector geometry. In such a 2D vector-rendering
of a 3D animation, it is still possible to perceive
2 distinct levels of temporal coherence within the same
final animation output: the 2D and 3D levels.
In the "shower door effect" rendered animation example
I cited, there is 2D temporal decoherence, but
temporal coherence is still there at the 3D level,
because its underlying 3D movement is being depicted
seamlessly (i.e. there's no choppiness; the "shower door"
effect is not the same as choppiness).
So this then finally brings us to the exact opposite
case in 2D vector rendering of 3D animation: where you
have 2D Coherence with 3D decoherence.
That's where the 3D animation information is choppy,
but the final 2D-vector frames still maintain a
seamless/smooth transition from one frame to the next,
which would be accomplished by 2D tweens spanning the
gaps in the 3D choppiness.
And so that's where I was thinking about the idea of
rendering to 2D-vector output, since only 2D vector
offers the possibility of further 2D-tweening even
when the underlying/preceding 3D information was
choppy (2D coherence with 3D decoherence).
So how would I achieve that today, using a manual
approach? Well, I'd vector-render my 3D animation at a
very low frame rate for choppiness (e.g. 3 frames/sec).
I'd then treat this resulting sparsely-rendered 2D
frameset as a set of 2D keyframes intended for a
higher ultimate frame rate (e.g. 15 fps). I would then
have to PAINSTAKINGLY AND LABORIOUSLY go through each
of these 2D-vector frames to tweak it and lay down
appropriate shape tweens (interpolated transforms), so
that the 2D geometry in each frame would smoothly
tween into that of the next frame. The end result would be an
animation having temporal coherence in 2D at 15 fps,
even while having temporal decoherence at the 3D
level.
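The in-betweening step of that manual workflow could in principle be sketched in code. Here's a toy Python version (all names are mine; it assumes matched vertex counts between consecutive keyframes, which is precisely the hard correspondence problem a real tool would have to solve) that expands 3 fps keyframes to roughly 15 fps by inserting four tweened frames per keyframe pair:

```python
# A rough sketch of the manual workflow automated: take sparse 2D
# keyframes (say 3 fps) and insert factor-1 linearly interpolated
# frames between each pair, yielding a higher target rate
# (factor=5 turns 3 fps into ~15 fps).
# Each frame is a list of (x, y) vertices with matching counts.

def expand_keyframes(keyframes, factor):
    """Insert factor-1 tweened frames between each keyframe pair."""
    out = []
    for a, b in zip(keyframes, keyframes[1:]):
        for step in range(factor):
            t = step / factor
            out.append([
                (ax + (bx - ax) * t, ay + (by - ay) * t)
                for (ax, ay), (bx, by) in zip(a, b)
            ])
    out.append(keyframes[-1])  # keep the final keyframe itself
    return out

keys = [[(0.0, 0.0)], [(5.0, 0.0)], [(5.0, 5.0)]]  # sparse 3 fps frames
smooth = expand_keyframes(keys, 5)                  # ~15 fps frames
```

This gives 2D temporal coherence at the high frame rate regardless of how choppy the 3D sampling was, which is exactly the "2D coherence with 3D decoherence" case described above.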
Why do this?
Well, if you've ever seen cel-shaded 3D animation on
TV or in the movies, it's very easily identifiable and
glaringly stands out from the regular 2D animation,
because that cel-shaded 3D animation is still
communicating 3D temporal coherence to the viewer. If
you were to render only a still image, then the
cel-shading looks believable as 2D. But as soon as you
try to render an animation with it, then the 3D
temporal coherence gives the whole thing away as 3D
CG.
But if you could achieve a fully smooth 2D-tweened
animation, even while its underlying 3D information
was choppy, then you wouldn't have that troublesome 3D
temporal coherence that ruins the illusion of 2D.
How to achieve this?
Well, that's where you need to simultaneously deal
with the parent 3D geometry, as well as the
associated 2D child geometry derived from it.
Thus it would be conceivable to link/reference the 2D
curves in any 2D vector-rendering back to its 3D
parent geometry.
The purpose of doing so would be for determining how
to lay down the 2D vertices and their tweens to
achieve smooth 2D-tweening from one 2D frame to the
next.
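As a sketch of what such a linkage might look like in code (the names and structure here are purely hypothetical), each 2D vertex could carry the ID of the 3D element it was projected from, so frame-to-frame correspondence falls out of matching IDs rather than guessing which curve became which:

```python
# A sketch of linking 2D render output back to its 3D parent geometry.
# Each frame is a dict mapping a 3D-source ID to the 2D (x, y) position
# that element projected to in that frame. Matching IDs across frames
# gives the tween correspondence; unmatched IDs mark silhouette
# elements that appeared or vanished between samples.

def build_correspondence(frame_a, frame_b):
    """Return (pairs, appeared, vanished) between two frames."""
    shared = frame_a.keys() & frame_b.keys()
    pairs = {i: (frame_a[i], frame_b[i]) for i in shared}
    appeared = frame_b.keys() - frame_a.keys()  # needs fade-in handling
    vanished = frame_a.keys() - frame_b.keys()  # needs fade-out handling
    return pairs, appeared, vanished

f1 = {"v1": (0.0, 0.0), "v2": (1.0, 0.0)}
f2 = {"v1": (0.2, 0.1), "v3": (0.5, 0.5)}
pairs, appeared, vanished = build_correspondence(f1, f2)
```

The appeared/vanished sets are where the topology problem bites: elements entering or leaving the silhouette have no partner to tween toward, so they need special handling.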
So that last sentence sounds like the critical
black-box, devil-is-in-the-details step. The key word
here is TOPOLOGY, because that's what a 3D surface is:
a topological manifold.
Topology - Wikipedia, the free encyclopedia
Manifold - Wikipedia, the free encyclopedia
What one would look at is the dynamically animating 3D
geometry and the transition paths of its local
extrema used to create the 2D vector renders. What one
would have to test for is continuity/connectedness
with regard to these transition paths. As with most
software algorithms, one would have to do this using
linear algebra / matrix / numerical methods rather
than analytical methods.
Riemannian manifold - Wikipedia, the free encyclopedia
Hilbert space - Wikipedia, the free encyclopedia
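As a crude numerical stand-in for that continuity test (my own toy sketch, not a real topological algorithm), one could sample each tracked extremum's 2D path over the frames and flag frame-to-frame jumps above a threshold as discontinuities, i.e. places where the correspondence breaks and a tween can't simply span the gap:

```python
# A discrete heuristic for path continuity: treat each tracked
# extremum's trajectory as a sequence of 2D samples (one per frame)
# and report the frame indices where it jumps farther than max_jump,
# suggesting a topological event (silhouette change, occlusion, etc.).

import math

def path_discontinuities(path, max_jump):
    """path: list of (x, y) samples over frames.
    Returns indices where the point jumps farther than max_jump."""
    breaks = []
    for i in range(1, len(path)):
        dx = path[i][0] - path[i - 1][0]
        dy = path[i][1] - path[i - 1][1]
        if math.hypot(dx, dy) > max_jump:
            breaks.append(i)
    return breaks

extremum_path = [(0.0, 0.0), (0.1, 0.0), (0.15, 0.05), (2.0, 2.0)]
```

Running this over all tracked extrema would partition the animation into spans where plain 2D tweening is safe and events where something smarter is needed.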
So the reason I'm posting here is to ask if anyone has
any insight on how I might be able to achieve this, or
if you know anyone who would.