I'm completely stuck on a subject and need some more information to get a grip on it.
I'm working on an installation that has two cameras. One is mounted on top looking downward (x, y); the other is on the side (z). With this system I can track the x, y, z position of an object that is in view of both cameras. In a script I combine these positions to recreate a 3D model of the movements.
Everything works fine, but I'm running into a problem related to perspective distortion. The tracked movements always result in a pyramid-shaped model. If I lower an object in the centre of the camera's view, the x, y positions are sent correctly to the 3D-generation script. But the further the object is to the left or right on the x-axis, the more the 3D script draws a diagonal line, even though I lower the object completely straight. I know why this happens, but I don't know what it is called, or how to convert these real-world perspective coordinates into 3D coordinates that correct for the distortion.
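To make the effect reproducible for anyone reading: here is a minimal sketch of what I suspect is going on, a pinhole projection where the pixel position scales with the object's distance from the camera. The focal length and image centre below are made-up values, not measurements from my setup.

```python
# Sketch of a pinhole camera model (assumed values, not my real cameras).
f = 800.0   # focal length in pixels (assumption)
cx = 320.0  # image centre x in pixels (assumption)

def project(X, dist):
    """World offset X at distance `dist` from the camera -> pixel x."""
    return f * X / dist + cx

def unproject(u, dist):
    """Inverse: pixel x back to world offset, given the distance."""
    return (u - cx) * dist / f

# An object 0.5 units off-centre, lowered away from the top camera:
# the pixel position drifts toward the image centre as `dist` grows,
# which is exactly the diagonal line my 3D script draws.
for dist in (1.0, 1.5, 2.0):
    u = project(0.5, dist)
    print(dist, u, unproject(u, dist))  # unproject recovers 0.5 each time
```

So the correction I think I'm after is the `unproject` step: multiply the centred pixel offset by the distance and divide by the focal length.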
Does anyone know what this phenomenon is called, whether there are formulas for correcting this distortion, or which area of mathematics is involved? Just some pointers to get me on track.
Thanks in advance.
But I have to take the distance on the z-axis into account, right?
So it is a static value instead of an increasing difference?
The problem is that because the z value comes from a camera, I'm not sure of that value either. The same perspective phenomenon happens with that camera.
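To make the coupling concrete: the top camera's scale factor depends on the object's height, which the side camera measures, and the side camera's scale factor depends on a coordinate the top camera measures. A sketch of how I imagine solving the two together by simple fixed-point iteration; all the geometry here (camera placements, focal lengths) is an assumption for illustration:

```python
# Assumed geometry: top camera at height H looking down, side camera at
# distance D from the x-axis. Focal lengths are made-up values.
f_top, f_side = 800.0, 800.0
H = 3.0  # height of the top camera above the floor (assumption)
D = 3.0  # distance of the side camera from the tracked plane (assumption)

def recover(u, v, w):
    """u, v: centred pixel offsets from the top camera;
    w: centred pixel offset from the side camera.
    Returns the estimated (x, y, z) in world units."""
    z = 0.0                      # initial guess: object on the floor
    for _ in range(20):          # iterate until the estimates settle
        dist_top = H - z         # top camera's distance to the object
        x = u * dist_top / f_top
        y = v * dist_top / f_top
        dist_side = D - y        # side camera's distance to the object
        z = w * dist_side / f_side
    return x, y, z
```

Each pass uses the current z estimate to correct x and y, then uses the corrected y to re-estimate z; for moderate offsets the loop converges quickly.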
Am I wrong in thinking I also need a polar/cartesian conversion?
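My current understanding on the polar question: for an ideal pinhole camera the angle route gives exactly the same answer as linear scaling, because the viewing angle is atan(u/f) and tan(atan(u/f)) = u/f, so no separate polar step seems needed. A tiny check (the focal length is again an assumed value):

```python
import math

f = 800.0  # focal length in pixels (assumption)

def world_offset_via_angle(u, dist):
    theta = math.atan2(u, f)       # centred pixel offset -> viewing angle
    return dist * math.tan(theta)  # angle -> world offset at that distance

# Identical to the linear form u * dist / f:
print(world_offset_via_angle(200.0, 2.0), 200.0 * 2.0 / 800.0)
```

Where angles would genuinely matter, I think, is if the lens itself distorts (fisheye etc.), which is a separate correction from the perspective scaling above.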