Hi, I've got a problem regarding stereo vision. I've solved half of it, but the remaining part is a bit tricky. I'm hoping to save a lot of time by asking you guys for help, so I'm glad I found this forum.
In this picture, two parallel cameras are looking at some points. My first problem was to calculate the position of the points. It was fairly easy, using some basic trigonometry.
I know the angle/field of view for the cameras. I also know the distance between them (pink line), and the resolution of the pictures from both cameras, and thereby the positions of the points in the pictures.
So by using trigonometry I can calculate the position of the points.
Next problem was to go backwards. In real life, it would be quite tricky to measure that pink line. So by measuring the distance between two points on the floor instead,
and calculating backwards using the same formula as before, I could work out the distance between the cameras (pink line). There are 3 points on the floor in these pictures, but I only needed the distance between two of them.
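In code, that back-calculation can be sketched like this (just a sketch with made-up names — it relies on the reconstruction scaling linearly with the baseline, which holds for the parallel setup since depth is proportional to the baseline):

```python
import math

def baseline_from_known_distance(p1_guess, p2_guess, true_distance):
    """Recover the real camera baseline from two reconstructed floor points.

    p1_guess, p2_guess: 3D points triangulated with an assumed baseline of 1.
    true_distance: the measured real-world distance between those floor points.
    Because the whole reconstruction scales linearly with the baseline, the
    ratio of measured to reconstructed distance is the real baseline.
    """
    guessed = math.dist(p1_guess, p2_guess)   # distance in the scaled-by-1 reconstruction
    return true_distance / guessed            # real baseline = that scale factor * 1
```

So I triangulate everything once with a pretend baseline of 1, measure the distance between two of the reconstructed floor points, and divide.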
So far so good, I've worked out all above. Here's my math for calculating 3D points from pictures from 2 parallel cameras:
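In code, my calculation looks roughly like this (a sketch only — it assumes a pinhole model, pixel coordinates measured from the image center with x to the right and y up, and made-up names):

```python
import math

def triangulate_parallel(x_left, x_right, y, baseline, fov_deg, img_width):
    """Triangulate one point seen by two parallel cameras.

    x_left, x_right: the point's horizontal pixel position in each image,
    measured from the image center. y: vertical pixel position (same in
    both images when the cameras are level). Returns (X, Y, Z) relative
    to the left camera.
    """
    # Focal length in pixels, from the horizontal field of view.
    f = (img_width / 2.0) / math.tan(math.radians(fov_deg) / 2.0)
    disparity = x_left - x_right       # how far the point shifts between images
    Z = f * baseline / disparity       # depth: similar triangles
    X = x_left * Z / f                 # sideways offset
    Y = y * Z / f                      # height offset
    return X, Y, Z
```

This is the same similar-triangles idea as my trigonometry, just written out.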
Now to the stuff that I haven't worked out yet. What if the cameras are rotated a little? Like this:
How do I calculate the position of points in 3D when the cameras are rotated?
And, can I use 3 points on the floor with measured distances to calculate the distance between the cameras AND their rotation?
I find trigonometry easier than matrices, but I googled a bit and found people talking about these:
Fundamental matrix (computer vision) - Wikipedia
Essential matrix - Wikipedia
However, I didn't understand that math...
In my examples, for this to work the cameras must be placed at the same height, and they can't be tilted up or down (pitch) or rotated around the direction they are looking (roll).
That means that when setting up cameras in real life, I must make sure this rule is followed. It would be great if the math could handle all sorts of rotations, so that I don't have to be so careful when setting up the
cameras. Could the math handle unknown rotations in all directions?
And what if I even didn't have to care about the focal length? Could the math handle unknown focal lengths?
I don't know if matrices are a better solution than trigonometry; let's see what you guys think. If I can understand the matrix math, I could consider using that.
So my questions are:
How do I calculate 3D points from pictures taken from two rotated cameras (left/right rotation)?
How do I work out the distance/angle between the cameras when the distance between some points on the floor is known?
And finally what if I rotate the cameras an unknown amount in all directions / have them at different (unknown) heights / different unknown focal lengths / everything at once?
Hope I made my problem clear enough, and thank you in advance! I will really appreciate help with this problem.