Hey Encircled.
Can you explain what exactly it is you are trying to do? If you want to measure something, can you write down what you are measuring in terms of a function of the pixels or the data itself?
In a photo each pixel represents an angle.
In an image of two objects with a known angular separation (e.g. two stars), one could count the number of pixels between them and determine that camera's pixels-per-degree ratio for all other purposes.
I try to ignore the true ratio and just use pixels as an angular unit. However, trigonometric functions like sine and cosine then seem to fail. Why is that? I mean, what if I had a camera with very many pixels per angular degree, and one with extremely few; why should it matter? Why can't I simply define that one pixel represents one degree? Or ten?
Hey Encircled.
Can you explain what exactly it is you are trying to do? If you want to measure something, can you write down what you are measuring in terms of a function of the pixels or the data itself?
I want to measure the size of objects in photos. But I am only interested in their relative sizes.
Anyway, say that the distance between points A and B is 354 pixels, corresponding to an angle of 0.1 radians.
And between points B and C, 708 pixels, corresponding to 0.2 radians.
But sin(354)/sin(708) is not equal to sin(0.1)/sin(0.2).
The unit in which angles are measured seems quite important, but I don't quite understand why this is so.
Must I calculate the true number of pixels per radian for each camera, by photographing stars for example, before I can use trigonometry on its photos?
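The mismatch can be seen directly in code: sin() expects radians, so feeding it raw pixel counts gives meaningless values, while converting pixels to radians with a calibration factor first gives a sensible ratio. A minimal Python sketch (the pixels-per-radian value is hypothetical, chosen to match the numbers above):

```python
import math

# Hypothetical calibration matching the numbers above: 354 px = 0.1 rad,
# so this camera would have 3540 pixels per radian.
PIXELS_PER_RADIAN = 3540

ab_px, bc_px = 354, 708
ab_rad = ab_px / PIXELS_PER_RADIAN  # 0.1 rad
bc_rad = bc_px / PIXELS_PER_RADIAN  # 0.2 rad

# Feeding raw pixel counts into sin() treats 354 and 708 as radians,
# i.e. many full turns around the circle -- the result is meaningless.
print(math.sin(ab_px) / math.sin(bc_px))

# Converting to radians first gives the meaningful ratio:
print(math.sin(ab_rad) / math.sin(bc_rad))  # ~0.5025
```

Note that for small angles sin(θ) ≈ θ, which is why the ratio comes out close to 354/708 = 0.5 anyway.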
What "angle" are you talking about? The angle between lines from the camera to one star and from the camera to another star has nothing to do with the angle between the lines, say, from the camera to one tree top and from the camera to another tree top.
Doesn't it?
If the camera has a field of view of 40 degrees horizontally, then when a photo is taken with 2048 pixels horizontal resolution, certainly 2048/40 pixels represent an angle of 1 degree, regardless of whether it's between stars or trees. I want to calculate the pixels/degree 'backwards'. Only practical problems with measurement precision make me think that stars with known separation angles are to be preferred.
I think the answer is that an angle must be a fraction of a circle. 354 is an angle only if we know that 15043 is 'the whole way around', so to speak. An angle seems fundamentally to be a relationship between two other values.
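That "fraction of a circle" idea can be written down directly: once you know how many pixels correspond to the whole way around, a pixel count becomes an angle. A small sketch (the 15043 figure is the hypothetical one from above, not a real calibration):

```python
import math

# Hypothetical calibration from the example above: 15043 px = the whole
# way around, i.e. a full circle of 2*pi radians.
PX_FULL_CIRCLE = 15043

def px_to_radians(px):
    """Convert a pixel count to an angle, as a fraction of the full circle."""
    return 2 * math.pi * px / PX_FULL_CIRCLE

print(px_to_radians(354))  # ~0.148 rad under this calibration
```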
Encircled,
3D computer graphics is modeled on the way a camera works. I used to teach graphics, so I know exactly what happens. There is a "center of projection" (COP, the eye of the user when she is aiming the camera) and a projection plane (the plane of pixels in a modern camera). An object is projected onto this plane by projecting each of its points: a point is projected by intersecting the line between the point and the COP with the projection plane.

So, for example, two stars in a photo may be exactly the same size in pixels, but this has nothing to do with their actual sizes. The size of an object in pixels depends on the above projection process, which in turn depends entirely on the object's distance from the COP. Even if you know the distances of two objects from the COP, you have to apply the geometry/trigonometry of the projection, and it turns out that the projection involves division by the distance from the COP. In sum, I think your efforts are doomed to failure.
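The division-by-distance point can be illustrated with a minimal pinhole-projection sketch (all numbers hypothetical): a point at lateral offset x and distance z from the COP lands at f*x/z on the projection plane, so a small near object and a large far object can project to exactly the same pixel size.

```python
def project(x, z, f=50.0):
    """Project lateral offset x at distance z onto the image plane of a
    pinhole camera with focal length f: the division by z is the key step."""
    return f * x / z

# A small near object and a large far object, same projected size:
print(project(1.0, 10.0))      # 5.0
print(project(100.0, 1000.0))  # 5.0
```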
Interesting!
I don't understand what the problem would be. If I take an image of two objects, A and B, then the pixel distance between them must represent the angle A-camera-B, no matter whether they are objects within a few decimeters of the camera, a couple of kilometers away, or several light years away. I'm not interested in the size of a star, but in the distance between them. Maybe that's the misunderstanding?
To first find out the pixels/radian, I take a photo of the horizon. I identify two buildings, A and B, on a map and measure the true lengths of the sides of the triangle camera-A-B. The law of cosines then gives the angles, and I simply count the number of pixels between the buildings. My problem is only that a 1% error in length measurement on the map gives about a 15% error in the angle! So I get a variation of 5% across different tests. Stars have fixed angles; I don't need to know the distance to them, I can just look up the exact angle between them in some table. The problem now, however, is that they are very faint and hard to identify with my phone camera (and the weather is constantly cloudy). But I think a more careful attempt will work.
Then when I take an image of two objects, I will, by counting their pixel distance, know the angle between them as seen from me.
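The law-of-cosines step, and its sensitivity to measurement error, can be sketched like this (the map distances are hypothetical, and how strongly a length error is amplified into an angle error depends on the shape of the triangle):

```python
import math

def camera_angle(cam_a, cam_b, a_b):
    """Angle A-camera-B from the three side lengths, via the law of
    cosines: cos(C) = (a^2 + b^2 - c^2) / (2ab)."""
    cos_c = (cam_a**2 + cam_b**2 - a_b**2) / (2 * cam_a * cam_b)
    return math.acos(cos_c)

# Hypothetical map distances in metres:
theta = camera_angle(1000.0, 1100.0, 250.0)

# A 1% error in the measured A-B distance shifts the computed angle;
# the amplification grows as the triangle becomes long and thin:
theta_err = camera_angle(1000.0, 1100.0, 250.0 * 1.01)
print(theta, theta_err)
```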
I believe that your thinking is correct for objects that are very far away; however, check out this image (from Angle of View):
Note how the "angle" between the bottles changes depending on the focal length. This is a known behaviour in optics, arising from mapping a spherical image onto a flat plane (the camera sensor). Imagine taking a piece of paper and wrapping it around a balloon. Now take the same piece of paper and lay it on a much, much larger sphere. In terms of the paper, the larger sphere is "almost nearly flat", which is the bottom scene in the photo above.

So, in summary, I think your idea will work, but only for a large focal length and for distant objects. You would also have to keep the settings of the camera constant for every photo thereafter, unless you produced a new data table for each "setting" that would allow you to correlate the number of pixels to the angle (and create this data table with baseline information such as a known angle between stars, as you mentioned).

How would you calculate the angle for objects that are too close to fit in one frame? Take a panorama with your (constant) settings until you have both objects in view, composite the photos, and count the number of pixels. (Note that this is only an approximation, as is the single photo.)
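One way to see the flat-sensor effect numerically: a rectilinear lens maps a ray at angle theta from the optical axis to sensor position x = f*tan(theta), not f*theta, so pixels per degree is not constant across the frame. A sketch (the focal length in pixels is a hypothetical value, chosen so that a 40-degree field of view spans 2048 pixels, matching the earlier example):

```python
import math

# Hypothetical: F_PX chosen so a 40-degree horizontal field of view spans
# 2048 px, i.e. 1024 = F_PX * tan(20 deg), giving F_PX ~ 2813.
F_PX = 1024 / math.tan(math.radians(20))

def pixel_offset(theta_deg):
    """Sensor position (pixels from centre) of a ray at theta_deg from
    the optical axis, for a rectilinear lens: x = f * tan(theta)."""
    return F_PX * math.tan(math.radians(theta_deg))

# Pixels per degree near the centre vs. near the edge of the frame:
px_per_deg_center = pixel_offset(1.0) - pixel_offset(0.0)
px_per_deg_edge = pixel_offset(20.0) - pixel_offset(19.0)
print(px_per_deg_center, px_per_deg_edge)  # ~49.1 vs ~55.3
```

With these numbers the edge of the frame packs about 12% more pixels into a degree than the centre, which is why a single pixels/degree ratio only holds well for small angles or long focal lengths.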