Please help. I'm a researcher and I want to use Maya to generate test sequences with ground-truth depth maps. These are only useful if I can compute the exact location of a 3D point from its pixel coordinates plus depth, and if that mapping depends only on camera parameters (focal length, field of view, far and near clip planes).
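For reference, this is the kind of back-projection I have in mind; a minimal sketch assuming an ideal pinhole camera, where f is the focal length in pixels and (cx, cy) is the principal point (these names are mine, not Maya's):

    import numpy as np

    def backproject(u, v, z, f, cx, cy):
        """Recover the camera-space 3D point from pixel (u, v) and depth z.

        Assumes an ideal pinhole camera: f is the focal length in pixels,
        (cx, cy) the principal point (usually the image center), and z is
        the depth along the optical axis. No scene knowledge is required.
        """
        x = (u - cx) * z / f
        y = (v - cy) * z / f
        return np.array([x, y, z])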
I now have the following problem: the depth maps Maya writes are always greyscale images covering the full range from 0 to 255 (at least after I convert them from bloody IFF to some format I can read with QT). Conclusion: one has to know the locations of the nearest and farthest object points visible in the camera image in order to calibrate the mapping from depth values to coordinates. That the construction of a depth map depends on the objects in the scene is totally absurd IMO, and it is certainly quite different from how the OpenGL Z-buffer behaves.
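To make the contrast concrete: with an OpenGL-style Z-buffer, a stored depth value d in [0, 1] maps back to eye-space Z using only the near and far clip planes, independent of what is in the scene. A sketch of that standard inversion (plain OpenGL convention, nothing Maya-specific):

    def zbuffer_to_eye_z(d, near, far):
        """Invert an OpenGL-style depth value d in [0, 1] to eye-space Z.

        Follows from the standard perspective projection matrix and depends
        only on the near/far clip planes, never on the objects in the scene.
        """
        z_ndc = 2.0 * d - 1.0  # window depth [0, 1] -> NDC depth [-1, 1]
        return (2.0 * near * far) / (far + near - z_ndc * (far - near))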
Does anybody know how to create more sensible depth maps with Maya? I have a paper deadline in one week and would really be grateful for any advice. A formula for computing the exact Z coordinate (in camera coordinates) from a depth map value would also be quite welcome.
Thanks in advance,
Bastian.