The blog post introduces single view metrology in computer vision.

In the last article, we covered the basics of camera models and homogeneous coordinates and applied them to camera calibration. However, camera calibration required a sufficient number of correspondences between known 3D points and their 2D projections, which are unavailable in most real-world scenarios. In fact, we often would like to reason about the 3D scene itself. Hence, in this article, we will explore how properties of homogeneous coordinates can be used to calibrate a camera and estimate the structure of a scene from a single image, or perform single view metrology.
2D Transformations
Before diving into single view metrology, we need to understand the various transformations in 2D. Isometric transformations are transformations that can be described by a rotation and a translation, and they preserve distances. In homogeneous coordinates, we can express them using a matrix $H_e = \begin{bmatrix} R & t \\ \mathbf{0}^T & 1 \end{bmatrix}$, containing the rotation matrix $R$ in the top left, the translation vector $t$ in the top right, zeros in the bottom left, and 1 in the bottom right.
Similarity transformations add scaling to isometric transformations while still preserving shapes. Hence, a scaling matrix $S = sI$, containing a scalar factor $s$ on the diagonal, can be multiplied by the rotation matrix in the isometric transformation matrix to arrive at the similarity transformation matrix $H_s = \begin{bmatrix} sR & t \\ \mathbf{0}^T & 1 \end{bmatrix}$. Here, we can see that the isometric transformation is a special case of the similarity transformation when $s = 1$.
Affine transformations add shearing to similarity transformations, resulting in a linear transformation by an arbitrary matrix $A$ followed by a translation by $t$. Affine transformations preserve points, straight lines, and parallelism. Hence, both isometric and similarity transformations are special cases of affine transformations. Projective transformations further generalize affine transformations by also transforming the additional dimension of homogeneous coordinates using $\mathbf{v}$ and $b$ in the bottom row: $H_p = \begin{bmatrix} A & t \\ \mathbf{v}^T & b \end{bmatrix}$.
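We can verify the difference numerically. The following sketch (with made-up matrices) maps the endpoints of two parallel segments with an affine transformation and with a projective one, and checks whether the images stay parallel:

```python
import numpy as np

def apply(H, p):
    """Apply a 3x3 homogeneous transformation to a 2D point."""
    q = H @ np.append(p, 1.0)
    return q[:2] / q[2]

# Two parallel segments in the plane (both with direction (1, 1)).
seg1 = (np.array([0.0, 0.0]), np.array([1.0, 1.0]))
seg2 = (np.array([0.0, 2.0]), np.array([1.0, 3.0]))

H_affine = np.array([[2.0, 0.5, 1.0],
                     [0.0, 1.5, -2.0],
                     [0.0, 0.0, 1.0]])   # bottom row [0, 0, 1]
H_proj = H_affine.copy()
H_proj[2] = [0.1, 0.2, 1.0]              # generic bottom row

results = []
for H in (H_affine, H_proj):
    d1 = apply(H, seg1[1]) - apply(H, seg1[0])
    d2 = apply(H, seg2[1]) - apply(H, seg2[0])
    # The segments stay parallel iff the 2D cross product of directions is zero.
    results.append(bool(np.isclose(d1[0] * d2[1] - d1[1] * d2[0], 0.0)))

print(results)  # [True, False]: affine preserves parallelism, projective does not
```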
As a result, projective transformations no longer preserve parallelism and instead preserve only collinearity of points. This means lines will be mapped to lines, possibly with different angles and lengths. A line in homogeneous coordinates in 2D can be defined by $l = [a, b, c]^T$, where the slope and y-intercept are captured by $-a/b$ and $-c/b$, respectively. All points $x$ on the line satisfy $l^T x = 0$. Since projective transformations $x' = Hx$ preserve collinearity of points, they map $l$ to $l' = H^{-T} l$ with a new slope and intercept.
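A quick sketch (with an arbitrary homography) confirms that mapping lines by $H^{-T}$ preserves incidence:

```python
import numpy as np

# A homography H maps points as x' = Hx; to preserve incidence l^T x = 0,
# lines must map as l' = H^{-T} l. The matrix below is made up for illustration.
H = np.array([[1.2, 0.1, 3.0],
              [0.0, 0.9, -1.0],
              [0.02, 0.01, 1.0]])

l = np.array([2.0, -1.0, 4.0])   # the line 2x - y + 4 = 0
x = np.array([1.0, 6.0, 1.0])    # a point on l: 2*1 - 6 + 4 = 0

x_new = H @ x
l_new = np.linalg.inv(H).T @ l

# The mapped point still lies on the mapped line.
print(np.isclose(l_new @ x_new, 0.0))  # True
```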
Points & Lines at Infinity
When two lines, $l$ and $l'$, intersect, the point of intersection $x$ must satisfy both $l^T x = 0$ and $l'^T x = 0$, meaning $x$ must be orthogonal to both $l$ and $l'$. We can use these orthogonalities to find $x = l \times l'$, which constrains $x$ to be orthogonal to both $l$ and $l'$ by the definition of the cross product. Using this, we can compute the hypothetical intersection between the parallel lines $l = [a, b, c]^T$ and $l' = [a, b, c']^T$, where the slopes are equal (i.e., $l$ and $l'$ share the same $a$ and $b$): $x = l \times l' = (c' - c)[b, -a, 0]^T$.
We can see that the last entry is zero, meaning the two parallel lines intersect at a point at infinity. We can define lines at infinity as well, on which all the points at infinity (the intersections of parallel lines) lie. These lines are represented as $l_\infty = [0, 0, c]^T$, where $c$ is an arbitrary value that can simply be set to 1. We find that projective transformations do not necessarily map points and lines at infinity to other points and lines at infinity due to the influence of $\mathbf{v}$ in the bottom row, while affine transformations do. (This intuitively makes sense, since parallel lines that construct a point at infinity are no longer guaranteed to be parallel after a projective transformation.)
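The cross-product construction is easy to check with two concrete parallel lines (coefficients chosen arbitrarily):

```python
import numpy as np

# Two parallel lines in homogeneous form [a, b, c]:
# x - 2y + 3 = 0 and x - 2y - 5 = 0 (same slope, different offset).
l1 = np.array([1.0, -2.0, 3.0])
l2 = np.array([1.0, -2.0, -5.0])

# The intersection of two lines is their cross product.
p = np.cross(l1, l2)
print(p[2] == 0.0)  # True: the last entry is zero, so this is a point at infinity
```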
Vanishing Points & Lines
In the 3D world, we need to introduce the concept of a plane, which can be expressed as $[a, b, c, d]^T$, where $[a, b, c]^T$ forms the normal vector of the plane and $d$ captures the distance between the origin and the plane (equal to $|d|$ when the normal vector is normalized). A plane can be formally defined by all points $x = [x_1, x_2, x_3, 1]^T$ such that $ax_1 + bx_2 + cx_3 + d = 0$. Lines can be defined as the intersection between two planes, although expressing lines in 3D, which have 4 degrees of freedom, is complicated.

When parallel lines in 3D point towards the direction $d = [a, b, c]^T$ in the camera coordinate system, the corresponding point at infinity is $x_\infty = [a, b, c, 0]^T$. The projective transformation by $P = K[R \; t]$ in camera models maps this to a vanishing point $v$ on the 2D image plane, which may no longer be a point at infinity. This can be expressed as $v = P x_\infty$ or $v = Kd$ (with $d$ in the camera coordinate system), where $K$ is the camera matrix. Further derivation yields the direction of the parallel lines that led to $v$: $d = \frac{K^{-1} v}{\lVert K^{-1} v \rVert}$.
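A minimal round-trip sketch of this relationship, using a hypothetical camera matrix (the focal length and principal point below are made up):

```python
import numpy as np

# Hypothetical intrinsics K: focal length 800, principal point (320, 240).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

d = np.array([1.0, 2.0, 2.0])
d = d / np.linalg.norm(d)       # direction of the parallel lines (camera frame)

v = K @ d                        # vanishing point, v = K d
v = v / v[2]                     # normalize to pixel coordinates

# Recover the direction from the vanishing point: d = K^{-1} v / ||K^{-1} v||.
d_rec = np.linalg.inv(K) @ v
d_rec = d_rec / np.linalg.norm(d_rec)
print(np.allclose(d, d_rec))     # True
```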

Similarly, the line at infinity comprising the points at infinity of a plane is projected to a line called the horizon line on the image plane, which may also no longer be a line at infinity. Since all directions $d$ of parallel lines on the plane that lead to vanishing points on the horizon line must lie on the plane and be orthogonal to its normal vector $n$, we have $n^T d = 0$. Given that the vanishing points are on the horizon line $l_{\text{horiz}}$ as well, $l_{\text{horiz}}^T v = l_{\text{horiz}}^T K d = 0$, it follows that $n = K^T l_{\text{horiz}}$ and $l_{\text{horiz}} = K^{-T} n$ (up to scale).
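This back-and-forth between a plane normal and its horizon line can be sketched as follows (the intrinsics and the normal vector are made up):

```python
import numpy as np

# Hypothetical intrinsics K.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

# A plane with normal n in camera coordinates; its horizon line is l = K^{-T} n.
n = np.array([0.0, 1.0, 0.2])
n = n / np.linalg.norm(n)
l = np.linalg.inv(K).T @ n

# Going the other way, n = K^T l, recovers the plane orientation from the image.
n_rec = K.T @ l
n_rec = n_rec / np.linalg.norm(n_rec)
print(np.allclose(n, n_rec))  # True
```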
Therefore, if we can obtain $K$ through calibration and recognize the horizon line associated with a plane in an image, we can estimate the normal vector of the plane and capture the orientation of a surface in 3D. These equations can be further developed to derive the angle $\theta$ between two directions ($d_1$ and $d_2$) corresponding to distinct vanishing points ($v_1$ and $v_2$), and the angle between two planes corresponding to horizon lines ($l_1$ and $l_2$), using the cosine rule:

$$\cos \theta = \frac{v_1^T \omega v_2}{\sqrt{v_1^T \omega v_1} \sqrt{v_2^T \omega v_2}}, \qquad \cos \theta = \frac{l_1^T \omega^* l_2}{\sqrt{l_1^T \omega^* l_1} \sqrt{l_2^T \omega^* l_2}},$$

where $\omega = (K K^T)^{-1}$ and $\omega^* = \omega^{-1} = K K^T$.
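We can sanity-check the first cosine formula numerically: the angle computed from the vanishing points should match the angle between the original directions (intrinsics and directions below are made up):

```python
import numpy as np

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
omega = np.linalg.inv(K @ K.T)           # omega = (K K^T)^{-1}

d1 = np.array([1.0, 0.0, 1.0])
d2 = np.array([0.0, 1.0, 1.0])
v1, v2 = K @ d1, K @ d2                  # vanishing points of the two directions
v1, v2 = v1 / v1[2], v2 / v2[2]

cos_theta = (v1 @ omega @ v2) / (
    np.sqrt(v1 @ omega @ v1) * np.sqrt(v2 @ omega @ v2))

# This should match the angle between d1 and d2 computed directly.
cos_direct = (d1 @ d2) / (np.linalg.norm(d1) * np.linalg.norm(d2))
print(np.isclose(cos_theta, cos_direct))  # True
```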
Single View Metrology
Based on the mathematics established above, we can estimate various quantities about the camera and the subject in an image. For example, we can obtain two vanishing points, $v_1$ and $v_2$, from two pairs of parallel lines on two planes that are orthogonal to each other, and we can use the cosine equation to arrive at $v_1^T \omega v_2 = 0$. However, $\omega$ has at least 3 degrees of freedom, even assuming no skew and square pixels, and having only one constraint does not allow us to solve for $K$ from $\omega$. Therefore, we can take another vanishing point $v_3$ from another plane that is orthogonal to both planes. This provides three constraints ($v_1^T \omega v_2 = 0$, $v_1^T \omega v_3 = 0$, and $v_2^T \omega v_3 = 0$) to solve for $\omega$, and hence $K$, without knowing any 3D coordinates in the scene.
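The procedure can be sketched in code. This is a minimal implementation under the stated assumptions (zero skew, square pixels); the helper name and the synthetic camera are mine, not a library API. Each orthogonality constraint is linear in the entries of $\omega$, so stacking the three constraints and taking the null space gives $\omega$ up to scale, after which $K$ falls out of a Cholesky factorization:

```python
import numpy as np

def calibrate_from_orthogonal_vps(v1, v2, v3):
    """Recover K from three mutually orthogonal vanishing points, assuming
    zero skew and square pixels (a hypothetical helper for illustration)."""
    # Each orthogonal pair gives v_i^T omega v_j = 0, with
    # omega = [[w1, 0, w2], [0, w1, w3], [w2, w3, w4]] up to scale.
    rows = []
    for a, b in ((v1, v2), (v1, v3), (v2, v3)):
        rows.append([a[0] * b[0] + a[1] * b[1],
                     a[0] * b[2] + a[2] * b[0],
                     a[1] * b[2] + a[2] * b[1],
                     a[2] * b[2]])
    # The null space of the 3x4 constraint matrix gives omega up to scale.
    _, _, Vt = np.linalg.svd(np.array(rows))
    w1, w2, w3, w4 = Vt[-1]
    omega = np.array([[w1, 0.0, w2],
                      [0.0, w1, w3],
                      [w2, w3, w4]])
    if w1 < 0:                       # fix the sign so omega is positive definite
        omega = -omega
    # omega = K^{-T} K^{-1}; with Cholesky omega = L L^T, we get K = (L^T)^{-1}.
    L = np.linalg.cholesky(omega)
    K = np.linalg.inv(L.T)
    return K / K[2, 2]

# Synthetic check: three orthogonal directions (a rotated frame) seen by a known K.
K_true = np.array([[800.0, 0.0, 320.0],
                   [0.0, 800.0, 240.0],
                   [0.0, 0.0, 1.0]])
a, b = 0.5, 0.7
Rx = np.array([[1, 0, 0],
               [0, np.cos(a), -np.sin(a)],
               [0, np.sin(a), np.cos(a)]])
Ry = np.array([[np.cos(b), 0, np.sin(b)],
               [0, 1, 0],
               [-np.sin(b), 0, np.cos(b)]])
R = Ry @ Rx                          # columns are mutually orthogonal directions
vps = [K_true @ R[:, i] for i in range(3)]
vps = [v / v[2] for v in vps]        # vanishing points in pixel coordinates

K_est = calibrate_from_orthogonal_vps(*vps)
print(np.allclose(K_est, K_true, atol=1e-4))  # True
```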

Once we know $K$, we can use $n = K^T l_{\text{horiz}}$ to estimate the orientations of the planes and use them to reconstruct an estimated 3D scene from a single image. However, this method does not allow us to obtain the scale and position of the planes, nor does it account for occluded objects, which are essential for properly reconstructing the 3D scene captured in the image. (Different objects with different sizes, positions, and orientations can result in perfectly identical projections.) This demonstrates how we can extract rich information from a single image by using the properties of projective transformations of points and lines at infinity, but it also reveals the inherent limitations of single view metrology.
It also explains the difficulty of depth estimation from a single image, even with deep learning, which can only infer the size and occluded parts of an object from training data to some extent. (This is why we tend to use a scale-invariant loss for single view depth estimation, to make the task easier and let the model focus on learning orientations.) Humans may be even more capable of interpreting an image, understanding the orientations of objects, and inferring their sizes from our experiences in the real world, but the same physical limitations apply to us, making us susceptible to optical illusions (especially for objects with unusual shapes and sizes).
Conclusion
This article covered the various transformations in 2D, points and lines at infinity, vanishing points and lines as the projections of points and lines at infinity, and single view metrology for calibration and plane orientation estimation made possible by these concepts. We discovered how single view metrology is helpful but has inherent limitations due to the inevitable loss of information about the scale and position of surfaces. The concepts and mathematics we covered will remain relevant in future articles, where we will discuss other approaches to understanding 3D scenes. Therefore, I recommend studying them thoroughly, using this article and the resources cited below, until no confusion remains.
Resources
- Hartley, R. & Zisserman, A. 2004. Multiple View Geometry in Computer Vision, Second Edition. Cambridge University Press.
- Hata, K. & Savarese, S. 2025. CS231A Course Notes 2: Single View Metrology. Stanford.
- Savarese, S. & Bohg, J. 2025. Lecture 4 Single View Metrology. Stanford.