Basics
- Category: Uncategorised
- Published on Tuesday, 19 May 2020 22:25
- Written by Super User
To understand what multi-plane imaging is, it is worth recalling what classical imaging is. To obtain the image of an object, we use a lens. As can be seen in Figure 1, the lens collects the light rays that emerge from a point on the observed object and concentrates (focuses) them at a single point in another plane, forming the image of that object point. Every object point emits light rays, and those collected by the lens form the image of the object. We say that the object lies in the object plane, while the image of the object is formed in the image plane.

If a second object is located closer to (or farther from) the lens than the first object, then its image is formed correspondingly farther from (or closer to) the lens on its other side than the image of the first object, as shown in Figure 2.
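This relation between object and image positions follows from the thin-lens equation, 1/f = 1/d_o + 1/d_i. A minimal sketch below illustrates it numerically; the focal length and object distances are illustrative assumptions, not values from the figures.

```python
# Thin-lens equation: 1/f = 1/d_o + 1/d_i, solved for the image distance d_i.
# The focal length and object distances below are hypothetical examples.

def image_distance(f, d_o):
    """Image distance d_i for focal length f and object distance d_o (same units)."""
    return 1.0 / (1.0 / f - 1.0 / d_o)

f = 50.0  # focal length in mm
near = image_distance(f, d_o=1000.0)  # object 1 m from the lens
far = image_distance(f, d_o=5000.0)   # object 5 m from the lens

# The closer object forms its image farther behind the lens, and vice versa:
print(f"image of near object: {near:.2f} mm")   # ~52.63 mm
print(f"image of far object:  {far:.2f} mm")    # ~50.51 mm
```

As the object distance grows, the image distance approaches the focal length, which is why distant scenes are all rendered sharp at (nearly) the same sensor position.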

A typical camera is equipped with a much more complicated lens than the single-element lens shown in Figs 1 and 2. In most cases the object is much farther away from the lens, and the image is formed over a dozen millimetres behind it, at the camera's digital sensor. Because any stray light that falls on the sensor without passing through the lens is highly unwanted, the lens and the camera are enclosed in a housing that is painted black (on the inside; the outside matters less).
Let's take a look at Fig. 2 once again. We can see that the rays emerging from the red-arrow object do not cross at the plane in which the image of the green arrow is formed. As a result, the image of the red arrow in this plane is blurred. To form a sharp image of this arrow, the camera user needs to move the lens away from the camera sensor. That is what happens when one focuses the camera on an object, even in smartphones.
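The required lens travel can be estimated from the thin-lens equation. The sketch below uses an assumed 50 mm focal length (not a value from the article) to show how far the lens must move from its infinity-focus position to focus on a nearby object.

```python
# Lens travel needed to refocus, via the thin-lens equation.
# Focal length and object distance are illustrative assumptions.

def image_distance(f, d_o):
    """Image distance for focal length f and object distance d_o (same units)."""
    return 1.0 / (1.0 / f - 1.0 / d_o)

f = 50.0  # mm
# Focused on a very distant object, the lens sits ~f from the sensor.
at_infinity = f
# Focusing on an object 0.5 m away means moving the lens outwards:
at_half_metre = image_distance(f, d_o=500.0)
print(f"lens travel: {at_half_metre - at_infinity:.2f} mm")  # ~5.56 mm
```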
Smartphones, as well as old cheap cameras, have an important property that greatly simplifies their use: their camera lenses have an extended depth of field.
The depth-of-field concept is illustrated in Fig. 3. When the objects of observation are separated by a small enough distance, the rays that emerge from the outermost objects still do not cross at the plane in which the image of the central object is formed.

However, although the images of the outermost objects in this range are still blurred, this so-called "out-of-focus" blur can be acceptable to the human eye. The depth of field is extended by reducing the size of the hole (the so-called aperture) through which light enters the camera. Specifically, what matters is the ratio of the lens focal length to the aperture diameter, called the F-number. Setting the F-number to f/16 means that the aperture diameter is 16 times smaller than the lens focal length. Cameras with a large depth of field have high F-numbers, and thus a small aperture relative to their focal length. Looking at Fig. 3, you can see that limiting the diameter of the lens leads to smaller angles between the extreme rays and the horizontal optical axis of the lens (not shown). As a result, the blur of the images of the two outermost objects is smaller, and their images look sharper.
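The effect of stopping down can be sketched geometrically: the blur spot (circle of confusion) of an out-of-focus point scales with the aperture diameter, by similar triangles between the converging ray cone and the sensor plane. All values below are illustrative assumptions.

```python
# Geometric blur (circle of confusion) of an out-of-focus point, showing
# why a smaller aperture (higher F-number) makes it look sharper.
# Focal length, F-numbers and distances are illustrative assumptions.

def image_distance(f, d_o):
    """Thin-lens image distance for focal length f and object distance d_o."""
    return 1.0 / (1.0 / f - 1.0 / d_o)

def blur_diameter(f, n, d_focus, d_obj):
    """Blur-circle diameter on the sensor for a point at d_obj, when the
    lens (focal length f, F-number n) is focused at d_focus."""
    aperture = f / n                      # F-number = focal length / aperture diameter
    s = image_distance(f, d_focus)        # sensor sits where d_focus is sharp
    d_i = image_distance(f, d_obj)        # where the point would actually focus
    return aperture * abs(s - d_i) / d_i  # similar triangles of the ray cone

f = 50.0  # mm
for n in (2.8, 16.0):
    c = blur_diameter(f, n, d_focus=2000.0, d_obj=1000.0)
    print(f"f/{n}: blur {c:.3f} mm")
# Stopping down from f/2.8 to f/16 shrinks the blur by exactly 16/2.8.
```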
The problem with a small aperture is that it limits the amount of light entering the camera. Therefore, the image formed during a selected time interval (the so-called exposure time) is dark. Increasing the image brightness requires: a) decreasing the depth of field, and/or b) increasing the electronic amplification (gain) of the signal delivered by the camera sensor.
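The light gathered at a given exposure time scales with the aperture area, i.e. as 1/N² for F-number N. A small sketch of this trade-off, with illustrative F-numbers:

```python
# Relative light gathered scales with aperture area, i.e. 1/N^2 for
# F-number N. The F-numbers below are illustrative examples.

def relative_exposure(n, n_ref=2.8):
    """Light gathered relative to a reference F-number, at equal exposure time."""
    return (n_ref / n) ** 2

for n in (2.8, 5.6, 16.0):
    print(f"f/{n}: {relative_exposure(n):.3f}x light")
# f/16 gathers ~(2.8/16)^2 = 3% of the light of f/2.8, so recovering the
# brightness needs a wider aperture (less depth of field) or more gain.
```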
In typical daylight conditions this problem is unimportant. But in low-light conditions, such as a room in the evening, it can make taking a high-quality image very hard.
That is where multi-plane imaging comes in. It has been used in cinematography for decades.