
About the Sense and Nonsense of Resolution

Resolution

What is it in general?

In general, resolution describes the ability of a “system” to provide “details”. Systems can be anything: a thermometer, a speedometer in a car, a TV screen, a printer, … and last but not least cameras and lenses.

[Image: resolution]

How much resolution makes sense?

When people talk about cameras, a Megapixel race easily starts. The general opinion in the consumer market seems to be “the more, the better”.

Marketing is the big player behind the scenes. If my competitor offers 1 Mega-something and I can offer two Mega-something, this is usually reason enough for the subconscious mind of the consumer to go for the higher number.

It all depends on the information we need. If we are interested in the position of the Alps with an accuracy of +/-100km, the right image above definitely has enough resolution. There is no need for a map with a resolution that shows individual cars.

Do we need to know the temperature with a resolution of 1/100 degree?

For half a century, VGA-like TV resolution (640×480 pixels) was just fine for us to get the “needed” information. These days, HDTV and 4K resolution are a must. Again … it’s nice to have, but is it needed?

A strange phenomenon in terms of resolution is …

Desktop printers: They once offered around 200 dpi … some offer 9,600 dpi these days.
dpi is short for “dots per inch”. It does not mean, however, that a printer could print lines 1/200th of an inch thin then, nor 1/9600th of an inch thin now!
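To put those numbers into perspective, here is a quick back-of-the-envelope conversion from dpi to the nominal dot pitch (a sketch; the loop values are just the figures quoted above):

```python
# Back-of-the-envelope: nominal dot pitch for a given printer resolution.
# 1 inch = 25.4 mm = 25400 micrometers.
def dot_pitch_um(dpi: float) -> float:
    """Nominal distance between dot centers in micrometers."""
    return 25400.0 / dpi

for dpi in (200, 9600):
    print(f"{dpi:>5} dpi -> nominal dot pitch {dot_pitch_um(dpi):.1f} um")
# 200 dpi -> 127.0 um, 9600 dpi -> ~2.6 um.
# This is only the addressable grid, not the width of a printable line.
```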

Resolution comes at a cost

When a new TV with extra high resolution appears on the market, the prices are very high. This is because the manufacturers obviously can sell it at that price and because the development cost lots of money.

Once this new resolution becomes standard, the prices drop considerably.
But as long as it’s not a mass product:

High resolution means higher precision: better production machines, better inspection tools, better workers are needed, maybe even brand new production strategies. As a result: low volume production -> high prices.

In our daily lives we learnt:

  • Media with higher writing speed need a new media writer
  • Media with higher capacity, Blu-ray discs etc., need a special player
  • High-res lossless sound recordings need more space

The “hidden” costs of high sensor resolution:

  • High-res cameras need much better lenses.
  • High-res cameras need a faster interface to the computer.
  • As the bandwidth (= pixels per second) of an interface is limited, if the number of pixels grows by a factor, the frames per second go down by this factor (see the sketch after this list).
  • When we can get color images at the same resolution as the greyscale images before, the software has to be adapted.
  • If there are more pixels per image, we have to process more pixels; if there are more frames per second, we have to process more frames per second. Faster software, a faster computer, even a better programmer might be needed.
  • With shrinking pixel sizes, the “noise” increases. Algorithms might have to be adapted to the previously absent noise; additional light sources might be necessary to provide enough light for a low-noise image.
  • With shrinking pixel sizes, the light sensitivity is reduced, which might cause problems at dusk and dawn.
  • As color images in general work with a ratio Red : Green : Blue, special care has to be taken with sensor noise, as it influences the ratios, say, the colors.
  • The higher the resolution, the more difficult it is to achieve a high “local contrast”.
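To see the bandwidth trade-off from the list above in numbers, here is a minimal sketch; the bandwidth figure is an assumed example value, not the spec of any particular interface:

```python
# Frame rate that a fixed-bandwidth interface can sustain (illustrative only;
# the bandwidth below is an assumed example, not a real interface spec).
BANDWIDTH_PIXELS_PER_S = 400e6  # assumed: 400 Mpixel/s

def max_fps(width: int, height: int) -> float:
    """Frames per second the interface supports at a given resolution."""
    return BANDWIDTH_PIXELS_PER_S / (width * height)

for w, h in ((640, 480), (1920, 1080), (4096, 2160)):
    print(f"{w}x{h}: {max_fps(w, h):.1f} fps")
# Twice the pixels per frame -> half the frames per second.
```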

Contrast:

The optical term contrast of an image is pretty much what we would expect from our daily use of the word.

[Image: contrast]

However, we have to distinguish global contrast …
[Image: global contrast]
… from local contrast:
[Image: local contrast]

The global contrast in the two images above is about the same; however, the local contrast (the change from pixel to pixel) is lower in the lower image, because of the slight blurring.

The limits of resolution:

Lens resolution limits:

Apart from the production quality, the resolution of a lens is limited by a physical effect called “diffraction”.
The “best possible” lenses are called “diffraction limited”, read: they are as good as allowed by physics … “only limited by diffraction”.
In short, diffraction is an (unexpected) change in direction of light particles that occurs if they don’t have neighbors “travelling” in the same direction. As a result, diffraction occurs at the rim of a lens iris, at the surface of metal rods, threads etc.

The degree of diffraction depends on the amount of “rim” compared to the “clear” area.
The area of a circle is A = PI * radius * radius.
The circumference of a circle is C = 2 * PI * radius.
So Rim / Area = C / A = (2 * PI * radius) / (PI * radius * radius) = 2 / radius.
Say, the smaller the radius, the more influence the rim has compared to the center. Also, the higher the energy of the light, the less diffraction occurs. Say, blue light has lower diffraction than red light, and red has lower diffraction than infrared light.
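A tiny sketch of this rim-to-area relation (pure circle geometry, nothing lens-specific):

```python
import math

def rim_to_area_ratio(radius: float) -> float:
    """C / A = (2 * pi * r) / (pi * r^2) = 2 / r."""
    return (2 * math.pi * radius) / (math.pi * radius ** 2)

for r in (4.0, 2.0, 1.0):
    print(f"radius {r}: rim/area = {rim_to_area_ratio(r):.2f}")
# Halving the radius doubles the relative influence of the rim,
# and with it the diffraction.
```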
According to the “Rayleigh Criterion”, the smallest dot a diffraction limited lens can generate has a diameter of
D = 2 * 1.22 * wavelength * F#
The resolution of such a lens is R = 1.22 * wavelength * F# at about 20% contrast.
For a wavelength of 400nm we get
D = 2 * 1.22 * 400nm * F# = 976nm * F#
As 1000nm = 1um, we get as a rule of thumb D = F# in micrometers.
Accordingly, the resolution (for 400nm light and 20% contrast) is R = 1.22 * 400nm * F# = 488nm * F#, say, “half the F#” in micrometers.
For a wavelength of 800nm we get
D = 2 * 1.22 * 800nm * F# = 1952nm * F#
As a rule of thumb, D = 2 * F# in micrometers, say, two times the F# in micrometers.
Accordingly, the resolution (for 800nm light and 20% contrast) is R = 1.22 * 800nm * F# = 976nm * F#, say, “the F#” in micrometers.
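These rules of thumb can be bundled into a small helper, a direct transcription of the formulas above (wavelengths in nm, results in micrometers):

```python
def airy_diameter_um(wavelength_nm: float, f_number: float) -> float:
    """Smallest spot diameter D = 2 * 1.22 * wavelength * F# (Rayleigh)."""
    return 2 * 1.22 * wavelength_nm * f_number / 1000.0  # nm -> um

def resolution_um(wavelength_nm: float, f_number: float) -> float:
    """Resolution R = 1.22 * wavelength * F# at about 20% contrast."""
    return 1.22 * wavelength_nm * f_number / 1000.0

for wl in (400, 800):
    print(f"{wl} nm, F#4: D = {airy_diameter_um(wl, 4):.2f} um, "
          f"R = {resolution_um(wl, 4):.2f} um")
# 400 nm: D ~ 3.9 um (~F# in um),   R ~ 2.0 um (~half the F# in um)
# 800 nm: D ~ 7.8 um (~2*F# in um), R ~ 3.9 um (~the F# in um)
```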

As a side result we notice :

The best possible resolution of a lens by physics depends (linearly) on the wavelength: double the wavelength means double the size of the smallest details that can be resolved.
One way to achieve a better resolution is to use a smaller wavelength for the lens design (e.g. 440nm blue instead of 660nm red, or 660nm red instead of 890nm infrared).

For diffraction limited lenses with an F# below the optimal aperture: the higher the F#, the higher the DOF, the lower the resolution and thus the lower the local contrast (= lower MTF).
If lenses are not diffraction limited, increasing the F# means using more of the center parts of the lens elements (which have lower aberrations). Therefore the resolution increases for a while until the critical aperture is reached, then it decreases.

Why Megapixel? Lenses don’t have a Pixel-Structure after all!

Some centuries ago, people noticed with some surprise that in a dark room, sometimes an (upside down) image of the environment is projected through a small opening in a wall.
The Latin word for room (chamber) is camera.
That’s why the first cameras got the name “camera obscura” (= “dark chamber”). One of the first real-life applications was portrait painting.

[Image: camera obscura]

The same principle is used in so-called “pinhole cameras”:
[Image: pinhole camera]

It’s immediately clear why the image is upside down.
The advantage, however, is that the image is where it would be mathematically expected. There is no distortion! (Rectangles on the object side become rectangles on the image side.) There’s no visible dependency on the wavelength. The depth of field is infinitely large.
The disadvantage is that the resulting image is very dark (so the room must be even darker for the image to be seen at all). The needed exposure times to take an image with today’s cameras could well be minutes!

Idea: Let’s use a larger hole:

[Image: camera obscura with a larger hole]

Now, however, the image not only gets brighter (as intended) but also gets blurry, because the light no longer passes only through the center of the hole. So not only the correct position of the image is exposed to the light, but also the direct neighbours.

As a result, the image of an object point is not just a point, but instead a little disk, the so-called “Circle of Confusion” (CoC).

For long distance objects, the diameter of the CoC equals the diameter of the hole!
For short distance objects, it is even larger. Read: the “resolution” is very bad.
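A minimal sketch of this similar-triangles geometry (the hole size and distances are illustrative values of my own choosing):

```python
def pinhole_coc(hole_diameter: float, object_dist: float, image_dist: float) -> float:
    """Blur disk diameter of a pinhole camera, by similar triangles.

    A point at object_dist projects through a hole of hole_diameter onto a
    screen at image_dist behind the hole; the rays through the hole's rim
    spread the point into a disk of diameter d * (1 + image_dist/object_dist).
    """
    return hole_diameter * (1 + image_dist / object_dist)

# Hole 0.5 mm, screen 100 mm behind the hole (illustrative values):
print(pinhole_coc(0.5, 10_000.0, 100.0))  # far object  -> ~0.5 mm (~hole size)
print(pinhole_coc(0.5, 200.0, 100.0))     # near object -> 0.75 mm (even larger)
```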

Wish: Each image point shall be just a mathematical point and not a circle.

Idea: Let’s place a biconvex lens (a “collecting lens”) into the hole:

[Image: collecting lens in the iris]

Note: every point of the front lens is reached by light from the object.

How can we predict what size the image will have and where the images of object points will be located?

Two simple rules apply:

Image construction:
  • Rays through the center of the lens pass straight through the lens.
  • Rays from the object point that arrive parallel to the optical axis are “bent” through the focal point of the lens.
Where these two rays meet is the image of the object point.

We note:

All object points on the plane perpendicular to the optical axis (the “object plane”) are mapped to another plane perpendicular to the optical axis, the “image plane”.
[Image: image plane]

If image and object distances are given, we can calculate the focal length of the lens.
This approach is used in all the focal length calculators online.
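Such a calculator essentially just evaluates the thin lens equation 1/f = 1/(object distance) + 1/(image distance). A minimal sketch, assuming thin-lens conditions (the distances in the example are my own illustrative values):

```python
def focal_length(object_dist: float, image_dist: float) -> float:
    """Thin lens equation: 1/f = 1/object_dist + 1/image_dist."""
    return (object_dist * image_dist) / (object_dist + image_dist)

def magnification(object_dist: float, image_dist: float) -> float:
    """Image size / object size."""
    return image_dist / object_dist

# Example: object 500 mm in front of the lens, sensor 26.3 mm behind it:
f = focal_length(500.0, 26.3)
print(f"f = {f:.1f} mm, magnification = {magnification(500.0, 26.3):.3f}")
# -> f ~ 25.0 mm
```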

In real life, we notice a slight difference between the theoretical values and the real distances:
[Image: thick lens mapping]

Due to this difference between theory and practice:

All focal length calculators that ignore the thickness of the lens give just approximate results, especially at short distances and for wide angles.

But even the model of thick lenses (the “paraxial image model”) works with

Implicit assumptions:
  • The lenses are perfect, say, they don’t have optical aberrations.
  • In the case of thin lenses: all lenses are infinitely thin.
  • Monochrome light is used.
  • The model assumes sin(x) = x, an approximation that holds only very close to the optical axis.
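How quickly the sin(x) = x assumption breaks down can be checked numerically (the angles below are example values):

```python
import math

# Relative error of the paraxial approximation sin(x) ~ x.
for deg in (1, 5, 10, 20, 30):
    x = math.radians(deg)
    err = (x - math.sin(x)) / math.sin(x)
    print(f"{deg:>2} deg: relative error {err * 100:.3f}%")
# ~0.005% at 1 degree, but already ~5% at 30 degrees:
# the model only holds close to the optical axis.
```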

There’s good and bad news :

Good news: The Circle of Confusion (“CoC”) can be drastically reduced by the use of collecting lenses

We also notice that:
Objects at different distances result in CoCs of different sizes.
The “acceptable” maximal size of the CoC thus results in the so-called “depth of field”.
[Image: circle of confusion]

Bad news: The Circle of Confusion (“CoC”) cannot become arbitrarily small. It will always stay a disk and never become a mathematical point.

Say: there are no perfect lenses (even if they could be produced arbitrarily accurately).

The theoretical size of the smallest CoC possible, even for close-to-perfect lenses (so-called diffraction limited lenses), is described by the so-called Rayleigh criterion.

Rule of Thumb:
For white light it’s not possible to generate CoCs smaller than the F# measured in micrometers.
The theoretical resolution is half that value.
A diffraction limited lens of F#4 cannot generate image points smaller than 4um in diameter.
The theoretical best resolution is 4um / 2 = 2um.
An image appears focussed if the CoC is smaller than the pixel structure of the sensor.
See also Why can color cameras use lower resolution lenses than monochrome cameras?.

If the image can appear focussed on a sensor with n megapixels, then the lens is classified as an n-Megapixel lens.

Keep in mind that the Megapixels refer to the maximum image circle that a lens has. If a sensor uses just 50% of the area of the image circle, only half the pixels are supported.
If a 5 Megapixel 1″ lens (i.e. image circle 16mm) is used on a 1/2″ sensor (image circle 8mm), one should not expect a resolution better than 1.3 (!) Megapixels. This is because the area of a 1/2″ sensor is 1/4 (!) of the area of a 1″ sensor! So you lose a factor of 4 of the Megapixels.
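The scaling with the image circle area can be written down in a few lines (a sketch of the example above):

```python
def supported_megapixels(lens_mp: float, lens_circle_mm: float,
                         sensor_circle_mm: float) -> float:
    """Megapixels supported on a smaller sensor: scale by the area ratio.

    The area of the image circle grows with the diameter squared, so a
    sensor with half the image circle diameter uses only 1/4 of the area.
    """
    return lens_mp * (sensor_circle_mm / lens_circle_mm) ** 2

# 5 Megapixel lens with 16 mm image circle (1") on an 8 mm (1/2") sensor:
print(f"{supported_megapixels(5.0, 16.0, 8.0):.2f} MP")  # -> 1.25 MP (~1.3 MP)
```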

Can I increase the DOF by changing the focal length, if FOV and brightness are constant?

Per definition:
DOF := Far Point − Near Point.

The formulas for these are really complicated and contain the focal length several times.

Surprisingly, despite the formulas given there:

The focal length has no influence on the DOF if FOV and F\# are constant.

in other words :

As long as FOV and F# are both constant, the DOF is the same from whatever distance.

Because the brightness shall stay the same, the F# is the same.
Because the FOV is the same, the magnification \beta is also the same for both lenses.

Because the F# is the same, the NA (Numerical Aperture) is also the same.
This is because F\# = \frac{1}{2 \cdot NA} (on the image side).
As NA = \sin(\alpha), the angle \alpha doesn’t change either.

Our definition of sharpness, the allowed circle of confusion, of course shall not change.

Also, due to the Rayleigh Criterion, the resolution of the lenses is the same (assuming lenses of the same, diffraction limited excellent quality).
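This claimed invariance can be verified numerically with the standard thin lens DOF formulas; the focal lengths, F#, circle of confusion and magnification below are example values:

```python
def dof_mm(f_mm: float, f_number: float, coc_mm: float, magnification: float) -> float:
    """Depth of field (far point - near point) for a thin lens.

    The object distance is chosen so the magnification (and thus the FOV
    for a given sensor) is the same for every focal length.
    """
    s = f_mm * (1 + 1 / magnification)          # object distance for this beta
    H = f_mm ** 2 / (f_number * coc_mm) + f_mm  # hyperfocal distance
    near = s * (H - f_mm) / (H + s - 2 * f_mm)
    far = s * (H - f_mm) / (H - s)
    return far - near

# Same FOV (beta = 0.1), same F# = 4, same CoC = 10 um, different focal lengths:
for f in (12.5, 25.0, 50.0):
    print(f"f = {f:>5} mm: DOF = {dof_mm(f, 4.0, 0.010, 0.1):.2f} mm")
# All three agree to within a fraction of a percent: ~8.8 mm.
```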

So if we have a close look at the location where the light gathers for one pixel, we see ..

[Image: diffraction at pixel level]

The red disk shows the smallest Airy disk possible according to Rayleigh, controlled only by wavelength and F\# (see Rayleigh Criterion). Its diameter is (according to Rayleigh) twice the lens’ resolution.
The green disk is the user’s definition of “is still focussed”. As the angle \alpha is constant, so is 2 \alpha, which is the opening angle of the double cone at pixel level.
As the angle and the slimmest part of the double cone are the same, so are the upper and lower points where the green disk touches the cone. The distance between these upper and lower points is the image-side depth of focus. The object-side DOF (depth of field) is the depth of focus divided by the longitudinal magnification \beta^2, which is constant, because the FOV is constant.