Why Megapixel? Lenses don’t have a Pixel-Structure after all!

Some centuries ago, people noticed, with some surprise, that in a dark room an (upside-down) image of the surroundings is sometimes projected through a small opening in a wall.
The old Latin word for room (chamber) is camera.
That's why the first cameras got the name "camera obscura" (= "dark chamber"). One of the first real-life applications was portrait painting.

[Figure: camera obscura]

The same principle is used in so-called "pinhole cameras":
[Figure: pinhole camera]

It's immediately clear why the image is upside down.
The advantage, however, is that the image is exactly where it would be expected mathematically. There is no distortion (rectangles on the object side become rectangles on the image side), there is no visible dependence on the wavelength, and the depth of field is infinitely large.
The disadvantage is that the resulting image is very dark (so the room must be even darker for the image to be visible at all). The exposure time needed to take an image with today's cameras could well be minutes! A rough calculation follows below.
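To put numbers on that, here is a minimal sketch (the pinhole diameter, projection distance, and reference aperture are assumed example values, not from the article): a pinhole acts like a lens stopped down to F# = f/d, and the needed exposure time scales with the square of the f-number.

```python
# Rough sketch with assumed example values: a pinhole acts like a lens
# stopped down to F# = f / d, and exposure time scales with F#^2.

focal_length_mm = 50.0   # assumed distance from pinhole to image plane
pinhole_d_mm = 0.3       # assumed pinhole diameter
reference_fnum = 2.8     # typical lens aperture for comparison

pinhole_fnum = focal_length_mm / pinhole_d_mm            # ~F#167
exposure_factor = (pinhole_fnum / reference_fnum) ** 2   # ~3500x

print(f"pinhole works at ~F#{pinhole_fnum:.0f}")
print(f"exposure ~{exposure_factor:.0f}x longer than at F#2.8")
# A 1/100 s snapshot at F#2.8 becomes roughly half a minute here;
# with a smaller (sharper) pinhole, minutes indeed.
```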

Idea: Let's use a larger hole:

[Figure: camera obscura with a large hole]

Now, however, the image not only gets brighter (as intended) but also blurry, because light no longer passes only through the center of the hole. So not only the correct image position is exposed to light, but also its direct neighbours.

As a result, the image of an object point is not just a point, but instead a little disk, the so-called "Circle of Confusion" (CoC).

For long-distance objects, the diameter of the CoC equals the diameter of the hole!
For short-distance objects it is even larger. In other words, the "resolution" is very bad. The little sketch below shows why.
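This follows from similar triangles: a point source lights up the whole hole, and the resulting cone of light keeps widening behind it. A minimal sketch, with assumed example distances:

```python
# Similar-triangle sketch of the blur disk behind a bare hole (no lens).
# A point source at object distance s lights up the whole hole (diameter d);
# on a screen at distance v behind the hole the spot has diameter
#   d_coc = d * (1 + v / s)
# All values below are assumed examples.

def pinhole_coc(hole_d_mm: float, object_mm: float, screen_mm: float) -> float:
    """Blur-disk ("CoC") diameter on the screen, in mm."""
    return hole_d_mm * (1.0 + screen_mm / object_mm)

hole_d = 2.0  # a "large" hole of 2 mm
for s in (100.0, 1_000.0, 1_000_000.0):  # near ... far object distances
    print(f"object at {s:>9.0f} mm -> CoC = {pinhole_coc(hole_d, s, 50.0):.2f} mm")
# The CoC approaches the hole diameter (2 mm) for distant objects and
# grows beyond it for close ones.
```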

Wish: Each image point shall be just a mathematical point and not a circle.

Idea: Let's place a biconvex lens (a "collecting lens") into the hole:

[Figure: collecting lens placed in the iris]

Note: every point of the front lens is reached by light from the object.

How can we predict what size the image will have and where the images of object points will be located?

Two simple rules apply:

Image construction:
Rays through the center of the lens pass straight through the lens.
Rays from the object point that arrive parallel to the optical axis are "bent" so that they pass through the focal point of the lens.
Where these two rays meet is the image of the object point.

We note:

All object points on the plane perpendicular to the optical axis (the “object plane”) are mapped to another plane perpendicular to the optical axis, the “image plane”.
[Figure: object plane and image plane]

If the image and object distances are given, we can calculate the focal length of the lens from the thin-lens equation 1/f = 1/(object distance) + 1/(image distance).
This approach is used in all the focal length calculators found online.
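As a minimal sketch of what those calculators do (the distances below are assumed example values):

```python
# Thin-lens equation behind the online focal length calculators:
#   1/f = 1/s_o + 1/s_i   (s_o: object distance, s_i: image distance)
# Magnification m = s_i / s_o relates object size to image size.

def focal_length(object_dist_mm: float, image_dist_mm: float) -> float:
    return 1.0 / (1.0 / object_dist_mm + 1.0 / image_dist_mm)

def magnification(object_dist_mm: float, image_dist_mm: float) -> float:
    return image_dist_mm / object_dist_mm

s_o, s_i = 500.0, 26.3  # assumed example: object 0.5 m away, image 26.3 mm behind lens
f = focal_length(s_o, s_i)
print(f"focal length ~ {f:.1f} mm, magnification ~ {magnification(s_o, s_i):.3f}")
# A 100 mm tall object would form an image about 5.3 mm tall on the sensor.
```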

In real life, we notice a slight difference between the theoretical values and the real distances:
[Figure: thick lens mapping]

Due to this difference between theory and practice:

All focal length calculators that ignore the thickness of the lens give only approximate results, especially at short distances and for wide angles.
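The size of that error can be estimated with the lensmaker's equation, whose full form contains an extra term for the center thickness. A sketch with assumed glass data (radii, index, and thickness are example values, not from the article):

```python
# Lensmaker's equation; the thin-lens version simply drops the thickness term:
#   1/f = (n-1) * (1/R1 - 1/R2 + (n-1)*t / (n*R1*R2))
# R1, R2: surface radii (signed), t: center thickness, n: refractive index.

def focal_length_mm(r1: float, r2: float, n: float, t: float = 0.0) -> float:
    power = (n - 1.0) * (1.0/r1 - 1.0/r2 + (n - 1.0) * t / (n * r1 * r2))
    return 1.0 / power

r1, r2, n, t = 50.0, -50.0, 1.5168, 8.0   # assumed: biconvex, BK7-like glass, 8 mm thick
print(f"thin-lens model : f = {focal_length_mm(r1, r2, n):.2f} mm")
print(f"thick-lens model: f = {focal_length_mm(r1, r2, n, t):.2f} mm")
# The few-percent difference is exactly why thin-lens calculators are
# only approximate, especially at short distances and wide angles.
```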

But even the thick-lens model (the "paraxial image model") works with

Implicit assumptions:
The lenses are perfect, i.e. they have no optical aberrations.
In the case of thin lenses: all lenses are infinitely thin.
Monochromatic light is used.
The model assumes sin(x) = x, an approximation that holds only very close to the optical axis (see the sketch below).
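How quickly sin(x) = x breaks down can be checked in a few lines:

```python
import math

# How quickly the paraxial approximation sin(x) = x breaks down
# as rays move away from the optical axis:
for deg in (1, 5, 10, 20, 30):
    x = math.radians(deg)
    err_pct = (x - math.sin(x)) / math.sin(x) * 100.0
    print(f"{deg:>2} deg off-axis: sin(x) = x is off by {err_pct:.3f} %")
# ~0.005 % at 1 deg, ~0.5 % at 10 deg, but already ~4.7 % at 30 deg.
```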

There's good news and bad news:

Good news: The Circle of Confusion ("CoC") can be drastically reduced by the use of collecting lenses.

We also notice that:
Objects at different distances result in CoCs of different sizes.
The "acceptable" maximal size of the CoC thus results in the so-called "depth of field" (sketched below).
[Figure: Circle of Confusion]
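Here is a sketch of how an acceptable CoC turns into a depth of field, using the standard photographic DoF formulas (the focal length, f-number, CoC, and focus distance are assumed example values):

```python
# Standard photographic depth-of-field formulas:
#   H    = f^2 / (N * c) + f           (hyperfocal distance)
#   near = s*(H - f) / (H + s - 2f)
#   far  = s*(H - f) / (H - s)         (for s < H, else infinity)
# f: focal length, N: f-number, c: acceptable CoC, s: focus distance.

def depth_of_field(f_mm, f_number, coc_mm, subject_mm):
    h = f_mm**2 / (f_number * coc_mm) + f_mm
    near = subject_mm * (h - f_mm) / (h + subject_mm - 2*f_mm)
    far = float("inf") if subject_mm >= h else subject_mm * (h - f_mm) / (h - subject_mm)
    return near, far

# Assumed setup: 25 mm lens at F#4, CoC 0.005 mm (a ~5 um pixel), focus at 1 m
near, far = depth_of_field(25.0, 4.0, 0.005, 1000.0)
print(f"sharp from {near:.0f} mm to {far:.0f} mm")   # roughly 970...1030 mm
# A larger acceptable CoC or a higher F# gives a larger depth of field.
```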

Bad news: The Circle of Confusion ("CoC") cannot become arbitrarily small. It will always stay a disk and never becomes a mathematical point.

In other words: there are no perfect lenses (even if they could be manufactured arbitrarily accurately).

The theoretical size of the smallest possible CoC, even for close-to-perfect lenses (so-called diffraction-limited lenses), is described by the Rayleigh criterion.

Rule of Thumb (illustrated in the sketch after this list):
For white light it is not possible to generate CoCs smaller than the F# measured in micrometers.
The theoretical resolution is half that value.
A diffraction-limited lens at F#4 cannot generate image points smaller than 4 µm in diameter.
The theoretical best resolution is 4 µm / 2 = 2 µm.
An image appears focused if the CoC is smaller than the pixel structure of the sensor.
See also Why can color cameras use lower resolution lenses than monochrome cameras?.
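Behind the rule of thumb is the Airy disk produced by diffraction, with diameter roughly 2.44 · λ · F#. A sketch (taking λ ≈ 0.4 µm, the blue end of white light, where the formula works out to almost exactly "F# in micrometers" — the wavelength choice is my assumption, not from the article):

```python
# The diffraction (Airy disk) relation behind the rule of thumb:
#   spot diameter ~ 2.44 * wavelength * F#
# With wavelength ~0.4 um this is almost exactly "F# in micrometers".

def airy_diameter_um(f_number: float, wavelength_um: float = 0.4) -> float:
    return 2.44 * wavelength_um * f_number

for f_number in (2.0, 4.0, 8.0):
    coc = airy_diameter_um(f_number)
    print(f"F#{f_number:.0f}: smallest CoC ~ {coc:.1f} um, "
          f"theoretical resolution ~ {coc / 2.0:.1f} um")
# F#4 -> ~3.9 um spot and ~2 um resolution, matching the rule of thumb.
```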

If the image can appear focused on a sensor with n megapixels, then the lens is classified as an n-megapixel lens.
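Under the "CoC smaller than one pixel" criterion, this classification can be sketched as follows (the 1/2″ sensor dimensions of ~6.4 mm × 4.8 mm are typical assumed values):

```python
# Sketch of the classification logic: divide the sensor area into pixels
# no smaller than the lens's CoC and count how many fit.

def max_megapixels(sensor_w_mm: float, sensor_h_mm: float, coc_um: float) -> float:
    pixel_mm = coc_um / 1000.0
    return (sensor_w_mm / pixel_mm) * (sensor_h_mm / pixel_mm) / 1e6

# A diffraction-limited F#4 lens (CoC ~4 um) on a 1/2" sensor:
print(f"supports up to ~{max_megapixels(6.4, 4.8, 4.0):.1f} megapixels")
```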

Keep in mind that the megapixel rating refers to the maximum image circle of the lens. If a sensor uses just 50% of the area of the image circle, only half the pixels are supported.
If a 5-megapixel 1″ lens (i.e. image circle 16 mm) is used on a 1/2″ sensor (image circle 8 mm), one should not expect a resolution better than 1.3 (!) megapixels. This is because the area of a 1/2″ sensor is 1/4 (!) of the area of a 1″ sensor. So you lose a factor of 4 in megapixels.
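The same bookkeeping in code: supported megapixels scale with the square of the ratio of used image circle to full image circle.

```python
# Megapixels scale with the used *area* of the image circle, i.e. with
# the square of the diameter ratio.

def supported_megapixels(lens_mp: float, lens_circle_mm: float,
                         sensor_circle_mm: float) -> float:
    return lens_mp * (sensor_circle_mm / lens_circle_mm) ** 2

# 5 MP lens with a 16 mm image circle (1") on an 8 mm (1/2") sensor:
print(f"~{supported_megapixels(5.0, 16.0, 8.0):.2f} megapixels")  # 1.25, i.e. ~1.3 MP
```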