# S-Mount

(= Short Mount) is a lens mount for miniature lenses with an M12x0.5 thread (diameter = 12 mm, thread pitch = 0.5 mm, i.e. one revolution moves the lens 0.5 mm).

S-mount lenses are used either in special holders, with adapters, or in C-mount / CS-mount cameras.

Note:
As with C-mount, CS-mount and F-mount lenses, the diameter and thread pitch are fixed.
Unlike these, however, the back flange length (the distance from the mechanical stop of the lens to the sensor) is NOT standardized.
This can lead to mechanical problems with filters mounted between the lens and sensor.

# sagittal plane

The sagittal plane through a point is a plane perpendicular to the tangential plane through that point; it contains the point and the center of the entrance pupil.

Perpendicular to it is the tangential plane.

# Scheimpflug principle

Normally, the plane of focus is at 90 degrees to the optical axis. This is due to symmetry.
A problem arises when two objects are at such different distances that they cannot both be in focus at the same time.

Theodor Scheimpflug had an ingenious idea: let's tilt the camera!

Then all points in the A-B plane will be in focus!

Just tilting the camera is of course not enough to get a focused image. The Gaussian focus equation must also be satisfied.
The Gauss equation is, however, equivalent to the second Scheimpflug principle.

## First Scheimpflug principle:

Three planes must share a common line:

• The tilted plane containing the desired objects
• The sensor plane
• A plane perpendicular to the optical axis of the lens.

For a theoretical “thin lens” (of virtual length 0), it is clear where this plane is. For its exact location in a real-world lens, see below.

As a thought experiment, let's keep the sensor plane and the object plane fixed and non-parallel. This defines a shared common line in 3D space. Through each line in space there is an infinite number of planes containing it.
Obviously not all of them can be the plane of best focus.
That is:

The first Scheimpflug principle is only a necessary condition, not a sufficient condition, for getting a focused image of a tilted object plane on the sensor.

In general the lens is tilted, but the image is not focused.
However, as soon as we use the lens focus mechanism, the first Scheimpflug principle is no longer satisfied; we would have to tilt the lens a little to satisfy the first criterion again, but then the image is out of focus once more, and so on.

The second (sufficient) condition can be the

## Gauss focus equation:

But instead of the Gauss focus equation we can use the

## Second Scheimpflug principle:

These three planes must share a common line:

• The tilted plane containing the desired objects
• A plane through the lens center, parallel to the sensor plane
• A plane perpendicular to the optical axis of the lens shifted by the focal length.
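Both principles can be verified numerically in a simplified 2-D thin-lens model. The sketch below (the focal length and object line are illustrative assumptions, not values from this article) images points of a tilted object line through the Gaussian lens equation and checks that the image points are collinear and that the object line and the image line meet the lens plane at the same height, i.e. the common line of the first Scheimpflug principle, seen edge-on:

```python
import math

F = 50.0  # focal length in arbitrary units (illustrative value)

def image_point(x, y, f=F):
    """Image a 2-D object point through a thin lens at the origin
    (optical axis along x) using the Gaussian equation 1/v = 1/f - 1/u."""
    u = -x                    # object distance; positive for objects left of the lens
    v = u * f / (u - f)       # Gaussian focus equation solved for the image distance
    return v, -y * v / u      # transverse magnification m = -v/u

# object points on the tilted line y = 0.3 * (x + 400)
xs = [-500.0, -400.0, -300.0, -250.0]
pts = [image_point(x, 0.3 * (x + 400.0)) for x in xs]

# 1) the image points are collinear: all consecutive slopes agree
slopes = [(y2 - y1) / (x2 - x1)
          for (x1, y1), (x2, y2) in zip(pts, pts[1:])]
assert all(math.isclose(s, slopes[0]) for s in slopes)

# 2) object line and image line meet the lens plane (x = 0) at the same
#    height: the common line required by the first Scheimpflug principle
y_obj = 0.3 * (0.0 + 400.0)                 # object line at the lens plane
y_img = pts[0][1] - slopes[0] * pts[0][0]   # image line at the lens plane
assert math.isclose(y_obj, y_img)
print(round(y_obj, 6), round(y_img, 6))   # both 120.0
```

Because every image point is produced by the Gauss equation, the collinearity of the results is exactly the "sufficient condition" discussed above.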

## Situations & applications where the Scheimpflug principle is used:

• Objects to be focused are at various vertical distances from the camera (a poster on a wall, the facade of a building with the camera looking upwards, or a document on a table at a distance from the camera)
• The camera cannot be mounted where it should be (for example, to stay out of the way of a robot)
• The camera looks at an angle onto a more or less flat object
• Cameras for autonomous vehicles that have to follow lines or signs on the floor
• Whenever the desired plane of focus is not parallel to the camera sensor
• Laser-Triangulation
The following interactive drawing is just for illustration purposes!

Usage: First place the object center (the green dot, the spot where the optical axis meets the object) at a location you like, for example at 60 on the x-axis.
Then move the lens (the other green dot) to a location where it is possible to place the camera lens.
The interactive graphic keeps the optical axis in the center of the lens and maps the edges of the sensor to the wanted object plane.
The magnification is measured perpendicular(!) to the optical axis.
Keep in mind that on your monitor you'll see a trapezoid (trapezium).

# Sensor MTF

## What is the MTF of a sensor?

### Ideal case:

For an ideal pixel the shape is square and the fill factor (the percentage of the photons intended for that very pixel that actually reach it) is 100%.

The rectangle function describing the square shape of the pixel is convolved with a Dirac comb (__|__|__|__|__|….__|__|), where the spacing of the Dirac impulses is one pixel.

After the pixel shape has been convolved with the Dirac comb, we have to convolve a 2D array of such convolutions with a 2D rectangle function that has the shape of the sensor.

### Fourier transformation as helper

When we change to the Fourier domain, convolutions are mapped to simple multiplications.
The Fourier transform of a rectangle function is sin(πx) / (πx), and the Fourier transform of a Dirac comb is again a Dirac comb.

At 1 cycle per pixel the function sinc(x) is zero for the first time. It is also zero at 2 cycles per pixel, 3, 4, 5, etc.

We are just interested in this curve in the range from x = 0 to x = 1, where x is measured in cycles per pixel.

Scaled accordingly, the curve looks like this:

Through the convolution with the Dirac comb we get a curve sampled at the points of the Dirac impulses.
If we finally convolve this curve with the rectangle function of the sensor, then instead of a mere sampling we get a more “filled” curve.

### Nyquist:

The Nyquist frequency is at 1/2 cycle per pixel. For this value of x the function has a value of 2/π ≈ 64% (0.637).
When a sensor documentation says that the sensor MTF is above 50%, it means that it is at about 80% of what is physically possible.
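The value of the ideal-pixel MTF at the Nyquist frequency is sin(π/2)/(π/2) = 2/π; a minimal check (the function name `pixel_mtf` is an illustrative assumption):

```python
import math

def pixel_mtf(cycles_per_pixel):
    """MTF of an ideal square pixel with 100% fill factor:
    |sinc(x)| with sinc(x) = sin(pi*x) / (pi*x)."""
    x = cycles_per_pixel
    if x == 0.0:
        return 1.0
    return abs(math.sin(math.pi * x) / (math.pi * x))

print(round(pixel_mtf(0.5), 3))   # 0.637 = 2/pi at the Nyquist frequency
print(round(pixel_mtf(1.0), 3))   # 0.0   first zero at 1 cycle per pixel
print(round(0.5 / pixel_mtf(0.5), 2))   # ~0.79: a measured 50% is ~80% of the ideal
```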

Trouble is caused by the function values beyond the Nyquist frequency, i.e. for x between 0.5 and 1.
These generate alias frequencies.

If, in the combination of a lens and a sensor, the lens acts as the optical low-pass filter, the function is very predictable and few alias frequencies occur.
If, however, the pixels are about the size of the sampling period of the Dirac comb, then the difference between the optical MTF and the sensor MTF generates the alias frequencies described above: frequencies appear in the resulting image that cannot be found on the object side.

For a non-perfect lens, instead of sinc(x), functions of the form (abs(sinc(x)))^n are often used.
For these, the MTF at the Nyquist frequency is only about 30% (0.3).

In the Lensation setup for lens testing, the MTF curve is supersampled by a factor of 4. The sinc curve is thereby stretched in the x direction by a factor of 4, and its first zero is no longer at 1 cycle per pixel but at x0 = 4 cycles per pixel. The Nyquist frequency stays in the center between zero and x0, i.e. at 2 cycles per pixel. By stretching the curve this way, the MTF value at Nyquist is read a factor of 4 further to the left than before, i.e. at about 90%.

When we combine the lens and the sensor, their MTFs are multiplied.
Because the function value at the (new) Nyquist frequency is about 90%, only about 10% of the MTF is lost due to the sensor.

So this is not an explanation of why actual MTF curves of lenses are significantly lower than the design curves … at least not by a factor of two.

# Sensor-Format

Sensors of different shapes and sizes are used in image processing and surveillance technology.

On the one hand, the sensors have different ratios of width to height,
for example 1:1, 4:3, 16:9, 16:10.

On the other hand, the sensor size differs; it is described by the sensor diagonal:

For sensors >= 1/2″, the diagonal is 16 mm × the inch designation,
for example 16 mm × 1/2 = 8 mm.

For sensors < 1/2″, the diagonal is 18 mm × the inch designation,
for example 18 mm × 1/3 = 6 mm.
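The diagonal rule above can be sketched as a small helper (the name `sensor_diagonal_mm` is an illustrative assumption):

```python
def sensor_diagonal_mm(inch_designation):
    """Sensor diagonal from the historical inch designation, using the
    rule above: 16 mm per 'inch' for formats >= 1/2", 18 mm below."""
    scale_mm = 16.0 if inch_designation >= 0.5 else 18.0
    return scale_mm * inch_designation

print(sensor_diagonal_mm(1 / 2))            # 8.0 (mm)
print(round(sensor_diagonal_mm(1 / 3), 3))  # 6.0 (mm)
```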

See: vidicon tube

# Shannon Sampling Theorem

For the data sampled in digital systems, two conditions must hold:

• The signal must have a finite bandwidth: above a cutoff frequency, all frequency components must be zero.
• The sampling frequency must be at least twice the cutoff frequency of the signal.

These rules are called the Shannon sampling theorem, or the Nyquist–Shannon sampling theorem.

If the conditions are not met (for example, if the sampling frequency is not at least twice the cutoff frequency), there will be components in the spectrum that are not present in the signal.
If, for example, the lens resolution is higher than the frequency given by the pixel pitch, we get Moiré effects. Yes, lenses can be “too good”.
See https://www.optowiki.info/faq/why-can-color-cameras-use-lower-resolution-lenses-than-monochrome-cameras/

This effect is called aliasing; it results from frequencies above the cutoff limit being mirrored into the frequencies below it.
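A minimal numeric illustration of this mirroring, assuming a 100 Hz sampling rate and a 70 Hz signal (both values chosen purely for illustration):

```python
import math

fs = 100.0               # sampling frequency in Hz (illustrative)
f_sig = 70.0             # signal above the Nyquist limit fs/2 = 50 Hz
f_alias = fs - f_sig     # 30 Hz: mirrored below the limit

# at the sampling instants the 70 Hz sine is indistinguishable from a
# phase-inverted 30 Hz sine; this mirroring is aliasing
for n in range(16):
    t = n / fs
    assert math.isclose(math.sin(2 * math.pi * f_sig * t),
                        -math.sin(2 * math.pi * f_alias * t),
                        abs_tol=1e-9)
print(f_alias)   # 30.0
```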

The cutoff frequency is called the “Nyquist frequency”.

In optics the Nyquist frequency equals one line per pixel = half a line pair per pixel.

At the Nyquist frequency the MTF reaches zero; we have no contrast there.

The cutoff frequency depends on the F-number and the wavelength:

cutoff frequency = 2 / Airy-disk diameter = 1 / Airy-disk radius = 1 / (1.22 × wavelength × F-number)

One can resolve about 70% of the Nyquist frequency: 0.7 × Nyquist frequency ≈ Nyquist frequency / 1.41.
If you want a contrast of about 20%, you should divide the Nyquist frequency by 1.41.
If you even want a chance to reach 50% contrast, you should divide by 2.

This contrast is then reached over a range of 4 pixels.
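The cutoff-frequency formula above can be evaluated directly; the wavelength and F-number below are illustrative assumptions:

```python
def cutoff_lp_per_mm(wavelength_um, f_number):
    """Cutoff frequency per the formula above:
    1 / (1.22 * wavelength * FNumber) = 1 / Airy-disk radius."""
    wavelength_mm = wavelength_um / 1000.0
    return 1.0 / (1.22 * wavelength_mm * f_number)

# green light (0.55 um) at F/2.8 -- illustrative values
print(round(cutoff_lp_per_mm(0.55, 2.8)))   # 532 lp/mm
```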

Let's say we have a sensor with 2.2 µm pixel pitch. Then the Nyquist frequency is 1/2 line pair per pixel.

1 mm = 1000 µm holds 1000/2.2 ≈ 454 pixels.

So the Nyquist frequency = 0.5 lp/pixel = (454/2) lp/mm = 227 lp/mm.

There we have 0% contrast.

To have a chance of 50% contrast, we have to divide this value by 2 and get (227/2) lp/mm = 113.5 lp/mm.
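The worked example as a short sketch (exact division gives 227.3 lp/mm rather than the rounded 227; the helper name is an illustrative assumption):

```python
def nyquist_lp_per_mm(pixel_pitch_um):
    """Sensor Nyquist frequency: 0.5 line pairs per pixel."""
    pixels_per_mm = 1000.0 / pixel_pitch_um
    return 0.5 * pixels_per_mm

nyq = nyquist_lp_per_mm(2.2)    # 2.2 um pixel pitch, as in the example
print(round(nyq, 1))            # 227.3 lp/mm -> 0% contrast
print(round(nyq / 2, 1))        # 113.6 lp/mm -> chance of ~50% contrast
```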

# sign conventions

In order to arrive at the same optical formulas across various authors, an agreement on a sign convention is necessary:

The z-axis of a system is the optical axis.
As usual we assume that the light passes from left to right through the lens elements.
Initially the light travels from -z to +z.

The y-axis is perpendicular to the z-axis and lies in the plane of the monitor/paper.
The x-axis is perpendicular to the z-axis and the y-axis and is directed into the screen/paper.

The first optical surface then has the radius R1 and the second optical surface the radius R2, where infinite values indicate plano surfaces (blue arcs in the graphic).

If the light first meets the optical surface and then the center of curvature, the radius has a positive sign (green arcs); otherwise it has a negative sign (red arcs).

R_a above is positive and R_b is negative.

Angles are measured between the optical axis and the ray, where the smaller of the two intersection angles is used.
Incidence angles are measured between the surface normal and the incident ray.

Signs of refractive indices are negated after a reflection.

[table caption=”sign conventions” width=”500″ colwidth=”40|20|100″ colalign=”left|center|left”]
measure,sign,explanation
object distance,+,object is left of the refracting surface
object distance,-,object is right of the refracting surface
image distance,+,image point is right of the refracting surface
image distance,-,image point is left of the refracting surface
radius of curvature,+,center is right of the refracting surface
radius of curvature,-,center is left of the refracting surface
focal length (object side), +, left of the lens
focal length (image side), -, right of the lens
object distance from focal point F,-,left of object side focal point
image distance from focal point F’,+,right of image side focal point
object height,+,above optical axis
object height,-,below optical axis
angle,+,measured counterclockwise
angle,-,measured clockwise
[/table]
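As an application of the radius convention, here is a sketch of the thin-lens lensmaker equation 1/f = (n - 1) (1/R1 - 1/R2); the numeric values are illustrative assumptions:

```python
import math

def lensmaker_focal_length(n, r1, r2):
    """Thin-lens lensmaker equation 1/f = (n - 1) * (1/R1 - 1/R2).
    Sign convention as in the table: R > 0 if the center of curvature
    lies to the right of the surface; float('inf') marks a plano surface."""
    c1 = 0.0 if math.isinf(r1) else 1.0 / r1
    c2 = 0.0 if math.isinf(r2) else 1.0 / r2
    return 1.0 / ((n - 1.0) * (c1 - c2))

# symmetric biconvex lens, n = 1.5: R1 = +100 mm, R2 = -100 mm
print(round(lensmaker_focal_length(1.5, 100.0, -100.0), 1))       # 100.0 (mm)
# plano-convex lens: curved front surface, flat back surface
print(round(lensmaker_focal_length(1.5, 100.0, float("inf")), 1))  # 200.0 (mm)
```

Note how the biconvex lens only converges because R2 carries a negative sign under this convention.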

# stereographic

Stereographic lenses are a class of fisheye lenses.

Stereographic lenses are also called “conformal” (German: “winkeltreu”, angle-preserving).
Stereographic lenses use image mapping functions of the type r = 2 · f · tan(θ/2), the standard stereographic mapping, which

maintains angles.
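Assuming the standard stereographic mapping r = 2·f·tan(θ/2), with θ measured from the optical axis, a small sketch (function name and focal length are illustrative):

```python
import math

def stereographic_r(f_mm, theta_deg):
    """Image height of a stereographic fisheye: r = 2*f*tan(theta/2),
    where theta is the angle from the optical axis."""
    return 2.0 * f_mm * math.tan(math.radians(theta_deg) / 2.0)

# f = 1.0 mm; the mapping stays finite for angles up to (but not at) 360 deg
for theta in (45, 90, 135):
    print(theta, round(stereographic_r(1.0, theta), 3))
# 45 0.828
# 90 2.0
# 135 4.828
```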

[table caption=”stereographic lens types” width=”400″]
type,weak,medium,strong,max
angle,94°,131°,180°,< 360°
[/table]

Example of a stereographic image: