
Object to image distance

pericentric

= hypercentric

With pericentric lenses, objects at larger distances appear larger(!) and objects at closer distances appear smaller.

Pericentric lenses allow, for example, viewing a can from the top and the sides at the same time.

This reverses our normal viewing experience.

Pericentric lenses have to be MUCH larger than the object under inspection.
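
As a rough illustration of that magnification reversal (a minimal toy model, not a description of a real pericentric lens design): the chief rays of a pericentric lens converge towards a point in front of the lens, beyond the object, so the apparent size of an object slice depends on its distance to that convergence point rather than to the lens. The numbers below are made-up example values.

```python
# Toy chief-ray model of a pericentric (hypercentric) lens.
# Assumption: all chief rays converge at a single point at an assumed distance D
# in front of the lens, beyond the object ("pinhole at the convergence point").
# All distances in mm, chosen only for illustration.

D = 300.0       # assumed distance from lens to the chief-ray convergence point
radius = 40.0   # radius of a can-like object

for z in (100.0, 150.0, 200.0):       # object slices at different distances from the lens
    apparent_size = radius / (D - z)  # angular size as seen from the convergence point
    print(f"slice at z = {z:5.1f} mm -> apparent size {apparent_size:.3f} (arbitrary units)")

# The printed apparent size GROWS with the distance z:
# farther slices appear larger -- the reverse of a normal (entocentric) lens.
```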

see “comparison: entocentric – telecentric – pericentric”

Application: A cylinder with a drilled hole that is centered on one circular face and off-center on the opposite face is to be inspected for foreign parts in the hole.
The rotation of the cylinder is not known, so we would need a lens that can look from all sides “outside-in” at the correct angle.
Solution: DIY with the help of a Fresnel lens, a normal M12 lens and the graphic calculator below …

Purple Fringing

Purple-colored rim around dark objects, typically on a white background


Purple fringing (Image: webfound)

Purple fringing is the visible effect of lateral chromatic aberration.

You can imagine this as if one of the red / green / blue images on a color sensor were a bit too small, for example the blue image. Instead of reaching the white area together with green and red, it reaches the black area closer to the image center, where no light was expected at all.

That’s why the effect only occurs where (nearly) white and (nearly) black regions meet. If a region is white anyway, it doesn’t matter if all (say, blue) light rays arrive a bit too close to the center, as the “gaps” are filled by other rays. Where black and white areas meet, however, the gaps can’t be filled.

On a monochrome sensor the effect shows up as blurry black-white edges.
The effect is stronger on sensors with smaller pixels, as it’s easier for a ray to end up in the neighbouring pixel.
If a bluish ray arrives, say, 2.5 µm closer to the center than the red and green rays, that’s no visible problem on a sensor with 6 µm pixels, as we’re pretty much still in the same pixel. On a sensor with 1.67 µm pixels we have fully reached the neighbouring pixel already.
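
A quick back-of-the-envelope check of those numbers (the 2.5 µm shift and the 6 µm / 1.67 µm pitches are the example values from above; the 3.45 µm line is an extra, assumed data point):

```python
# How many pixels does a given lateral colour shift span?
#   shift / pixel_pitch  < 1  -> the ray stays (mostly) inside the same pixel
#   shift / pixel_pitch >= 1  -> the ray clearly reaches the neighbouring pixel

shift_um = 2.5                      # lateral shift of the blue image (example value)
for pitch_um in (6.0, 3.45, 1.67):  # pixel pitches in micrometers
    offset = shift_um / pitch_um
    print(f"pixel pitch {pitch_um:4.2f} um -> shift of {offset:.2f} pixels")
```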

rad

For a circle of radius r, ‘1 rad’ is the angle corresponding to an arc of length r on the circle,

i.e.
1\,\mathrm{rad} = \frac{360^\circ}{2\pi} = \frac{180^\circ}{\pi} \approx 57.29577951^\circ

accordingly:
1\,\mathrm{mrad} = \frac{1\,\mathrm{rad}}{1000}:

1\,\mathrm{mrad} = \frac{360^\circ}{2000\pi} = \frac{180^\circ}{1000\pi} \approx 0.05729577951^\circ
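
A small sketch of these conversions, together with the handy rule that arc length = radius × angle (in rad), so 1 mrad corresponds to roughly 1 mm per metre of distance:

```python
import math

# rad / mrad to degrees, and the arc-length rule s = r * angle (angle in rad)
print(math.degrees(1.0))     # 1 rad  ~ 57.29577951 degrees
print(math.degrees(1.0e-3))  # 1 mrad ~  0.05729577951 degrees

distance_m = 100.0           # example distance
angle_mrad = 1.0
arc_mm = distance_m * angle_mrad * 1e-3 * 1000
print(f"{angle_mrad} mrad at {distance_m} m spans about {arc_mm:.0f} mm")  # ~100 mm
```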

Scheimpflug principle

Normally, the focus plane is at 90 degrees to the optical axis. This is due to symmetry reasons.
A problem arises when two objects have such different distances that they can not be focussed at the same time.

Different object distances (figure)

Theodor Scheimpflug had a genius idea: let’s tilt the camera!

Scheimpflug’s idea (figure)

Then all points in the A-B plane will be focussed!

Just tilting the camera is of course not enough to get a focussed image. The Gaussian focus equation must also be satisfied.
The Gauss equation is, however, equivalent to the second Scheimpflug principle.

First Scheimpflug principle:

Three planes must share a common line:

  • The tilted plane containing the desired objects
  • The sensor plane
  • A plane perpendicular to the optical axis of the lens.

For a theoretical “thin lens” (= of virtual length 0) it’s clear where this plane is. For the exact location in a real-world lens, see below.

For a mind game, let’s keep the sensor plane and the object plane fixed and non-parallel. This defines a shared common line in 3D space. Through each line in space there is an infinite number of planes containing it.
Obviously not all of them can be the plane of best focus.
In other words:

The first Scheimpflug principle is just a necessary condition, but not a sufficient condition to get a focussed image of a tilted object plane on the sensor.

In general the lens is tilted, but the image is not focussed.
However, as soon as we use the lens focus mechanism, the first Scheimpflug principle is not satisfied any more; we would have to tilt the lens a little to satisfy the first criterion again, but then the image is not focussed any more, and so on.

The second (sufficient) condition can be the

Gauss focus equation:

\frac{1}{\text{focal length}} = \frac{1}{\text{object distance}} + \frac{1}{\text{image distance}}

But instead of the Gauss focus equation we can use the

Second Scheimpflug principle:

These three planes must share a common line:

  • The tilted plane containing the desired objects
  • A plane through the lens center, parallel to the sensor plane
  • A plane perpendicular to the optical axis of the lens shifted by the focal length.
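
As a quick plausibility check (a minimal 2D sketch under the thin-lens assumption, with made-up focal length and object-plane numbers, not a model of a real lens): the snippet below images a few points of a tilted object line through a thin lens using the Gauss focus equation and verifies that the image points are again collinear and that the object line, the lens plane and the image line all meet in one common point, the 2D analogue of the common line in the first Scheimpflug principle.

```python
# 2D thin-lens check of the first Scheimpflug principle.
# Lens at the origin, optical axis along x, lens plane = the y-axis (x = 0).
# Focal length and object line below are arbitrary example values.

f = 50.0           # focal length in mm
d, a = 300.0, 0.5  # tilted object line: x = -d + a*y

def image_of(y_o):
    """Image one point of the object line through the thin lens (Gauss equation)."""
    x_o = -d + a * y_o
    s_o = -x_o                         # object distance (positive)
    s_i = 1.0 / (1.0 / f - 1.0 / s_o)  # from 1/f = 1/s_o + 1/s_i
    m = -s_i / s_o                     # lateral magnification (image is inverted)
    return s_i, m * y_o                # image point (x_i, y_i)

pts = [image_of(y) for y in (-40.0, 0.0, 40.0, 80.0)]

# 1) The image points are collinear: all consecutive slopes agree.
slopes = [(y2 - y1) / (x2 - x1) for (x1, y1), (x2, y2) in zip(pts, pts[1:])]
print("image-line slopes:", [round(s, 6) for s in slopes])

# 2) Object line, lens plane (x = 0) and image line share one common point.
y_object_at_lens = d / a                # from 0 = -d + a*y
(x1, y1), s = pts[0], slopes[0]
y_image_at_lens = y1 + s * (0.0 - x1)   # extend the image line to x = 0
print("common point on the lens plane:", y_object_at_lens, "vs", round(y_image_at_lens, 3))
```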

Situations & Applications where to use the Scheimpflug principle:

  • Objects to be focussed have various distances from the camera (a poster on the wall, the facade of a building with the camera looking upwards, or a document on a table far away from the camera)
  • The camera can not be mounted where it should be (for example to stay out of the way of a robot)
  • The camera looks at an angle onto a more or less flat object
  • Cameras for autonomous vehicles that have to follow lines or signs on the floor
  • Whenever the desired plane of focus is not parallel to the camera sensor
  • Laser-Triangulation
The following interactive drawing is just for illustration purposes!

Usage: First place the object center (the green dot, the spot where the optical axis meets the object) at a location you like, for example at 60 on the x-axis.
Then move the lens (the other green dot) to a position where it is actually possible to place the camera lens.
The interactive graphic keeps the optical axis in the center of the lens and maps the edges of the sensor to the wanted object plane.
The magnification is measured perpendicular(!) to the optical axis.
Keep in mind that on your monitor you’ll see a trapezoid (trapezium).

Use of lenses under water

If you want to use lenses designed for use “in air” in a housing under water, please do NOT use a plane window! The reasons become clear from the interactive graphics below.

If there is no other option than to use a plane window, then place it close to the lens.

Instead, you should use a spherical window that shares its center with the entrance pupil (the center of the apparent hole when the lens is viewed from the front).
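
A rough numerical illustration of the difference (a minimal Snell’s-law sketch; the refractive indices and the 20° ray angle are assumed example values, and the glass of the window itself is ignored for simplicity):

```python
import math

# Why a plane window bends the rays while a spherical window centered on the
# entrance pupil does not (thin window, glass ignored: water -> air only).

n_water, n_air = 1.33, 1.00
angle_in_water_deg = 20.0  # chief-ray angle towards the entrance pupil (example value)

# Plane window: Snell's law  n_water * sin(theta_w) = n_air * sin(theta_a)
theta_w = math.radians(angle_in_water_deg)
theta_a = math.asin(n_water / n_air * math.sin(theta_w))
print(f"plane window    : {angle_in_water_deg:.1f} deg in water -> "
      f"{math.degrees(theta_a):.1f} deg in air (ray is bent, angles no longer match the lens design)")

# Spherical window centered on the entrance pupil: every chief ray aimed at the
# pupil hits the sphere along its surface normal, so the angle of incidence is 0
# and the refracted angle is 0 as well -> the chief rays pass unbent.
theta_refracted = math.asin(n_water / n_air * math.sin(0.0))
print(f"spherical window: incidence 0.0 deg -> refraction "
      f"{math.degrees(theta_refracted):.1f} deg (chief ray passes unbent)")
```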


Vidicon tube

Before there were CCD and CMOS-sensors, there were Vidicon tubes.

Why mention them? These light-receiving tubes influence the names for the sizes of our imaging sensors to this day.


Vidicon tube (C) Wikipedia

The dark gray round area of the tube is the light-sensitive part. Obviously the dark gray area can not reach the full diameter of the tube.

Lenses have a so-called image circle, the round area on the image side of the lens that receives light. A matching lens has an image circle that is large enough to expose the dark gray part to light. If the dark area was 6 mm in diameter, we talk of a 1/3″ lens, because the outer diameter of the Vidicon tube is 1/3″ = 25.4/3 mm ≈ 8.467 mm.
But does a 1″ lens have an image circle that is 3× as large as that of a 1/3″ lens?
A third-inch lens has a 6 mm image circle, so a one-inch lens should have 3 times as much, i.e. 18 mm. It is only 16 mm, however, because a Vidicon tube with a 16 mm diameter dark area had an outer diameter of one inch (25.4 mm).

That’s why 1/3″ has a 6 mm and 1″ a 16 mm image circle 🙂
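
For reference, a short summary of common “inch” format names versus their approximate image-circle (sensor diagonal) values; the 1″ and 1/3″ figures come from the text above, the 2/3″ and 1/2″ values are the commonly quoted ones:

```python
# "Inch" format names: literal tube diameter vs. approximate image circle.
# The 1" and 1/3" image-circle values are from the text above; the 2/3" and 1/2"
# figures are the commonly quoted ones. None equal the literal inch value.

FORMATS = [
    # (format name, literal tube diameter in mm, approx. image circle in mm)
    ('1"',   25.4,         16.0),
    ('2/3"', 25.4 * 2 / 3, 11.0),
    ('1/2"', 25.4 / 2,      8.0),
    ('1/3"', 25.4 / 3,      6.0),
]

for name, tube_mm, circle_mm in FORMATS:
    print(f'{name:5s} format: tube diameter {tube_mm:5.2f} mm, '
          f'image circle approx. {circle_mm:4.1f} mm')
```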