describes how much larger an object is displayed on a monitor than it is in real life.
Suppose the sensor diagonal is 6 mm and the monitor diagonal is 50″ = 127 cm = 1270 mm.
Then the monitor magnification is
    monitor magnification = (object size on the monitor) / (object size in reality)
and the optical magnification is
    β = (object size on the sensor) / (object size in reality).
Because the image is scaled from sensor to monitor by the ratio of the diagonals,
    1270 mm / 6 mm ≈ 212,
we get:
    monitor magnification ≈ 212 · β.
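The numbers above can be checked in a few lines of Python; the optical magnification beta used at the end is only an assumed example value, not a figure from the text:

```python
# Worked example for the monitor magnification:
# 6 mm sensor diagonal, 50" monitor diagonal.

sensor_diag_mm = 6.0
monitor_diag_mm = 50 * 25.4        # 50 inches = 1270 mm

scale = monitor_diag_mm / sensor_diag_mm   # sensor-to-monitor blow-up factor
print(round(scale, 1))                     # 211.7

beta = 0.05                                # assumed optical magnification
beta_monitor = scale * beta                # how much larger than real life
print(round(beta_monitor, 2))              # 10.58
```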
With pericentric lenses, objects at larger distances appear larger(!) and objects at closer distances appear smaller.
Pericentric lenses make it possible, for example, to view a can from the top and from the sides at the same time.
This reverses our normal viewing experience.
Pericentric lenses have to be MUCH larger than the object under inspection.
see “comparison: entocentric – telecentric – pericentric”
Purple-colored rim around dark objects, typically on a white background.
Purple fringing is the visible effect of lateral chromatic aberration.
You can imagine it as if one of the red / green / blue images on a color sensor were a bit too small, for example the blue image. Instead of reaching the white area together with green and red, it reaches the black area closer to the image center, where no light was expected at all.
That’s why the effect only occurs where (nearly) white and (nearly) black regions meet. If a region is white anyway, it doesn’t matter if all (say, blue) light rays arrive a bit too close to the center, as the “gaps” are filled by other rays. Where black and white areas meet, however, the gaps can’t be filled.
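The mechanism can be sketched in one dimension: if the blue image is slightly "too small", blue light lands one pixel closer to the image center than red and green, and a colored seam appears exactly at a black/white edge. The values and the one-pixel shift below are illustrative assumptions:

```python
# 1-D row of pixels: R and G see a black->white edge at index `edge`,
# the undersized blue image sees the same edge one pixel closer to center.

width = 20
edge = 10                                                   # white starts here
red   = [1 if x >= edge else 0 for x in range(width)]
green = [1 if x >= edge else 0 for x in range(width)]
blue  = [1 if x >= edge - 1 else 0 for x in range(width)]   # shifted inward

pixels = list(zip(red, green, blue))
print(pixels[edge - 1])   # (0, 0, 1): blue-tinted seam in the "black" area
print(pixels[edge - 2])   # (0, 0, 0): plain black, the fringe is only 1 px
print(pixels[edge])       # (1, 1, 1): inside the white area nothing changes
```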
For a circle of radius r, ‘1 rad’ is the angle corresponding to an arc of length r on the circle.
Since the full circumference is 2πr, a full circle corresponds to
    2π rad = 360°,
and accordingly:
    1 rad = 360° / (2π) ≈ 57.296°.
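The definition translates directly into the usual conversion formulas:

```python
import math

# A full circle is 2*pi rad = 360 degrees, so:

def deg_to_rad(deg):
    return deg * math.pi / 180.0

def rad_to_deg(rad):
    return rad * 180.0 / math.pi

print(round(rad_to_deg(1.0), 3))              # 57.296 -> one radian in degrees
print(round(deg_to_rad(360.0) / math.pi, 1))  # 2.0 -> full circle = 2*pi rad
```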
A normal lens has pincushion or barrel distortion, which can be corrected to give a perfect perspective projection, like the image of a pinhole camera.
This process is called “rectification” and is often applied for stitching images, for example in panoramic photography.
The resulting image has no distortion.
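Rectification can be sketched with the common one-coefficient radial model, r_distorted = r · (1 + k1 · r²), in normalized image coordinates; the coefficient and the sample point below are assumed example values, not parameters of any real lens:

```python
# Barrel distortion (k1 < 0) and its rectification with the simple
# one-coefficient radial model. All numbers are assumed example values.

k1 = -0.15                         # assumed distortion coefficient

def distort(x, y):
    """Apply r_d = r * (1 + k1*r^2) to the point (x, y)."""
    s = 1.0 + k1 * (x * x + y * y)
    return x * s, y * s

def rectify(xd, yd, iterations=20):
    """Invert the radial model by fixed-point iteration."""
    x, y = xd, yd
    for _ in range(iterations):
        s = 1.0 + k1 * (x * x + y * y)
        x, y = xd / s, yd / s
    return x, y

x, y = 0.4, 0.3                    # ideal (undistorted) image point
xd, yd = distort(x, y)             # where the distorted image puts it
xr, yr = rectify(xd, yd)           # rectification recovers the ideal point
print(round(xr, 6), round(yr, 6))  # 0.4 0.3
```

Real rectification tools (for example in panorama software) use richer distortion models, but the invert-by-iteration idea is the same.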
Normally, the plane of focus is at 90 degrees to the optical axis. This is due to symmetry reasons.
A problem arises when two objects are at such different distances that they cannot be in focus at the same time.
Theodor Scheimpflug had an ingenious idea: let’s tilt the camera!
Then all points in the A-B plane will be in focus!
Of course, just tilting the camera is not enough to get a focused image. The Gaussian focus equation must also be satisfied.
The Gauss equation is, however, equivalent to the second Scheimpflug principle.
Three planes must share a common line: the sensor plane, the lens plane, and the plane of sharp focus.
For a theoretical “thin lens” (of thickness 0), it’s clear where this plane is. For the exact location in a real-world lens, see below.
As a thought experiment, let’s keep the sensor plane and the object plane fixed and non-parallel. This defines a shared common line in 3D space. Through each line in space there is an infinite number of planes containing it.
Obviously, not all of them can be the plane of best focus.
Say:
In general, the lens is tilted but the image is not focused.
However, as soon as we use the lens’s focus mechanism, the first Scheimpflug principle is no longer satisfied. We would have to tilt the lens a little to satisfy the first criterion, but then the image is out of focus again, and so on.
The second (sufficient) condition can be the Gaussian focus equation.
But instead of the Gauss focus equation we can use the second Scheimpflug principle.
These three planes must share a common line: the plane of sharp focus, the front focal plane of the lens, and the plane through the lens center parallel to the sensor.
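The Scheimpflug condition can be verified numerically in a 2-D cross-section (optical axis along z, thin lens at the origin), where planes become lines and the common line becomes a single point; focal length, object-plane position and tilt are assumed example values:

```python
# A tilted object line is imaged point by point with the thin-lens equation.
# The image points turn out to lie on one line, and object line, lens plane
# and image line meet in a single point: the common "Scheimpflug line",
# seen edge-on. All numbers are assumed example values.

f = 50.0           # focal length in mm (assumed)
a, m = 200.0, 0.5  # object line z = -(a + m*x): tilted, 200 mm in front

def image_of(x):
    """Image of the object point (x, -(a + m*x)) through the thin lens."""
    u = a + m * x              # object distance along the optical axis
    v = f * u / (u - f)        # thin-lens equation: 1/u + 1/v = 1/f
    return (-x * v / u, v)     # lateral magnification is -v/u

(x1, z1), (x2, z2), (x3, z3) = [image_of(x) for x in (-100.0, 0.0, 100.0)]

# 1) The three image points are collinear -> a tilted plane of sharp focus.
cross = (x2 - x1) * (z3 - z1) - (z2 - z1) * (x3 - x1)
print(abs(cross) < 1e-9)       # True

# 2) The image line crosses the lens plane (z = 0) exactly where the
#    object line does, at x = -a/m: all three planes share one line.
slope = (z2 - z1) / (x2 - x1)
x_at_lens_plane = x1 - z1 / slope
print(abs(x_at_lens_plane - (-a / m)) < 1e-6)  # True
```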
Usage: First place the object center (the green dot, the spot where the optical axis meets the object) at a location you like, for example at 60 on the x-axis.
Then move the lens (the other green dot) to a location where it’s physically possible to place the camera and lens.
The interactive graphic keeps the optical axis in the center of the lens and maps the edges of the sensor to the wanted object plane.
The magnification is measured perpendicular(!) to the optical axis.
Keep in mind that on your monitor you’ll see a trapezoid (trapezium).
If you want to use lenses designed for use “in air” in a housing under water, please do NOT use a plane window! The reasons become clear from the interactive graphics below.
If there is no other choice than to use a plane window, then place it close to the lens.
Instead, you should use a spherical window that shares its center with the entrance pupil (the center of the apparent hole seen when looking into the lens from the front):
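Why the plane window is problematic follows from Snell’s law: every non-perpendicular ray is bent at the water-air transition, and steeper rays disproportionately more. A small sketch with the standard refractive indices (the thin glass plate itself only offsets a ray, so it is ignored here):

```python
import math

n_water, n_air = 1.333, 1.000   # standard refractive indices

def refracted_angle_deg(theta_water_deg):
    """Ray angle in the air behind a plane window, from Snell's law:
    n_water * sin(theta_water) = n_air * sin(theta_air)."""
    s = n_water / n_air * math.sin(math.radians(theta_water_deg))
    return math.degrees(math.asin(s))

# A plane window bends every non-perpendicular ray, steeper rays more
# strongly -> distortion and a reduced usable field of view:
for theta_deg in (10, 30, 45):
    print(theta_deg, "->", round(refracted_angle_deg(theta_deg), 1))
# e.g. 30 degrees in the water become about 41.8 degrees behind the window.

# A spherical window centered on the entrance pupil, by contrast, is hit
# perpendicularly by every chief ray (angle of incidence 0): no refraction.
```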
Before there were CCD and CMOS sensors, there were Vidicon tubes.
Why mention them? These light-receiving tubes influence the names for the sizes of our imaging sensors to this day.
The dark gray round area of the tube is the light-sensitive part. Obviously, the dark gray area cannot reach the full diameter of the tube.
Lenses have a so-called image circle, the round area on the image side of the lens that receives light. A lens needs an image circle that’s large enough to expose the dark gray part to light. If the dark area was 6 mm in diameter, we talk of a 1/3″ lens, because the outer diameter of the Vidicon tube is 1/3″ = 25.4/3 mm = 8.467 mm.
But does a 1″ lens have an image circle three times as large as that of a 1/3″ lens?
A third-inch lens has a 6 mm image circle, so a one-inch lens should have three times as much, say 18 mm. It is only 16 mm, however, because a vidicon tube with a 16 mm diameter dark area had an outer diameter of one inch (25.4 mm).
That’s why 1/3″ has 6mm and 1″ has 16mm image circle 🙂
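The arithmetic above, recomputed; the 1/2″ and 2/3″ entries in the table are the commonly quoted historic values, added here only for comparison:

```python
# The sensor-size name refers to the vidicon tube's OUTER diameter,
# the image circle to the smaller light-sensitive area inside it.

inch = 25.4
print(round(inch / 3, 3))      # 8.467 -> outer diameter of a 1/3" tube in mm

# Image circle (mm) per sensor-size name (historic vidicon values):
image_circle_mm = {'1/3"': 6.0, '1/2"': 8.0, '2/3"': 11.0, '1"': 16.0}

ratio = image_circle_mm['1"'] / image_circle_mm['1/3"']
print(round(ratio, 2))         # 2.67 -> not 3, as explained above
```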