In microscopy, parfocal lenses stay in focus when the magnification is changed.
With pericentric lenses, objects at larger distances appear larger(!) and objects at closer distances appear smaller.
Pericentric lenses allow, for example, viewing a can from the top and the sides at the same time.
This reverses our normal viewing experience.
Pericentric lenses have to be MUCH larger than the object under inspection.
The rotation of the cylinder is not known, so we would need a lens that can look from all sides “outside-in” at the correct angle.
Solution: DIY with the help of a Fresnel Lens, a normal M12 lens and the graphic calculator below …
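As a toy illustration of the "farther appears larger" effect: in a pericentric (hypercentric) setup the entrance pupil lies in front of the lens, beyond the object, and an object's apparent size is set by the angle it subtends at that pupil. The function and the numbers below are my own sketch, not values from the article:

```python
import math

def apparent_angle(object_height_mm, object_dist_mm, pupil_dist_mm):
    """Half-angle (radians) an object subtends at the entrance pupil.

    object_dist_mm: distance lens -> object
    pupil_dist_mm:  distance lens -> entrance pupil (must exceed object_dist_mm)
    """
    return math.atan(object_height_mm / (pupil_dist_mm - object_dist_mm))

# Assumed geometry: entrance pupil 500 mm in front of the lens, 10 mm tall object.
near = apparent_angle(10, 100, 500)  # object 100 mm from the lens
far = apparent_angle(10, 300, 500)   # object 300 mm from the lens

# Counter-intuitively, the farther object subtends the LARGER angle:
print(far > near)  # True
```

The farther object is closer to the entrance pupil, so it subtends a larger angle and is imaged larger, which is exactly the reversal of normal perspective described above.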
A kind of vignetting that occurs exclusively with digital cameras.
Possible causes are:
- The pixels do not sit completely flat on the sensor surface, but in small cavities. Light arriving at too shallow an angle casts shadows on the edges of the pixels, just as the evening sun at some point no longer reaches mountain valleys.
- The sensor uses micro-lenses (small converging lenses) to capture as much light as possible for each pixel. Beyond a certain off-axis angle these lenses can no longer deflect the light strongly enough, and the light no longer reaches the pixel.
- With image-side telecentric lenses such vignetting does not occur, because the incident light rays are parallel to the optical axis.
The latest sensor technologies, however, try to correct this pixel vignetting on-chip (= directly in the sensor), for example with micro-lenses that are shaped differently in the corners than in the center.
Thus it may happen that image-side telecentric lenses surprisingly show vignetting.
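The micro-lens limit in the second bullet can be sketched with a crude acceptance-angle model. The cutoff angles and the linear falloff below are assumptions of mine for illustration, not data from any sensor vendor:

```python
def relative_response(incidence_deg, max_cra_deg=25.0):
    """Toy pixel response model: full response up to an assumed maximum
    chief ray angle (CRA), then a linear falloff to zero by twice that angle.
    Real sensors publish measured CRA curves instead of this straight line."""
    a = abs(incidence_deg)
    if a <= max_cra_deg:
        return 1.0
    if a >= 2 * max_cra_deg:
        return 0.0
    return 1.0 - (a - max_cra_deg) / max_cra_deg

# Response drops for rays hitting the corner pixels at steep angles:
for angle in (0, 10, 25, 35, 50):
    print(angle, relative_response(angle))
```

Since ray angle grows toward the image corners with a non-telecentric lens, this falloff appears as corner shading, i.e. pixel vignetting.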
The extension of a light ray parallel to the optical axis entering the first lens of a lens system and the extension of the corresponding ray leaving the last lens element intersect in a plane called the image-side principal plane.
The extension of a light ray parallel to the optical axis entering the last lens of a lens system and the extension of the corresponding ray leaving the first lens element intersect in a plane called the object-side principal plane.
Each (symmetric) lens has two principal planes. These (hypothetical) planes are perpendicular to the optical axis and are the planes at which light rays arriving parallel to the axis from infinity appear to bend (and then pass through the respective focal points).
This only applies to paraxial optics, i.e. very close to the optical axis.
For rays more distant from the optical axis, spherical aberration distorts this behaviour.
For a single thin lens, the two principal planes merge and can be approximated by the center of the lens.
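For a single thick lens the two principal planes can be located explicitly with the standard thick-lens (lensmaker's) formulas. The sketch below uses those textbook formulas with an example lens of my own choosing; radii are positive when the surface bulges toward the object:

```python
def thick_lens(n, r1_mm, r2_mm, d_mm):
    """Return (f, h1, h2) for a single thick lens via the lensmaker's equation.

    f:  effective focal length
    h1: object-side principal plane, measured from the front vertex
    h2: image-side principal plane, measured from the back vertex
    """
    inv_f = (n - 1) * (1 / r1_mm - 1 / r2_mm + (n - 1) * d_mm / (n * r1_mm * r2_mm))
    f = 1 / inv_f
    h1 = -f * (n - 1) * d_mm / (n * r2_mm)
    h2 = -f * (n - 1) * d_mm / (n * r1_mm)
    return f, h1, h2

# Example: symmetric biconvex lens, n = 1.5, |R| = 100 mm, 10 mm thick.
f, h1, h2 = thick_lens(1.5, 100.0, -100.0, 10.0)
# Both principal planes lie inside the glass, symmetric about its center;
# as the thickness d goes to zero they merge at the lens center,
# recovering the thin-lens approximation from the text.
```

For this symmetric lens h1 is positive and h2 negative with equal magnitude, i.e. both planes sit the same distance inside the lens from their respective vertices.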
A purple colored rim around dark objects, typically on a white background.
Purple fringing is the visible effect of lateral chromatic aberration.
You can imagine this as if one of the red / green / blue images on a color sensor were a bit too small, for example the blue image. Instead of reaching the white area together with green and red, it reaches the black area closer to the image center, where no light was expected at all.
That's why the effect only occurs where (nearly) white and (nearly) black regions meet. If a region is white anyway, it doesn't matter if all, say, blue light rays arrive a bit too close to the center, as the "gaps" are filled with other rays. Where black and white areas meet, however, the gaps can't be filled.
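The "blue image a bit too small" picture can be simulated in one dimension. The per-channel magnifications below are values I picked for illustration (blue and red shrunk slightly relative to green); a channel whose image is rendered at scale s shows, at image position x, the scene point at x / s:

```python
EDGE = 150.0  # scene along a radius: black for r < EDGE, white background beyond
SCALES = {"red": 0.99, "green": 1.00, "blue": 0.97}  # assumed channel magnifications

def pixel(x):
    """RGB value at image position x: each channel samples the scene at x/scale,
    because its image is rendered scale-times as large as the green one."""
    return {c: 1 if x / s >= EDGE else 0 for c, s in SCALES.items()}

# Just inside the black region, blue (and some red) from the white
# background has spilled inward, while green has not -> a purple rim:
print(pixel(149))  # {'red': 1, 'green': 0, 'blue': 1}
print(pixel(160))  # deep in the white area: all channels filled
print(pixel(140))  # deep in the black area: no channel reaches here
```

Note that deep inside the white area the displaced channels are "backfilled" by neighboring white scene points, exactly as the text describes, so the fringe is only visible along the black/white edge.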