Diffraction and Depth of Field

timparkin

11-Oct-2011, 14:40

I'm trying to come up with an equation to graph defocus against distance for a typical lens given aperture, focal length, etc.

The thing that is giving me problems is working out how to combine the effects of diffraction and depth of field. Obviously diffraction doesn't just 'stop' once you get to a certain defocus based on depth of field and so you have to combine the two.

Now, using MTFs is the best way of combining the two, but I don't want to jump in that deep immediately.

My current method of combining the two is 1/(1/R1 + 1/R2), using the circle of confusion, but some of the articles I've read suggest a root mean square approach.
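For what it's worth, the two candidate rules can be compared numerically. A throwaway sketch (the blur diameters are made-up figures):

```python
import math

def combine_reciprocal(a, b):
    # The 1/(1/R1 + 1/R2) rule. Note that applied to blur *diameters* it
    # gives a result smaller than either input, so it only makes physical
    # sense applied to resolutions (lp/mm), not to blur widths.
    return 1.0 / (1.0 / a + 1.0 / b)

def combine_root_square(a, b):
    # Root-square rule: d = sqrt(a^2 + b^2), larger than either input.
    return math.sqrt(a * a + b * b)

a, b = 0.05, 0.03  # hypothetical defocus and diffraction blur diameters in mm
print(combine_reciprocal(a, b))   # 0.01875
print(combine_root_square(a, b))  # ~0.0583
```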

Any ideas which may be right?

Tim

aduncanson

11-Oct-2011, 15:38

You may want to read through the following thread:

http://www.largeformatphotography.info/forum/showthread.php?t=36745

timparkin

11-Oct-2011, 15:58

You may want to read through the following thread:

http://www.largeformatphotography.info/forum/showthread.php?t=36745

Done that. The question is whether it applies to diffraction. The usual MTFs deal with simple linear degradation of the image. Diffraction appears to behave differently, and I wanted to check what people think, as various sources use either the root mean square of the disk diameters or the combined MTFs, which give different figures. I don't know what a diffraction MTF would look like either.

The RMS calculation gives the following graph, where the red line is without diffraction and the blue lines include diffraction:

http://static.timparkin.co.uk/static/tmp/depth-of-field.jpg

Tim

Jeff Conrad

11-Oct-2011, 18:12

You might take a look at Depth of Field in Depth (http://www.largeformatphotography.info/articles/DoFinDepth.pdf), under Diffraction. The MTFs don’t include the effects of aberrations, so don’t take the results at large apertures too seriously.

aduncanson

11-Oct-2011, 19:11

Am I misreading your chart, or do I see diffraction actually improving resolution in certain ranges? That would seem to suggest that you have a sign error somewhere.

I think that Struan was suggesting that because the Airy disk is steeper than a Gaussian, 1/r corresponds better to measured results than 1/r^2. The CoC due to defocus is somewhat complex, but would certainly be non-Gaussian. I think that is further reason to use 1/r.

Best of luck, I am interested in what you come up with - Alan

Jeff Conrad

11-Oct-2011, 23:22

Recognize that both linear and root-square (no mean is involved) combinations are rules of thumb that have little theoretical basis, though they’ve been around for a long time—H. Lou Gibson of Kodak discussed this in the 1950s. The calculated MTF for combined defocus and diffraction was developed by H.H. Hopkins in 1955, and as far as I know, it remains the accepted approach. As I mentioned, unless a calculation is made for a specific lens with known aberrations, aberrations are necessarily ignored, so the results at large apertures usually aren’t meaningful. If we’re looking to maximize DoF, though, we’re usually interested in small apertures, so this isn’t a problem.

Certainly, diffraction cannot improve overall sharpness. But it’s not realistic to compare calculations for pure defocus with calculations for combined defocus and diffraction—you can have diffraction without defocus, but you cannot have defocus without diffraction. My Figures 7–10 show pure defocus, and seem to suggest that in some cases the combined MTF is better, but the pure defocus is an impossible condition—I’ve shown it only because everyone else does. A more realistic take is to note that the combined curves (in black) always have lower MTFs than those for pure diffraction (in green)—in other words, defocus always decreases sharpness. Presumably, this is not a surprise to anyone here.

Whatever the approach, using root-square combinations of defocus and diffraction blur spots (as Gibson and Hansma did), or using calculated MTFs for combined defocus and diffraction (as I have done), there is usually an optimal aperture. Hansma and I assumed that in most cases, the focus spread is fixed—the camera position is chosen for the best composition, the lens is selected to provide the desired framing, and the focus spread is then determined by the required DoF. This leaves only one control—the aperture. In the plane of focus, there is a tradeoff between aberrations and diffraction—once the lens is “diffraction limited” at moderate to middle apertures, additional stopping down softens the image because of increased diffraction. At the DoF limits, defocus additionally softens the image, and the tradeoff is usually between defocus and diffraction. Initially, decreasing the aperture decreases defocus, and increases overall sharpness. But at some point, the softening from diffraction exceeds the gains from decreasing defocus, and stopping down further decreases sharpness even at the DoF limits. If you look at my Figures 11, 12, 15, and 16, diffraction places an upper bound on sharpness. And this is for an ideal lens with no aberrations—with any real lens, the sharpness will be less than that shown.

In summary: again, for fixed focus spread, up to a point, stopping down improves sharpness at the DoF limits by decreasing defocus blur. But at some point, the losses from diffraction exceed the gains from decreased defocus, so that further stopping down decreases sharpness. The effect is illustrated in my Figure 13—there is an optimal aperture for every focus spread, and as focus spread increases, the resolving power at the optimal aperture decreases. The results aren’t radically different from Hansma’s using root-square combinations of defocus and diffraction. Obviously, anything that can be done, especially use of tilt or swing, to reduce the focus spread will reduce the required f-number and increase the maximum possible sharpness.
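As a rough numerical illustration of that optimum, here is a root-square sketch in the style of Gibson and Hansma (my own sketch, not Jeff's MTF calculation): take the defocus blur at the DoF limits as Δ/(2N) for focus spread Δ, the diffraction spot as roughly 2.44λN, and combine them root-square.

```python
import math

WAVELENGTH_MM = 0.00055  # ~550 nm (green light), an assumed value

def combined_blur(n, spread_mm):
    # Root-square combination of the two blur-spot diameters
    defocus = spread_mm / (2.0 * n)          # geometric blur at the DoF limits
    diffraction = 2.44 * WAVELENGTH_MM * n   # Airy disk diameter at f-number n
    return math.sqrt(defocus ** 2 + diffraction ** 2)

def optimal_aperture(spread_mm):
    # Setting d(combined_blur)/dN = 0 gives N^2 = spread / (2 * 2.44 * lambda)
    return math.sqrt(spread_mm / (2.0 * 2.44 * WAVELENGTH_MM))

# Example: a 10 mm focus spread on the rail
print(round(optimal_aperture(10.0)))  # 61, i.e. roughly f/64
```

The closed form reduces to roughly N = sqrt(375 Δ) with Δ in mm, which matches Hansma's published rule of thumb.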

Struan Gray

12-Oct-2011, 01:07

Hi all :-)

The fundamental reason why you inevitably ratchet downwards in resolution is that the blurs introduced by aberrations and defocus and by diffraction are uncorrelated. The two physical processes spreading the light out are independent and do not influence each other. One just blurs the blurred result of the other, and you always get extra blur, even if only by a little bit.

You can imagine taking some ideal Platonic capture of the image, blurring it a bit to represent aberrations, and then blurring it some more for diffraction. Mathematically the blurring is done with a convolution: imagine taking some of the light in each pixel and spreading it out into neighbouring pixels. Do the same for all the pixels in the image and each new pixel becomes a weighted sum of itself and its surroundings (there are, of course, analytical approaches which handle the non-digitised continuous analogue case).

The finicky details are in the weightings of the surrounding pixels, or, equivalently, in the kernel of the blurring function. Very, very often, it is assumed that the kernel is a Gaussian bell curve shape. That is because a lot of physical processes do indeed produce a Gaussian shape, but also because it's a good approximation to many other shapes, and because a wondrous piece of maths called the Central Limit Theorem means that combining repeated measurements tends to make the overall shape of the kernel converge to a Gaussian.

There is also laziness and convenience: a Gaussian can be handled analytically, since you can prove all sorts of useful general theorems about how convolutions of Gaussians give new Gaussians with widths which are simply related to the ones you started with. That is where the 1/R^2 + 1/R^2 rule comes in. In real life things are not that simple: for example, the commonest lineshape for atomic spectra is a Lorentzian, and that doesn't even have a defined variance. You can't come up with a variance-based definition of 'width' and you have no choice but to do the convolution explicitly (or cheat, and use a Gaussian).
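The convolutions-of-Gaussians theorem is easy to check numerically; a throwaway sketch in arbitrary units:

```python
import numpy as np

x = np.linspace(-30, 30, 2001)
dx = x[1] - x[0]

def gaussian(sigma):
    # Unit-area Gaussian sampled on the grid
    g = np.exp(-x**2 / (2 * sigma**2))
    return g / (g.sum() * dx)

# Convolving Gaussians of width 2 and 3 gives a Gaussian whose variance
# is the sum of the variances: sigma = sqrt(2^2 + 3^2)
conv = np.convolve(gaussian(2.0), gaussian(3.0), mode="same") * dx
sigma_combined = np.sqrt(np.sum(conv * x**2) * dx)
print(sigma_combined)  # ~3.606 = sqrt(13)
```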

Note that in real life, none of the functions affecting blur in photographs is a Gaussian. Aberrations produce the complex functions seen in spot diagrams, pure defocus is a simple geometric shape, and diffraction is an Airy pattern (for a circular aperture). There is no reason whatsoever to assume that the combination of blurs should follow a 1/R^2 rule.

MTFs come in because they are one part of the Fourier transform of the error kernel. Another useful theorem says that instead of convolving two functions (which is time consuming, even for a computer) you can instead just multiply their Fourier transforms together. Thus combining errors, or adding the effects of multiple blurring mechanisms, becomes a simple matter of multiplying the MTFs. The only issue is that you need to keep track of phase, and MTFs only handle magnitude - in 'real' calculations you use the full Optical Transfer Function, which includes phase.
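That multiply-the-MTFs shortcut can be demonstrated in one dimension. A sketch with stand-in kernels (a Gaussian for aberrations, a top-hat for a defocus disc; arbitrary units):

```python
import numpy as np

n = 256
x = np.arange(n) - n // 2

# Two blur kernels, each normalised to unit area
gauss = np.exp(-x**2 / (2 * 3.0**2))
gauss /= gauss.sum()
tophat = (np.abs(x) <= 5).astype(float)
tophat /= tophat.sum()

def mtf(kernel):
    # Magnitude of the transfer function (phase discarded, as in an MTF)
    return np.abs(np.fft.rfft(kernel))

# Blurring by one kernel and then the other is a (circular) convolution...
combined = np.real(np.fft.ifft(np.fft.fft(gauss) * np.fft.fft(tophat)))

# ...and the MTF of the combined blur is the product of the two MTFs
assert np.allclose(mtf(combined), mtf(gauss) * mtf(tophat))
print(mtf(combined)[0])  # ~1.0 at zero frequency: total energy is conserved
```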

timparkin

12-Oct-2011, 05:36

You might take a look at Depth of Field in Depth (http://www.largeformatphotography.info/articles/DoFinDepth.pdf), under Diffraction. The MTFs don’t include the effects of aberrations, so don’t take the results at large apertures too seriously.

I shall go away and read - I may be some time... :-)

timparkin

12-Oct-2011, 05:40

Hi all :-)

The fundamental reason why you inevitably ratchet downwards in resolution is that the blurs introduced by aberrations and defocus and by diffraction are uncorrelated. The two physical processes spreading the light out are independent and do not influence each other. One just blurs the blurred result of the other, and you always get extra blur, even if only by a little bit.

You can imagine taking some ideal Platonic capture of the image, blurring it a bit to represent aberrations, and then blurring it some more for diffraction. Mathematically the blurring is done with a convolution: imagine taking some of the light in each pixel and spreading it out into neighbouring pixels. Do the same for all the pixels in the image and each new pixel becomes a weighted sum of itself and its surroundings (there are, of course, analytical approaches which handle the non-digitised continuous analogue case).

The finicky details are in the weightings of the surrounding pixels, or, equivalently, in the kernel of the blurring function. Very, very often, it is assumed that the kernel is a Gaussian bell curve shape. That is because a lot of physical processes do indeed produce a Gaussian shape, but also because it's a good approximation to many other shapes, and because a wondrous piece of maths called the Central Limit Theorem means that combining repeated measurements tends to make the overall shape of the kernel converge to a Gaussian.

There is also laziness and convenience: a Gaussian can be handled analytically, since you can prove all sorts of useful general theorems about how convolutions of Gaussians give new Gaussians with widths which are simply related to the ones you started with. That is where the 1/R^2 + 1/R^2 rule comes in. In real life things are not that simple: for example, the commonest lineshape for atomic spectra is a Lorentzian, and that doesn't even have a defined variance. You can't come up with a variance-based definition of 'width' and you have no choice but to do the convolution explicitly (or cheat, and use a Gaussian).

Note that in real life, none of the functions affecting blur in photographs is a Gaussian. Aberrations produce the complex functions seen in spot diagrams, pure defocus is a simple geometric shape, and diffraction is an Airy pattern (for a circular aperture). There is no reason whatsoever to assume that the combination of blurs should follow a 1/R^2 rule.

MTFs come in because they are one part of the Fourier transform of the error kernel. Another useful theorem says that instead of convolving two functions (which is time consuming, even for a computer) you can instead just multiply their Fourier transforms together. Thus combining errors, or adding the effects of multiple blurring mechanisms, becomes a simple matter of multiplying the MTFs. The only issue is that you need to keep track of phase, and MTFs only handle magnitude - in 'real' calculations you use the full Optical Transfer Function, which includes phase.

Hi Struan,

in the case of diffraction and defocus, would you expect the end result to always be worse than the worse of the two? My RMS calculation gives a better result than straight defocus for areas far away from the focus point. I'm presuming this is why 1/R might be better? (Actually, is my mistake using RMS, i.e. d = sqrt(a^2/2 + b^2/2), when it should be the root sum of squares, d = sqrt(a^2 + b^2)?)

Tim "just trying to come up with an approximation but would love to know the facts" Parkin

Struan Gray

12-Oct-2011, 06:09

in the case of diffraction and defocus, would you expect the end result to always be worse than the worse of the two?

Yes.

If a is the width of the blur introduced by the film, and b is the width of the blur for diffraction, then the combined width d is given crudely by:

d = sqrt(a^2 + b^2)

Empirical testing has shown that the following is closer to the truth:

d = a + b

Both give an answer which is larger than a or b individually.

If a' and b' are 'resolutions' in lp/mm or similar units, you have already taken the reciprocal, and the formulae are:

d' = a'b' / sqrt(a'^2 + b'^2)

d' = a'b' / (a' + b')

Again, the latter has been found to give a better fit to optics shining light onto film.

Note that digital and analogue light capture can both lead to an MTF from the recording medium which is higher than 1, i.e. contrast is increased at some spatial frequencies. The integral of the MTFs over the whole passband is limited (otherwise energy would not be conserved during capture), but some parts can be higher than unity if others are less to compensate. Film does this through adjacency effects in development, digital usually with aliasing.

But. When you combine such an MTF with the effects of diffraction, it always reduces. Not necessarily to less than unity, but certainly to less than the value without diffraction.

Note also that many published MTFs are downright vague about normalisation, even if they got it right in testing. I wouldn't get too hung up about the *value* of the MTF, more with how it varies across the passband, and spatially across the image frame.

Jim Jones

12-Oct-2011, 06:39

A practical study of diffraction limited optics reveals unexpected phenomena. For example, at the point where the curves for diffracted and geometric resolution cross, the resolution is greater than either rather than limited by both. This can perhaps be deduced from the below graphs. Resolution counterintuitively increases away from the image center, although the smaller effective pinhole diameter in that direction should decrease resolution.

Another interesting phenomenon is the dark center in the image of a point light source when the pinhole diameter is sufficiently smaller than the diameter which gives maximum sharpness.

Oren Grad

12-Oct-2011, 06:58

...and diffraction is an Airy pattern (for a circular aperture).

Which, of course, the iris diaphragms in modern shutters are not. The math gets messier still.

I'm less interested in the quantitative effect on MTF than I am in the effect on the subjective character of the rendering. But intuition suggests that, overall, non-circular apertures will tend to push the MTF further down, though with a greater propensity toward odd bumps here and there. Does that make any sense?

Jim Jones

12-Oct-2011, 08:01

Oren, it makes as much sense as anything else related to diffraction limited optics.

timparkin

12-Oct-2011, 10:37

Oren, it makes as much sense as anything else related to diffraction limited optics.

:-) agreed

Struan Gray

12-Oct-2011, 11:53

A practical study of diffraction limited optics reveals unexpected phenomena.

Amen.

For example, at the point where the curves for diffracted and geometric resolution cross, the resolution is greater than either rather than limited by both.

I'm not sure I agree with this. My interpretation of the pinhole curves is that the diffraction curve shoots up rapidly for holes smaller than the optimum, and the geometric blur increases linearly for larger holes. The product of the two curves (well, convolution of the point spread functions :-) gives you a minimum in the spread, aka a maximum in the resolution.

This can perhaps be deduced from the below graphs. Resolution counterintuitively increases away from the image center, although the smaller effective pinhole diameter in that direction should decrease resolution.

There are 'spurious resolution' effects in pinhole imaging, but I think the example you gave is a side-effect of pinhole optimisation. The optimum size for a 3.5" focal length pinhole is 0.01" or thereabouts. Larger pinholes are sub-optimal on-axis, but the distance to the film increases as you move off-axis, so the pinhole can become optimum at some angle, and then sub-optimal again. The peak occurs at different positions for axial and sagittal detail because the hole appears elliptical for off-axis light, i.e. it narrows for axial detail, but not for sagittal.
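The off-axis optimum can be sketched with the usual rule-of-thumb optimal pinhole diameter d = c*sqrt(lambda*f). The constant c differs between authors (Struan's 0.01" for 3.5" implies a smaller c than the Rayleigh-style 1.9), so the numbers here are purely illustrative:

```python
import math

WAVELENGTH_MM = 0.00055  # ~550 nm, an assumed value

def optimal_diameter(f_mm, c=1.56):
    # Rule-of-thumb optimum pinhole diameter; the constant c is a choice,
    # with values from roughly 1.2 to 2 appearing in the literature.
    return c * math.sqrt(WAVELENGTH_MM * f_mm)

def optimum_angle_deg(d_mm, f_mm, c=1.56):
    # Off-axis, the pinhole-to-film distance grows as f / cos(theta), so a
    # hole too large on-axis becomes optimal at the angle where
    # d = c * sqrt(lambda * f / cos(theta)).
    # (Foreshortening of the hole itself is ignored here.)
    cos_theta = c**2 * WAVELENGTH_MM * f_mm / d_mm**2
    return math.degrees(math.acos(min(cos_theta, 1.0)))

f = 3.5 * 25.4  # 3.5" focal length in mm
print(optimal_diameter(f))        # ~0.345 mm with c = 1.56
print(optimum_angle_deg(0.45, f)) # angle at which a 0.45 mm hole is optimal
```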

Another interesting phenomena is the dark center in the image of a point light source when the pinhole diameter is enough smaller than than the diameter which gives maximum sharpness.

This one I didn't know. Do you have an example?

The ultimate pinhole complexity is perhaps found in the mathematics of crossline screens for halftoning. There, you have to consider diffraction off the lens aperture, and off the array of pinholes formed by the screen. Worse, the screen is close enough to the film that you have to include second-order corrections and do Fresnel diffraction rather than Fraunhofer (and say goodbye to all those lovely Fourier transforms).

Struan Gray

12-Oct-2011, 12:07

[circular apertures]

Which, of course, the iris diaphragms in modern shutters are not. The math gets messier still.

Well, not so much. The diffraction point spread function is the 2D Fourier transform of the aperture function. For a circular aperture it's an Airy disc. For a square aperture it's sinc functions in x and y multiplied by each other. Hexagons and other shapes are doable analytically, and a doddle for a computer with a decent FFT routine.
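A minimal sketch of that FFT recipe (pure grid units, no physical scaling; a circular mask versus a square one):

```python
import numpy as np

n = 256
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]

# Aperture masks: open = 1, blocked = 0
circle = ((x**2 + y**2) <= 20**2).astype(float)
square = ((np.abs(x) <= 18) & (np.abs(y) <= 18)).astype(float)

def psf(aperture):
    # Fraunhofer (far-field) diffraction: the point spread function is the
    # squared magnitude of the 2D Fourier transform of the aperture.
    field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(aperture)))
    p = np.abs(field) ** 2
    return p / p.sum()

psf_circle = psf(circle)   # Airy-like rings
psf_square = psf(square)   # sinc^2 lobes along the edge normals

# Both PSFs peak at the centre of the shifted frequency grid
print(np.unravel_index(np.argmax(psf_circle), psf_circle.shape))  # (128, 128)
```

Swapping in a polygonal mask shows the brighter lobes along the directions normal to the edges that Struan describes.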

Where I think things can get 'interesting' is with very large aperture lenses. I am not sure that you can then assume 'far field' diffraction. As with the crossline screens I mentioned above, you then need to start messing with Cornu spirals and Fresnel integrals.

I'm less interested in the quantitative effect on MTF than I am on the effect on the subjective character of the rendering. But intuition suggests that overall, non-circular apertures will tend to push the MTF further down, though with a greater propensity toward odd bumps here and there. Does that make any sense?

Crudely, the aperture hole is larger across the corners, so you get an increased resolution in those directions. That could lead to odd-looking textures, especially if you have, say, a fabric or distant building with strong periodicity at the resolution limit.

In practice though, although the jump from a triangle to a square or pentagon is pretty significant, by the time you are going from pentagon to heptagon the differences are small, especially with modern shutters where the aperture blades are often curved so that they approximate a constant-diameter polygon (see "How round is your circle? (http://www.howround.com/)" for some truly diverting mathematical engineering).

I prefer heptagonal or better apertures to polygons with fewer sides, but that is mostly because they minimise the busy look of fully out-of-focus background detail and specular highlights. I don't usually use apertures where diffraction is noticeable though.

Struan Gray

12-Oct-2011, 12:13

Ooops. In my earlier post I crossed the width and resolution formulae. For blur widths:

d = sqrt(a^2 + b^2)

d = a + b

and in resolution units the corresponding forms are:

d' = a'b' / sqrt(a'^2 + b'^2)

d' = a'b' / (a' + b')

Pythagoras will never forgive me.

Jim Jones

12-Oct-2011, 21:32

"Another interesting phenomena is the dark center in the image of a point light source when the pinhole diameter is enough smaller than than the diameter which gives maximum sharpness."

Struan, I don't have an example, but as I recall from many years ago, a pinhole of about .005 inches held close to the eye when looking at a small bright light will show the effect.

Struan Gray

12-Oct-2011, 23:35

Struan, I don't have an example, but as I recall from many years ago, a pinhole of about .005 inches held close to the eye when looking at a small bright light will show the effect.

Thanks Jim, that makes more sense. It depends on how close is 'close', but if you're close enough the extra terms of Fresnel diffraction can give a central dark area.

'Close enough' essentially means a failure of the small angle approximation. If you are close enough that you need to worry about the difference between sin(theta) and tan(theta) then the Fresnel integrals come into play.

The oddest of all the standard diffraction phenomena is probably the Arago spot (http://en.wikipedia.org/wiki/Arago_spot): a bright spot that appears in the middle of the shadow of a circular disc because of constructive interference from wavefronts from all around the circumference. Not often seen in camera optics, but it does turn up in defocussed star images.
