View Full Version : New articles by Jeff Conrad on DoF

QT Luong

25-Feb-2006, 00:01

Jeff Conrad has split his exposition of DoF into two new articles

Introduction to DoF (http://www.largeformatphotography.info/articles/IntroToDoF.pdf),

which is now a true "introduction", and

DoF in Depth (http://www.largeformatphotography.info/articles/DoFinDepth.pdf), which

will likely be a definitive reference on the subject, revisiting also the "optimal f-stop" approach.

Please feel free to leave any constructive comments on the new articles in this thread.

For reference, comments on the old article are available

here (http://www.largeformatphotography.info/lfforum/topic/499846.html).

David Martin

25-Feb-2006, 10:52

Equation 3 (long form) in the intro (top of page 4) - should N (LHS) be the f-number, not the focal length?

Otherwise a very nice article.

..d

Jeff Conrad

25-Feb-2006, 19:45

That would make more sense, wouldn't it?

Eric Woodbury

8-Mar-2006, 15:09

No doubt definitive, but are the Hansma numbers I'm using non-optimal? I'll never know. Hansma's article was simple and practical.

It would be interesting to see actual photographs demonstrating the effects that are characterized by all those equations.

Also, as a BW photographer, I'm not sure I agree with the assumption of using 546nm as the spectral middle. Unless using filters, I should think a shorter wavelength would be better. It would be nice to have an Excel spreadsheet version of this that allowed calculations of the optimum f/number given our personal settings such that we might develop a better feel for DOF.
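On the wavelength question, the scale of the effect is easy to see from the standard diffraction-limited spot size. This is a minimal sketch, not taken from the articles: it just evaluates the Airy disk diameter d = 2.44·λ·N (a textbook formula) at a few wavelengths, showing how much a shorter design wavelength shrinks the diffraction spot:

```python
# Airy disk diameter (to the first zero) for a diffraction-limited lens:
# d = 2.44 * wavelength * N. A shorter design wavelength gives a smaller
# diffraction spot, hence slightly more room before diffraction dominates.
def airy_disk_diameter_mm(wavelength_nm, f_number):
    """Return the Airy disk diameter in millimetres."""
    return 2.44 * wavelength_nm * 1e-6 * f_number

for wl in (450, 546, 650):  # blue, the 546 nm used in the analyses, red
    print(wl, "nm ->", round(airy_disk_diameter_mm(wl, 22), 4), "mm at f/22")
```

At f/22 the difference between 450 nm and 546 nm is under 20%, which suggests the choice of reference wavelength moves the "optimum" f-number only slightly.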

Paul Metcalf

8-Mar-2006, 17:02

Hansma's numbers are not non-optimal. The issue is what is optimal, which is subject to the optimization constraints. Hansma uses diffraction and defocus as constraints. Others have used MTF response. The real test of optimality is in your own photographs. If you like them, join an f/64 group. If you don't, say you're a member of a Pictorialist Club.

Jeff Conrad

8-Mar-2006, 18:32

Hansma's numbers and mine differ by about 3%; for practical purposes, they

are the same, and I tried to suggest this in both articles. Stated

otherwise, Hansma's numbers appear to work quite well indeed, more so than

the alternative suggested by Wheeler; quite honestly, this was a bit of a

surprise to me.

Both Hansma's numbers and mine are empirically derived: in his case, using

a “rule of thumb” method for combining defocus and diffraction;

in my case, from observing MTF graphs and noting the optimum f-numbers at

an arbitrarily chosen spatial frequency. That three slightly different

approaches seem to give about the same numbers suggests that the numbers

are not unreasonable.

My choice of spatial frequency (6 lp/mm in the final image) is somewhat

arbitrary; it could be argued that this frequency is below the threshold of

detectability (although I'm sure that some others might argue that 15 lp/mm

would be more suitable). If I had chosen 4 lp/mm, the optimum f-number

would be slightly less in most cases. I chose 6 lp/mm largely because the

best-fit equation used the square root rather than an exponent such as

0.62, which requires more effort (and introduces more chances for error).

I agree that 546 nm is arbitrary; I chose it simply because most other

analyses have used similar values. Offhand, I'm not sure I could assemble

a spreadsheet, because I'd need to recompute the MTFs and see what

happened. I may try this (it's not difficult), and if I see anything

significant I'll mention it.

In practical photography, I see little need for more than three equations:

<ol>

<li>Focus distance</li>

<li>Minimum f-number based on DoF</li>

<li>Maximum f-number based on diffraction effects</li>

</ol>

The other equations and graphs are included simply to indicate that I

didn't pull the numbers out of the air. You may disagree with my methods

and results, but at least you can see how I obtained them.
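For readers who want the three practical rules in one place, here is a sketch of the common image-side (focus-spread) approximations. These are not necessarily the article's exact equations: the midpoint-focus rule and N = Δv/(2c) are the usual image-side approximations, and √(375·Δv) is Hansma's published optimum f-number formula (Δv in mm); the circle-of-confusion value is an assumed example:

```python
import math

def image_side_settings(v_near, v_far, coc_mm=0.1):
    """Common image-side (focus-spread) approximations, distances in mm.
    v_near, v_far: image (bellows) distances when focused on the near and
    far DoF limits (v_near > v_far); coc_mm: assumed circle of confusion."""
    dv = v_near - v_far                  # focus spread on the rail
    v_focus = (v_near + v_far) / 2       # 1. focus at the midpoint
    n_min = dv / (2 * coc_mm)            # 2. minimum f-number for the DoF
    n_hansma = math.sqrt(375 * dv)       # 3. Hansma's optimum f-number
    return v_focus, n_min, n_hansma

# Example: a 2 mm focus spread
print(image_side_settings(105.0, 103.0))
```

A 2 mm spread gives a minimum of f/10 for a 0.1 mm CoC and a Hansma optimum near f/27, consistent with the "minimum from DoF, maximum from diffraction" framing above.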

Perhaps most important: reaching the DoF limit is not like falling off a

cliff. DoF simply isn't an area where 5 significant figures are

meaningful. No real-world images will match the numbers I obtained. I

made a number of simplifying assumptions; in particular, I treat lenses as

aberration free and do not include the effects of the imaging medium.

Without these simplifications, however, the problem becomes so complex as

to be unmanageable.

I certainly agree that analysis of some actual images would be helpful

(Hansma did perform some tests that matched his predictions quite well).

I'd also like to see some tests that affirmed or negated the common

assumptions of detectable blur, as well as a rigorous test of the benefits

of equal vs. unequal near- and far-limit CoCs under reasonable viewing

conditions (I don't usually examine a print with a microscope). I'm not

convinced that I see much benefit, though my tests are far from rigorous.

Doing a meaningful, quantitative test is no simple matter, and I'm not

currently set up to perform tests with which I would be satisfied.

For what it's worth, I personally set focus and f-number from the image

side, using the approximate equations. I never worry about diffraction,

because motion blur nearly always is a far greater problem. In other

words, I pretty much forget the math when actually using a camera.

Hansma's article, as well as the results I got from the MTF analyses, seem

to suggest that what many of us have done for years is just fine.

relatively_random

18-Jul-2018, 03:09

Hello, I've been reading Depth of Field in Depth (http://www.largeformatphotography.info/articles/DoFinDepth.pdf), and it has been of great help for my purposes.

However, I may have stumbled upon an issue, unless I'm wrong. Equation 96, "relative blur" with pupil magnification, is derived from equations 44 and 95. Doesn't equation 44 assume no pupil magnification?

With pupil magnification, according to Physics of Digital Photography (http://iopscience.iop.org/book/978-0-7503-1242-4/chapter/bk978-0-7503-1242-4ch1#bk978-0-7503-1242-4ch1s1-2), entrance pupil position is:

u_ep = f * (1 - 1/p).

This means that the magnification at defocused point should not be:

(m + 1) * f / u_d,

but:

(m + 1) * f / (u_d - u_ep).

If my algebra is correct, this means that equation 96 would get slightly modified, by replacing the p * u_d part in the numerator with (p * u_d + f * (1 - p)). In the end, this seems to simplify so that the effect of p completely disappears:

m/(m+1) * |u_d-u|/N,

which is identical to the equation 45: relative blur without pupil dilation.

relatively_random

18-Jul-2018, 04:58

It appears I made a mistake so my results in previous post are wrong:

m_d = (m/p + 1) * f / (u_d - u_ep)

u_ep = f * (1 - 1/p)

With this, the result becomes:

k_r = k/m_d = m*p/(m+p) * |u_d - u| / N

I get the exact same result by sketching things on paper, using similar triangles and then substituting u = f/m+f and u_ep = f*(1-1/p). In any case, the DoFiD paper seems to be wrong for equation 96.

[Attachment 180588]
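The corrected expression is easy to check numerically. This is a small sketch (my own function names, not the paper's) that evaluates k_r = m·p/(m+p) · |u_d − u| / N and confirms that at p = 1 it reduces to the symmetric-lens Eq. (45), k_r = m/(m+1) · |u_d − u| / N:

```python
def relative_blur(m, u, u_d, N, p=1.0):
    """Object-space relative blur with pupil magnification p, per the
    corrected form of Eq. (96): k_r = m*p/(m+p) * |u_d - u| / N.
    At p = 1 this reduces to the symmetric-lens Eq. (45)."""
    return m * p / (m + p) * abs(u_d - u) / N

m, u, u_d, N = 0.5, 450.0, 500.0, 11.0
eq45 = m / (m + 1) * abs(u_d - u) / N      # symmetric form, Eq. (45)
print(relative_blur(m, u, u_d, N, p=1.0), eq45)  # identical at p = 1
```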

Thalmees

18-Jul-2018, 14:32

Welcome to the forum, relatively_random,

Wish you all the success.

A digital image is not an optical image; it cannot be governed by the physics of optics once it hits the sensor.

The projected optical image passes through multiple filters in front of the sensor, including an anti-aliasing filter.

Then the charge (the image's electrical effect on the sensor) goes to an A/D converter.

Each camera renders its own photo, depending on the camera's internal processor.

On a monitor or in print, most (if not all) digital photos need an unsharp mask (to compensate for the AA filter), plus other digital filters such as a blur filter.

All of these conversions and compulsory manipulations keep a digital image from behaving strictly according to the laws of optics.

Thanks so much for your topic.

ic-racer

18-Jul-2018, 18:05

You might try email: jeff_conrad@msn.com

Jeff Conrad

18-Jul-2018, 20:45

Magnification is determined by distances from the principal planes—not the planes of the pupils—so my Eq. (44) still applies to an asymmetrical lens, and Eq. (96) still holds.

Despite the title of the book, the topic of the referenced chapter is the optical image rather than a digital image that might eventually result. It seems clear that this is what the OP is referring to, so the questions are valid.

relatively_random

19-Jul-2018, 00:34

Magnification at the focused point depends only on the principal plane; this is correct. However, the principal point is not the center of the angle of view; the entrance pupil is. The AOV is what determines the "scaling" of sizes at different points in space, which is what magnification at defocused points is.

When p = 1, both points are at the same spot so (44) is correct.

relatively_random

19-Jul-2018, 00:42

Also, after a good night's sleep, it occurred to me that (m+p)/p is just the bellows factor so the new eq. (96) can be represented as:

k_r = m / (1 + m/p) * |x_d| / N

This is practically identical to the original, symmetric relative blur in eq. (45), the only difference being that 1+m becomes the usual 1+m/p. :)
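The two forms really are the same coefficient, since m/(1 + m/p) = m·p/(m + p). A quick numeric spot check (my own helper names) over a range of magnifications and pupil magnifications:

```python
def coeff_a(m, p):
    """m*p/(m+p): coefficient from the earlier corrected Eq. (96)."""
    return m * p / (m + p)

def coeff_b(m, p):
    """m/(1 + m/p): the bellows-factor form of the same coefficient."""
    return m / (1 + m / p)

for m in (0.1, 0.5, 2.0):
    for p in (0.5, 1.0, 2.0):
        assert abs(coeff_a(m, p) - coeff_b(m, p)) < 1e-12
print("both forms agree")
```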

Jeff Conrad

19-Jul-2018, 17:47

The AOV is what determines the "scaling" of sizes at different points in space, which is what magnification at defocused points is.

This is a new one for me. Admittedly, magnification of defocused objects isn’t a common topic; in fact, Merklinger’s discussion in The INs and OUTs of Focus is the only one of which I am aware, and he doesn’t cover asymmetrical lenses.

It seems to me that if magnification of a focused object were determined by distances from the principal planes but the magnification of defocused objects were determined by distances from the plane of the entrance pupil, magnification would be discontinuous between a focused object and an infinitesimally defocused object—which doesn’t make sense.

I am for sure no lens designer. Perhaps one of the high-powered folks like Emmanuel or Oren might be able to clarify this.

Nodda Duma

19-Jul-2018, 21:57

Defocus magnification change is determined by the marginal ray angle in image space. That’s an easier way to derive it.

Telecentric optics do not change magnification with defocus. This is an important property for applications such as astrometry and tracking systems.

Magnification changes with field angle is distortion. You can therefore derive exact magnification calculations via Seidel aberration coefficients and/or Zernike polynomials...if and only if you want to earn extra credit in a graduate level optical aberration course.

In the real world, lens designers write a macro in Zemax or Code V to generate plots and data tables of distortion vs field angle as the image plane moves through focus while they go on their lunch break, then copy the plots into power point after they get back from lunch. Doing it this way is more accurate than calculating, since the data is generated by the software’s physics-based ray trace algorithms.

relatively_random

20-Jul-2018, 01:04

It seems to me that if magnification of a focused object were determined by distances from the principal planes but the magnification of defocused objects were determined by distances from the plane of the entrance pupil, magnification would be discontinuous between a focused object and an infinitesimally defocused object—which doesn’t make sense.

m_d = (m/p + 1) * f / (u_d - u_ep) is a continuous function for u_d > u_ep. And it's not difficult to show that m_d = m when u_d = u. Just substitute u_d=u=(1+1/m)*f and u_ep=(1-1/p)*f.
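The continuity claim can be verified numerically with a short sketch (symbols as in the posts above; the specific numbers are arbitrary): evaluating m_d at u_d = u returns exactly m for any pupil magnification p, so there is no jump between the focused and infinitesimally defocused magnification:

```python
def m_defocused(m, p, f, u_d):
    """Defocused magnification m_d = (m/p + 1) * f / (u_d - u_ep),
    with the entrance pupil at u_ep = f * (1 - 1/p)."""
    u_ep = f * (1 - 1 / p)
    return (m / p + 1) * f / (u_d - u_ep)

m, p, f = 0.5, 1.5, 150.0
u = (1 + 1 / m) * f          # focused object distance, u = f/m + f
print(m_defocused(m, p, f, u))  # recovers m exactly: no discontinuity
```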

Defocus magnification change is determined by the marginal ray angle in image space. That’s an easier way to derive it.

I don't know how to do it with marginal rays in image space, but here's my derivation using chief rays in object space:

[Attachment 180658: derivation using chief rays in object space]

Marginal rays got me thinking that there is an easier way to derive "relative blur" in object-space:

[Attachment 180659: object-space derivation of relative blur]

Sorry for all the paper photos, doing all this digitally would take some learning and time.

relatively_random

20-Jul-2018, 01:22

Magnification changes with field angle is distortion. You can therefore derive exact magnification calculations via Seidel aberration coefficients and/or Zernike polynomials...if and only if you want to earn extra credit in a graduate level optical aberration course.

In the real world, lens designers write a macro in Zemax or Code V to generate plots and data tables of distortion vs field angle as the image plane moves through focus while they go on their lunch break, then copy the plots into power point after they get back from lunch. Doing it this way is more accurate than calculating, since the data is generated by the software’s physics-based ray trace algorithms.

All this is waaaay above my knowledge level. I was just trying to figure out how to estimate positioning tolerance in a machine vision system, with lens data I have available.

Nodda Duma

20-Jul-2018, 03:08

Ah that makes more sense. If I understand the purpose correctly, Machine vision systems usually employ telecentric imaging lenses with centroid object tracking. That way the system is not sensitive to defocus. A basic blob detection scheme works perfectly fine in those cases and is simple to code (examples online). Color filters and backdrop selection are used to optimize contrast between the tracked object and background.
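The blob-detection scheme mentioned above can be sketched very simply with thresholding and connected-component labelling. This is an illustrative minimal version (SciPy's ndimage routines on a synthetic frame), not a production machine vision pipeline, which would add color filtering, area limits, and calibration:

```python
import numpy as np
from scipy import ndimage

def find_blobs(image, threshold=0.5):
    """Minimal blob detection: threshold, label connected components,
    and return each blob's centroid as (row, col). With a telecentric
    lens, these centroids stay put under defocus."""
    mask = image > threshold
    labels, n = ndimage.label(mask)
    return ndimage.center_of_mass(mask, labels, range(1, n + 1))

# Synthetic frame: two bright square "objects" on a dark background
frame = np.zeros((100, 100))
frame[10:20, 10:20] = 1.0
frame[60:80, 50:70] = 1.0
print(find_blobs(frame))  # centroids near (14.5, 14.5) and (69.5, 59.5)
```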

relatively_random

20-Jul-2018, 16:08

We use telecentrics when needed or when they make things simpler, yes. Often, though, they are either not feasible or simply not needed. And they don't really help with depth of field, except by making the blur symmetrical.

I had to pick some lens to order for a project with a nasty combination of properties: small objects (large magnification) which move fast (low exposure times), but relatively lots of field depth to cover. I needed to see how much depth we can cover while keeping the required details discernible enough. I had to learn how to estimate this, which is when I stumbled onto the concept of "relative blur" from this site's DOF guide. It's exactly what I needed.

Jeff Conrad

21-Jul-2018, 18:23

Sorry to be a bit slow getting back here—I had a few other things to take care of the last couple of days.

On further thought, a simple diagram suggests that if magnification is to have its common meaning, the distances—for focused or defocused objects—need to be measured from the pupils. As it turns out, Applied Photographic Optics (Ray 2002, 125) gives image and object distances equivalent to what relatively_random derived; curiously, though, it mentions them only in the context of exposure compensation for lens extension (and in a later chapter, for DoF). Thanks to relatively_random for catching this.

The paper probably ought to be fixed, but unfortunately—because of changes to Word, MathType, Acrobat, and a defect in a program to convert AutoCAD images to EMFs (the only vector format Word can handle)—this is no small undertaking. And some things—like links in equation references—no longer seem to work, so that one could not click on a reference to “Eq. (44)” to jump to the equation. Suffice it to say that my thoughts on this nonsense cannot be expressed in this forum. I suppose I could post an erratum, but I’m not sure anyone would find it.

It should be mentioned that the “relative blur” concept is essentially an image-side adaptation of the “object field method” described in Harold Merklinger’s The INs and OUTs of FOCUS (http://www.trenholm.org/hmmerk/#TIAOOF). Merklinger projects the blur spot into object space and compares it to the size of objects to determine whether the objects can be recognized. He gives one interesting example of an attempt to blur a distracting background in a portrait. It’s well known that a longer-focus lens gives a larger blur spot; will using a longer-focus lens solve the problem? Nope, because the magnification of the background also increases, and remains recognizable even though the blur spots are larger.
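Merklinger's portrait example can be illustrated numerically with the symmetric relative-blur formula, Eq. (45). In this sketch (arbitrary numbers, a first-order thin-lens model), the subject magnification m and f-number N are held fixed while the focal length varies; the object-space blur of a distant background changes only slightly, so the longer lens does not make the background appreciably less recognizable:

```python
def object_space_blur(m, f, N, u_d):
    """Object-space blur per Eq. (45): k_r = m/(m+1) * |u_d - u| / N,
    with the focused object distance u = f*(1 + 1/m). Units: mm."""
    u = f * (1 + 1 / m)
    return m / (m + 1) * abs(u_d - u) / N

m, N, background = 0.1, 8.0, 100_000.0   # subject at 1:10, f/8, bkgd 100 m
for f in (90.0, 150.0, 300.0):
    print(f, "mm ->", round(object_space_blur(m, f, N, background), 1), "mm")
```

The object-space blur varies by only a few percent from 90 mm to 300 mm, matching Merklinger's conclusion that the larger blur spots of the longer lens are offset by the larger magnification of the background.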

Jeff Conrad

23-Jul-2018, 22:51

In this thread (http://www.largeformatphotography.info/forum/showthread.php?147120-Using-blur-calculations-to-estimate-positioning-tolerance-in-a-machine-vision-system), it was pointed out that the expression for “relative blur” with an asymmetrical lens is incorrect. The error has been corrected; to avoid changing equation numbers for later equations, the correction is in an erratum at the end of the article; there is a note by Eq. (96) with a link to that erratum.

Thanks to relatively_random (http://www.largeformatphotography.info/forum/member.php?55281-relatively_random) for catching this.

Jeff Conrad

24-Jul-2018, 16:32

A new version is posted. Because of the software issues mentioned above, it’s not elegant—I put an Erratum at the end of the paper, with a note linking to it next to Eq. (96). Hopefully, the explanation is sufficient.

Thanks to Oren Grad for pointing me to the obvious way to do this while avoiding most of the other issues.

Oren Grad

24-Jul-2018, 16:44

I've merged relatively_random's thread into the thread originally established for comments on Jeff's DoF articles. Thanks to relatively_random for posting his original question and follow-up discussion, and thanks to Jeff for the article correction.

relatively_random

25-Jul-2018, 00:10

A new version is posted. Because of the software issues mentioned above, it’s not elegant—I put an Erratum at the end of the paper, with a note linking to it next to Eq. (96). Hopefully, the explanation is sufficient.

Thanks to Oren Grad for pointing me to the obvious way to do this while avoiding most of the other issues.

I still hadn't responded to your original reply, but now I have to. You actually managed to fix it even though the software is now falling apart! Thanks.
