New articles by Jeff Conrad on DoF
Jeff Conrad has split his exposition of DoF into two new articles:
Introduction to DoF (http://www.largeformatphotography.info/articles/IntroToDoF.pdf),
which is now a true "introduction", and
DoF in Depth (http://www.largeformatphotography.info/articles/DoFinDepth.pdf), which
will likely be a definitive reference on the subject, also revisiting the "optimal f-stop" approach.
Please feel free to leave any constructive comments on the new articles in this thread.
For reference, comments on the old article are available.
Equation 3 (long form) in the intro (top of page 4) - should N (LHS) be the f-number, not the focal length?
Otherwise a very nice article.
That would make more sense, wouldn't it?
No doubt definitive, but are the Hansma numbers I'm using non-optimal? I'll never know. Hansma's article was simple and practical.
It would be interesting to see actual photographs demonstrating the effects that are characterized by all those equations.
Also, as a B&W photographer, I'm not sure I agree with the assumption of 546 nm as the spectral middle; unless using filters, I should think a shorter wavelength would be better. It would be nice to have an Excel spreadsheet version of this that allowed calculation of the optimum f-number given our personal settings, so that we might develop a better feel for DoF.
Hansma's numbers are not non-optimal. The question is what counts as optimal, which depends on the optimization constraints. Hansma uses diffraction and defocus as constraints; others have used MTF response. The real test of optimality is in your own photographs: if you like them, join an f/64 group; if you don't, say you're a member of a Pictorialist Club.
Hansma's numbers and mine differ by about 3%; for practical purposes, they
are the same, and I tried to suggest this in both articles. Stated
otherwise, Hansma's numbers appear to work quite well indeed, more so than
the alternative suggested by Wheeler; quite honestly, this was a bit of a
surprise to me.
Both Hansma's numbers and mine are empirically derived: in his case, using
a “rule of thumb” method for combining defocus and diffraction;
in my case, from observing MTF graphs and noting the optimum f-numbers at
an arbitrarily chosen spatial frequency. That three slightly different
approaches seem to give about the same numbers suggests that the numbers
are not unreasonable.
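For readers curious about the arithmetic behind the "rule of thumb", it can be reconstructed roughly as follows (my sketch, not necessarily Hansma's exact derivation; $\Delta v$ for the focus spread and $\lambda$ for the wavelength are my notation). Combine the defocus and diffraction blur spots in quadrature:

$$k_{\text{total}} = \sqrt{\left(\frac{\Delta v}{2N}\right)^2 + \left(2.44\,\lambda N\right)^2}$$

Setting $dk_{\text{total}}/dN = 0$ gives

$$N = \sqrt{\frac{\Delta v}{4.88\,\lambda}} \approx \sqrt{375\,\Delta v} \quad (\lambda = 546\ \text{nm},\ \Delta v\ \text{in mm}),$$

so, for example, a 2 mm focus spread gives $N \approx 27$, i.e. between f/22 and f/32.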
My choice of spatial frequency (6 lp/mm in the final image) is somewhat
arbitrary; it could be argued that this frequency is below the threshold of
detectability (although I'm sure that some others might argue that 15 lp/mm
would be more suitable). If I had chosen 4 lp/mm, the optimum f-number
would be slightly less in most cases. I chose 6 lp/mm largely because the
best-fit equation used the square root rather than an exponent such as
0.62, which requires more effort (and introduces more chances for error).
I agree that 546 nm is arbitrary; I chose it simply because most other
analyses have used similar values. Offhand, I'm not sure I could assemble
a spreadsheet, because I'd need to recompute the MTFs and see what
happened. I may try this (it's not difficult), and if I see anything
significant I'll mention it.
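Pending a proper spreadsheet, a few lines of Python can mimic the calculation. This is only a sketch under the quadrature defocus-plus-diffraction model mentioned above; the function name and the defaults are mine, not from either article:

```python
import math

def optimal_f_number(focus_spread_mm, wavelength_mm=546e-6):
    """Assumed model: minimize the root-square sum of defocus blur
    (focus_spread / 2N) and diffraction blur (~2.44 * lambda * N).
    The minimum falls at N = sqrt(focus_spread / (4.88 * lambda))."""
    return math.sqrt(focus_spread_mm / (4.88 * wavelength_mm))

# How the choice of wavelength shifts the optimum for a 2 mm focus spread:
for wl_nm in (450, 546, 650):
    n = optimal_f_number(2.0, wavelength_mm=wl_nm * 1e-6)
    print(f"{wl_nm} nm -> f/{n:.1f}")
```

Consistent with the B&W comment above, a shorter wavelength pushes the optimum toward a smaller aperture (larger N), though the effect is modest because N scales only as 1/√λ.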
In practical photography, I see little need for more than three equations:
- Minimum f-number based on DoF
- Maximum f-number based on diffraction effects
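As a sketch of what such bounds might look like on the image side (these are my own plausible forms and assumed circle-of-confusion value, not necessarily the article's exact equations): the minimum N comes from requiring the defocus blur spot Δv/(2N) to stay within the circle of confusion c, and the maximum N from keeping the Airy disk diameter below c:

```python
import math

def f_number_bounds(focus_spread_mm, coc_mm=0.1, wavelength_mm=546e-6):
    """Assumed image-side bounds:
    - minimum N so defocus blur focus_spread/(2N) <= coc
    - maximum N so the Airy disk diameter ~2.44*lambda*N <= coc"""
    n_min = focus_spread_mm / (2 * coc_mm)
    n_max = coc_mm / (2.44 * wavelength_mm)
    return n_min, n_max

n_min, n_max = f_number_bounds(2.0)
print(f"f/{n_min:.0f} to f/{n_max:.0f}")
```

Any f-number between the two bounds satisfies both criteria; the "optimal" f-number from the focus-spread formula discussed earlier falls inside this window.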
The other equations and graphs are included simply to indicate that I
didn't pull the numbers out of the air. You may disagree with my methods
and results, but at least you can see how I obtained them.
Perhaps most important: reaching the DoF limit is not like falling off a
cliff. DoF simply isn't an area where 5 significant figures are
meaningful. No real-world images will match the numbers I obtained. I
made a number of simplifying assumptions; in particular, I treat lenses as
aberration free and do not include the effects of the imaging medium.
Without these simplifications, however, the problem becomes so complex as
to be unmanageable.
I certainly agree that analysis of some actual images would be helpful
(Hansma did perform some tests that matched his predictions quite well).
I'd also like to see some tests that affirmed or negated the common
assumptions of detectable blur, as well as a rigorous test of the benefits
of equal vs. unequal near- and far-limit CoCs under reasonable viewing
conditions (I don't usually examine a print with a microscope). I'm not
convinced that I see much benefit, though my tests are far from rigorous.
Doing a meaningful, quantitative test is no simple matter, and I'm not
currently set up to perform tests with which I would be satisfied.
For what it's worth, I personally set focus and f-number from the image
side, using the approximate equations. I never worry about diffraction,
because motion blur nearly always is a far greater problem. In other
words, I pretty much forget the math when actually using a camera.
Hansma's article, as well as the results I got from the MTF analyses, seem
to suggest that what many of us have done for years is just fine.