Please explain diffraction in lenses



Lightbender
19-Apr-2015, 23:16
I've been testing some of my large format lenses in the past few days. I've noticed that none of them are as sharp (in the center) at f/32 as they are at f/16.
This is across many different focal lengths and many different designs.
I know that this is caused by diffraction at small apertures.

But really, I've never had a good explanation of why this occurs, and why it only appears at small apertures.
Could someone answer this? Thanks!

Patrick13
19-Apr-2015, 23:57
It's basic physics. If by "good" you mean "simple," then you probably never had a good explanation because the why isn't simple.

Start at the link below; at the bottom of that page are links to the math and a deeper discussion about why the why is the why :)
https://luminous-landscape.com/understanding-lens-diffraction/

The fast and furious explanation is that the edge of the aperture is always messing with the light, and that messy edge light becomes a proportionately bigger share of the image as the aperture shrinks: less clean central light versus more messy edge light the smaller your aperture.

Jim Andrada
20-Apr-2015, 00:45
Take a look at the following. Diffraction is a fundamental property of waves and their interaction with physical objects. It's a function of wavelength as well, which is why systems using blue lasers, X-rays, etc. have higher resolution than (for example) sound waves or water waves. There's a reason they call it "Blu-ray" and why it gets higher data density. Unfortunately, even Sony technologists agree that digital optical data recording capacities crap out well before magnetic recording technologies.

If you put a blue filter over your lens you might see less of an effect. Unfortunately, shorter wavelengths also exhibit more scattering in air so you'd also see more "noise" unless you photographed in a vacuum - or in drier air. Which is why telescopes are usually on top of mountains. Anyhow, start with the following and have fun.

http://en.wikipedia.org/wiki/Diffraction

http://www.cambridgeincolour.com/tutorials/diffraction-photography.htm

mdarnton
20-Apr-2015, 04:55
The analogy I like is of a wave hitting a dock or breakwater sideways. As it goes past the dock, some of the wave is deflected into the area on the opposite side that should be in the "shadow" of the dock. This deflected wave isn't simple like the original wave, it's more of a mess of disturbed water.

The dock is the diaphragm. If the lake (the opening) is large, what gets deflected is relatively small in proportion to the rest of the wave, and on the back side of the dock the wave is mainly intact. When the lake is very small and the dock relatively long, the disturbed water (or light) that isn't properly part of the original wave represents a larger proportion of what lies beyond the dock, and the original wave (the light that forms the image) is less clearly defined, obscured by the large proportion of disturbed wave.

An illustration: http://fphoto.photoshelter.com/image/I0000irq3tbIzF20

fishbulb
20-Apr-2015, 07:49
http://www.cambridgeincolour.com/tutorials/diffraction-photography.htm

That one would be my recommendation for a simple, easy to understand explanation. But yeah, complex topic.

Jim Andrada
20-Apr-2015, 11:35
By the way, diffraction is not always a bad thing. A pattern of concentric rings (precisely calculated, of course!) will act as a lens because the rings are spaced so as to "interfere" at a known distance thus focusing an image. Check out "Zone plates". I have a pinhole set where the maker also provides a set of zone plates with different "focal" lengths. The zone plates are effectively at a much wider aperture than the pinhole.

http://en.wikipedia.org/wiki/Zone_plate

Lightbender
20-Apr-2015, 21:24
Thank you for the links. However they seem to explain the effects of diffraction rather than the cause.

In the Cambridge example, light is shown going straight through a large hole, and then diffracting through a small hole.
It doesn't explain why the three light rays somehow 'know' they went through a small hole instead of a large one.

Do they interact with the hole in any way?

If they are interacting with each other, why don't they interact with each other when there are more of them going through a larger hole?

Are they diffracted by the air, or the glass, or??

What if I had a large aperture made up of many small holes... would there be a lot of diffraction or not?

What if there were only three rays of light, traveling closely together, close enough that they could pass through an f/64 aperture, but in actuality they passed through an f/1.4 aperture. Would they still diffract?

Lightbender
20-Apr-2015, 21:40
"The fast and furious explanation is that the edge is always messing with the light and it becomes proportionately more of the image as the central area inset from the edge becomes smaller in proportion to how much edge there is. i.e. less clean central light versus more messy edge light the smaller your aperture. "

Patrick - if diffraction is caused by irregularities in the edge of the aperture, would a thinner, better light-absorbing, more regular aperture cause less diffraction?

Jim Andrada
21-Apr-2015, 02:11
What causes diffraction is the fundamental nature of waves and the way waves interact with objects. Basically, that's how waves (all waves) act when they encounter objects in their "path". You're asking a very good question - the problem is that answering the question of why waves act that way gets into the realm of very advanced PhD-level physics. It's sort of like gravity. We all know how gravity acts and use the engineering-level understanding to do things like put satellites in orbit. It took Einstein to figure out WHY gravity works the way it does.

I'm afraid that in the realm of photography we have to be satisfied with the engineering level understanding of diffraction.

And by the way - are you really sure that the loss of sharpness you're seeing in the center isn't mostly caused by focus shift rather than diffraction?

Re your specific questions

Do they interact with the hole in any way?

Yes - they interact with the EDGE of the hole.

If they are interacting with each other, why don't they interact with each other when there are more of them going through a larger hole?

Don't confuse waves with the nice straight lines we draw when tracing "rays"

Are they diffracted by the air, or the glass, or??

Diffraction is caused primarily by holes/things with edges - basically the aperture. What you get from air and glass is more appropriately called refraction. The sky is blue because of a phenomenon called Rayleigh scattering, so dust and moisture in the air cause some problems, but I think we can ignore that for purposes of the present discussion.

What if I had a large aperture made up of many small holes... would there be a lot of diffraction or not?

Yes - diffraction would be much more significant - depending of course on the diameter of the individual holes. If the holes were very very small you'd probably see something akin to the rainbow effect of the track spacing on a CD - you'd have a super diffraction grating.

What if there were only three rays of light, traveling closely together, close enough that they could pass through an f/64 aperture, but in actuality they passed through an f/1.4 aperture. Would they still diffract?

1) As above, don't confuse "rays" with "waves".

2) There is diffraction everywhere a wave encounters an edge. As was pointed out by mdarnton, the amount of edge compared to the area of the hole increases as the diameter is reduced. If r is the radius of the hole, the ratio of edge to area is (2πr)/(πr²) = 2/r. As r gets bigger, edge effects matter less; as r gets smaller, edge effects dominate.
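As a rough illustration of that 2/r relationship (a small numerical sketch, not from the discussion; the 150 mm focal length is just an assumed example), here is how the edge-to-area ratio of the aperture grows as you stop down:

import math

focal_length_mm = 150.0  # assumed example focal length

for f_number in (4, 8, 16, 32, 64):
    diameter = focal_length_mm / f_number      # aperture diameter in mm
    r = diameter / 2.0
    edge_to_area = 2.0 / r                     # circumference / area = 2/r, in 1/mm
    print(f"f/{f_number}: diameter {diameter:.1f} mm, edge/area = {edge_to_area:.2f} per mm")

The ratio grows in direct proportion to the f-number, which is another way of seeing why the edge's contribution only becomes noticeable at small apertures.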

By the way, thanks for asking these questions. I was a Chemistry & Physics major but it was so long ago that I've forgotten everything I once thought I knew.

N Dhananjay
21-Apr-2015, 04:12
One way to understand the mechanism or process is as follows. Consider a wavefront of light. The waves are in some sense balanced - one part of the wave is supported by the waves around it - that is why the light or wavefront is cohesive. When part of the wavefront is cut off by an aperture, the part that is on the periphery of the section that passed through the aperture is now imbalanced, since the part supporting it on one side got stopped/eliminated by the aperture/obstruction. This imbalance results in a spread in the wave function. So there is really no mechanical way (thinner aperture etc.) to eliminate this effect. Does that help?

Cheers, DJ

jp
21-Apr-2015, 04:59
Great discussion!

Struan Gray
21-Apr-2015, 05:26
One way to understand diffraction is to ask yourself why light *shouldn't* spread out after passing through a hole.

Water waves do. Sound waves do. All waves do.

The reason most light rays don't spread out very much is that they have such tiny wavelengths compared to the holes they are usually passing through. Also, light from conventional lamps and discharge tubes is mostly incoherent, so the final effects of any diffraction they do experience are less dramatic. Really coherent light, from a high class laser, diffracts by itself, as the beam width acts as an effective aperture.


[image attachment 132757]

Subsurface waves on the underwater thermal/saline interface in the Straits of Gibraltar, diffracting off the Pillars of Hercules.

Old-N-Feeble
21-Apr-2015, 06:03
That is a very nicely demonstrative image, Struan.

Struan Gray
21-Apr-2015, 06:54
Thanks OnF. Strictly speaking, those are non-linear waves, or solitons. Like a tidal bore, but underwater. Diffraction is the same, even for them.

rbultman
21-Apr-2015, 07:21
Is this why the disks used in soft focus lenses, like the 150mm for the RB, are perforated? You essentially have more sources of diffraction with the disk having the smallest holes providing the fuzziest (softest) image?

Struan Gray
21-Apr-2015, 07:29
That will provide a contribution, although the softening given by diffraction is less than many people imagine, and it tends to be fairly hard-edged.

The soup-strainer discs are a way of reducing the total amount of light, while allowing some rays from the edge of the lens to form the image. You are mixing in aberrations which would normally be eliminated by stopping down.

N Dhananjay
21-Apr-2015, 07:37
Is this why the disks used in soft focus lenses, like the 150mm for the RB, are perforated? You essentially have more sources of diffraction with the disk having the smallest holes providing the fuzziest (softest) image?

No, that is different. Soft focus lenses (at least those designs) rely on spherical aberration. Spherical aberration is reduced as you stop down, so the perforations are a way to let spherical aberration through at smaller stops. That is, at smaller stops the image is formed not only by light from the center of the lens but also by light from the periphery (which gives you the spherical aberration).
Cheers, DJ

rbultman
21-Apr-2015, 07:39
Thanks for the explanations.

N Dhananjay
21-Apr-2015, 07:41
Really coherent light, from a high class laser, diffracts by itself, as the beam width acts as an effective aperture.


Thanks for mentioning this. Again, it underscores the fact that diffraction is a fundamental aspect of wave phenomena. If you have waves, you always have the potential for interference effects. It is probably the biggest reason we have to live with wave-particle duality - there is no simple way to explain diffraction with a particle nature (at least, not without recourse to quantum weirdness...:-).

Cheers, DJ

Jim Andrada
21-Apr-2015, 10:03
When I was in college we didn't have to worry about the self diffractive property (or any other property) of lasers because - there weren't any lasers! Damn - makes me feel OLD!!!

Nodda Duma
21-Apr-2015, 10:10
One way to understand the mechanism or process is as follows. Consider a wavefront of light. The waves are in some sense balanced - one part of the wave is supported by the waves around it - that is why the light or wavefront is cohesive. When part of the wavefront is cut off by an aperture, the part that is on the periphery of the section that passed through the aperture is now imbalanced, since the part supporting it on one side got stopped/eliminated by the aperture/obstruction. This imbalance results in a spread in the wave function. So there is really no mechanical way (thinner aperture etc.) to eliminate this effect. Does that help?

Cheers, DJ

Great explanation and you nailed how it is described in optics courses before getting into the math.

djdister
21-Apr-2015, 10:15
Great explanation and you nailed how it is described in optics courses before getting into the math.

I agree, a great non-mathematical explanation, and appreciated.

Emmanuel BIGLER
21-Apr-2015, 14:54
Coming late to the discussion, many thanks to N. Dhananjay for a superb non-mathematical explanation of diffraction valid for any kind of waves.
Actually the great Dutch physicist Christiaan Huygens (1629 - 1695) had the same intuition more than three centuries ago, probably by looking at waves propagating on the quiet surface of ponds or canals in the Netherlands; he observed the shape of a wave after it passed through an aperture of reduced size and realised that a plane wave with a large front was transformed into a cylindrical wave spreading in all directions.
Classroom demo, to re-enact Huygens' vision; the text is in French but the images are self-explanatory (http://www.sciences.univ-nantes.fr/sites/jacques_charrier/tp/interferences/exp_decouv3.html)

Huygens did not have the required mathematical tools to compute the phenomenon. It is only at the beginning of the 19th century, with advances in calculus and other progress in mathematics, that Augustin Fresnel (1788 - 1827) could put real numbers and equations to it and compare a model with experiments.

A physicist would say exactly the same thing as N. Dhananjay, but in more obscure words ;)
Imagine a wave in free space or in a free medium, without any screen or anything obstructing the propagation; imagine a wave with a large wave front, much wider than the wavelength of the wave itself. The wave propagates freely and obeys some fundamental equations of physics, be it in hydrodynamics, acoustics, electromagnetism or optics; within some reasonable approximations, all these phenomena can be modelled by the same kind of equations.
Imagine that you "cut a piece" of this wave, by keeping only the central part and forcing to zero all vibration on both sides.
Such a "cut wave" does not obey the wave equations and hence does not exist (this is a typical approach by a theoretician: if an experiment is in contradiction with theory, experiment is wrong ;) )

What happens behind a lens, where a sharp image is projected, is more complex and was originally studied not by Fresnel but by the German physicist Joseph von Fraunhofer (1787 - 1826).
The simplest way to see what happens in one of our good lenses, where aberrations are negligible, is to neglect diffraction by all lens mounts and to consider that diffraction occurs only near the iris, supposed to be stopped down quite heavily. Imagine that we illuminate the lens with a perfect monochromatic point source to simplify the problem and avoid mixtures of various colors, as suggested by Struan.
The hard thing to imagine now is that the problem is equivalent to finding how a spherical converging wave continues to propagate after being abruptly cut by an aperture. If diffraction by lens mounts behind the iris is neglected, it can be shown that we can consider an equivalent problem: a perfect spherical wave converging at some point of the image plane, namely the geometrical image of the source point. This perfect converging wave is abruptly cut by the exit pupil of our lens, the image of the iris in image space.
A spherical converging wave abruptly cut laterally is not a solution of wave equations and to this wave are superimposed some diffracted waves that spread some amount of light around the perfect geometrical image point.
What is not easy to justify without maths is that when the exit pupil diameter or iris diameter becomes smaller and smaller, the amount of light spread away from the geometrical image point gets greater and greater.
And the next step in the discussion, recognised only in the 20th century, is to build a model for image degradation when the lens is diffraction-limited. In other words, how the shape and size of the diffraction spot actually affect image quality. For this, physicists applied to light propagation and image formation the same formalism, named Fourier analysis, that was already in use in acoustics and electricity, by decomposing an object into elementary periodic structures, namely sinusoidal gratings.

And eventually, after all this long historical path, more than 3 centuries old, we come to an exceedingly simple equation:

pc = N . λ

where pc is the smallest period of the tiniest elementary periodic structure visible in the diffraction-limited image,
N is the f-number, defined as usual as (focal length) / (diameter of the entrance pupil), and
λ is the wavelength of light.

Amazing ! :cool:

The reason why we do not directly see the exit pupil diameter in the equation is very subtle, hard to explain without maths, and not really fundamental, since we use quasi-symmetrical lenses almost all the time here; in such lenses the exit pupil and the entrance pupil have the same diameter.
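To put some numbers on that formula (a quick sketch, assuming green light at λ ≈ 0.55 µm, roughly where the eye is most sensitive):

wavelength_um = 0.55  # assumed: green light

for N in (8, 16, 22, 32, 64):
    pc_um = N * wavelength_um       # smallest period passed by a diffraction-limited lens, in micrometres
    lp_per_mm = 1000.0 / pc_um      # the same cut-off expressed in line pairs per mm
    print(f"f/{N}: pc = {pc_um:4.1f} um, about {lp_per_mm:3.0f} lp/mm")

Going from f/16 (about 114 lp/mm) to f/32 (about 57 lp/mm) halves the diffraction-limited cut-off, which is consistent with the softening reported at the start of the thread.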

Jim Andrada
21-Apr-2015, 15:41
Now if we could just get rid of that pesky long-wavelength red light...

(Or maybe just use Orthochromatic film!!!!!)

Just teasing.

Although, if I use the formula Emmanuel has graciously provided, it's not unrealistic to think that one could stop down 1 - 2 stops, if the film were only blue sensitive or a blue filter were used, without increasing diffraction effects.

And of course a great advantage of X-Ray lithography is that at X-ray wavelengths in the 1nm range, diffraction limits are more or less nonexistent.

Nodda Duma
21-Apr-2015, 15:52
Emmanuel, I think your last statement about the equivalency of exit and entrance pupils relies on the thin lens approximation, and/or a symmetrical lens operating at equal finite conjugates. This usually isn't the case for camera lenses (although granted I don't think they're too different).

Dan Fromm
22-Apr-2015, 07:17
Interesting and enlightening discussion, but it failed to address my question about diffraction, depth of field and a few other subjects in photography. The rules are well known and don't have to be understood to be applied. Why do we all seek to understand what doesn't really have to be understood?

jp
22-Apr-2015, 08:03
Now if we could just get rid of that pesky long-wavelength red light...


When using a DSLR, some of my most detailed photos are infrared scenes. Might not make sense for diffraction and small sensors, but it cuts through the atmospheric haze like nothing else, which is what hurts contrast the most for me.

Emmanuel BIGLER
22-Apr-2015, 08:14
Emmanuel I think your last statement ...

Well, I have to apologize, since I thought I had already entered into too much maths and physics, and I did not want to enter into the details of the calculations; but for those of our readers who like to know where formulae come from, here are the detailed explanations for the case of an asymmetrical lens with a non-unit pupil magnification factor.

And I apologize to Dan F. : Dan, if you do not want to know the Ultimate Secrets of the Top-Secret Diffraction Formula, simply skip this lengthy text (length as usual, I know).

Actually the lateral size of the diffraction spot in the image plane, for a single monochromatic point source, or, equivalently, the cut-off period pc for Fourier analysis of a diffraction-limited image, can be expressed directly as a function of the sine of the half-angle α' under which the edge of the exit pupil is seen from the image point (assumed for simplicity to be on the optical axis) where a diffraction-limited spherical beam converges.

pc = λ/(2 sin(α'))

It happens that exactly the same quantity sin(α') appears in the fundamental photometric formula giving the illumination in an optical image

E = π T L sin²(α')

(E = illumination at the centre of the image plane, T = transmission factor of the glasses, L the luminance of the source and α' as defined above)

When the lens is asymmetrical, with an entrance pupil of diameter a and an exit pupil of diameter a', i.e. a pupil magnification factor Mp = a'/a, the quantity sin²(α'), after a somewhat cumbersome and uninteresting calculation based on classical conjugation formulae, for an image located at the focal plane, simply reduces to 1/(1 + 4 (f/a)²), where "a" is the diameter of the entrance pupil and "f" the focal length of the lens.
Hence, since N = f/a, when N is greater than about 4 (if we accept an error smaller than 1/6th of an f-stop), the formula can be simplified to sin²(α') ≈ 1/(4N²), hence 2 sin(α') ≈ 1/N.
Hence the value for the diffraction cut-off period, valid when N is greater than about 4 as announced in a previous post
pc = N . λ
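For readers who want to check how good that approximation is, here is a small numerical comparison of the exact expression pc = λ/(2 sin(α')), using sin²(α') = 1/(1 + 4N²) from above, against the simplified pc = N·λ (λ = 0.55 µm assumed):

import math

wavelength_um = 0.55  # assumed green light

for N in (1, 2, 4, 8, 16):
    sin_alpha = 1.0 / math.sqrt(1.0 + 4.0 * N * N)   # from sin²(α') = 1/(1 + 4 N²)
    pc_exact = wavelength_um / (2.0 * sin_alpha)
    pc_approx = N * wavelength_um
    error_pct = 100.0 * (pc_exact - pc_approx) / pc_exact
    print(f"N = {N:2d}: exact {pc_exact:5.2f} um, approx {pc_approx:5.2f} um, error {error_pct:4.1f}%")

By N = 4 the simple pc = N·λ form is already within about one percent of the exact value, and the error keeps shrinking as you stop down further.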

I prefer this exceedingly simple formula expressed in terms of cut-off period, since

1/ there is no 1.22 or whichever mysterious factor involved that could evoke Lord Rayleigh's Holy Resolution Criterion (for those who want to know why 1.22 and not 1.23, the answer is simple: 1.22 is 3.83/3.14 ;) )

2/ the cut-off period pc is more meaningful at a time when everything in life is sampled and digitised **(footnote 1), since it compares directly to the pixel pitch of our silicon sensors. If the diffraction cut-off period is pc, your pixel pitch needs to be pc/2 according to the sampling theorem: at least two samples are needed to pass one period. And the smallest period that can cross our perfect lens is pc.
Well all those considerations are valid in a monochromatic world, with no Bayer sensor structure, and of course no residual aberrations.

** footnote 1: everything is sampled and digitised today, except (at least in France; I imagine that different rules may apply elsewhere) a good glass of Bordeaux wine and a plate of traditional petit-salé-aux-lentilles to go with it. So far this solid combination of food and beverage has managed to resist the digital world.

Dan Fromm
22-Apr-2015, 09:38
Emmanuel, I always read the lengthy text, equations included.

I have yet to see anything that changes the rule that on film the diffraction limit on axis is on the order of 1500/f number. I can apply the rule without going through other calculations, both to know how much trouble I'm in with the aperture I've set and to choose aperture to get the trouble I want. Combined with the rules for calculating DoF it tells me what I can't have.

I'm sorry to be so simple-minded, but rules of thumb derived from more-or-less first principles seem to be good enough.

Cheers,

Dan

Ken Lee
22-Apr-2015, 09:58
1500 / f-number

1500 / 15 = 100

Does that mean at f/15 we are limited to 100 lp/mm ? 100 l/mm ?

How is this formula used in practice please ?

Nodda Duma
22-Apr-2015, 10:08
Dan you'll get no argument from me about using rules of thumb. They are essential for staying sane even in the day-to-day working world. Whatever is simplest to use to give you the information you need should be the preference.

But I recall past design efforts where rules of thumb could have easily gotten me in trouble if applied casually. With any rule of thumb, you have to know when the assumptions that allow its application are valid. In order to understand the assumptions, the best way is to acknowledge the underlying math. That is an argument for understanding the fundamentals. A cranky old Navy master optician explained it to me that way a long time ago...except his explanation was way more colorful.

Most of us discover the underlying math when applying a rule of thumb that gives us unexpected results. Curiosity then drives us to understand why. I think that's all part of real-world experience: the fundamentals don't really mean anything until we've seen them in practice.

My apologies for waxing philosophical :)


Oh and great write-up Emmanuel.

Dan Fromm
22-Apr-2015, 10:23
1500 / f-number

1500 / 15 = 100

Does that mean at f/15 we are limited to 100 lp/mm ? 100 l/mm ?

How is this formula used in practice please ?

Yes, you divided correctly.

100 lp/mm on film means 100/enlargement in the final print. If you accept that 8 lp/mm is the lowest acceptable resolution in the final print then 100 lp/mm on film means that the negative can't be enlarged more than ~ 12 x. If you want to print large you need a large negative or must shoot a super lens at a largish aperture. This last introduces other compromises. Why do you think the tiny chip digicam enthusiasts are so crazy for fast lenses?

And don't forget that when shooting at near distances effective aperture, not aperture as set, is what matters. Another limit. This is why many of my 35 mm Kodachrome shots of flowers and such can't be printed very large. 8 x 10 is sometimes larger than their limit.

Incidentally, the rule of thumb I gave yields diffraction-limited resolution at low contrast. And it overstates diffraction-limited resolution off-axis. Think of it as saying "you absolutely can't do better than this and for critical work had better expect even less."
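For anyone who wants to chain these rules of thumb together, a minimal sketch (the 1500/f-number figure and the 8 lp/mm print criterion are the ones quoted above; the 5-inch long side used for a 4x5 negative is an assumption for the example):

def max_print_long_side_inches(f_number, neg_long_side_in=5.0, print_lp_per_mm=8.0):
    # diffraction-limited lp/mm on film -> maximum enlargement -> maximum print size
    film_lp_per_mm = 1500.0 / f_number
    max_enlargement = film_lp_per_mm / print_lp_per_mm
    return neg_long_side_in * max_enlargement

for N in (16, 22, 32, 45, 64):
    print(f"f/{N}: roughly a {max_print_long_side_inches(N):.0f} inch long side from 4x5")

Remember this is an on-axis, low-contrast ceiling; as noted above, effective aperture at close distances and off-axis performance will pull the real limit down further.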

Dan Fromm
22-Apr-2015, 10:24
Jason, of course you're right that rules of thumb used unintelligently are dangerous.

Nodda Duma
22-Apr-2015, 14:16
I need to dig out a recent white paper that was published which you may be interested in. The conclusion of the paper was that there is an absolute, hard-stop limit to resolution for imaging objectives dependent solely on the number of elements in the design. Sort of a rule of thumb for design analogous to yours. The paper was presented at the International Optical Design Conference last year.

Dan Fromm
22-Apr-2015, 15:36
Interesting, Jason. I wonder how they got the result and whether limiting resolution falls monotonically with number of elements. If so, it seems very odd.

Nodda Duma
22-Apr-2015, 16:12
Interesting, Jason. I wonder how they got the result and whether limiting resolution falls monotonically with number of elements. If so, it seems very odd.

Dan the survey included 3000+ optical designs available from an old database called LensView, which was compiled from patent searches by the original LensView authors. So it's a statistically significant representation. The lenses were all scaled to the same focal length and resolution analyzed in a way to provide fair comparison (my memory is foggy on the details).

The design "data points" were then plotted on a resolution vs # of elements chart, and all fell below a straight line showing a monotonically increasing resolution limit vs # of elements (I believe between 1-10 elements). No designs were above this limit.

Interestingly enough, the use of aspheres in a lens design does not allow you to surpass this limit. In other words, using aspheres does not reduce the number of elements necessary to hit a certain resolution. However, the advantage aspheres provide is in size and weight. That conclusion actually agreed with my own design experience.

Another interesting white paper presented in conjunction discussed the design techniques that *do* break this traditional design rule: 1) A curved image plane negating the need to correct field curvature, and 2) computational imaging...a hot topic of current imaging system design research where traditional optical correction is traded off against specific types of image processing. The former is of interest to film. The latter, of course, is not.

I'll try to dig up a link after the kids are in bed. It'll be in the OSA database (pay to read), but I'll see if I have a digital copy from when I went to the conference.

Dan Fromm
22-Apr-2015, 17:10
Jason, thanks for the reply. So resolution attainable increases monotonically with the number of elements. Not surprising, each additional surface gives the designer an additional degree of freedom for controlling aberrations. How nice to have coatings that help maintain transmission and control veiling flare.

Nodda Duma
22-Apr-2015, 17:27
The limit was y = 1100x, where y is the number of resolvable "spots", and x is the number of elements.

A resolvable spot was defined as image field diameter / spot diameter. This is how they handled scaling, having the same effect as scaling focal length, for fair comparisons of design performance.
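As a quick worked example of that limit (the six elements and the 150 mm image circle below are hypothetical numbers chosen for illustration, not figures from the paper):

elements = 6                  # hypothetical element count
image_circle_mm = 150.0       # hypothetical image field diameter

max_spots = 1100 * elements                              # y = 1100 x upper bound
smallest_spot_um = 1000.0 * image_circle_mm / max_spots  # spot diameter implied by that bound
print(f"{max_spots} resolvable spots -> spot diameter no smaller than about {smallest_spot_um:.0f} um")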

The rule of thumb is good for designers, so that when a manager comes to ask if such-and-such is possible in a design, I can quickly say "Maybe...get some funding and I'll take a look", or "Bless your heart, that's just not possible". :)

The presentation paper was fairly short. I'll send that to you. Its primary reference for the comparison was:

O. Cakmakci, J.P. Rolland, K.P. Thompson, and J.R. Rogers, “Design efficiency of 3188 lens designs,” Proc. SPIE 7061 (2008).

My SPIE membership lapsed (oops), so I can't download that one for free.

Jim Jones
22-Apr-2015, 18:03
Yes, you divided correctly.

100 lp/mm on film means 100/enlargement in the final print. If you accept that 8 lp/mm is the lowest acceptable resolution in the final print then 100 lp/mm on film means that the negative can't be enlarged more than ~ 12 x. . . ."

Rudolf Kingslake suggests on page 72 of his 1951 Lenses in Photography that the reciprocal of the resolution in the print may equal the reciprocal of the lens resolution plus the reciprocal of the film resolution. Thus, what an optician may see on an optical bench or a mathematician may see in a formula is better than what we see in a print.
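A quick numeric illustration of that reciprocal rule (the 80 lp/mm lens and 100 lp/mm film figures are made-up examples, not Kingslake's):

def combined_resolution(lens_lp_mm, film_lp_mm):
    # Kingslake's rule as quoted: the reciprocals of the resolutions add
    return 1.0 / (1.0 / lens_lp_mm + 1.0 / film_lp_mm)

print(f"{combined_resolution(80.0, 100.0):.0f} lp/mm")   # about 44 lp/mm

The combination is always worse than either component alone, which is the point of the remark: what the optician measures on the bench flatters what ends up on the emulsion.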

Dan Fromm
22-Apr-2015, 18:28
Jim, are you sure he didn't mean resolution in the negative, not in the print? I don't have the book so can't check myself.

Cheers,

Dan

Lightbender
22-Apr-2015, 21:02
THANKS EVERYONE!

Especially N Dhananjay for the short explanation, and Emmanuel BIGLER for the long explanation.

Jim Jones
23-Apr-2015, 06:21
Jim, are you sure he didn't mean resolution in the negative, not in the print? I don't have the book so can't check myself.

Cheers,

Dan

Dan, you're right, as usual.

Nodda Duma
23-Apr-2015, 06:35
I'm off-topic now, but you guys have me curious as to whether film response has a defined MTF model (a better metric than Kingslake's assumption above). A digital array is well-defined, but film is a bit trickier, with grain sizes and positions being a distribution rather than an orderly arrangement.


Edit: Looks like Norman Koren goes into pretty good detail on the subject. Sweet!

http://www.normankoren.com/Tutorials/MTF1A.html

Emmanuel BIGLER
23-Apr-2015, 07:09
...you guys have me curious now as to whether film response has a defined MTF model (a better metric than Kingslake's assumption above). A digital array is well-defined, but film is a bit trickier due to the grain sizes and position being a distribution and not an orderly arrangement.

Actually, everything related to film grain, film noise and film resolution appears nowadays to be very complex to define and very hard to understand, for many reasons. And of course the random structure of film is adding another burden to the complexity of various film models, when compared to an ideal monochrome silicon image sensor.
However MTF for film is something well defined, at least there is enough consensus between film manufacturers to publish MTF curves for their film products in their data sheet, so that anybody can freely compare them, like anybody can compare published MTF charts for lenses.

However, my understanding is that MTF for film is only meaningful to engineers, film MTF data are very, very far from our human visual perception of a film-based image. The mere concept of MTF implies that we are dealing with a linear transfer of very small modulations between the object and the final image, whereas our human perception of image quality is highly influenced by all kinds of non-linear effects and strong modulations.

The idea that we live in an era where digital post-processing opens new perspectives in image performance and quality can be illustrated, to stay within the scope of this forum, in the field of digital view camera lenses, where some colour drifts specific to the combination of a Bayer sensor structure with a wide-angle view camera lens can be corrected to some extent after the image has been recorded, by subtracting a reference image recorded from a reference white subject.

And also the fact that, in principle, "natural vignetting" in wide-angle quasi-symmetric lenses can be compensated to a certain extent by post-processing (you need at least to record a few photons per pixel in the corners of the image, before trying to compensate natural vignetting ;) )
Not to mention geometrical distortion, about which in principle nobody should care today since "it is so easy to compensate for distortion afterwards in a digital image".

The absolute diffraction limit cannot be, in principle, compensated, but until we reach this limit, digital post-processing can in principle "boost" the MTF output of the lens.
Even some kinds of geometrical aberration and de-focusing can be compensated, the emblematic example being the first images delivered by the Hubble Space Telescope.
To the best of my knowledge, however, Hubble images were much better after astronauts could install a correcting optical system!
But this is another story.

Nodda Duma
23-Apr-2015, 08:09
Emmanuel you will get no argument from me. I'll even say that MTF is of limited use to engineers who haven't seen for themselves how the theory ties into the real world. It's only one facet of describing what goes on. Theory only gets useful with the deep understanding that comes from real world experience (and that's true for *everything* practical). But once you've gotten that experience, then you see the effects everywhere.

Something to remember: we are standing on the shoulders of giants. These mathematical descriptions were all derived based on observations of real world effects. Seidel, for example, generated his aberration theory to describe what he saw at the eyepiece of his telescope. That's just one example among countless in the history of science. So we can take some comfort in the thought that the models and theory are grounded in reality. After all, if the theory wasn't accurate then lenses even as simple as the Petzval and Cooke (both mathematically calculated) would have never been possible! Not to mention every lens which came afterwards.

Jim Andrada
23-Apr-2015, 17:29
Thanks Emmanuel and Nodda. You're helping to make this thread a really great resource.

Funny you should bring up post process correction of fall-off/vignetting.

My EOS 5D is still the original one so I don't know what magic has gone on in still cameras for a while, but just this week I took delivery of a Canon EOS C100 video camera which (drumroll) has profiles stored for a number of lenses, which it uses to ameliorate the vignetting, and they clearly describe the function in the manual. Quite a trick when recording 30 (well, actually 29.97) frames per second. As I recall it used to really be 30, but they modified it by 0.1% when color TVs appeared, to keep them from confusing B&W and color broadcast material. How ANALOG of them!

Now if only I could get the equivalent of a digic chip in my PC maybe I could play back video without all the stammering and stuttering caused by huge high-res files and complex codecs.

I may have forgotten all the Physics I thought I learned in college around what now seems to have been the dawn of recorded history, but this is my 56th year in the computer business and I still have a few things I've failed to forget so I might make it to 60 years before my mind completely turns to mush.

Anyhow, thanks again.

Jim Andrada
6-May-2015, 22:41
Well, just when you thought it was safe to go back in the water along comes the following from Canon

Canon’s use of diffractive optics (DO) results in high-performance lenses that are much smaller and lighter than traditional designs. Canon’s unique multilayer diffractive elements are constructed by bonding diffractive coatings to the surfaces of two or more lens elements. These elements are then combined to form a single multilayer DO element. Conventional glass lens elements disperse incoming light, causing chromatic aberration. The DO element’s dispersion characteristics are designed to cancel chromatic aberrations at various wavelengths when combined with conventional glass optics. This technology results in smaller lenses with no compromise in image quality. Canon has also developed a triple-layer type DO lens that uses an advanced diffractive grating to deliver excellent performance, with superior control of color fringing. This configuration is ideal for zoom lens optics and provides significant reductions in size. A good example is the EF 70–300mm f/4.5–5.6 DO IS USM lens, which is 28 percent shorter than the EF 70–300mm f/4–5.6 IS USM lens.

OK - not a LF lens, but we're talking about optical principles here.

Any thoughts from our resident optical gurus? Are they basically combining zone plates with glass?

There are drawings to sort of explain what they're doing here www.usa.canon.com/cusa/consumer/standard_display/Lens_Advantage_Perf

Emmanuel BIGLER
6-May-2015, 23:38
Are they basically combining zone plates with glass?

Hi!
Certainly, yes, but those are not basic opaque/transparent zone plates like the ones used for pinhole cameras.

My understanding is that those circular diffraction gratings are "phase gratings" with transparent layers of different thicknesses deposited on one of the glass surfaces.
Those gratings need to have the same axial symmetry as the other lenses in our optical systems, hence they have to look like zone plates with circular rings.
In principle, chromatic aberrations in such circular gratings are of opposite sign with respect to chromatic aberrations generated by glass. Hence the use of a smart combination of both to cancel out chromatic aberrations, to some extent.

Nodda Duma
7-May-2015, 02:43
Reading between the lines of the marketing mumbo-jumbo..

Diamond-turned diffractive surfaces are often used in long wave and mid wave infrared objective designs for color correction and to reduce size of the objective. Works great but it is expensive and the secondary effects can cause issues.

They're not really used in visible because the wavelength compared to the surface feature size causes forward scatter off the surface and reduces contrast.

That and visible glass is hard to cut.

Those are two fundamental problems which they'd have to address. Or ignore lol. Even the military has skirted around the problem and funded other approaches (gradient-index and printed optics) because of the difficulties involved.

So Canon cuts the surfaces into a softer glass like fused silica and reduces forward scatter by burying the surfaces in the "bond joint" of an achromat, where surface roughness has less of an impact on scatter.

Nodda Duma
7-May-2015, 04:12
Oh btw... For those who aren't quite sure what it is, here's an intro on diffractive surfaces.

http://physweb.bgu.ac.il/~gtelzur/teaching/comphy/Presentations/TamirGrossinger.pdf

And a link to several white papers on diamond turning by the people who made the first commercial diamond-turning machine.

http://www.precitech.com/about/white_papers.html

Coincidentally, I was fortunate to have seen and used the first (Serial #1) commercial diamond-turning machine, which had been delivered in the '80s to the Navy lab I worked in. The machine, the size of a small room, was dismantled in ~2009.

Jim Andrada
8-May-2015, 11:12
I was wondering if they were cut or deposited. Also what about them would allow the lens to be more compact for equivalent zoom range?

By the way, Nodda, what Navy lab were you working in? I worked for the Naval Weapons Lab (Dahlgren Va) for a couple of years "long, long ago" (1962 - 63) but I was a computer type. Very interesting place.

Bill_1856
8-May-2015, 11:21
It's that way because that's the way it is.
Live with it.

paulr
8-May-2015, 11:25
However, my understanding is that MTF for film is only meaningful to engineers, film MTF data are very, very far from our human visual perception of a film-based image. The mere concept of MTF implies that we are dealing with a linear transfer of very small modulations between the object and the final image, whereas our human perception of image quality is highly influenced by all kinds of non-linear effects and strong modulations.

Where does this understanding come from?

Jim Andrada
8-May-2015, 11:32
Are you sure? Maybe it only seems to be the way it is or isn't, or is or isn't the way it seems to be. Particularly at larger apertures.

Nodda Duma
8-May-2015, 14:56
I was wondering if they were cut or deposited. Also what about them would allow the lens to be more compact for equivalent zoom range?

By the way, Nodda, what Navy lab were you working in? I worked for the Naval Weapons Lab (Dahlgren Va) for a couple of years "long, long ago" (1962 - 63) but I was a computer type. Very interesting place.



Deposited wouldn't make sense..there's no deterministic way to build up a diffractive surface using coating methods.

Someone mentioned above that diffractive surface adds optical power to a surface...in a very simplistic way, I can say that this allows you to bring light to a shorter focus for a given surface curvature. Normally, you'd want to split a lens to add power and/or avoid getting too steep a curvature on a surface (think total internal reflection). But like I said usually the drawbacks override the benefits in the visible band.

I've used diffractive surfaces in designs for thermal objectives, but they are very tricky to analyze correctly and they are difficult to fabricate. In fact, I've probably had to fix more designs where the designer has implemented them incorrectly than I've designed myself.

I worked at China Lake from 2000-2010. Very fun job but we got tired of desert living.

Jim Andrada
8-May-2015, 17:28
So if LF lenses were still the flagship products I guess we might be using 600mm non-telephoto lenses that only needed 300mm of bellows draw and produced a perfectly (almost) color corrected lens with (practically) zero spherical aberrations. And we could even get them to keep the close-in subject as well as the distant mountains in (nearly) perfect focus at F 1.2! Oh well, we can dream I guess.

Dan Fromm
8-May-2015, 18:17
Jim, as long as you're dreaming add an all terrain truck crane for transporting and setting up your 600/1.2.

Emmanuel BIGLER
9-May-2015, 08:00
Where does this understanding come from?

Simply because the human eye, looking at a displayed print at 30 cm viewing distance, is far more sensitive to big modulations, abrupt edge contrast, coarse grain effects, depth of the dark areas, and so on, including all kinds of non-linear effects, than to the contrast of small modulations in a linear input/output model.

In other words, the human eye is sensitive to many kinds of physiological effects not addressed by the principles and measurements of MTF on film.
The correlation between film MTF and final image quality is far from obvious.
But, for sure, a film featuring a nice MTF curve passing up to 400 cy/mm, like a microfilm, has some definite qualities, for example for a microfiche reader.

paulr
9-May-2015, 08:26
The correlation between film MTF and final image quality is far from obvious.

Agreed 100%. Although I'd suggest that if you learn what to look for, many important correlations become reliable.

An MTF curve shows contrast at all spatial frequencies, and there's now ample science on how the eye uses different frequency ranges in interpreting an image.

If we're looking at film (or sensor) MTF, the critical step is to translate this into MTF at the final print / viewing distance. This adds variables, of course, because with a digital file any sharpening steps move the MTF curve around, and with a darkroom print the enlarging optics diminish the modulation significantly. But the film curve gives us a solid starting point.

Dan Fromm
9-May-2015, 08:47
An MTF curve shows contrast at all spatial frequencies, and there's now ample science on how the eye uses different frequency ranges in interpreting an image.

If this is true why does Rodenstock, for example, publish MTF curves for specified spatial frequencies?

Nodda Duma
9-May-2015, 08:53
I see a lot of poo-poo'ing of using MTF as a metric of interpretation for what the human eye will see in an image. To the point that laymen think there is no meaningful way to determine how an image will appear to the eye.

That conclusion would be patently false. However, MTF tells only a very small part of the full story in predicting image quality. It is difficult, but not impossible, to model a full imaging system, and so it is typically not done unless the need to predict the final image is critical. Search for NV-IPM...a modeling tool which has borrowed heavily from real-world data collection and testing as well as imaging theory (itself developed on predicting real-world phenomena).

But when talking about photography purposes, such a tool is well beyond your need. Just take pictures and keep the ones you like.

paulr
9-May-2015, 10:34
If this is true why does Rodenstock, for example, publish MTF curves for specified spatial frequencies?

Because there are more factors than can be shown in a single two-dimensional graph. Lens makers usually address this by showing MTF for every angle of view, but only at a few spatial frequencies.

For film and sensors, performance is uniform over the whole surface. That's one less variable, so they can show all the spatial frequencies in a single chart.

paulr
9-May-2015, 10:40
It is difficult, but not impossible, to model a full imaging system, and so it is typically not done unless the need to predict the final image is critical.

It's actually pretty easy. I think it isn't done more often because you'd be modeling (or measuring) exactly one combination of devices, with one set of variables for each. So while the results would be precise, they'd be too specific to be useful for many people.

There are tools that specifically measure the whole system (like Imatest). Interestingly, the biggest complaint about these tools is that they don't let you isolate components. You can only test a particular lens / camera combination.

If you've got MTF curves of taking lens, film, and enlarging lens, you can just multiply them. That's one of the elegant properties of MTF.
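A minimal sketch of that cascading property (the three component curves below are invented, Gaussian-shaped stand-ins, not measured data):

import numpy as np

freqs = np.linspace(0, 60, 61)        # spatial frequency on film, lp/mm

def toy_mtf(f, f50):
    # illustrative curve that falls to 50% modulation at frequency f50
    return np.exp(-np.log(2.0) * (f / f50) ** 2)

mtf_taking_lens = toy_mtf(freqs, 40.0)
mtf_film        = toy_mtf(freqs, 50.0)
mtf_enlarger    = toy_mtf(freqs, 45.0)

mtf_system = mtf_taking_lens * mtf_film * mtf_enlarger   # cascaded MTFs multiply

print(f"system MTF at 10 lp/mm: {np.interp(10, freqs, mtf_system):.2f}")
print(f"system MTF at 30 lp/mm: {np.interp(30, freqs, mtf_system):.2f}")

The multiplication is what makes the system always softer than its weakest link, and it is only strictly valid when each stage is treated as linear and shift-invariant.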

bobwysiwyg
9-May-2015, 11:21
It's basic physics. If by "good" you mean "simple," then you probably never had a good explanation because the why isn't simple.

Start at that link then at the bottom of that page are links to the math and deeper discussion about why the why is the why :)
https://luminous-landscape.com/understanding-lens-diffraction/

The fast and furious explanation is that the edge of the aperture is always messing with the light, and that messy edge light becomes a proportionately bigger share of the image as the aperture shrinks: less clean central light versus more messy edge light the smaller your aperture.

I like this one. Easy to visualize, and it's very straightforward.

Emmanuel BIGLER
9-May-2015, 11:42
If you've got MTF curves of taking lens, film, and enlarging lens, you can just multiply them. That's one of the elegant properties of MTF.

Paul, this approach is valid only if you assume some kind of linearity between the output and the input.

As far as lens MTFs are concerned, there is no objection: linearity is granted in all photographic applications between the input luminances and the output number of recorded photons per unit area.

But film and the human vision work in a very non linear regime.

For example : in order to get a nice gradation of gray levels, you need a linear gradation in terms of optical densities, not in terms of number of photons.
The same applies to digital cameras, where the output levels are computed with some kind of non-linear correspondence between the number of recorded photo-electrons and the output image level.
Hence MTFs for film can only be defined in a linearized model, only for small modulations.
And as explained by Nodda Duma, in this model we are very far from our physiological image perception processes.
But MTFs for film give us some useful information; in order to properly record a hologram, for sure, you need a generous amount of cy/mm in your recording medium ;)

This is basically why I consider MTFs for film as mostly irrelevant for the assessment of good quality photographic prints.

But, believe me, I have on my computer a complete collection of film data-sheets accumulated since the end of the last century ;-) I love MTF charts of all kinds ;-)

Nodda Duma
9-May-2015, 12:12
No, imaging systems are definitely linear systems (which is different than talking about a linear response curve). And MTF certainly isn't irrelevant as we all agree. However, MTF (the whole system MTF..yes Paul you are correct) only tells part of the story. Additionally, this type of analysis is definitely susceptible to GIGO -- Garbage In, Garbage Out. Imatest is a good tool (the LSF output was my suggestion to Norman), but it is dependent on contrast and the high spatial frequency part of the curve is noise-dependent. Better to measure the lens on an MTF bench like from Optikos and roll in the rest of the subsystem MTFs separately.

Anyways, I'm rambling. Bottom line is that MTF is useful, but you need to know what it really tells you and how to correlate the values to what is presented to the eye.

paulr
9-May-2015, 13:24
Bottom line is that MTF is useful, but you need to know what it really tells you and how to correlate the values to what is presented to the eye.

Agree 100%, and also that doing so isn't easy.

Agree also on GIGO, which is of course the case with any system that quantifies anything.

What I like about MTF is that, while it's complicated and unwieldy, it DOES correlate to how the eye perceives sharpness and detail. Which you can't say about more popular and traditional methods, like Air Force resolution charts.

paulr
9-May-2015, 13:36
This is basically why I consider MTFs for film as mostly irrelevant for the assessment of good quality photographic prints.

Well, I think for most people shooting large format film, especially if they're enlarging very little, film MTF is unimportant. But not for the reasons you're giving. It's just that most film is going to have 100% modulation or thereabouts at the relevant spatial frequencies.

For example, Tri-X (http://www.kodak.com/global/en/professional/support/techPubs/f4017/f4017.pdf) doesn't drop below 100% modulation until 30 lp/mm. This corresponds to 5 lp/mm, the most critical spatial frequency for sharpness, at a 6X enlargement. That's nearly a 30" print from 4x5, or 60" from 8x10. At these sizes, the sharpness differences between Tri-X and Tmax or microfilm are going to be unimportant.

Emmanuel BIGLER
10-May-2015, 02:56
No, imaging systems are definitely linear systems

If you include the human vision in 'imaging systems', then tell me what the relevant linear correspondence between input and output is, and we can continue the conversation ;)

The same kind of ideas about linear vs. non-linear transfer functions come up for the human hearing system.

For sure, Fourier analysis of sound gives a very good starting point in explaining pitch and tones for musical instruments. But it does not explain why we can hear a beat note between two organ pipes that are not perfectly tuned. If the human ear were a 100% linear detector operating like a linear Fourier spectrum analyzer, we would never hear the beat note, a typical non-linear effect that occurs when you non-linearly mix two sine waves. Without beat notes, it would not be possible to tune a piano!

And (now really off-topic, just for the pleasure of admiring our human sensors for light and sound) there is an interesting example regarding the perceived pitch of very long organ pipes.
In certain pipes delivering low-frequency sound, the fundamental frequency is missing, yet the human ear perceives a pitch corresponding to the spacing between the harmonics.
And there are several interesting experiments with synthetic sounds where you can cheat with pitch and loudness. By continuously changing the loudness and pitch of digitally synthesized sound, "specially designed to cheat", you can give the illusion of a pitch that goes up indefinitely!

In an image, the equivalent of a beatnote is a moiré effect and is a 2D phenomenon, but the analysis in terms of human perception is certainly not the same as for sound.

We are much more sensitive to low spatial frequencies that we see in a moiré pattern than to the individual fine grids that interfere together in the moiré.
And I totally agree with Paul regarding the fact that the good MTF curve of Tri-X up to 30 cy/mm makes the film particularly pleasant.
This a reason why we love using large format film.

And why microfilms are available in 4x5" for general photography (to be souped in special low-contrast chemistry) when good ol' Tri-X gives you everything you need is a mystery ;)

This reminds me of the last years of analogue Hi-Fi systems, where some serious, uncompromising amateurs insisted on using audio amplifiers with a bandwidth up to 50 kHz ;)

Nodda Duma
10-May-2015, 04:05
My differential equations are rusty, but I believe beat frequencies are described mathematically by a simple harmonic oscillator? But since the human ear can't determine phase, the beat frequency is subjectively heard as the difference of the two?

I thought historically that was one of the first uses of differential equations..used to describe the phenomena you mention above.

Anyways, too much math for me. I spent a good portion of my career getting away from that stuff :) I'll stick to ray bending. I do have a detailed optical model of the eye. You would be shocked at how poor a performer the eyeball is. Our brain image processing makes up for a lot.

paulr
10-May-2015, 07:17
If you include the human vision in 'imaging systems', then tell me what the relevant linear correspondence between input and output is, and we can continue the conversation ;)

For the purposes of perceived sharpness and detail, it's really not as complicated as you're making it out to be. The basics are pretty basic. And with some clever homegrown photoshop experiments you can demonstrate them to yourself and to anyone.

Our visual cortex determines sharpness and "image quality" almost exclusively with contrast in the range of 1 lp/mm to 5 lp/mm. Anything higher frequency than that range is essentially inconsequential. And anything lower frequency than that range is practically irrelevant, since any visual system that does an adequate job at the high frequencies will do a more-than-adequate job at the low ones.

There are now piles of research showing this. And I've demonstrated it by incorporating these lessons into sharpening routines. I've shown many people inkjet prints that look more like contact prints to them than actual contact prints from the same negative.

I agree with everything you're saying about audio. Human psychoacoustics seems like a much less mature and much more mysterious field than human vision.

Although one caveat: I don't get your point with the beat notes of the organ pipes. That's a very simple, completely linear phenomenon. If you add two sine waves that are slightly off from each other, you'll see aliasing in the form of a non-signal, low frequency wave. Both it and visual moiré patterns are simple, easily calculable phenomena.

The examples of visual and auditory illusions play more to neurophysiological quirks, and are both more interesting and less easily quantifiable with simple physics.

Emmanuel BIGLER
10-May-2015, 07:56
I don't get your point with the beat notes of the organ pipes. That's a very simple, completely linear phenomenon

Paul, the problem, from an academic point of view, is exceedingly simple.
Let's listen to two sinusoidal sound waves at a slightly different frequency.
Now consider that our ears+brains consist only of a linear detector.
And propose a model where the difference in frequencies could appear in this 100% linear detection scheme.
The difference in frequencies comes out immediately if the detector performs some non-linear operation with respect to the amplitude of the sound vibrations. But a purely linear detector can by no means yield new Fourier frequencies in its output.
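
To make the algebra explicit, the standard identity is

\[
\sin(2\pi f_1 t) + \sin(2\pi f_2 t) = 2\cos\!\Big(2\pi\,\tfrac{f_1-f_2}{2}\,t\Big)\,\sin\!\Big(2\pi\,\tfrac{f_1+f_2}{2}\,t\Big).
\]

The slow cosine factor is the envelope we hear as the beat, but the signal is still only the two original sinusoids: the spectrum contains f1 and f2 and nothing else. Only a non-linear operation, squaring for instance, creates a true component at the difference frequency, via

\[
2\sin(2\pi f_1 t)\,\sin(2\pi f_2 t) = \cos\!\big(2\pi(f_1-f_2)t\big) - \cos\!\big(2\pi(f_1+f_2)t\big).
\]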

Aliasing comes from a sampling detector. To the best of my knowledge, our ears are purely analogue.

And regarding human vision, I agree 100% that a certain range of relatively low spatial frequencies contributes to what is perceived as sharp or not. And I agree that Fourier/linear post-processing techniques can yield very interesting improvements in image sharpness.
But this does not prove that our vision system can be modelled only with a purely linear scheme.

Regarding complexity in modeling human vision, you probably know all the classical visual illusions as well as E. Land's "Retinex" models explaining the perception of color and contrast. I stand by this point: it is not so simple.
Are you sure that all the secret pre-processing algorithms in DSLRs are 100% linear and spatially invariant?
The two mathematical conditions of linearity and spatial invariance are required, otherwise MTF models are not valid.

Jim Andrada
10-May-2015, 12:57
And to elaborate on Emmanuel's OT remarks about organs and churches, I've heard that it wasn't uncommon for smaller/poorer churches to build organs with two short pipes which sounded like one 16 foot (or maybe 32 foot) pipe. I play tuba, and in fact there is almost no energy at the pedal tones themselves, but people are good at perceiving the low note based on the energy in the higher harmonics. I think there's something similar about the acoustics of bells.

paulr
12-May-2015, 16:33
But this does not prove that our vision system can be modelled only with a purely linear scheme.

I'm not trying to prove that our visual systems are purely linear (I'd have to see this defined to even have a hunch one way or another). But what I've seen, in lots of research and in personal experience, is that MTF models subjective impressions very reliably. If you know how to read the charts. And while I'm sure MTF models our vision imperfectly (what model is perfect?), I have yet to see discrepancies that were important.

On the other hand, there are gigantic discrepancies between traditional models of image quality (extinction resolution, etc.) and subjective impressions. This is why MTF is such a breath of fresh air: it's the first model to come along that actually corresponds with what things look like.

paulr
12-May-2015, 16:39
Aliasing comes from a sampling detector. To the best of my knowledge, our ears are purely analogue.

It would have been more accurate to equate musical beating with moiré patterns. These are analogue phenomena and are easily modeled. There is no psychoacoustic component; the beating would register the same to any audio instrument that has adequate frequency response.

Some of the other phenomena you and Jim mention, like phantom fundamental frequencies, are psychoacoustic in nature. They are quirks of the listening organism. But these are not directly analogous to the visual phenomena we're talking about. I tend to think hearing is more mysterious than seeing. The book hasn't even been written yet on how to measure loudness.

Michael R
12-May-2015, 18:20
And to elaborate on Emmanuel's OT remarks about organs and churches, I've heard that it wasn't uncommon for smaller/poorer churches to build organs with two short pipes which sounded like one 16 foot (or maybe 32 foot) pipe.

This is correct. The lowest octave of a 32' rank of a flue stop is sometimes simulated (to varying degrees of success) using open fifths an octave higher in a 16' rank, and this is often done where either space or cost (or both) are prohibitive. Playing a 16' C with the G above it adds a "virtual" 32' C. The stop name is usually something like "Resultant" or "Resultant Bass". Because our ears are fairly lousy at differentiating frequencies that low without higher harmonics, this usually works OK, although the relative volume of the fifth above the fundamental, and whatever else is going on at the same time, determine the extent to which we hear the fifth. Ideally, the fifth is played quieter than the fundamental; for example, the lowest notes in a resultant 32' octave might use principal 16' pipes for the fundamental and a softer rank for the fifths. You can do this on a piano: play a loud bottom C, then add a softer G a fifth higher, and you'll hear an additional C an octave lower (~16 Hz).
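
The arithmetic is easy to check with a throwaway Python sketch (pitches rounded, A440 tuning assumed, and the fifth taken as pure, the way resultant ranks are usually tuned):

# Resultant ("acoustic") 32' bass: a 16' C plus the pure fifth above it.
# Pitches are approximate, for illustration only.
C1 = 32.70            # lowest C of a 16' rank, ~32.7 Hz
G1 = C1 * 3 / 2       # pure fifth above, ~49.1 Hz
print(f"16' C:        {C1:.2f} Hz")
print(f"fifth above:  {G1:.2f} Hz")
print(f"difference:   {G1 - C1:.2f} Hz   (the 'virtual' 32' C, ~{C1 / 2:.2f} Hz)")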

Sorry this was way off topic.

Emmanuel BIGLER
13-May-2015, 00:35
Apologies to the moderators if we are going off-topic but I really appreciate the comments regarding low-frequency organ pipes.
Actually, I only know the effect from reading, back in the last century, a classical French textbook on the subject:
Émile Leipp, "Acoustique et musique", Masson (1984), ISBN 978-2225801969
I have found no English edition of this book, unfortunately. The text was written a long time before heavy digital processing and modelling took place in music and acoustics, but I believe that some principles are still valid today.

------------------------

Regarding moiré patterns for two overlaid gratings observed in transmission: the transmission factor of the overlay is the product of the two transmission factors (the optical density is the sum of the two optical densities). Hence all cross-products of the elementary Fourier components of the two individual gratings (I mean Fourier components of the transmission factors, not Fourier components of the optical densities) show up in the product of transmission factors. The effect is of course richer than a beat note between two sounds, since we operate in a 2-D space.
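
A quick numerical illustration of those cross-products: a minimal 1-D sketch with two sinusoidal gratings (the frequencies, 40 and 43 cycles per unit length, are arbitrary; numpy does the Fourier transform):

import numpy as np

# Transmission factors of two 1-D sinusoidal "gratings" (values between 0 and 1).
x = np.linspace(0.0, 1.0, 4096, endpoint=False)
t1 = 0.5 + 0.5 * np.cos(2 * np.pi * 40 * x)
t2 = 0.5 + 0.5 * np.cos(2 * np.pi * 43 * x)

def lines(signal):
    """Spatial frequencies (cycles per unit length) carrying significant energy."""
    spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
    return [f for f, a in enumerate(spectrum) if f > 0 and a > 1e-3]

print("sum (linear superposition):       ", lines(t1 + t2))  # -> [40, 43] : no new frequencies
print("product (overlay in transmission):", lines(t1 * t2))  # -> [3, 40, 43, 83] : difference and sum appear

The difference frequency (3 cycles per unit length) is the coarse moiré fringe we actually notice.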


These are analogue phenomena and are easily modeled.

Hey, Paul (just pulling your leg again): so easily that I'm still waiting to see how a purely linear sound detector can create the cross-product, hence the sum and difference frequencies, from two purely sinusoidal signals that are linearly summed ;)

paulr
13-May-2015, 10:27
Emmanuel, I may not even understand your question. If you're talking about out-of-tune pipes creating a lower "beating" frequency between them, then I get it. And that's something that any sound detector will hear. If you're talking about something else, then I don't know what you're talking about. But I'm betting it doesn't have anything to do with the relationship between MTF curves and subjective sharpness!

Off-topic or not, I'm fascinated by audio and optics, and psychoacoustics and visual perception (psycho-optics doesn't seem to be a real word, for better or worse). I love it when parallels between the two fields are useful. And I see a lot of examples where the weirdnesses of hearing and the weirdnesses of seeing are too different to compare meaningfully. It's important to check the analogies for relevance.

Before trying to quantify the shortcomings of MTF in modeling sharpness, it would make sense to see if they exist, and to what degree. I'd be very interested to see an example of MTF failing miserably in predicting subjective impressions. I haven't yet.

wombat2go
14-May-2015, 06:59
Interesting discussion.
It is understandable that difference components are generated by a nonlinear transfer function, e.g. the ear or a diode modulator/detector.
We treat electromagnetic propagation as linear (I mean the media properties are regarded as invariant with intensity, as in a vacuum or a glass lens, etc.)
But what about sound in air?
Are there artifacts from the compression of the air itself in sound propagation?
Would a perfectly linear microphone detect difference frequencies and their harmonics when placed close to a loud organ?
I suspect so.
I did a search and so far found nothing about sound wave distortion by gas laws etc.

Jac@stafford.net
14-May-2015, 09:19
But what about sound in air?

Doppler effect - regardless of medium

paulr
14-May-2015, 19:25
Let's listen to two sinusoidal sound waves at a slightly different frequency.
Now consider that our ears+brains consist only of a linear detector.
And propose a model where the difference in frequencies could appear in this 100% linear detection scheme.
The difference in frequencies comes out immediately if the detector performs some non-linear operation with respect to the amplitude of the sound vibrations. But a purely linear detector can by no means yield new Fourier frequencies in its output.

Maybe you're trying to say something different from what I'm understanding.

But as I see it, you have two slightly out-of-tune sine waves, you add them together, and you get a very low frequency undertone. It's what we hear, it's what a microphone picks up, and it's what an oscilloscope shows. It looks like this:

[attachment 133852: plot of the sum of two slightly detuned sine waves, with the beat visible as a slow envelope]
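
You can generate that kind of plot in a few lines of numpy/matplotlib (frequencies picked arbitrarily, 4 Hz apart):

import numpy as np
import matplotlib.pyplot as plt

fs = 44100                     # sample rate, Hz
t = np.arange(fs) / fs         # one second of time
f1, f2 = 220.0, 224.0          # two tones, 4 Hz apart

s = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

plt.plot(t, s, linewidth=0.5)
plt.xlabel("time (s)")
plt.ylabel("amplitude")
plt.title("220 Hz + 224 Hz: the 4 Hz beat shows up as the envelope")
plt.show()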

Jac@stafford.net
14-May-2015, 19:38
[...] Let's listen to two sinusoidal sound waves at a slightly different frequency.
Now consider that our ears+brains consist only of a linear detector [...]

But do our ears sense in only a linear manner? We know our eye-brain does not.

paulr
14-May-2015, 20:03
But do our ears sense in only a linear manner? We know our eye-brain does not.

You really have to define your terms here. We respond to some aspects of sound and light in a linear manner, others not. In this case, what we hear is pretty well predicted by what you see in those waveforms, so I'm not sure what's being questioned ...

My question all along is the relevance of this to MTF curves. Non-linearity was brought up in objection to the notion that you can model the sharpness of a system by multiplying MTF curves. This criticism strikes me as both illogical and vague, and moreover I'm certain it's wrong. It's what optical scientists do, and it's one of the things they value about the measurement.
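
To make the cascade idea concrete, here's a rough Python sketch. The diffraction-limited lens curve is the textbook formula for a circular aperture; the film curve is a made-up Gaussian placeholder, not measured data for any real emulsion.

import numpy as np

wavelength_mm = 550e-6                        # green light, 550 nm expressed in mm
f_number = 16.0
cutoff = 1.0 / (wavelength_mm * f_number)     # diffraction cutoff, ~114 cy/mm at f/16

def lens_mtf(nu):
    """Diffraction-limited MTF of an aberration-free lens with a circular aperture."""
    s = np.clip(nu / cutoff, 0.0, 1.0)
    return (2.0 / np.pi) * (np.arccos(s) - s * np.sqrt(1.0 - s ** 2))

def film_mtf(nu, scale=60.0):
    """Placeholder film response: an arbitrary Gaussian roll-off, NOT real film data."""
    return np.exp(-(nu / scale) ** 2)

for nu in (5, 10, 20, 30, 40):                # spatial frequencies in cy/mm
    print(f"{nu:3d} cy/mm: lens {lens_mtf(nu):.2f} x film {film_mtf(nu):.2f}"
          f" = system {lens_mtf(nu) * film_mtf(nu):.2f}")

The multiplication step is the whole point: at every spatial frequency, the contrast that reaches the negative is the product of the component responses.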

Emmanuel BIGLER
15-May-2015, 00:01
Paul.
Sorry to insist, and I apologize to our readers: this is not one of those controversies we are familiar with on discussion forums, but a fundamental scientific question.
And to put it frankly: you are plain wrong regarding the beat note for sound.
Or you do not understand what Fourier analysis means, which would be surprising.

If the Fourier spectrum for a certain sound is composed of only two elementary components, i.e. two sinusoidal vibrations at different frequencies, then a linear and time-invariant (non-sampling) detector can do nothing but filter the two components, with a certain coefficient for each frequency. Hence the spectrum of the output signal contains only the two original frequencies. And nothing else.
In the diagram that you've posted with the sum of two sine waves, the Fourier spectrum contains only the two original frequencies. No sum and no difference of frequencies.
Of course we can see the beat note plotted as the envelope of the wave. But no difference frequency exists in the Fourier spectrum.
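
If a numerical check helps, here is a small numpy sketch (frequencies chosen arbitrarily, 4 Hz apart):

import numpy as np

fs = 8000                          # sample rate, Hz
t = np.arange(fs) / fs             # one second of samples
f1, f2 = 440.0, 444.0              # two slightly detuned tones

signal = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

def spectral_lines(x, threshold=1.0):
    """Frequencies (Hz) where the magnitude spectrum has significant energy."""
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    return freqs[spectrum > threshold]

# Linear detection: only the two original frequencies are present.
print("linear detector:     ", spectral_lines(signal))        # -> [440. 444.]

# Square-law (non-linear) detection, like a diode detector: the 4 Hz
# difference frequency is now a genuine spectral component.
print("square-law detector: ", spectral_lines(signal ** 2))   # -> [0. 4. 880. 884. 888.]

The envelope is visible in the linear waveform, but the 4 Hz line only exists in the spectrum after the non-linearity.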

Paul, we really need to peacefully agree on this before continuing the discussion.

A sampling detector will create a spectrum for the sampled signal which is different from the original spectrum. Aliasing can result from such a detector when the sampled signal is re-filtered with a linear low-pass filter. But our ears are not a sampling detector.

If you do not agree with what I just stated, then I'm afraid you are missing some of the fundamental meaning of Fourier analysis.
And I am not being vague in stating this: I am very precise about what Fourier analysis means.
So you need to answer the question about sound beat notes before we continue the discussion.

The situation is simple.
If you believe that the difference of frequencies can appear in the Fourier spectrum detected by a linear and time-invariant detector, you are wrong.

Now we apply Fourier analysis to vision and images, and my objection is this: our vision system is nonlinear; hence MTF analysis, which is valid only for a linear, space-invariant, non-sampling image detector, cannot explain everything.

Nodda Duma
16-May-2015, 02:24
My question all along is the relevance of this to MTF curves. Non-linearity was brought up in objection to the notion that you can model the sharpness of a system by multiplying MTF curves. This criticism strikes me as both illogical and vague, and moreover I'm certain it's wrong. It's what optical scientists do, and it's one of the things they value about the measurement.

It's because there is confusion between the idea of a *linear output* and *linear systems*. I gave up trying to explain the difference and checked out earlier in the thread. ;)

Make no mistake: imaging systems are entirely linear and predictable. The proof is in the fact that precision optics exist.