
View Full Version : What is the total resolution of a combined system?

raghavsol
28-May-2008, 03:53
Hi,
According to common sense, if I have optics of, say, 120 lp/mm resolution, then I must select a sensor of at least 120 lp/mm resolution to get an overall resolution of the combined system (optics + sensor) equal to 120 lp/mm. But this is not correct, because the total resolution (Rt) is given by the following relation:

1/Rt^2 = 1/Ro^2 + 1/Rs^2, where Rt is the total resolution of the optics+sensor system, Ro is the optics resolution and Rs is that of the sensor.

I am not able to comprehend this; can anybody help me?

In other words, I can put my doubt this way: what kind of resolution is required for the sensor to record all the information provided by the lens?

Raghav

Struan Gray
28-May-2008, 04:03
1/Rt^2 = 1/Ro^2 + 1/Rs^2

This is only true if the blur from your optics and the blur from your sensor have a particular functional form - usually assumed to be Gaussian: the famous bell curve.

Lenses have blurs which at best are Airy functions. These are much steeper than Gaussians for a given normalisation.

Film has blur which is very process-dependent, and which is complicated by the mixing of spatial and tonal resolution once you get down amongst the grain.

Empirical research has shown that 1/Rt = 1/Ro + 1/Rs is a better expression for photographic imagery. It fits with the general observations above.

For a digital sensor the blur is again based upon Airy functions, so I would expect the combination of reciprocals to hold there too. I am sure the NSA have worked out all the details.
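For concreteness, the two combining rules can be sketched numerically, using the 120 lp/mm figures from the original question (a toy calculation, not a measurement):

```python
import math

def combined_resolution_quadrature(r_optics, r_sensor):
    """Gaussian-blur model: reciprocal resolutions add in quadrature."""
    return 1.0 / math.sqrt(1.0 / r_optics**2 + 1.0 / r_sensor**2)

def combined_resolution_linear(r_optics, r_sensor):
    """Empirical photographic rule: reciprocal resolutions add directly."""
    return 1.0 / (1.0 / r_optics + 1.0 / r_sensor)

# A 120 lp/mm lens on a 120 lp/mm sensor:
print(combined_resolution_quadrature(120, 120))  # ~84.9 lp/mm
print(combined_resolution_linear(120, 120))      # 60.0 lp/mm
```

Either way, matching the sensor to the lens does not preserve the lens figure; the combination is always worse than the weaker component.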

Walter Calahan
28-May-2008, 05:49
This only matters if the images produced on a system of any resolution are worth looking at. Some of the best images ever made were on sensors (film) that could not come close to the resolving power of the lens.

Emmanuel BIGLER
28-May-2008, 06:23
If I dare to add my 0,02 euro here.
Before trying to combine resolution figures, it might be useful to define the resolution figure itself...

Traditional resolution figures for lenses were, in the past, defined by the minimum feature size visible on a high-resolution film recording the image of a resolution target. The image has to be carefully examined with a good microscope.
This classical procedure is still in use by serious authors on the web like Christopher Perez.
However, a resolution figure of 120 cycles/mm is hardly ever useful in the real photographic world, except when you enlarge microfilms in order to extract a precious document recorded optically.

A more satisfactory approach would be to know the modulation transfer function (MTF) of the lens and define the resolution as the maximum cycles/mm capable of rendering a minimum acceptable contrast in the final image.
If you insist on, say, 40% minimum contrast, I know of no photographic optics covering medium or large format capable of such a performance, except maybe some specialised process lenses of the old days like the Zeiss Orthoplanar.

When a lens is listed @120 cycles/mm, like some top-class MF lenses on Chris Perez's web site, it means that those 120 cy/mm are still visible, but vanishing, at something like 5% contrast. So what the same lens resolves at 40% contrast, I do not know, but 60 cy/mm would already be outstanding.

Regarding the empirical law quoted by Struan, adding the inverse resolution figures and not their squares, it can be related to a real mathematical process: multiplying the two MTF curves of two successive imaging steps.
If you make a simple model for the low frequencies and represent the MTF curve as a straight line starting at 100% contrast at zero cycles and gently falling to 0% contrast at some absolute resolution limit, then multiplying two curves of this kind gives a "curved" curve ;) whose slope at the origin obeys exactly Struan's "empirical" law.
In other terms, Struan's law 1/Rt = 1/Ro + 1/Rs has the following meaning: the slope of the combined MTF curve, extrapolated from its beginning around zero cy/mm as a straight line until it crosses the zero-contrast line, yields a combined resolution given by 1/Rt = 1/Ro + 1/Rs.
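A small numerical sketch of this straight-line model (the two cutoff values are illustrative, not taken from any real lens or film):

```python
# Straight-line MTF model: 100% contrast at f = 0, falling to 0% at cutoff R.
def mtf_linear(f, R):
    return max(0.0, 1.0 - f / R)

R1, R2 = 100.0, 150.0  # illustrative cutoffs, in cy/mm

# Estimate the slope of the product MTF at the origin numerically:
eps = 1e-6
slope = (mtf_linear(eps, R1) * mtf_linear(eps, R2) - 1.0) / eps

# Extrapolating that initial slope down to zero contrast gives the combined cutoff:
R_combined = -1.0 / slope
print(R_combined)                    # ~60 cy/mm
print(1.0 / (1.0 / R1 + 1.0 / R2))   # 1/Rt = 1/R1 + 1/R2 gives the same 60 cy/mm
```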

Now what happens with digital? Things become really tricky.
I have played the game of comparing the MTF curve of a modern fine-grain colour slide film (Fuji Provia 100F) with the expected MTF curve of an ideal digital sensor with a pixel pitch of 7 microns and 100% coverage with side-by-side square pixels.
There is a first difficulty with the Bayer sensor, since the pixels of a given R, G, or B colour are located on a pitch of 14 microns, not 7; but imagine that the pixels are true RGB pixels on an ideal grid with a 7 micron pitch.
With this simple model, taking into account some anti-aliasing filter limiting the resolution to 70 cycles/mm, the actual limit according to the sampling theorem, you find that the 7-micron-pitch digital sensor has roughly the same MTF curve as Provia 100F slide film.

If you do not use any anti-aliasing filter, the actual resolution limit given by the modelled MTF curve of the sensor would be something like 140 cy/mm, but you cannot reconstruct, without aliasing, any fine cycles beyond the limit of 70 cy/mm.
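The ideal-sensor model above can be sketched as follows, assuming the textbook sin(x)/x MTF of a 100%-fill square pixel with the 7 micron pitch discussed here:

```python
import math

pitch_mm = 0.007  # 7 micron pixel pitch, 100% fill factor (ideal model)

def pixel_mtf(f_cy_mm):
    """MTF of a square pixel aperture: |sin(pi*f*a) / (pi*f*a)|."""
    x = math.pi * f_cy_mm * pitch_mm
    return 1.0 if x == 0.0 else abs(math.sin(x) / x)

nyquist = 1.0 / (2.0 * pitch_mm)  # sampling-theorem limit for this pitch
print(nyquist)                     # ~71.4 cy/mm, the "70 cy/mm" limit above
print(pixel_mtf(nyquist))          # ~0.64: the pixel aperture alone still gives
                                   # high contrast at Nyquist, hence the filter
```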

But in the real world, what you get out of your digital sensor might not be truly raw digital data. Manufacturers do not give much information about what they actually do inside the sensor before delivering the image, and they are free to manipulate the image ad libitum: they can boost the MTF curve up to the sampling limit by any proprietary algorithm. And taking into account that European laws, to date, do not recognise software patents, do not expect to see the algorithms described in any public document !! ;)

As a conclusion after this long discussion, this may explain why I am dubious about defining resolution by a single figure related to the minimum feature size at the limit of vanishing visibility; such a figure is not really meaningful.

I prefer to look at the contrast achieved at 10, 20 and 40 cycles/mm, as in manufacturers' specs. For a 10x enlargement observed at a distance of about 300 mm (12"), considering that the eye has a resolution limit of 5 to 7 cy/mm in those conditions, the important numbers of cy/mm on film or sensor are between 50 and 70, not 120.
And between 50 and 70 cy/mm, whereas good old film cannot help letting its MTF curve drop down to zero, digital sensors helped by unlimited boosting algorithms can keep an excellent contrast up to the limit imposed by the sampling theorem.

Yes, I admit that this is cheating, definitely unfair with respect to the old & respectable film technology...

The MTF curve for Provia 100F film can be found here:
http://www.fujifilm.com/products/professional_films/pdf/Provia100F.pdf
The curve, when re-plotted on linear horizontal and vertical scales, is very close to the simple sin(x)/x MTF model for a digital sensor fitted with an anti-aliasing filter cutting at 70 cy/mm. Boosting the MTF of the sensor means artificially keeping the contrast as close to 100% as possible until the final cutoff of the filter. No anti-aliasing filter would mean: set the resolution limit to about twice the value, at 140 cy/mm with a 7 micron pixel pitch; this will in principle deliver about 60% contrast at 70 cy/mm without any pre-processing of the image, with the hope that your subject has no fine grids beyond 70 cy/mm generating aliasing or moiré artefacts. Or that you can post-process those artefacts away by some other smart-cheating procedure ;)

Suggested readings on Norman Koren's site
http://www.normankoren.com/Tutorials/MTF1A.html

Emmanuel BIGLER
28-May-2008, 10:25
Sorry, the link to the Provia datasheet listed above is broken; I have found another one that works:
http://www.fujifilm.fr/support/pdf/fiches_techniques/AF3-036F_Provia100F.pdf

In this official document, the MTF curve (see page 6) stops at about 60 cy/mm at 35% contrast, and up to this published value, the curves for this film and for a 7 micron pitch ideal sensor with anti-aliasing filter are very similar.
Above 60 cy/mm there is no data available for Provia film, but since it has been tested well above 100 cy/mm by independent measurement (Zeiss people among others), it means that the MTF curve does not drop as abruptly as the sensor's with its anti-aliasing filter.
Unfortunately, as far as I know, it seems very difficult to measure MTF curves above 100 cy/mm, so the only accessible values come from the traditional method of the vanishing visibility of a recorded test target.
Unfortunately for film, the tail of the film MTF curve, although it certainly continues up to 150 cy/mm with a few % of contrast (which I truly believe), does not make the print visually sharper, since in the competition for visual sharpness the digital sensor, boosted by all enhancement procedures, can keep a high contrast up to 60 cy/mm.

To me the consequence is that defining the resolution for optics + film by the limit of perception at a low contrast does not properly reflect the quality of images that one can actually print and examine, say, at a 10X enlargement ratio.

Ken Lee
28-May-2008, 10:56
When a lens is listed @120 cycles/mm, like some top-class MF lenses on Chris Perez's web site, it means that those 120 cy/mm are still visible, but vanishing, at something like 5% contrast. So what the same lens resolves at 40% contrast, I do not know, but 60 cy/mm would already be outstanding.

I can grasp the first point, quoted above: much of the high resolution data, when using film with superb lenses, is lost - because it is low in contrast. It is "vanishing visibility".

I can also grasp the idea that a digital sensor can recover some of that "vanishing" data.

What I don't follow, is your conclusion. What does this tell us about film, versus digital sensors ?

Steven Barall
28-May-2008, 11:40
You people know too much.

Nathan Potter
28-May-2008, 15:30
What I think Emmanuel is saying is that if you specify a very high image frequency (>100 lp/mm), your image on film is going to be of very low contrast and will visually appear to be of poor resolution due to the discrimination capability of the human eye. OTOH, if one puts a pixelated sensor at the film plane instead of film and chooses a spatial frequency of modest dimensions (say 50 lp/mm), the functional form of the image collection is no longer a classic Airy function but a square wave function, where each pixel picks off a discrete part of a COC (Circle Of Confusion). I imagine that constructing a mathematical connection between the Airy function and the square wave function would be quite complicated. (Maybe a square wave function is not the proper form here - I'm no expert)

But in practice I suspect that the higher contrast possible with a digital sensor at modest spatial frequencies is part of the reason why the digitally captured image can be so astoundingly good at times. (Not to mention the signal processing enhancements possible in camera)

Nate Potter

Bruce Watson
28-May-2008, 19:12
You people know too much.

Is that possible?

Emmanuel BIGLER
29-May-2008, 02:48
What I don't follow, is your conclusion. What does this tell us about film, versus digital sensors ?
Ken, my conclusions are exactly the same as Nathan's.

For the same surface, Provia film and a sensor with 7 micron pitch plus anti-aliasing filter have the same resolution; in fact their MTF curves are similar.
Film can go much beyond 70 cy/mm; I have found the reference to the tests at Zeiss: "Resolving power of photographic films", Zeiss
Camera Lens News No. 19, March 2003,
available from the Zeiss web site http://www.zeiss.de

But the digital sensor can be boosted by all kinds of secret tricks inside the electronics, close to the sensor. So the contrast of the digital image can be boosted to stay as high as possible up to the absolute limit of 70 cy/mm in the example cited above. And I know that some correcting software can manage moiré effects in post-processing; in other words, the anti-aliasing filter might not be necessary at all.
And the noise is much lower in a silicon detector. This is not a digital effect; it is due to the quantum efficiency of silicon compared to film. This efficiency commands the noise level for a given amount of photons received per pixel.
The best films have a quantum efficiency of about 1%. Good old Tri-X is rated at something like 0.5%, whereas current amateur-grade silicon photo sensors reach 10 to 20%, and professional sensors for astrophysics and space programs exceed 80%.
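A minimal sketch of how quantum efficiency drives the shot-noise-limited signal-to-noise ratio (the photon count is an arbitrary illustrative exposure level, not a measured value):

```python
import math

def shot_noise_snr(photons, quantum_efficiency):
    """Shot-noise limit: N detected events = QE * photons, and SNR = sqrt(N)."""
    return math.sqrt(quantum_efficiency * photons)

photons_per_pixel = 10_000  # illustrative exposure
for name, qe in [("film, QE ~1%", 0.01),
                 ("amateur silicon, QE ~15%", 0.15),
                 ("scientific sensor, QE ~80%", 0.80)]:
    print(name, round(shot_noise_snr(photons_per_pixel, qe), 1))
```

With the same light, the silicon sensor's SNR is several times that of film simply because it detects more of the photons.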

Although I know all those advantages, I do not really care for them in my amateur photography; I stay with film in MF and LF.
I know on a scientific basis ;) that I need big film-pixels to get a good image with invisible noise; in LF, film resolution becomes a non-issue with respect to lens resolution in modern lenses.
In conclusion: I do not care: I use a big camera and I enjoy seeing real images on a real support!
I like to do traditional darkroom work and I do not want to manage a family photo archive digitally.
I have inherited all my father's B&W negatives since 1935.
I have all his Kodachrome (35mm) slides from 1960.
I do not do anything special to manage this archive; the images are stored in boxes at home, at no cost, and I can at any time retrieve an image visually and print it. Yes, often the images are scratched, but the images are there.
I have no clients, no production stress or short deadlines, so most advantages of digital image capture & processing are minimal for me. If I need a good print: no problem; now that digital prints from film are top-class, I know what I can do if I do not want to print them myself in the darkroom.

Ken Lee
29-May-2008, 04:26
Excellent - Thank you for the clarification, and for the additional information.

Those of us who like to have a basic understanding of these matters are grateful that there are real scientists and mathematicians on the forum.

Question: What happens when we scan film ? Can we retrieve some of that "vanishing" data ? Can we have our cake, and eat it too ?

Emmanuel BIGLER
29-May-2008, 06:20
Question: What happens when we scan film ? Can we retrieve some of that "vanishing" data ? Can we have our cake, and eat it too ?

This is probably the most irritating question since scanners became available to the amateur!
Clearly, good amateur-grade flatbed scanners cannot retrieve everything that is recorded on film, but do we actually need to extract everything? Scanning beyond, say, 3000 samples per inch does not usually bring any additional sharpness with amateur-grade flatbeds. So the larger the film size, the better in this case.

Drum scanners on the other hand can do something excellent but I have no idea whether those machines are still manufactured, although many are still in use.

The problem with scanning film, even if your machine, like a drum scanner, can reach a real optical resolution of 10,000 samples per inch, is that you cannot easily get rid of granularity when your scan analyses the surface with a very small spot. The problem was well known in the past with micro-densitometers: density fluctuations in the output increase when the analysing slit gets smaller. I do not see how a drum scanner could bypass this fundamental effect!
You are stuck exactly as with an image photon detector of limited quantum efficiency: you have to merge pixels together and average the reading in order to decrease the noise.

So you have to find a trade-off between noise and resolution; the good approach is to use large format film, for which an amateur-grade scanner will deliver acceptable results. Then the issues of optical resolution and film resolution usually become less important than the scanner's final effective resolution!!
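The merging-and-averaging trade-off can be illustrated with a toy simulation (the granularity level is invented for illustration):

```python
import random
import statistics

random.seed(0)
# A flat area scanned at fine pitch: true density 1.0 plus random granularity.
fine = [1.0 + random.gauss(0.0, 0.2) for _ in range(10_000)]

# Merge groups of 4 fine samples (a 2x2 bin, i.e. half the linear resolution).
binned = [sum(fine[i:i + 4]) / 4 for i in range(0, len(fine), 4)]

print(statistics.pstdev(fine))    # ~0.20 RMS granularity at full resolution
print(statistics.pstdev(binned))  # ~0.10: averaging 4 samples halves the noise
```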

Ken Lee
29-May-2008, 06:39
Wonderful - Thanks !

Bruce Watson
29-May-2008, 07:19
Drum scanners on the other hand can do something excellent but I have no idea whether those machines are still manufactured, although many are still in use.

There are at least three manufacturers still making drum scanners. Screen in Japan, ICG in England, and Aztek in the USA.

The problem with scanning film, even if your machine, like a drum scanner, can reach a real optical resolution of 10,000 samples per inch, is that you cannot easily get rid of granularity when your scan analyses the surface with a very small spot.

There are no scanners that I know of that can optically scan at that rate. The highest seems to be the Aztek Premier with a 3 micron aperture, which tested out at 7264 ppi. But this test was run by the late Phil Lippencott, who was the president of Aztek at the time he made the study. I'm just saying there's the possibility of a conflict of interest there, so make of the study what you will.

Oh, yes, the study is at scannerforum.com (http://www.scannerforum.com/), follow the DIMA 2002 Scanner Roundup link, and look at page 22 for his final results. Note also the lack of testing of the big Screen scanner, which might do a bit better than the Premier. But we won't know until somebody runs the tests and publishes the results. Sigh...

The problem was well known in the past with micro-densitometers: density fluctuations in the output increase when the analysing slit gets smaller. I do not see how a drum scanner could bypass this fundamental effect!
You are stuck exactly as with an image photon detector of limited quantum efficiency: you have to merge pixels together and average the reading in order to decrease the noise.

The thing is that a PMT is a much better photon detector than a CCD. I've been drum scanning a while now and I have yet to see anything I would classify as noise in any scan I've made. The weird thing is how much better the PMTs perform than CCDs in the least dense areas of the film. I would have thought that they would be nearly equal here. But PMTs pull considerably more information out of the "shadows" of a B&W negative than the CCD scanners I've used.

So you have to find a trade-off between noise and resolution; the good approach is to use large format film, for which an amateur-grade scanner will deliver acceptable results. Then the issues of optical resolution and film resolution usually become less important than the scanner's final effective resolution!!

There's something else going on here too. Specifically, scanners are deterministic devices. They lay a highly repeatable "virtual grid" over the image and sample the image through the holes in this virtual grid. The film however is stochastic in nature. Density fluctuations across the film are nearly random. The film grain itself is very small, but the emulsion is 3D in that it has thickness. It is this thickness which allows grains to overlap and form grain clumps which are big enough that a scanner can see them.

For more about how film forms an image, what the image actually looks like at the detail level (excellent illustrations), and what levels of resolution various films can achieve, see the excellent Tim Vitale paper (http://aic.stanford.edu/sg/emg/library/pdf/vitale/2006-03-vitale-filmgrain_resolution.pdf) that is so amazingly informative. Tim does very good, very thorough work, and explains it all so much better than I can ever hope to.

The range of sizes of these grain clumps is very large, while the scanner looks through fixed size holes in its virtual grid. There is little hope therefore of ever imaging a film grain clump, because there is little hope that the film grain clump is the same size as the hole in the virtual grid. And if it were, there's even less hope that the film grain clump is centered precisely in the hole of the virtual grid.

What I'm saying here is that the scanner does not image the film grain. What it does is look through its virtual grid and sample the film at each point. That this works at all indicates that the image information lives at a level considerably higher than that of the film grain clump. That is, it takes a whole lot of grain clumps to make image information -- it takes a lot more than just two grain clumps to record the edge of a tree branch, for example.

The bottom line here is that scanning is almost certainly going to be the resolution "bottleneck" in a lens/film/scanner workflow, if for no other reason than the problems of looking at a stochastic medium with a deterministic tool.
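This point can be illustrated with a toy 1D simulation: sampling a stochastic "film" with a fixed deterministic grid (all numbers invented for illustration):

```python
import random

random.seed(1)
# 1D "film": 100,000 grain sites; the chance a grain develops follows a step
# edge in the image (20% exposure on the left, 80% on the right).
film = [1 if random.random() < (0.2 if i < 50_000 else 0.8) else 0
        for i in range(100_000)]

# "Scanner": a fixed grid where each sample averages 500 grain sites.
samples = [sum(film[i:i + 500]) / 500 for i in range(0, 100_000, 500)]

# Individual grains are pure binary noise, yet the sampled profile recovers
# the edge, because image information lives well above the grain level:
print(sum(samples[:100]) / 100)  # ~0.2, left of the edge
print(sum(samples[100:]) / 100)  # ~0.8, right of the edge
```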

Robert Fisher
29-May-2008, 07:43
Emmanuel and Bruce, thanks very much for your input - as Ken mentioned, it is nice to have folks like you on this forum.

Emmanuel BIGLER
29-May-2008, 08:57
Thanks very much to Bruce for the information about drum scanners still being made.
If you arrange square pixels of 3 microns side by side, this makes about 8400 pixels per inch, so it is nice to see that the instrument could get very close to this limit.
Actually, if you look at the theoretical sampling limit for a square aperture, you could still gain something by moving the slit by one half of its width, i.e. taking about 16,000 samples per inch with a 3 micron slit. This "doubled sampling" is of course not possible with a modern scanner in the direction where the sensors are arranged side by side in a line; however, in the scanning direction this is perfectly possible and, as far as I know, actually used.
Actually, in the eighties I worked with the ancestor of modern scanners; it was a computer-controlled micro-densitometer made by Perkin-Elmer.
This was not a drum scanner; it was a high-precision X-Y stage with a repeatability in the micron range. The instrument had the advantage over drum scanners of handling the glass plates that many scientific labs still used at the time.
The smallest slit was 5 microns (actually, in a microdensitometer the slit is the projected, de-magnified image of a mechanical slit), and at the time I was a student working with Kodak high-resolution plates (Kodak HR type 1A, similar in granularity and resolution to the legendary 649F spectroscopic plates, but non-chromatized, blue-sensitive only).
You bet that I rushed to the instrument, pushing it to its limits: too bad, scan times were very long and mass storage was only on tape...
So I tried to apply my university course immediately and sample at 10,000 samples per inch (5 micron slits side by side would make about 5000 pixels per inch; the extra factor of 2 was to comply with Shannon's theorem ;) )

Quickly I had to fall back to a modest 20 micron slit in order to get tractable files! And there was a huge queue of users, so scanning time was money!

About noise, I was not referring to electronic noise but to density fluctuations due to the random nature of silver grain patterns, known as Selwyn's law, which still serves today to define RMS granularity. I have no idea of the RMS granularity value for Kodak high-resolution plates... too bad, since I did so many measurements!
So I doubt that any top-class scanner could bypass Selwyn's law, but I do not know; the issue in itself is fascinating!
The Perkin-Elmer machine I knew in the past used a photomultiplier, but I have read that recent silicon sensors can now exceed the performance of a photomultiplier to some extent; a photomultiplier has a quantum efficiency of about 40% (limited by the photo-cathode), whereas top-class silicon sensors can exceed 80%.
However, the photomultiplier is still in use for special applications.

29-May-2008, 09:18
1/Rt^2 = 1/Ro^2 + 1/Rs^2, where Rt is the total resolution of the optics+sensor system, Ro is the optics resolution and Rs is that of the sensor.

...

In other words, I can put my doubt this way: what kind of resolution is required for the sensor to record all the information provided by the lens?

Raghav

There have been many valuable contributions here, but I have not seen a simple answer given to the original simple question.

The simple answer is that you cannot get there.

Either equation, the 1/R^2 version or the 1/R version suggested by Struan (thanks for that, by the way), implies that the resolution of the combination will always be less than that of the lens. If your sensor is much, much better than the lens, then the resolution of the system can be arbitrarily close to that of the lens, but never quite as good.

Of course equations are just models of reality, and I always remember a pragmatic old engineering professor's disparaging joke about mathematicians, so don't let this theory keep you from trying. Better is the enemy of good enough.

- Alan

sanking
29-May-2008, 11:22
Of course equations are just models of reality, and I always remember a pragmatic old engineering professor's disparaging joke about mathematicians, so don't let this theory keep you from trying. Better is the enemy of good enough.

- Alan

Equations aside, if one wants to know the total resolution of a combined system, be it digital sensor or film, it is easy enough to test. Just obtain a suitable target, put the camera on a tripod, select the best aperture for resolution (or bracket if in doubt), focus carefully and trip the shutter. You can determine resolution of digital files directly on your computer monitor, for film you will need a microscope.

Sandy King

Emmanuel BIGLER
30-May-2008, 01:36
Just obtain a suitable target, put the camera on a tripod, select the best aperture for resolution (or bracket if in doubt), focus carefully and trip the shutter. You can determine resolution of digital files directly on your computer monitor, for film you will need a microscope.

Yes, Sandy; last year I had the privilege of playing this game, in its digital version, with a good photographer friend of mine. So the testing team was made up of a professor (only to control the theoretical aspects of the game ;) ) plus a professional photographer with 30+ years of experience focusing large format cameras on a daily basis (to be sure that the hands-on part of the experiment was perfect ;) )
We borrowed a superb piece of equipment from a professional photographer equipped with top-notch "digital" Rodenstock lenses and a digital back.
Into the collection of lenses I inserted a 2.8/100 Zeiss Planar from the sixties (the same that fits the baby Linhof) and we checked the results after enlarging the files on a good professional screen.

The game is much faster than the equivalent with film, but so far the conclusions are: if your focus is OK, you reach the theoretical limits of the sampling theorem based on the sensor pixel pitch. In this case the theoretical limit was 66 cycles/mm; most files reached 60 cy/mm at the best f-stop with almost all lenses, provided that the focus was OK.
For short focal lengths, in addition to various f-stops we did some focus bracketing and selected the best file.

I was a bit disappointed, since I had theoretically computed that my Planar, probably the same design as the Rolleiflex's but scaled by a factor 100/80, should resolve about 96 x 80 / 100 = 76.8 cycles/mm, based on Chris Perez's extraordinary figures for various MF cameras (including the record of 120 cy/mm for the 80mm lens of the Mamiya 7):
http://www.hevanet.com/cperez/MF_testing.html

Too bad ;)! This damn' digital sensor chopped the expected resolution of nearly 77 cy/mm abruptly down to a modest 60 cy/mm! ;-)
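The two numbers in this test can be sketched as follows (the 7.5 micron pitch is a hypothetical value chosen to reproduce the ~66 cy/mm limit quoted, not a figure from the back's datasheet):

```python
# Sampling-theorem limit implied by the pixel pitch:
pitch_mm = 0.0075            # hypothetical pitch
nyquist = 1.0 / (2.0 * pitch_mm)
print(nyquist)               # ~66.7 cy/mm

# Scaling a lens design: the same design with focal length scaled by 100/80
# scales every blur linearly, so resolution scales by 80/100:
r_80mm_design = 96.0         # cy/mm, the 80 mm design's figure
r_100mm_expected = r_80mm_design * 80.0 / 100.0
print(r_100mm_expected)      # 76.8 cy/mm, capped near 60 cy/mm by the sensor
```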

sanking
30-May-2008, 07:05
I was a bit disappointed, since I had theoretically computed that my Planar, probably the same design as the Rolleiflex's but scaled by a factor 100/80, should resolve about 96 x 80 / 100 = 76.8 cycles/mm, based on Chris Perez's extraordinary figures for various MF cameras (including the record of 120 cy/mm for the 80mm lens of the Mamiya 7):
http://www.hevanet.com/cperez/MF_testing.html

Too bad ;)! This damn' digital sensor chopped the expected resolution of nearly 77 cy/mm abruptly down to a modest 60 cy/mm! ;-)

I have also tested the Mamiya 7 lenses, including the 80mm. Unfortunately I can only get about 85 lines/mm with my 80mm lens, a far cry from the 120 lines/mm recorded by Chris Perez.

Your experience with the digital sensor is not surprising. From some testing with friends it appears that an optimistic best resolution for the P45 MF digital back is about 62 lines/mm, quite a bit below the 73 lines/mm that would be predicted by pixel count alone. That is in the center only as we did not test the edges.

Sandy King

raghavsol
3-Jun-2008, 21:24
Thanks aduncansun for bringing my question back into focus. I was trying to find the answer to my question in the various replies, but I could not comprehend most of them, except a few.

As you said: "If your sensor is much, much better than the lens, then the resolution of the system can be arbitrarily close to that of the lens, but never quite as good". Is this because of the widely spread Airy disc (point spread function)? I mean to say that the so-called Airy disc is spread out more than we usually take it to be. In most of our calculations we take the central lobe of the Airy disc as the PSF and we forget the other lobes present in the Airy disc. By lobes I mean maxima: in the Airy disc there is a first maximum, which we call the central or principal maximum, then there is a minimum, then a second or secondary maximum, and so on. Are these secondary maxima limiting the total performance of our combined system?

What I believe is this: let's assume the Airy disc were limited to the first maximum only and there were no secondary maxima; then, I feel, if my pixel size were equal to the Airy disc size, the total performance of the combined system would be equal to that of the lens. But in reality this is not the case: the Airy disc spreads further than the central maximum and has other, secondary maxima as well. And if I now choose a sensor having a pixel size equal to the central maximum only, obviously I am losing the information in those secondary maxima; I am not capturing the full Airy disc, and therefore the combined performance will not reach that of the lens. Hope I have made some sense.
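For concreteness, the standard Airy-disc numbers can be written down (the wavelength and f-number are illustrative choices):

```python
wavelength_mm = 550e-6  # green light, 550 nm
f_number = 8.0          # illustrative working aperture

# Radius of the first dark ring of the Airy pattern (edge of the central lobe):
r_first_min_mm = 1.22 * wavelength_mm * f_number
print(r_first_min_mm * 1000)  # ~5.4 microns at f/8

# The central lobe carries only about 84% of the total energy; the outer rings
# carry the rest, so a pixel matched to the central lobe misses part of the PSF.
```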

raghavsol
4-Jun-2008, 22:43
The fundamental 'why?' question is still unanswered. Why is the resolution of a combined system (lens + sensor) always less than both Rs and Ro, where Ro is the optics resolution and Rs is the sensor resolution? The total resolution is said to be given by two well-known equations: 1/Rt = 1/Rs + 1/Ro or 1/Rt^2 = 1/Rs^2 + 1/Ro^2. Are these purely empirical equations, or is there a theoretical basis for them? In other words, what is it that limits the combined resolution? This problem has been puzzling my mind for long.

raghav

Emmanuel BIGLER
5-Jun-2008, 03:18
I have attached a diagram explaining why the combined resolution is always worse than the original resolution for separate systems, in the particular case of diffraction-limited systems.
The model shows what happens when you combine two diffraction-limited systems e.g. a very good taking lens followed by enlarging the image with a very good enlarging lens.

The MTF of the combined system is the product of the two MTFs.
If we approximate each MTF curve by a straight line decreasing from 100% contrast at zero frequency to zero contrast at 80% of the absolute cut-off frequency (where the remaining contrast is in fact about 10%), we show that the estimated combined resolution follows the rule 1/R_comb = 1/Reff1 + 1/Reff2, where Reff1 and Reff2 are the approximate resolutions taken on the straight-line model.

Now, justifying why combining diffraction and aberrations always yields some image degradation cannot be done in a few words; it is a fundamental property of optical systems working in incoherent light, i.e. the regular, ordinary light we always use in photography.

5-Jun-2008, 15:28
Hi there,

Like Raghav I am just trying to gain understanding here, so I have a couple of questions.

I thought that among the powers of the MTF technique was that, to obtain the transfer function for the system, you merely had to multiply the MTFs of the components. Yet your graph does not seem to do that. Rather, it seems to rely on the equation 1/R_comb = (1/Reff1 + 1/Reff2) to derive a straight-line approximation for the MTF of the system.

Digitizing the original curves in your graph and multiplying the MTFs, I get a composite that crosses 10% at almost exactly 60 cy/mm. Have I done something wrong?
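(Not part of the original post: the multiplication Alan describes can be sketched with the standard incoherent diffraction-limited MTF. The cut-off frequencies below are placeholders, not values read from Emmanuel's graph.)

```python
import math

def diffraction_mtf(f, fc):
    """Standard incoherent diffraction-limited MTF for a circular aperture;
    fc is the absolute cut-off frequency in cy/mm."""
    if f >= fc:
        return 0.0
    x = f / fc
    return (2.0 / math.pi) * (math.acos(x) - x * math.sqrt(1.0 - x * x))

# Assumed cut-offs for two hypothetical perfect lenses (taking + enlarging).
fc1, fc2 = 150.0, 150.0  # cy/mm -- illustrative only

# Step up in frequency until the product of the two MTFs drops to 10% contrast.
f = 0.0
while diffraction_mtf(f, fc1) * diffraction_mtf(f, fc2) > 0.10:
    f += 0.1
print(f"product crosses 10% contrast near {f:.1f} cy/mm")
```

With two identical diffraction-limited curves, the product reaches 10% well below either lens's own cut-off, which is the qualitative point of the thread.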

Also, the shapes of the MTF curves in the graph are unlike any I have seen published by a lens manufacturer. Under what conditions does the almost asymptotic shape in your graph apply, rather than the complicated shapes published by manufacturers?

Finally, Raghav's original question refers to a digital sensor. I would expect that a digital sensor would have a characteristic MTF different from those of a lens or film. It would seem to be constrained by the Nyquist condition, including that it would be subject to aliasing unless there was some kind of steep low pass anti-aliasing filter in place.
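(Not part of the original post: one common first-order model for the sensor part of Alan's question is the MTF of an ideal square pixel aperture, a |sinc| function, together with the Nyquist limit set by the pixel pitch. The pitch below is a hypothetical value; real sensors also have anti-aliasing filters and microlenses that this ignores.)

```python
import math

def pixel_mtf(f, pitch):
    """MTF of an ideal square pixel aperture: |sinc(f * pitch)|.
    Ignores the AA filter, microlenses and Bayer demosaicing of real sensors."""
    x = math.pi * f * pitch
    return 1.0 if x == 0 else abs(math.sin(x) / x)

pitch_mm = 0.006                 # hypothetical 6-micron pixel pitch
nyquist = 1.0 / (2 * pitch_mm)   # highest frequency recordable without aliasing

print(f"Nyquist: {nyquist:.1f} cy/mm")                       # ~83.3 cy/mm
print(f"MTF at Nyquist: {pixel_mtf(nyquist, pitch_mm):.2f}") # 2/pi ~ 0.64
```

Note that the pixel-aperture MTF is still about 64% at Nyquist, which is why an anti-aliasing filter (or fine-grained scene detail beyond Nyquist) matters: the geometry alone does not suppress frequencies above the Nyquist limit.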

Can those of you who evidently work in the field please shed a bit more light including an example MTF curve typical of a digital sensor?

Thanks - Alan

Emmanuel BIGLER
6-Jun-2008, 00:56
Explanations for "aduncanson" and all friends on those delicate questions.

The graphs show the MTF of a perfect diffraction-limited lens.
The graphs are plotted as a function of the spatial frequency in cycles per mm.
Consider that this is the MTF measured on the optical axis, where the image quality is usually at its best.
Manufacturers plot different graphs: they plot the contrast at 2.5, 5, 10, and 20 cy/mm (or up to 40 cy/mm in medium format) as a function of the off-axis distance. This is more meaningful for photographic use, since we demand a certain image quality off axis! And we demand a certain homogeneity of image quality across the field.
Now, about the graphs: the MTF of the combined system is simply the product of both, at least within some assumptions on how we combine the two effects. For example, a real situation close to the model would be the enlargement of an image (taken on a grain-free detector in monochromatic light), done with a perfectly diffuse monochromatic enlarger head. Condenser illumination and white light with a broad spectrum are trickier to model!
If you compute analytically the tangent to the curves, i.e. the slope of the straight line, you easily find that the tangent to the product at zero frequency obeys the law of adding inverse resolutions, not inverse squares.
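(Not part of the original post: the tangent argument can be checked numerically. With straight-line MTFs M1 = 1 - f/R1 and M2 = 1 - f/R2, the product is 1 - f(1/R1 + 1/R2) + O(f^2), so near zero frequency the inverse resolutions add. The resolutions below are arbitrary illustrative values.)

```python
# Verify that the product of two straight-line MTFs has an initial slope
# equal to the sum of the individual slopes, i.e. 1/R_comb = 1/R1 + 1/R2.

R1, R2 = 100.0, 80.0  # hypothetical straight-line resolutions (cy/mm)

def product_mtf(f):
    return (1.0 - f / R1) * (1.0 - f / R2)

h = 1e-6  # small step for a numerical derivative at f = 0
slope = (product_mtf(h) - product_mtf(0.0)) / h

print(slope)                # approx -(1/R1 + 1/R2), the quadratic term is O(h)
print(-(1.0 / R1 + 1.0 / R2))
```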
This is what I tried to show on the graph. Actually, the straight line is not exactly the tangent to the curve; it is a straight line that best fits the diffraction-limited curve up to about 80% of the absolute cut-off frequency. In the real world, due to residual aberrations and residual defocusing, it is impossible to reach this tail end of the curve. For an idea of the shape of the MTF curve when defocusing occurs, you can refer to the article by Jeff Conrad on Tuan's web site:
"Depth of Field in Depth (PDF)" see page 30 and 31
http://www.largeformatphotography.info/articles/DoFinDepth.pdf
However, if you look at manufacturers' charts and extract the MTF value at the centre of the field for various spatial frequencies, for example 5, 10 and 20 cy/mm, you'll see that they fit the straight-line model quite well with a wavelength of 0.7 microns.
Rodenstock charts plot the absolute diffraction limits with crosses and circles, so that you can see how far the lens is from a perfect one.
But the shape of the curves in Jeff Conrad's article shows that even a small amount of defocusing cuts off the tail of the ideal curve very quickly. In other terms, the resolution figures, even taken at 80% of the absolute diffraction cut-off, appear unrealistic in a real picture of a 3-D scene, since only one plane can be considered sharp.

However, the shape of those curves without defocusing, for the perfect system, shows that multiplying two curves of this kind yields a resulting MTF where the contrast is always smaller, since the product of two values between zero and one is always smaller than either term of the product.
There is a difficulty with defocused MTF curves since they show negative contrast values. This is a bit difficult to explain; it means that the defocused image of a black-on-white series of bars appears as a white-on-black series of bars, but with very low contrast. And in any case, defocusing can never yield an image sharper or with more contrast than a perfectly focused image.