There must be a limit somewhere



swmcl
1-Jun-2011, 02:58
Hello to you all.

I genuinely want to discuss an issue relating to the current state of play w.r.t digital sensor technology.

For several months / years I have been watching the abilities of digital cameras increase, and I wonder ...

Is it true that we are pretty much at the smallest pixel size on a sensor now ? Is it not true that going too much smaller than the size of pixel on a say 16M full-frame DSLR leads to worsening ISO performance and increasing noise ?

Without the figures in front of me, what is the Nikon D3 sampling each 'pixel' at ? Is it 12 bits per 'pixel' ? So is that 12 bits per red, per green and per blue ? A total of 36 bits per 'pixel' ?

IF (note the capitals) one were to sample at 48 bits per pixel would one not need a larger voltage to avoid excessive noise ? To get a larger voltage one needs a larger pixel right ? So if one were to sample at 48 bits per pixel one would start to max out in noise and ISO at what ? 6M pixels for a full frame 35mm sensor ?

Yes I do know my physics enough to know there is a limit. I just think we are pretty much at that limit. To get the colours of my scanner the camera needs to sample at 48 bits per pixel. To get this I need perhaps a 6M pixel full frame DSLR. 6M pixels would be pretty crappy right ?

I know my 2.4GHz PC is nearly 10 years old and the new ones are around 3.2GHz. Moore's Law is breaking down, is it not? So too, I think, in the digital photography realm.

Nice replies only !

Steve Smith
1-Jun-2011, 03:30
Is it true that we are pretty much at the smallest pixel size on a sensor now ?

It is my view that both film and digital are at the level where the laws of physics are the limiting factor. I also think that for the same format size, the resolution is now roughly equal.


Steve.

Bruce Barlow
1-Jun-2011, 03:39
My son, pretty much an expert on this stuff, tells me that the Hot Area is sensitivity - high ISO without noise, rather than stretching the limits on pixels. That's what he loves about his D3s, which if memory serves is 12MP.

He works for filmmaker Ken Burns (Dad gets to brag), and last night was talking about the Red video camera, which can run 250 frames per second, with each frame a 14 megapixel image. He thinks it will revolutionize things like fashion and journalism photography, since there apparently is a way to tag frames in the data stream before it leaves the camera. Others can then take that frame, immediately export it, and it's ready to go - anywhere. Faster than anything before, and there's full-motion video available, too. In any event, he's impressed.

I couldn't muster the gumption to tell him about my development time tests for sheet film...

I certainly would not count Mr. Moore out just yet. The path is littered with the bodies of those who did.

Jack Dahlgren
1-Jun-2011, 04:07
I do not think we are at the limit yet. I know that the surface of the sensor is not 100% filled with photosites, so there is more sensitivity to be had. We probably have a few more generations of sensors before the physical limit of gathering photons is reached, and more can be gained through software and multiple exposures. I also think that the 35mm DSLR is a transitional camera and will go the way of TLRs sometime in the next decade.

Emmanuel BIGLER
1-Jun-2011, 04:22
Is it true that we are pretty much at the smallest pixel size on a sensor now ?

I'll refer to medium-format sensors which directly compete with (or, for some people, outperform) large format on film.
If we consider a silicon sensor in the class of 22 Mpix, the pixel grid has a pitch of about 9 microns (example : Leaf Aptus 22) (http://www.leaf-photography.com/products_aptus25.asp)
The smallest periodic object that can be detected with such a sensor has a pitch of 2x9 = 18 microns.
This is still bigger than the actual capability of the best 'digital' view camera lenses. At the centre of the field, those lenses are close to being diffraction-limited when stopped down to f/11, f/8 being the recommended aperture for top-notch lenses designed for the 4.5x6cm format on silicon.
At f/11 for visible light, the smallest periodic feature passing through the lens has a period of about 11x0.7 = 8 microns. Hence a 22 Mpix sensor with a 9 micron pixel pitch fails by a factor of 2 to correctly sample the analog-optical image delivered by a top-notch lens @f/11.
With an 80 Mpix sensor of about the same size, roughly 4.5x6 cm, the pixel pitch is divided by about 2, and those sensors now have a sampling capability that matches what a diffraction-limited lens @f/11 can deliver.
With a 22 Mpix sensor and top-notch lenses, some moiré (aliasing) patterns can be visible in the final image, in certain shooting situations, for certain subjects like fabric or other fine-pitched periodic objects.
With an 80 Mpix sensor and state-of-the-art lenses, the probability of getting annoying moiré effects is very low. And there is apparently no benefit in using a finer grid than what is necessary to sample a diffraction-limited image.
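
(As a quick order-of-magnitude check of the figures above, here is a small Python sketch; the f-number, wavelength and pixel pitches are simply the example values quoted in this post, nothing more.)

```python
# Rough check of the sampling argument above (example values only).
f_number = 11          # working aperture assumed above
wavelength_um = 0.7    # worst-case visible wavelength, in microns

# Smallest period passed by a diffraction-limited lens is roughly N * lambda.
cutoff_period_um = f_number * wavelength_um       # ~7.7 microns

# The sampling theorem asks for a pixel pitch of at most half that period.
required_pitch_um = cutoff_period_um / 2          # ~3.9 microns

for pitch_um in (9.0, 4.5):                       # ~22 Mpix vs ~80 Mpix backs
    ratio = pitch_um / required_pitch_um
    print(f"{pitch_um} um pitch = {ratio:.1f}x the Nyquist pitch")
```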

Regarding noise, dynamic range and sensitivity, a quick look at the technical specs of a Kodak sensor, for example this one
http://www.kodak.com/global/plugins/acrobat/en/business/ISS/datasheet/fullframe/KAF-39000LongSpec.pdf will give us some ideas.
We find that in this sensor, a pixel can store a maximum of 60,000 electrons. The natural statistical fluctuation of this figure is about 250 (the square root of 60,000).
If we consider that the smallest detectable gray step is about the size of this noise, roughly 250 electrons (this is a very rough approach, but we just want the proper order of magnitude), it means that the maximum number of distinct gray levels for 60,000 electrons is only about 256, so such a monochrome image can be coded on one byte only (8 bits).
Of course you can accumulate images in your computer, there is no limit, but for a single shot, the maximum number of electrons per pixel is a limiting factor. And you cannot get more electrons than incoming photons per pixel. This is not as obvious as it might seem, actually.
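
(The same estimate in a few lines of Python, using only the 60,000-electron full-well figure from the datasheet; treating the smallest usable step as the shot noise at saturation is the same rough approximation as above.)

```python
import math

full_well = 60_000                     # electrons, from the datasheet quoted above
shot_noise = math.sqrt(full_well)      # ~245 electrons of statistical fluctuation
levels = full_well / shot_noise        # ~245 distinguishable gray levels
print(f"~{shot_noise:.0f} e- noise, ~{levels:.0f} levels, ~{math.log2(levels):.1f} bits")
```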

In terms of noise, it is very difficult to compare film with a digital sensor, but people involved in astronomical photography have done this kind of comparison. If we try to imagine that film behaves like a photon counter, at very weak light levels we get for our beloved Tri-X film an equivalent quantum efficiency of about 0.5%. It means that a photon counter that would miss 99.5% of the photons is as efficient as a Tri-X silver halide film in terms of equivalent noise.

The Kodak specs quoted above mention a quantum efficiency in the range of 20 to 30%.
An efficiency above 80% has been achieved for monochrome silicon detectors in astrophysics, but the mere fact that we have to filter each pixel for RGB shots implies that each color pixel has to lose about 66% of the incoming white-light photons !
So in terms of what can be expected for a monochrome image, Tri-X at 0.5% of efficiency versus silicon at 80%, we actually reached the limits for film at the end of last century, and we are close to the absolute limits for silicon.
So the only solution is to use BIG pixels capable of counting zillions of electrons before saturating ;)
And big pixels imply big sensors, back to the 4x5" (silicon) format in the future ??

jmooney
1-Jun-2011, 04:26
I don't think Moore's law is breaking down. As the technology advances, the chips/sensors change, so that 2.4GHz doesn't necessarily equal 2.4GHz. I mean that a 10MP sensor from 4 years ago and one from 1 year ago are two very different beasts, probably mostly in the high ISO limit; they are both 10MP, but their similarity ends at the name.

It seems to me that as we approach the limit of the current chip/sensor technology there is usually a new design on the horizon that will allow for further expansion or, as Bruce referred to above, a new direction that innovation takes. The high ISO race is on now and that's great for consumers. I give it 2 years before we have compacts with ISO 6400 performance that the current full frame cameras can only dream of. This will have the added benefit of cheap FF DSLRs for the rest of us.

It's the same trickle down you see in automotive circles. The current Ferrari Grand Prix race cars may have a braking system that cost 2.5 million dollars to develop but once it's developed and then a version is made from consumer grade parts, you wind up with a race designed Ferrari braking system on your Ford Taurus. Everyone wins.

Nathan Potter
1-Jun-2011, 08:31
Emmanuel, those are nice comments on the technology at current state of the art. But I'm not clear on the point of quantum efficiency of film vs sensor.

You are quoting efficiency of Tri-X at 0.5%. Is that during exposure or during development, which results in a cascade (multiplication) of silver in the emulsion? And of course that multiplication is a form of noise since it is not directly related, spatially, to the original photon flux site.

The quantum efficiency of silicon CMOS sensors is probably near the limit due to the purity of material and perfection of the junction parameters so little further gain can be achieved by reducing recombination rates. There is a possibility of increasing the number of electrons per incoming optical photons by employing avalanche multiplication within the high field junction depletion region. Such avalanche multiplication is analogous to PMT gain and in a way to the multiplication of silver in film (physics is different) but fraught with an increase in noise.

As you alluded to, the easiest way to improve dynamic range is larger pixels with larger sensors. But that is not compatible with a high volume consumer market.

Nate Potter, Austin TX.

paulr
1-Jun-2011, 08:49
We're not anywhere near the limit of what could theoretically be done with silicon technology, but I suspect we're close to the limits of what makes practical sense with pixel density. Sensors in better quality consumer DSLRs are down to around 4.8 micron pixel pitch. This allows 80 lp/mm to be resolved at a very high MTF. This is well in excess of what any large format lens can resolve, and is pushing the limits for any lens.
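
(For anyone who wants to play with the numbers, here is a trivial Python conversion between pixel pitch and the sensor's Nyquist limit in lp/mm; 4.8 microns is the pitch mentioned above, the other values are just for comparison.)

```python
# One line pair needs at least two pixels, so Nyquist (lp/mm) = 1000 / (2 * pitch in um).
for pitch_um in (9.0, 6.0, 4.8):
    nyquist_lp_mm = 1000.0 / (2.0 * pitch_um)
    print(f"{pitch_um:.1f} um pitch -> Nyquist limit ~{nyquist_lp_mm:.0f} lp/mm")
```

At 4.8 microns the Nyquist limit works out to roughly 104 lp/mm, which is consistent with 80 lp/mm still being rendered at high contrast.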

I think we're going to see the quality of these sensors continue to improve, in terms of noise, color, and dynamic range, and we'll see the finer pixel pitches migrate to the larger sensors. But I don't think we're going to see pixel densities continue to increase that much past where it already is in the small sensors, unless marketing plays a big role.

Ok, one reason I might be wrong: if the pixels are small enough to allow a high degree of oversampling, manufacturers could get rid of anti-aliasing filters and get a significant performance bump. They've already eliminated them on some of the MF sensors (in those cases I don't know how they did it).

Mike Anderson
1-Jun-2011, 16:38
Here's one smart person's take on the laws of physics and the limits of sensitivity:

http://theonlinephotographer.typepad.com/the_online_photographer/2011/03/photography-at-the-speed-of-light.html

To summarize, Ctein says there's room for a 10 times improvement in sensitivity:


What the Nikon D3S can do at ISO 6400, these technologies would let it deliver at 64,000

I'm no scientist but I always thought you could trade sensitivity for pixel density (common sense, right :)), and if that's true there's still a lot of room for improvement in pixel density.

...Mike

Jack Dahlgren
1-Jun-2011, 21:17
Here's one smart person's take on the laws of physics and the limits of sensitivity:

http://theonlinephotographer.typepad.com/the_online_photographer/2011/03/photography-at-the-speed-of-light.html

To summarize, Ctein says there's room for a 10 times improvement in sensitivity:


10 times is just a bit more than three stops.

John Kasaian
1-Jun-2011, 21:26
Ansco 130 works for me! :D

paulr
1-Jun-2011, 22:09
I'm no scientist but I always thought you could trade sensitivity for pixel density (common sense, right :)), and if that's true there's still a lot of room for improvement in pixel density.

The issue is, at what point are we just oversampling the information that the lens can resolve?

Current sensors on DSLRs are able to resolve around 80 lp/mm.

The very best MF digital lenses can resolve 80 lp/mm at around 50% modulation on axis, and down close to 0% modulation in the corners. Large format optics can't come anywhere close to this.

So how much more pixel density can we really benefit from?

engl
2-Jun-2011, 02:21
The issue is, at what point are we just oversampling the information that the lens can resolve?

Current sensors on DSLRs are able to resolve around 80 lp/mm.

The very best MF digital lenses can resolve 80 lp/mm at around 50% modulation on axis, and down close to 0% modulation in the corners. Large format optics can't come anywhere close to this.

So how much more pixel density can we really benefit from?

There are digital MF lenses that are a lot better than what you describe. The HR Digaron-S 100/4 has 65% modulation on axis and still around 60% near the edge of the frame with no movements, at 80lp/mm.

paulr
2-Jun-2011, 08:44
There are digital MF lenses that are a lot better than what you describe. The HR Digaron-S 100/4 has 65% modulation on axis and still around 60% near the edge of the frame with no movements, at 80lp/mm.

Ok. I just looked at the 32mm one. That's certainly better. I wonder at what point they consider the lens to be at its practical limits. In film photography, 30% modulation is generally thought to be the lowest contrast that's worth anything. Possibly with a digital sensor and the ability to sharpen, you can get useful detail with less than that.

engl
2-Jun-2011, 10:49
Ok. I just looked at the 32mm one. That's certainly better. I wonder at what point they consider the lens to be at its practical limits. In film photography, 30% modulation is generally thought to be the lowest contrast that's worth anything. Possibly with a digital sensor and the ability to sharpen, you can get useful detail with less than that.

Wide lenses are a weakness of MF digital. Lenses built for MF DSLRs have to be retrofocus, are huge, have many elements leading to flare, have barrel distortion and are usually not as good performers as non-retrofocus designs (except in falloff).

Even lenses built for digital view cameras need to be retrofocus designs, although less extreme. Too high incident angle on digital sensors leads to color problems.

A Rodenstock HR-Digaron-S 28mm lens has a flange focal distance of 53mm, considerably more than its focal length. It also shows the typical retrofocus problems, distortion and performance well below the sharpest lenses for the format.

A Rodenstock Grandagon-N 90mm has a flange focal distance of 94-98mm (F6.8, F4.5), nearly the same as the focal length. Less retrofocus, less distortion, better performance relative to other lenses for the format.

Tim Povlick
2-Jun-2011, 12:25
First - Thanks Emmanuel for the good explanation!

Since others have asked about pixel size and QE, note from the OVT website:

http://www.ovt.com/applications/app_mobile.php

They have a pixel size of 1.4 microns. Some sensors are back-side illuminated (BSI), which means ~100% of photons impinging on the silicon surface are absorbed, so QE is high.

Having done both film and CCD astrophotography, there is no comparison: CCD wins out due to the ability to stack images and remove noise via dark frames. The best film was hypered TechPan. For large high-performance sensors one can look to Fairchild with their BSI models, although the prices are truly astronomical.

Best Regards,

Tim

swmcl
2-Jun-2011, 20:42
Sorry, I have been unable to reply before now.

Emmanuel says at the end of his excellent reply that to get the sensitivity (which relates to both ISO and bits per pixel, does it not?) we need bigger pixels. And that I believe is where I started from.

As for getting more photons out than went in ...

I hereby declare my invention ... Erbium-doped CCDs. Each pixel (post RGB filter) could be Erbium doped ...

Ha!

Thanks guys. I'll stick with it (the LF thing) for now.

JJeffrey
4-Jun-2011, 11:16
Wide lenses are a weakness of MF digital. Lenses built for MF DSLRs have to be retrofocus, are huge, have many elements leading to flare, have barrel distortion and are usually not as good performers as non-retrofocus designs (except in falloff).

Even lenses built for digital view cameras need to be retrofocus designs, although less extreme. Too high incident angle on digital sensors leads to color problems.

A Rodenstock HR-Digaron-S 28mm lens has a flange focal distance of 53mm, considerably more than its focal length. It also shows the typical retrofocus problems, distortion and performance well below the sharpest lenses for the format.

A Rodenstock Grandagon-N 90mm has a flange focal distance of 94-98mm (F6.8, F4.5), nearly the same as the focal length. Less retrofocus, less distortion, better performance relative to other lenses for the format.

I'm no digital whiz, so this may be a stupid question; nevertheless I'll stick my neck out and ask it. Given that a digital sensor is a totally different matter from a piece of film, and is a static, integral part of the camera: why do they not work on hemispheric sensors? This would have at least two major advantages, removing the angle-of-incidence problem and allowing the lens designer to forget about spherical aberration! Maybe at first this would work only for fixed focal lengths with a sensor specially designed for the focal length in question. But perhaps further developmental work would result in greater flexibility.

Of course, the ultimate gain of resolving power would probably involve eliminating glass optics entirely in favour of some kind of laser holographic system! ;)

JeffKohn
4-Jun-2011, 12:29
The issue is, at what point are we just oversampling the information that the lens can resolve?

Oversampling is not a bad thing, or even pointless. The Nyquist theorem says you want 2x over-sampling, so for an 80lp/mm lens you would want a 160lp/mm sensor with no AA filter to maximize resolution without artifacts. Actually with a Bayer-filtered sensor I guess there would even be some potential benefit from going higher than 2x, because each sensel is R, G, or B and therefore not a "full" sample.

JeffKohn
4-Jun-2011, 12:36
Is it not true that going too much smaller than the size of pixel on a say 16M full-frame DSLR leads to worsening ISO performance and increasing noise ?

I definitely don't think 16mp full-frame is anywhere near optimal. A year or two ago I might have said that 24mp is the sweet spot for full-frame; but the 16mp APS sensors coming out since then have shown impressive resolution and dynamic range, which leads me to believe there's still some headroom for full-frame sensors. I expect Canon will have a 30-32mp 1Ds3 successor before too long, with Nikon following at some later point.

It's true that as you go higher, pixel-level noise at high ISOs increases. But for a given print, pixel-level SNR is less of an issue than sensor size. I suppose if all you care about is ISO 6400 performance, there's not much point in high-density sensors. But a D3x ISO-100 shot will easily beat the D3s in both resolution and dynamic range, so I don't buy the argument that 12mp or 16mp is all you need.

engl
4-Jun-2011, 15:40
Oversampling is not a bad thing, or even pointless. The Nyquist theorem says you want 2x over-sampling, so for an 80lp/mm lens you would want a 160lp/mm sensor with no AA filter to maximize resolution without artifacts. Actually with a Bayer-filtered sensor I guess there would even be some potential benefit from going higher than 2x, because each sensel is R, G, or B and therefore not a "full" sample.

Some of the sharpest lenses for medium format digital have over 65% modulation at 80lp/mm, so in practice they will have very usable detail above this frequency. With a bit of sharpening, 10-20% modulation is still valuable for digital photographic use, and I'm sure the lenses with 65% modulation at 80lp/mm could at least do 120lp/mm at something like 20%.

Additionally, oversampling is needed or you will lose detail to aliasing. The Nyquist theorem states 2 times the frequency, but this is only the minimum needed to be able to recreate the signal in theory; in practice you want more (try looking at a 10 Hz sine sampled at 20.1 Hz).
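
(A quick Python/numpy illustration of that last point: 10 Hz is just below the 10.05 Hz Nyquist limit of a 20.1 Hz sample rate, so nothing aliases in the strict sense, yet over any short window the samples completely misrepresent the signal.)

```python
import numpy as np

f_signal, f_sample = 10.0, 20.1                 # Hz; Nyquist limit is 10.05 Hz
t = np.arange(0.0, 20.0, 1.0 / f_sample)        # 20 seconds of sample instants
x = np.sin(2 * np.pi * f_signal * t)            # the sampled 10 Hz sine

# The samples alternate in sign under a slow beat envelope
# sin(2*pi*(f_sample/2 - f_signal)*t), i.e. a 0.05 Hz / 20 s cycle.
print(f"largest sample in the first second: {np.abs(x[t < 1.0]).max():.2f}")   # ~0.31
print(f"largest sample over all 20 seconds: {np.abs(x).max():.2f}")            # ~1.00
```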

300lp/mm on the sensor would probably be good for really sharp lenses, on sensors with the Bayer filter removed (black and white), but only for detail along the X and Y axis. For the same detail on the diagonal you'll need over 400lp/mm along the X/Y axis.

Then add the Bayer filter, and you will need to double this again, otherwise you will lose detail, mostly on saturated red/blue subjects. So something like 800lp/mm is desirable to see what the lenses have to give. That is 1600 sensels per millimeter (two sensels per line pair). In the 54x40mm sensor size used on the latest 80 megapixel backs, that would be a resolution of 86400x64000, or about 5500 megapixels.
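
(For what it's worth, the arithmetic checks out; a trivial Python sketch using only the figures already quoted above.)

```python
sensor_w_mm, sensor_h_mm = 54, 40        # the 80 MP back size quoted above
sensels_per_mm = 2 * 800                 # 800 lp/mm, two sensels per line pair

w = sensor_w_mm * sensels_per_mm         # 86,400
h = sensor_h_mm * sensels_per_mm         # 64,000
print(f"{w} x {h} = {w * h / 1e6:.0f} megapixels")   # ~5530 MP
```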

Of course, in practice not everyone uses the best lenses (or use the apertures and precise focusing needed), and few would notice if faint diagonal detail in saturated reds was lacking, but I don't think the megapixel race is going to slow down until we reach about 1000MP medium format and 200-300MP on full frame sensors (no AA).

This is not a bad thing; in the high-end segment, pixel density increases have always gone hand in hand with dynamic range and real detail improvement. Sure, some people like to extrapolate from noticing that a 14-megapixel compact might be worse than an 8 megapixel one, but for high-end low-ISO work, dense is good.

Emmanuel BIGLER
3-Jul-2011, 13:26
Hello all !
I realize that I did not follow this discussion after my post dated June 1, and I really apologize for not answering earlier.
To Nathan :
I'm not clear on the point of quantum efficiency of film vs sensor.

Well, I'm quoting a value of 0.5% for Tri-X from memory, read in a classical book on photographic chemistry and physics by Pierre Glafkidès, a former Kodak scientist in France.

Of course film does not behave like a counter, and what you can define, with great difficulty and complexity, is an equivalent quantum efficiency. The fact that there is a chemical amplification of course degrades the noise figure, like in any electronic amplification stage. I am unable to explain in detail how this equivalent efficiency is measured, but I know that it is meaningful in astrophysics at very low levels of illumination.

Tim Povlick has mentioned hypersensitized film. I know that the procedure was used in astrophysics, but the improved sensitivity did not last very long; you had to expose your film immediately. Tim can explain this better than me.

There was an imaging device developed by a French astrophysicist named Lallemand, where an electron image was detected with film in a vacuum electron-beam camera. The optical image was converted into electrons by a photocathode, and the electronic image was detected by a film that had to be inserted inside the vacuum chamber of the camera. I do not remember if there was an electron-multiplying stage like in an image intensifier, but such a multiplier can never improve the quantum efficiency.
Of course, this strange mixed photoelectric-electronic-film camera delivered monochrome images.
The quantum efficiency of the whole system was much better than direct optical detection on film. I have in mind a figure of 40% efficiency for the photocathode, and as an electron detector the film in use was good enough that the association of both devices exceeded film alone directly exposed to low light levels.

For many years, outside imaging devices, the photomultiplier was the only detector capable of counting photons one by one. But as far as I know, the efficiency of photocathodes is limited to the range of 40%, so as soon as new silicon detectors reached and even exceeded this figure of 40%, the supremacy of the photomultiplier was challenged.
What we are living through now is that all those technologies, previously limited to military or scientific use, reach us at an incredible price, and instead of a unique photon detector we get an array of 80 million ... all of them with an incredibly high quantum efficiency.

For the kind of photography we are interested in, the real problem, as mentioned, is the cost of large format silicon image sensors above the 4.5x6 cm size. Bigger detectors have been fabricated for astrophysics. So it can be done ... too bad that the economic factor actually limits our enthusiasm.

As far as I'm concerned, for the kind of hand-crafted images I like to make, film and the large format camera are the ideal choice, since I have no client to satisfy, no delivery schedules and so on ...
I can wait 1/30 s for my shutter to open and close. And now that I know that film is wasting so many good photons that I could efficiently capture with silicon, I do not really care and I easily forgive my silver-halide layer; for the kind of landscape or architecture shots I like with the view camera, I get no benefit from 1/8000 s for the same shot !
Nobody would object that van Gogh painted his famous 'Sunflowers' much more slowly than 1/30 s ;)

But, not kidding, I agree with Steve McLevie: my favourite camera would be a 4x5" with a full-size 4x5" silicon sensor, the pitch of the individual sensing elements exactly matching the diffraction limit, say @f/11, somewhat like current top-notch 'film' view camera lenses. Down with anti-aliasing filters !!

Being optimistic, we would believe that the diffraction limit would be reached over the whole field ;), and classical textbooks tell us that the limit period in the optical-analogue image is about 11 wavelengths @f/11. If we take 0.7 microns as the worst case for visible light, we eventually get 8 microns, i.e. 125 cy/mm, for this ultimate image quality. I doubt that even the best 150mm lenses, covering 75 degrees, could pass 125 cy/mm for a retail price of $999.99.
But the situation would be clear and simple. The pixel pitch would be 4 microns (= 8/2), exactly matching the sampling theorem for a diffraction-limited image @f/11, and the total number of pixels would be about
100,000/4 by 120,000/4 = 25,000 by 30,000 = 750 million, about 10 times more than we have today in current top-class medium format sensors. Shall we see it some day ?
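
(A short Python sketch of the same estimate, taking the 4x5" image area as 100 x 120 mm as above; all the inputs are the figures already quoted in this post.)

```python
f_number, wavelength_um = 11, 0.7
limit_period_um = round(f_number * wavelength_um)   # 7.7 -> ~8 um, i.e. ~125 cy/mm
pitch_um = limit_period_um / 2                      # 4 um Nyquist pitch

w_px = int(100_000 / pitch_um)                      # 25,000 pixels across 100 mm
h_px = int(120_000 / pitch_um)                      # 30,000 pixels across 120 mm
print(f"{w_px} x {h_px} = {w_px * h_px / 1e6:.0f} million pixels")   # 750 million
```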
I remember colleagues proudly showing me the images they got with a first-generation consumer digital camera @ 500 kilo-pixels...

In fact the maths involved here are much, much simpler than all I have read about the theory of the silver halide process !
This is one of the reasons why we love film: nobody can explain what's going on in our tiny silver grains in a few words. It's magic, the engineers cannot understand ;)

JoeV
11-Jul-2011, 07:28
Technically, Moore's Law (or, more accurately, Moore's Observation) only relates to digital photography in the point-and-shoot arena, where sensor size isn't defined at a specific physical size, and thus transistor density can continue to increase, doubling every 18 months or so according to the rule.

The thing with any sensor size that's defined by a specific format size (like micro 4/3, APSC and its variants, FF, medium format, etc.) is that the economics of semiconductor manufacture can't scale over time to reduce costs, as they do with circuits whose physical size isn't constrained to a predefined format. Thus, Moore's Law doesn't apply in such cases.

Moore's Law is all about the economics of semiconductor manufacture: the cost to process silicon wafers is fixed, as is the "real estate" size of said wafers. The goal is thus to maximize the price per unit area of each "chip" (or die) within the wafer by reducing the size of a transistor. Increasing the transistor density not only makes for more transistors per unit wafer size (and increased profits per unit wafer) but has the secondary effect of producing faster circuits (charges have shorter distances to travel and less heat loss), meaning that newer chips can be sold at higher prices, or their improved performance can offset the inevitable decline in price over time through commodification. This is how Moore's Law plays out in actual fact.

Example: compare an early Pentium processor with a current Atom processor. The early Pentium had a die size about that of a US postage stamp, and was originally processed on 6 inch silicon wafers. Only several dozen chips were usable at end of line. Each one sold for over a thousand US$. The current Atom processor is the size of a long grain of rice, is processed on 300mm wafers (you can fit 2500-3000 such Atoms on a single 300mm wafer) and, although selling for $10-20, the economics of scale imply higher profits for the smaller processor being sold at cheaper prices. Also, because the transistor density is higher with the Atom, the circuit operates more efficiently, electron charges move faster and create less heat loss, etc.

The benefits of pursuing increased transistor density, as per Moore's Law with logic chips, don't apply to image sensors whose overall size is predefined by a format specification, and whose individual photodiode size is constrained in performance by physics. The cost to operate a chip fab is divided over each square millimeter of usable silicon wafer area, which then defines the necessary break-even point between die size and retail price. Hence prices can only decline, and profits improve over time, through manufacturing efficiencies like ensuring chip fabs are running at full volumes, and amortizing equipment costs over time. Such cost improvements are minuscule compared to the benefits of shrinking transistor size over time, which you can do with logic chips, but not with image sensors that are fixed to specific sizes.
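
(That point can be made concrete with a toy calculation. All the numbers below, in particular the wafer cost, are illustrative assumptions rather than real fab figures; yield and edge losses are ignored.)

```python
import math

wafer_diameter_mm = 300
wafer_cost_usd = 5000                                   # assumed fixed processing cost
wafer_area_mm2 = math.pi * (wafer_diameter_mm / 2) ** 2

def cost_per_die(die_area_mm2):
    # Fixed wafer cost spread over however many dies fit on the wafer.
    return wafer_cost_usd / (wafer_area_mm2 / die_area_mm2)

# A logic die can shrink with every process node, so its cost keeps falling...
for die_mm2 in (200, 100, 50):
    print(f"logic die {die_mm2:3d} mm^2 -> ~${cost_per_die(die_mm2):5.2f} per die")

# ...but a full-frame sensor stays pinned at 24 x 36 mm in every generation.
print(f"24x36 mm sensor        -> ~${cost_per_die(24 * 36):5.2f} per die")
```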

Because size-specific image sensors are exempt from Moore's Law, cost improvements will only be made by reducing costs in the manufacture of the rest of the camera: replacing mechanical mirror optics with live-view displays, reducing mechanical controls and increasing LCD touch-screen controls, replacing the last stage of a lens optic with software correction in firmware. All of these cost reductions you can see in effect in the newer camera formats like micro-4/3. That's where the future of large-sensor photography is headed.

~Joe

Nathan Potter
11-Jul-2011, 09:00
Emmanuel, thanks for your response and I like your comment about silver multiplication in film defying easy physical characterization - part of the charm and intrigue of the silver halide process I think.

JoeV, excellent discussion about the limitations of using Moore's Law for scaling. The opportunity for large area sensors might be realized by using flat panel manufacturing technology, where the substrate size is tens of square feet per panel and the manufacturing cost is as low as $200 per square foot. I've not looked at this rigorously, but of course the performance of the photo receptor (thin film transistor) would need to be analyzed, and the lithography requirements may not be adequate for the pixel size needed. The issue of projected volume looms large unless ancillary applications can be found and made to run in the same facility. For example, X-ray sensors come to mind among other possibilities.

Nate Potter, Austin TX.

Bob Salomon
11-Jul-2011, 10:10
"A Rodenstock HR-Digaron-S 28mm lens has a flange focal distance of 53mm, considerably more than its focal length. It also shows the typical retrofocus problems, distortion and performance well below the sharpest lenses for the format.

A Rodenstock Grandagon-N 90mm has a flange focal distance of 94-98mm (F6.8, F4.5), nearly the same as the focal length. Less retrofocus, less distortion, better performance relative to other lenses for the format."

Looking at the latest Rodenstock literature for the HR Digaron series and the Apo Grandagon/Grandagon series I find your statement a bit confusing.

The distortion graphs for the 28mm HR Digaron-S show distortion between roughly -0.6% and +0.6%, depending on the reproduction scale and the coverage.

The distortion graphs for the 90mm 4.5 Grandagon-N show distortion between 0 and +1.2%, again depending on image ratio and the coverage.

The fall-off in illumination is lower with the 28mm and the 28mm's Long. Chromatic Aberration is under 0.1% compared to the 90's 0.3% at full coverage.

If you were to compare the curves of the 90mm HR Digaron W to the 90mm 4.5 Grandagon, except for coverage, the digital 90mm beats it by a long way for fall-off, distortion and Long. Chrom. Aberration. Even more dramatic would be the curves for the 100mm 4.0 HR Digaron-S vs the 90mm 4.5 Grandagon-N.

Would you like a set of the latest curves and specs?

John NYC
11-Jul-2011, 17:18
I know my 2.4GHz PC is nearly 10 years old and the new ones are around 3.2GHz. Moore's Law is breaking down, is it not? So too, I think, in the digital photography realm.


With computers, more performance was squeezed out by focussing on parallelism, bus speeds, memory speed, cache sizes and types of caches, off-loading certain operations to separate processors, and so on.

The relevant comparison here is that eventually megapixels might take a back seat to super high ISO performance, maybe something like variable exposures that account for highlights and shadows (kind of like intelligent HDR or darkroom manipulation that looks natural), or any number of things.

I think one thing that would actually kill film for almost everyone would be being able to take a 64,000 ISO image that looks like 100 ISO slide film, even if it were only a 12MP camera.