Hi Clément

Following your post, the questions relevant to large format users are
- since digital is supposed to deliver better images than film with the current "35mm" (24x36mm) and medium format silicon sensors (now close to the 41x56 mm = 645 film size), with a better work-flow and (hopefully) a better cash-flow (for professionals), is there any future for 6x8 cm silicon chips and above ?
- if large format silicon chips appear, the question of 3 micron pixel size is less important, provided that we have enough pixels to saturate even a one terabyte hard disk with 2-3 images

Regarding smaller pixel size, we should always make a distinction between
- pixel pitch, or pixel grid periodicity, which sets the rules for the sampling theorem
- pixel surface, which sets the number of photons collected per exposure for a given illumination and exposure time, hence the physical signal-to-noise ratio.

Sure, in Bayer patterns both pitch and aperture size are related for practical reasons, but the two phenomena are formally different. Reminder : the commercial pixel count for a Bayer pattern is the total number of pixels, all colors combined ; in fact the actual pitch for red pixels is twice the basic Bayer pattern pitch.
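Just to put numbers on the pixel-surface side of the distinction, here is a quick Python sketch of the shot-noise-limited signal-to-noise ratio. The photon density figure is an arbitrary illustrative value, not a real exposure calculation :

```python
import math

def photon_snr(pixel_pitch_um, fill_factor=1.0, photons_per_um2=1000.0):
    """Shot-noise-limited SNR for a square pixel.

    photons_per_um2 is an assumed photon density for a given
    illumination and exposure time (illustrative value only).
    """
    area = (pixel_pitch_um ** 2) * fill_factor   # light-gathering surface
    n_photons = photons_per_um2 * area           # collected signal
    return n_photons / math.sqrt(n_photons)      # SNR = N / sqrt(N) = sqrt(N)

# Halving the pitch quarters the surface, so the SNR drops by a factor of 2:
print(photon_snr(7.0))   # 7-micron pitch
print(photon_snr(3.5))   # half the pitch -> half the SNR
```

This is exactly why pitch and surface must be kept separate : the pitch drives sampling, the surface drives the physical noise floor.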

I see a great deal of interest in tiny subpixels, as explained below.

Regarding the sampling theorem, I prefer not to speak in terms of Airy disks but in terms of cut-off period or cut-off spatial frequency.
With film we are in a situation where the cut-off spatial frequency of the best films is far above what can pass through a top-class large format lens. So in a sense, all LF users are very happy with a detector which is much better than the lens ! So why not the same for digital photographic imaging, not only for military or aerospace use (or both) ?

The absolute cut-off period for a diffraction-limited lens is N*lambda, where N is the f-number and lambda is the average wavelength of visible light ; in fact, taking the worst case at 0.7 microns (the actual limit of sensitivity for the human eye) we get an ultimate cut-off period of 0.7 N microns.
The sampling theorem states that you need 2 samples per optical period to avoid aliasing and moiré effects.

Take a state-of-the-art view camera lens, e.g. a solid modern 150mm for 4x5", and consider that it is diffraction-limited at f/16 : the diffraction cut-off period is about 11 microns (0.7 * 16), the corresponding cut-off spatial frequency being about 90 cy/mm, a figure hardly reached by any view camera lens covering the 4x5" format. Modern color films are capable of recording fine details above 100 cy/mm, but with a vanishing contrast.
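The arithmetic above is trivial, but here is a small Python sketch of it for anyone who wants to try other f-numbers (0.7 micron is the worst-case wavelength discussed above) :

```python
def cutoff_period_um(f_number, wavelength_um=0.7):
    """Absolute cut-off period of a diffraction-limited lens: N * lambda."""
    return f_number * wavelength_um

def cutoff_frequency_cymm(f_number, wavelength_um=0.7):
    """Corresponding cut-off spatial frequency in cycles per mm."""
    return 1000.0 / cutoff_period_um(f_number, wavelength_um)

# The 150mm view camera lens diffraction-limited at f/16:
print(cutoff_period_um(16))        # 11.2 microns
print(cutoff_frequency_cymm(16))   # about 89 cy/mm
```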

I recently read that Sony has announced silicon chips for mobile phones with 12 Mpix on a sensor with a 7mm diagonal, 3288 x 2468 pixels ; this yields a pixel pitch of 1.7 microns (these figures tell us nothing about the pixel surface, which is smaller than 1.7 x 1.7 micron square !)
http://mobilearsenal.com/new/sony_12...r_mobiles.html

So a pixel pitch of 1.7 microns yields a cut-off period of 3.4 microns, which corresponds to a diffraction-limited lens at about f/5. For a fixed-focal-length lens on a mobile phone, if it is not a wide-angle, why not ? But for sure, we are now entering a situation where the optical diffraction cut-off frequency of the lens is the actual limit.
To me this is a blessing, since in this perspective we can forget about anti-aliasing filters. Goodbye also to anti-moiré post-processing software as advertised by Hasselblad !

Compare with the situation we have in LF on film.
If you read the actual MTF data of a fine-grained color slide film like PROVIA 100F, you find that the MTF curve of this film is very close (up to 60 cy/mm) to the MTF curve of a 7-micron-pitched sensor fitted with an anti-aliasing filter cutting off at 70 cy/mm, except that film transmits fine details with a low contrast well above 100 cy/mm.
Since you are French you can read my article here and check by yourself.
http://www.galerie-photo.com/film-co...esolution.html

But we can use (B&W) films whose MTF extends above 200 cy/mm ; the Gigabit(TM) film was available in 4x5" ! So why not do the same with digital sensors ?

Regarding tiny pixels, I see a great interest in them, since they give total freedom to the digital imaging software engineer to do as much pre-processing as they can inside the camera, or inside the digital back, before delivering the actual image file to the end user.
Once the software is developed, it is costless to duplicate. And the computations are secret and proprietary ; do not expect any details about them.
In Europe we do not recognize software patents, so all pre-processing tricks, at least for the European market, will be secret know-how. And if a company is issued a software patent for digital image pre-processing valid only in the US, taking into account that you have to disclose a minimum of technical details in a patent, this would mean that the patent is in effect kept secret from European readers !

So on the contrary, I'm expecting that zillions of tiny pixels will continue to be both a marketing gimmick for years to come and a blessing for software engineers. It will give them so much freedom in pre-processing that the question of the optical diffraction cut-off frequency will become more or less marginal.
I'm thinking of all kinds of intelligent pre-processing software based on smartly combining pixels together, in an adaptive way, testing locally the shapes and intensities in the image, in order to reduce noise where it is most visible and to enhance edge sharpness ad libitum.
Instead of good old Fourier methods based on spatially invariant linear processing, I see heavy use of all kinds of non-linear and adaptive image pre-processing techniques based on brute-force methods with zillions of pixels !
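To give a concrete (and deliberately crude) flavor of what such adaptive, non-linear pre-processing could look like, here is a toy Python sketch : it averages over a 3x3 window only where the local variance is low (flat, noise-dominated areas) and leaves high-variance pixels (likely edges) untouched. The noise_var threshold is an arbitrary illustrative parameter, not anything a real camera maker has disclosed :

```python
def adaptive_smooth(img, noise_var=4.0):
    """Toy adaptive smoothing on a 2-D list of gray values:
    smooth flat areas, preserve edges."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            win = [img[y + dy][x + dx]
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            mean = sum(win) / 9.0
            var = sum((v - mean) ** 2 for v in win) / 9.0
            if var <= noise_var:      # flat area: average the noise away
                out[y][x] = mean
            # high variance: likely an edge, leave the pixel alone
    return out

flat = [[10, 10, 10], [10, 12, 10], [10, 10, 10]]   # small noise spike
edge = [[0, 0, 100], [0, 0, 100], [0, 0, 100]]      # sharp vertical edge
print(adaptive_smooth(flat)[1][1])   # smoothed toward the local mean
print(adaptive_smooth(edge)[1][1])   # untouched: 0
```

Nothing spatially invariant here : the decision is taken pixel by pixel, which is exactly the kind of brute-force freedom that zillions of tiny pixels make affordable.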