I am curious to calculate how large an array of sensors would be needed to be able to record as much information as there is on a well-made and properly processed 8x10 negative.
We all know that a digital sensor may capture more information than a traditional, chemically based negative of the same area; put another way, a sensor may be (and usually is) smaller than the film format of a comparably sized camera. The question is: what size and pixel count would be necessary to approximate the amount of information one gets from an 8x10 camera?
Here are my assumptions, to use as a basis for a comparison:
Obviously, such a calculation would be a theoretical limit, based upon an ideal contrasty subject in bright light, with the finest grain/developer combination customarily used in ordinary pictorial photography. Since the standard for most digital sensors is equivalent to ISO 100, let us say that the analog benchmark would be a sheet of FP4 Plus processed in a metol-hydroquinone developer such as ID-11 (D-76). I prefer PMK Pyro, but resolution figures were usually reported for more conventional developers. If memory serves, such a combination would have a theoretical maximum resolution of about 125 lp/mm, well beyond what any lens can deliver.
The best general-purpose lenses in use for large-format photography are supposed to have a maximum resolving power of, say, 40 lp/mm in the central area, and perhaps 20 out in the periphery. (I'm using Schneider's figures for their old Super-Symmar series, but I'm assuming my Commercial Ektar and Fujinon-W lenses are not far off from this. Yes, there are many other factors that go into the design of a lens, but I'm trying to calculate maximum data-collecting capability here.)
An 8x10 negative or transparency has an area of slightly more than 50,000 sq. mm (51,613 to be more precise, though there are the unexposed borders of the film, and these calculations are going to be very approximate in any case). Even at a very high 40 lp/mm, this implies the film would have registered a little over 4,000,000 data points (intersections) from a perfect grid inscribed with all lines at the minimum resolvable distance, again under these ideal, high-contrast conditions. More likely, there would be between 1,000,000 and 2,000,000 such points recorded, as the resolving power falls off away from the central axis and under more normal lighting conditions. (Of course, if the image produced has this capability, the number of useful data points at the classical photographer's disposal would be the same in any ordinary photograph; it simply wouldn't be as easy to measure them.)
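To make the area figures concrete, here is a rough Python sketch of the arithmetic. The 40 lp/mm is the assumed lens limit from above; how one turns resolvable line pairs into discrete "data points" is exactly the fuzzy part of my question, so I leave that conversion out:

```python
# Rough film-area arithmetic for a nominal 8 x 10 inch sheet
# (ignoring the unexposed borders, as above).
MM_PER_INCH = 25.4
width_mm = 8 * MM_PER_INCH     # 203.2 mm
height_mm = 10 * MM_PER_INCH   # 254.0 mm
area_mm2 = width_mm * height_mm

# At the assumed 40 lp/mm lens limit, the number of line pairs
# that would fit along each edge of the sheet:
LP_PER_MM = 40
lp_across = width_mm * LP_PER_MM    # 8128 line pairs
lp_down = height_mm * LP_PER_MM     # 10160 line pairs

print(f"Area: {area_mm2:.0f} sq. mm")
print(f"Line pairs along each edge: {lp_across:.0f} x {lp_down:.0f}")
```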
Now, assuming we are concerned with instantaneous image capture (no scanning and re-sampling of the data, because the 8x10 camera is not itself limited to perfectly static subjects), how many pixels are necessary to approximate those 1 to 4 million usable grains in the image, and how large a sensor does one need to record the same amount of information? I assume one gets one bit of data (i.e., 1 or 0) from each cell of a sensor, and there are 8 bits per byte. If (and this is a big part of my question) it takes 1 byte to contribute the equivalent information of one exposed and developed grain of silver, then a sensor generating 32 Mp would seem to give the same amount of data as a black-and-white photograph from an 8x10 camera; and since digital sensors always record color, the sensor would have to be 4x that size, or 128 Mp, to have the same capabilities. Now, no one has yet made a sensor that large to my knowledge, but if one could be produced at a cost someone could afford, would that do the trick? Or have I omitted some factors? For instance, there is a great deal of software that reads and interprets the image before it is transmitted to a recording medium, even in Raw format; this should reduce the number of megapixels, and therefore of cells, necessary to give the same result.
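The conversion chain I'm assuming can be written out explicitly. Every equivalence here (one grain ~ one byte, one sensor cell ~ one bit, 4x for color) is one of my premises, not an established fact:

```python
# My assumed grain-to-pixel equivalences, spelled out.
BITS_PER_BYTE = 8
COLOR_FACTOR = 4                 # my assumed multiplier for a color-capable sensor

grains = 4_000_000               # upper end of my usable-data-point estimate
bytes_equiv = grains * 1         # premise: one developed grain ~ one byte
cells_bw = bytes_equiv * BITS_PER_BYTE   # premise: one sensor cell ~ one bit
cells_color = cells_bw * COLOR_FACTOR

print(f"B&W equivalent: {cells_bw / 1e6:.0f} Mp")      # 32 Mp
print(f"Color equivalent: {cells_color / 1e6:.0f} Mp")  # 128 Mp
```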
I also appreciate that the physical size of each cell makes some difference. Nonetheless, does a Leaf 80 Mp 40mm X 54mm sensor capture the same amount of information as, say, a 5"x7" film camera? Would it, if you made it physically twice (or four or six times) the size in order to have larger individual cells?
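As a back-of-the-envelope density check on a back like that (assuming square pixels and ignoring the color filter array, microlenses, and any anti-aliasing filter):

```python
import math

# An 80 Mp back on a 40 mm x 54 mm sensor (the figures mentioned above).
pixels = 80e6
sensor_area_mm2 = 40 * 54                # 2160 sq. mm
px_per_mm2 = pixels / sensor_area_mm2
px_per_mm = math.sqrt(px_per_mm2)        # ~192 pixels per linear mm
nyquist_lp_mm = px_per_mm / 2            # ~96 lp/mm sampling limit

print(f"{px_per_mm:.0f} px per linear mm")
print(f"~{nyquist_lp_mm:.0f} lp/mm Nyquist sampling limit")
```

If that crude arithmetic is right, such a sensor samples at roughly 96 lp/mm, well beyond the 40 lp/mm I'm assuming for the lens, which would suggest the lens rather than the pixel pitch is the bottleneck, and that enlarging the individual cells would not cost resolution until the pitch falls below the lens limit.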
I understand that such an image would still not have the same character or look as the chemically-produced image. But is such a calculation as I have made fundamentally faulty, and if so, why?