
View Full Version : B&W Film Dynamic Range



marschp
5-Sep-2007, 08:10
I started photography in the digital domain, recently moved to colour transparency, and am now looking at B&W film. My question is: how do I find information on the DR of different B&W films? I've just trawled the Ilford fact sheets and they don't mention DR. I've got used to dealing with 5 stops of DR on a sensor, and 4 and a bit stops on Velvia 50 (using NDGs and all that) - and I've heard that some B&W films can handle scenes with a DR of 10-11 stops (does this mean I won't need to use all my NDG filters?) - but where do I find the specific DR for specific films? Any help would be greatly appreciated.

Paul.

JW Dewdney
5-Sep-2007, 08:14
Actually- of all places - I'd probably recommend Ansel Adams' book The Negative for some of this info. It will probably tell you just about all you need to know on the subject...!

paulr
5-Sep-2007, 09:04
The DR of black and white films is almost infinitely adjustable. A film like TMX can have a range of 2 or 3 stops with an extreme high-contrast developer, 10 to 12 stops with normal development, or over 20 stops (1 : 1 million!) with an extreme low-contrast, compensating developer like POTA.

The reason "normal" development means 10 or so stops of dynamic range is that this has proven most useful for people most of the time--it's a convenient standard, not something inherent in the film itself.

Ken Lee
5-Sep-2007, 10:16
While we are on the subject - Why can't affordable digital cameras (or their sensors) handle a wider dynamic range ?

Gordon Moat
5-Sep-2007, 10:30
Photon collection and charge accumulation. Imaging sensors essentially build a full charge in each pixel well where they record a pure white area; a pure black area would basically mean no charge accumulated. As you go from white (lighter areas) to darker areas, less charge is collected. So darker tones are created from a very low charge accumulation, which is also why Bayer interpolation causes worse problems in darker tones than in lighter regions of a scene/image. Anyway, there are entire books on this stuff, so obviously I am greatly simplifying this explanation.
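
To put rough numbers on that, here is a minimal sketch in Python (with made-up full-well and read-noise figures, not data for any real camera) of why a bigger pixel well, which can hold more charge above the noise floor, translates into more usable stops:

    import math

    def sensor_stops(full_well_electrons, read_noise_electrons):
        # Approximate dynamic range in stops as the ratio of full-well capacity
        # to the read-noise floor, both expressed in electrons.
        return math.log2(full_well_electrons / read_noise_electrons)

    # Hypothetical numbers: a large pixel well vs. a small compact-camera well
    print(sensor_stops(60000, 8))   # ~12.9 stops
    print(sensor_stops(6000, 6))    # ~10.0 stops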

Ciao!

Gordon Moat
A G Studio (http://www.allgstudio.com)

Bruce Watson
5-Sep-2007, 10:44
While we are on the subject - Why can't affordable digital cameras (or their sensors) handle a wider dynamic range ?

What Gordon said. Basically it comes down to the size of the sensor wells and the range of charge accumulation. Bigger arrays take more real estate on the wafer which runs the cost up. More charge accumulation takes a higher quality design, from the wafer to the onboard camera electronics, which also runs the price up. IOW, you get what you pay for.

Historically the price has been dropping as sensor quality has been increasing. I have no doubt that Moore's Law is being confirmed for digital camera sensors. So give it a few years -- eventually the quality/cost balance will be what you want.

But if you are waiting for that $1000 USD P80+ digital back equivalent like I am, you are going to be shooting LF film for quite a few more years ;)

Bruce Watson
5-Sep-2007, 10:56
The DR of black and white films is almost infinitely adjustable. A film like TMX can have a range of 2 or 3 stops with an extreme high-contrast developer, 10 to 12 stops with normal development, or over 20 stops (1 : 1 million!) with an extreme low-contrast, compensating developer like POTA.

The reason "normal" development means 10 or so stops of dynamic range is that this has proven most useful for people most of the time--it's a convenient standard, not something inherent in the film itself.

What Paul said. I saw a report many years ago saying that scientists at Kodak had shown that TMX could deliver something like 20 stops of density information. That is, about 10 stops more than you could print with an enlarger if you were really working at it. The film can do it, but it's not really useful to us because we can't read the information outside the laboratory.

What this means is that dynamic range is difficult to define for modern B&W films. In practice, we are free to use exposure and development to create the density range we want on the film. That density range is limited by the darkroom papers we use, or the scanning system we use; it is not limited by the range of densities of which the film is capable. And that is perhaps the reason no one bothers to publish the dynamic range information you seek.

paulr
5-Sep-2007, 10:58
Apparently POTA and Plus-X were used years ago to capture detailed pictures of nuclear explosions.

roteague
5-Sep-2007, 11:44
I have no doubt that Moore's Law is being confirmed for digital camera sensors. So give it a few years -- eventually the quality/cost balance will be what you want.

Moore's Law has nothing to do with the quality/cost of sensors; it has to do with the size and number of transistors that can be packaged in an integrated circuit. Granted, quality/cost is a measure of the effectiveness of Moore's Law, but it is not what defines it.
What is changing is that sensors are moving from CCD to CMOS, and the processing software is getting better.

Ron Marshall
5-Sep-2007, 11:45
A link to some examples of adjusting b/w development to match the density range of the negative to the brightness range of the subject:

http://unblinkingeye.com/Articles/PC-HD/pc-hd.html

Bruce Watson
5-Sep-2007, 12:12
Moore's Law has nothing to do with the quality/cost of sensors; it has to do with the size and number of transistors that can be packaged in an integrated circuit. Granted, quality/cost is a measure of the effectiveness of Moore's Law, but it is not what defines it.

Picky. Gordon Moore did observe back in 1965 that the semiconductor industry was doubling the number of transistors in an integrated circuit about every two years (it turned out to be closer to 18 months, but so what?). But Moore's Law has since been applied to much of the electronics industry, from hard drives to LCD screens, true to the original "definition" or not. It's now seen as a doubling (or halving) of size, capacity, quality, cost, whatever, in some small amount of time, usually a year to 18 months.

This has been holding true for digital camera sensors too. Resolution goes up, cost comes down, sensitivity goes up, dynamic range goes up, sensor sizes go up (full-size 35mm, MF), sensor sizes go down (APS-x), etc. This relentless drive to improve causes upheavals in the manufacturing of these devices; moving from CCDs to CMOS is one example. This is also no different from what Moore was seeing at Intel way back when he made his observation.

My point was and still is that while Moore's Law or something just like it applies to digital camera sensors, these sensors are still a ways off from equaling the quality/unit cost of a piece of LF film.

cowanw
5-Sep-2007, 12:30
While we are on the subject - Why can't affordable digital cameras (or their sensors) handle a wider dynamic range ?

This is something that has been puzzling me for some time. While I have often read that one should expose digitally as one does for slide film (similar dynamic ranges), the Jan 07 issue of PhotoTechniques has an article that suggests that digital cameras have dynamic ranges of 8-10 stops (similar to B&W film).
I have not asked this yet as I am a little unsure of the propriety of the digital question, but as it has been brought up...
Comments?
Regards
Bill

Toyon
5-Sep-2007, 17:04
Anyone have any experience with developers that offer POTA-like effects with less troublesome streaking?

Ken Lee
5-Sep-2007, 17:21
"I have often read that one should expose digitally as one does for slide film (similar dynamic ranges), the Jan 07 issue of PhotoTechniques has an article that suggests that digital cameras have dynamic ranges of 8-10 stops (similar to B&W film)."

It has been suggested that digital cameras should be exposed with respect to the high values. According to this view, one can scavenge additional detail from the shadows more easily than one can retrieve texture from blown high values, since once the high values have been pushed over the edge, texture is non-existent.

There are techniques in Photoshop which allow you to do just this - but the results can often be mediocre. For that reason, cameras and software now allow the user to shoot a series of exposures, and blend them together later. Given the popularity of HDR (high dynamic range) images, the 8-10 stop range may be a bit of an exaggeration.

roteague
5-Sep-2007, 17:26
This has been holding true for digital camera sensors too. Resolution goes up, cost comes down, sensitivity goes up, dynamic range goes up, sensor sizes go up (full-size 35mm, MF), sensor sizes go down (APS-x), etc. This relentless drive to improve causes upheavals in the manufacturing of these devices; moving from CCDs to CMOS is one example. This is also no different from what Moore was seeing at Intel way back when he made his observation.

Except that these factors haven't really gone up much at all. For example, in the DX sensor size, 12 MP is still the best you can get. That goes all the way back to December 2004 when the Nikon D2x was released. Even Canon with its full size sensor (September 2004) took almost 3 years to get an upgrade.


My point was and still is that while Moore's Law or something just like it applies to digital camera sensors, these sensors are still a ways off from equaling the quality/unit cost of a piece of LF film.

I agree with your second part. However, I just spent over $2000 on a new film Nikon (F6), so that shows you what I like most. :D

Scott Knowles
5-Sep-2007, 17:33
This is something that has been puzzling me for some time. While I have often read that one should expose digitally as one does for slide film (similar dynamic ranges), the Jan 07 issue of PhotoTechniques has an article that suggests that digital cameras have dynamic ranges of 8-10 stops (similar to B&W film).
I have not asked this yet as I am a little unsure of the propriety of the digital question, but as it has been brought up...
Comments?
Regards
Bill

Ideally, they're likely right. I've done some non-scientific tests using a grayscale card and can record 9 stops shooting jpg with in-camera b&w at ISO 100. This falls to ~7 at higher ISOs, but the real fault seems to be using EV control or manual over/underexposure, where it quickly collapses to ~5 stops when adjusting +/- 1-2 stops.

In real life, shooting b&w jpg, the range doesn't seem as wide, mostly in the 5+ range. It's why you might have better results using post-processing to create b&w images. I'll stick with shooting jpg (b&w) or raw+jpg (b&w jpg) in camera. Or better yet, use real film.

I also find the photo magazines seem to disagree on this topic. Where the magazines and some equipment junkies tout the high dynamic range, citing 8-10 stops, most of the professionals using even higher-end DSLRs that I've read or heard cite 5 stops in the field. They usually recommend compositing multiple images taken at different exposures to expand the image's range.

It's a real "Huh?" sometimes.

Oren Grad
5-Sep-2007, 17:54
While I have often read that one should expose digitally as one does for slide film (similar dynamic ranges), the Jan 07 issue of PhotoTechniques has an article that suggests that digital cameras have dynamic ranges of 8-10 stops (similar to B&W film).

I think where this comes from is that DPReview and Imaging Resource regularly test new cameras with a standard gray scale. In those tests, using RAW capture, taking full advantage of ACR controls including highlight recovery and working at the native (lowest) ISO setting, and specifying an arbitrary noise "floor" to define the bottom end of the scale, it's usually possible to extract a total range of 10-11 stops with current DSLR sensors (other than the special Fuji sensor, which can hold at least a stop more range).
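
For what it's worth, here is a minimal sketch in Python (with invented patch readings, not DPReview's or Imaging Resource's actual procedure) of how that kind of step-wedge measurement works, and why the arbitrary noise floor largely decides the answer:

    def usable_stops(patch_means, patch_noise, clip_level=0.99, floor_snr=2.0):
        # Patches on the wedge are one stop apart; a patch counts as "captured"
        # if it isn't clipped and its signal-to-noise ratio clears the chosen floor.
        ok = [(m < clip_level) and (m / n > floor_snr)
              for m, n in zip(patch_means, patch_noise)]
        return sum(ok)

    # Invented normalized raw values for an 11-patch wedge, brightest first
    means = [0.995, 0.78, 0.40, 0.20, 0.10, 0.05, 0.025, 0.012, 0.006, 0.003, 0.0015]
    noise = [0.002] * 11
    print(usable_stops(means, noise))               # 8 stops with this noise floor
    print(usable_stops(means, noise, floor_snr=1))  # 9 stops if you relax the floor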

My experience with a low-end DSLR strongly suggests that this considerably exaggerates the amount of useful range with pictorial subjects in the field.

Just as important, at its maximum dynamic range a digital sensor has zero exposure latitude. So you indeed need to meter exactly for the highlights, because any overexposure means no information in the capture for the brightest highlights. Many (most?) DSLR meters aren't good enough to do that reliably when left to their own devices, so you also typically need some trial and error with the histogram to nail the exposure, which is hopeless with any subject that's changing.

So for real subjects in the field, B&W negative film has a substantially greater effective dynamic range - meaning not just the range it can capture under ideal conditions, but also a corresponding substantial advantage in exposure latitude, which makes a big difference in the practical ability to capture long-scale subjects under poorly controlled, fast-changing real-world conditions.

Brian Ellis
5-Sep-2007, 19:43
While we are on the subject - Why can't affordable digital cameras (or their sensors) handle a wider dynamic range ?

I don't know enough to have any idea what the technical reasons might be and I don't know what you consider "affordable." But in general limited dynamic range for me has been much less of a problem with my digital cameras than with film. If I'm typical maybe there just isn't much incentive to improve the dynamic range in "affordable" digital cameras because it's so easy to make multiple exposures and then merge them in Photoshop. Or even make a single exposure in camera and then make two or more "exposures" in Camera Raw and merge them. Then there's HDR, which I've never tried because the few prints I've seen from HDR have looked kind of phony - bright foreground like a sunny day, dark clouds like a storm, that kind of thing. But two exposures in camera or in Camera Raw followed by a merger seems to work pretty well.

paulr
5-Sep-2007, 20:27
All it sounds like is that digital sensors are currently closer to transparency film than to negative film, and that this will be the case until there are some major advances.

Gordon Moat
5-Sep-2007, 22:06
Oddly enough, it is more connected to the physical size of the pixel wells (the cell sites) than to other factors. However, a few issues have been, or still are, in development. Canon has been working on minimizing dead space between pixels. Kodak and Canon have been working on improved micro lenses over the imaging chip. Sony and Nikon have been working on improved Bayer filtration, or a few other ideas. Dalsa have worked on minimizing dark current noise.

Unfortunately, larger pixel wells simply have a better ability to gather light, so this is one reason why so much development goes into all those other items. While Samsung have managed to make working pixel sizes under 3 µm, these are very limited in their ability to capture photons and build charge. Based upon several white papers I have seen on the subject, the physical limit seems to be a 5 µm pixel size. On a 42mm by 56mm chip, that would mean around 100 MP imaging.
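
The arithmetic behind that ~100 MP figure is easy to check; a quick sketch that simply tiles the chip with 5 µm pixels and ignores dead space and micro-lens gaps:

    pixel_pitch_mm = 0.005          # 5 microns
    columns = 42 / pixel_pitch_mm   # 8400 pixels across the 42 mm side
    rows = 56 / pixel_pitch_mm      # 11200 pixels along the 56 mm side
    print(columns * rows / 1e6)     # ~94 megapixels, i.e. "around 100 MP"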

Fuji do show an alternative with their Super CCD, basically using two different pixel well sizes. However, only they are making these, and it seems that progress is very slow compared to other chip designs. That leaves Nikon with their D2X, at a 5.4 µm pixel size, as the current technology limit. Eventually chips could get closer to 5 µm pixels, and everything else, from micro lenses to Bayer and other filtration, might become optimized in the near future.

The move towards CMOS in some cameras has more to do with simpler A/D converter design and lower power requirements. All medium format digital backs use CCD sensors from Kodak and Dalsa, though these should be considered cost-no-object items. Higher-volume consumer goods will drive technology more than any quest for better high-end gear. I would expect slow and steady changes at the high end, though perhaps a 42mm by 56mm or 56mm square imaging sensor soon . . . then maybe several more years for anything larger.

What really needs to happen is an entirely new type of sensor. Fuji did briefly demonstrate one idea, with the ability to escape grids or colour filtration, but I think that will be a long way off, if ever. I would be surprised if CCD and CMOS sensors are around in 10 years, though I think that optimum 5 µm size will be reached soon.

Ciao!

Gordon Moat
A G Studio (http://www.allgstudio.com)

Greg Lockrey
6-Sep-2007, 00:09
A little OT & FWIW: You can get pretty close to the DR of film with digital using a technique called HDR (High Dynamic Range), where you combine multiple bracketed exposures, taken at increments of two or more stops, into a single image. You are limited to mostly stationary subjects and to tripods for the most part. I use it when copying large pieces of artwork with a camera, when a scanner isn't practical and I need to capture detail in dark (zone 3) areas without losing highlight detail at the same time. The latest Canons allow pretty fast bracketing now.
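
For anyone curious what the merging step actually does, here is a minimal sketch in Python (a naive weighted merge on made-up data; not the algorithm used by Photomatix, Photoshop or any in-camera implementation):

    import numpy as np

    def merge_brackets(frames, exposure_times):
        # Weight each pixel by how far it sits from the clipped extremes, back out
        # each frame's exposure time, and average into one linear "radiance" map.
        acc = np.zeros_like(frames[0], dtype=np.float64)
        weight_sum = np.zeros_like(frames[0], dtype=np.float64)
        for img, t in zip(frames, exposure_times):
            w = 1.0 - np.abs(2.0 * img - 1.0)   # trust mid-tones, distrust near-clipped values
            acc += w * (img / t)
            weight_sum += w
        return acc / np.maximum(weight_sum, 1e-6)  # still needs tone mapping for display

    # A made-up three-frame bracket, two stops apart
    scene = np.random.rand(4, 4) * 2.0                       # pretend linear radiance
    times = (0.25, 1.0, 4.0)
    frames = [np.clip(scene * t, 0.0, 1.0) for t in times]   # simulated exposures
    radiance = merge_brackets(frames, times)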

Joanna Carter
6-Sep-2007, 01:09
I did some tests on my old Nikon D100 and found that it could cope with a 6-stop range: 1.5 above "standard" exposure and 4.5 below, thus making it very susceptible to blowing highlights. I found that compensating the exposure by -0.7 stops helped even out this spread and avoid blowing highlights.

Now, as to DR, don't forget that, with RAW at least, you can "recover" up to 2 stops of over- or under-exposure (well, sort of), so you should theoretically be able to shoot up to a 10-stop range; it just needs a bit of jiggery-pokery, importing the same image 3 times at different settings and then using something like the HDR bit of Photoshop.

Hmm, might as well use a real camera :-)

Greg Lockrey
6-Sep-2007, 04:41
I did some tests on my old Nikon D100 and found that it could cope with a 6-stop range: 1.5 above "standard" exposure and 4.5 below, thus making it very susceptible to blowing highlights. I found that compensating the exposure by -0.7 stops helped even out this spread and avoid blowing highlights.

Now, as to DR, don't forget that, with RAW at least, you can "recover" up to 2 stops of over- or under-exposure (well, sort of), so you should theoretically be able to shoot up to a 10-stop range; it just needs a bit of jiggery-pokery, importing the same image 3 times at different settings and then using something like the HDR bit of Photoshop.

Hmm, might as well use a real camera :-)

Nah... by the time you've shot and developed your negative, waited for it to dry and then scanned it, I have a finished print already out to the customer. Using a program like Photomatix, I can almost "automatically" get a 9-11 stop DR range by using 3-5 bracketed exposures. Soon this will all be done in camera. I'll stick with my make-believe camera. :) Not to get off topic.

Ole Tjugen
6-Sep-2007, 05:38
Anyone have any experience with developers that offer POTA-like effects with less troublesome streaking?

Windisch' Extremely Compensating Pyrocatechin developer - the original recipe - allowed me to pull out detail in the foreground on a sequence shot of a partial solar eclipse. On the negative it's possible to identify sunspots. That's more than 27 stops!

BTW - the film was APX100.

David Luttmann
6-Sep-2007, 05:45
While we are on the subject - Why can't affordable digital cameras (or their sensors) handle a wider dynamic range ?

They do. Most DSLRs now can capture 8 to 10 stops easily. This 5 stop figure was exceeded by the old Canon D30 back in 2000 and as such is just a myth quoted by some.

marschp
6-Sep-2007, 06:19
They do. Most DSLRs now can capture 8 to 10 stops easily. This 5 stop figure was exceeded by the old Canon D30 back in 2000 and as such is just a myth quoted by some.

Thanks everyone for all the comments on this thread - very useful, and revealing of the immense differences not just between digital and film, but between b&w and colour.

My own experience with a high-end DSLR is that it can handle just about 5 stops of DR. I just don't believe the claims for 10-11 that I read in magazines and reviews - if it were possible, why aren't Canon and Nikon shouting it from the rooftops? I recently asked a very senior DSLR product engineer what DR I should be expecting from my DSLR - he said about 1 to 1+1/2 stops!!! and was reluctant to be quoted - I took this answer with a pinch of salt and concluded that the concept of DR was not widely used within that manufacturer's product engineering department. (Note, the same engineer also couldn't quite understand my request for the bracketing feature on future DSLR models to allow more than three brackets - that way shooting multiple HDR exposures for wider dynamic range would be much easier!!)

One of the reasons I've moved over to LF film is that, despite the slightly narrower DR of colour transparency film, it seems to me to have a more forgiving character around the upper and lower mid-tones compared to my digital sensor. On the 5D it seems that the sensor has a non-linear response around this area (steeper curve?). Whatever the cause, I seem to get my manual exposures correct on film much more often than I do on digital.

So, if I can conclude, on B&W film, I should not expect to have to use my (extensive and costly :) ) collection of Lee NDG filters in order to control the dynamic range on most occasions. I WILL need to pay more attention to placement of tones, thinking in terms of an expanded zone system compared to colour film or digital. It also sounds like I'd better know what appearance I want when it comes to talking to my processing lab for developing and printing.

Thanks

Paul

Greg Lockrey
6-Sep-2007, 06:25
I use a 5D as well; to my eye, it's close to the DR of Ektachrome slide film. I shoot RAW exclusively, so there is that capability to "pull" a little extra out of it. With HDR, it approaches negative color film. B&W is a different ball game altogether.

David Luttmann
6-Sep-2007, 07:21
Thanks everyone for all the comments on this thread - very useful, and revealing of the immense differences not just between digital and film, but between b&w and colour.

My own experience with a high-end DSLR is that it can handle just about 5 stops of DR. I just don't believe the claims for 10-11 that I read in magazines and reviews - if it were possible, why aren't Canon and Nikon shouting it from the rooftops? I recently asked a very senior DSLR product engineer what DR I should be expecting from my DSLR - he said about 1 to 1+1/2 stops!!! and was reluctant to be quoted - I took this answer with a pinch of salt and concluded that the concept of DR was not widely used within that manufacturer's product engineering department. (Note, the same engineer also couldn't quite understand my request for the bracketing feature on future DSLR models to allow more than three brackets - that way shooting multiple HDR exposures for wider dynamic range would be much easier!!)

One of the reasons I've moved over to LF film is that, despite the slightly narrower DR of colour transparency film, it seems to me to have a more forgiving character around the upper and lower mid-tones compared to my digital sensor. On the 5D it seems that the sensor has a non-linear response around this area (steeper curve?). Whatever the cause, I seem to get my manual exposures correct on film much more often than I do on digital.

So, if I can conclude, on B&W film, I should not expect to have to use my (extensive and costly :) ) collection of Lee NDG filters in order to control the dynamic range on most occasions. I WILL need to pay more attention to placement of tones, thinking in terms of an expanded zone system compared to colour film or digital. It also sounds like I'd better know what appearance I want when it comes to talking to my processing lab for developing and printing.

Thanks

Paul


Sorry Paul. What you believe and what is true are different things altogether. You can see here:

http://www.dpreview.com/reviews/CanonEOS5D/page22.asp

what some DR tests show. You'll see that all the tested 12 bit cameras go beyond 8 stops....in the case of the 20D, 8.4 stops. In the case of the Fuji S5 here:

http://www.dpreview.com/reviews/fujifilms5pro/page18.asp

Total DR in this system at iso 100 is 11.8 stops.

I would say that if you are only achieving 5 stops, the problem is inherent in your exposure and RAW processing technique and not in the capture devices.

Continuing to state 5 stops is nothing more than nonsense and is just plain incorrect. Sorry.

jetcode
6-Sep-2007, 07:55
While we are on the subject - Why can't affordable digital cameras (or their sensors) handle a wider dynamic range ?

Nature cannot easily be contained by any sensory system, whether it's our human body or an analog sensor. In fact, all measurement systems have the same dynamic range problem.

Imagine a 16-bit A/D converter, which can capture 65,536 unique values between reference level X and reference level Y. To maximize resolution, the range between X and Y must be narrow; in fact, the narrower it is, the more precise the measurement, within the capabilities of the devices employed. If the range between X and Y is really wide, then the resolution is spread out over a much larger area. Measurement systems are designed to capture a given precision over a given range, and this is usually very limited in order to capture the essential information. Gain or attenuation stages are used to amplify or limit signals to fit within the X-to-Y range. This is what ISO is on a digital camera: as you increase ISO, the incoming signal is amplified, and so is the noise, which is present in all electronic devices and corresponds to the natural energetic chaos that exists in the universe. With film, a gain in signal strength is achieved by increasing the sensitivity of the emulsion, and a reduction is achieved by stopping the lens down or using NDs. In digital, this is achieved by scaling the sensor data to the A/D's X-to-Y range or window.
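
A minimal sketch of the quantization arithmetic described above (an ideal converter with no real-world noise; the numbers are illustrative only):

    import math

    def adc_figures(bits, full_scale_volts):
        # For an ideal A/D: number of codes, size of one step, and the theoretical
        # range in stops (and dB) between full scale and a single least-significant bit.
        codes = 2 ** bits
        step = full_scale_volts / codes
        stops = math.log2(codes)          # each halving of the signal costs one stop
        decibels = 20 * math.log10(codes)
        return codes, step, stops, decibels

    print(adc_figures(12, 1.0))   # 4096 codes, ~0.24 mV steps, 12 stops, ~72 dB
    print(adc_figures(16, 1.0))   # 65536 codes, ~15 uV steps, 16 stops, ~96 dB

    # Raising ISO amounts to gain ahead of the converter: the amplified signal fills
    # the same X-to-Y window, so highlights clip sooner and the amplified noise floor
    # eats into the bottom of the range.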

High-capability measurement systems cost more to design and produce. Op-amps used for a gain stage can cost from 8 cents to $12 or more each in production, depending on noise, range, sensitivity, and precision. The most important element is the sensor itself: the better the sensor, the higher the capability. The good news is that the R&D budgets for digital camera systems are high because there is significant market value.

Hope that helps a bit,
Joe

jetcode
6-Sep-2007, 08:07
Picky. Gordon Moore did observe back in 1965 that the semiconductor industry was doubling the number of transistors in an integrated circuit about every two years (it turned out to be closer to 18 months, but so what?). But Moore's Law has since been applied to much of the electronics industry, from hard drives to LCD screens, true to the original "definition" or not. It's now seen as a doubling (or halving) of size, capacity, quality, cost, whatever, in some small amount of time, usually a year to 18 months.

This has been holding true for digital camera sensors too. Resolution goes up, cost comes down, sensitivity goes up, dynamic range goes up, sensor sizes go up (full-size 35mm, MF), sensor sizes go down (APS-x), etc. This relentless drive to improve causes upheavals in the manufacturing of these devices; moving from CCDs to CMOS is one example. This is also no different from what Moore was seeing at Intel way back when he made his observation.

My point was and still is that while Moore's Law or something just like it applies to digital camera sensors, these sensors are still a ways off from equaling the quality/unit cost of a piece of LF film.

Every law has a curve, and in this case the wall that limits the curve is the physics of silicon itself. Manufacturers are using sub-micron processes and have pushed the bandwidth to its limits. It will be fascinating to see where all of this goes in the near future. One could say that Moore's Law applies to much of the industrial nature of mankind: bigger, better tractors; taller, better-built buildings; more and more choices; etc.

Neal Shields
6-Sep-2007, 08:26
I think the big problem of applying Moore's law to camera sensors is that in a memory chip or a CPU if you have a dead transistor you simply "map it out". On a sensor you can only have so many dead pixels before the entire sensor is junk.

The bigger the sensor and the more pixels the more likely that you will have to throw an unacceptable number away.

With some of the sensor technologies, information is read out by passing it down the row from one pixel to another, like people filing out of a movie theatre. If you have a dead pixel, all pixels upstream can't communicate. I don't think CMOS has this problem, but it comes at the cost of having more non-light-sensitive area on the sensor; i.e., if you don't pass from one pixel to another, you have to have highways to move the data. I think I remember that there is a problem with passing the electrons to lower layers of the chip, but I don't remember why.

jetcode
6-Sep-2007, 08:26
On the 5D it seems that the sensor has a non-linear response around this area (steeper curve?).


You are dead on in assessing that all analog sensors exhibit some combination of linear and non-linear transfer curves. Much of sensor conditioning pertains to making corrections in gain, linearity, and precision. Every sensing system, whether analog or digital, conforms to the nature of analog systems. Beyond 18 bits of resolution is the noise floor, the random chaotic noise that permeates the universe; 24-bit converters are heavily filtered to achieve higher resolutions. Sensors are notorious for non-linearity. A modern transistor has a very narrow range of linearity, but when used correctly it can produce a beautifully linear function. Half the battle in engineering is working with the inherent limitations of nature and device physics.

Joe

jetcode
6-Sep-2007, 08:38
I think the big problem of applying Moore's law to camera sensors is that in a memory chip or a CPU if you have a dead transistor you simply "map it out". On a sensor you can only have so many dead pixels before the entire sensor is junk.

The bigger the sensor and the more pixels the more likely that you will have to throw an unacceptable number away.

With some of the sensor technologies, information is read out by passing it down the row from one pixel to another, like people filing out of a movie theatre. If you have a dead pixel, all pixels upstream can't communicate. I don't think CMOS has this problem, but it comes at the cost of having more non-light-sensitive area on the sensor; i.e., if you don't pass from one pixel to another, you have to have highways to move the data. I think I remember that there is a problem with passing the electrons to lower layers of the chip, but I don't remember why.

There are no bad transistors in CPUs; there are failures in logic, and that is why no one can find the BIOS engineer at Intel. Mapping out bad memory happens in high-density flash; there is no mapping at the CPU level or in RAM, where every cell has to work or the chip is tossed. There are flash components that are 100% functional; they are very expensive and used for mission-critical applications.

Bucket-brigade devices (chained, movie theater) are noisy and no longer employed, as the noise floor is amplified by each cell as the signal passes from one cell to the next; these devices were used for analog delay. CCDs and CMOS sensors are not bucket brigades; they are analog sensor cells operated in parallel, limiting the overall noise floor to that of a single cell. I think you are referring to the multiplexing that is necessary to interface huge arrays of pixels in sensors. It's the same strategy that is used to interface huge arrays of memory. Careful design will limit the degradation of an analog signal in a multiplexed interface.

marschp
6-Sep-2007, 12:34
Sorry Paul. ....

I would say that if you are only achieving 5 stops, the problem is inherent in your exposure and RAW processing technique and not in the capture devices.

Continuing to state 5 stops is nothing more than nonsense and is just plain incorrect. Sorry.

David - I checked those links - very interesting, but I'm still not convinced. It feels to me a little like when my Lexus dealer tells me that the RX400H can do 34.5 mpg - that might be right in the lab, but my experience tells me it's not going to happen in real life (I get 27 mpg ;)).

I just took two sets of (real-life) test shots to challenge myself and compare with the DPR claim. The scene metered a total dynamic range of 7.6 stops. The JPEG, on a generous assessment, seems to have captured 6 stops before maxing out. The overall image appears dark and the distinction among the shadows is poor. I have bracketed JPEGs of the same scene that look better tonally, but the sky is blown out and not recoverable.

The RAW version did better. Using the ACR settings suggested in the DPR review for the 5D, I was able to produce a histogram that encompassed the full 7.6-stop range of the image as metered. However, this produced an image that I would class as unusable. The settings suggested by the DPR test seem to sacrifice mid-tone luminosity and raise the shadow detail (and the noise with it) in order to maximise DR - the result is a flat, dull image that lacks contrast in the shadows and the highlights. (By the way, I was pretty surprised to see that the ACR default settings used in the DPR test produced a RAW DR that was about 1 1/2 stops WORSE than the JPEG test.)

I'm intrigued by the DR response curves that DPR produce - particularly the non-linearity they all display. I'm by no means an expert in this field (as you can probably tell), but maybe it's that non-linearity in the shadows and (to a lesser degree) in the highlights that explains the difference between 'theoretical' DR (the 10-11 stops that DPR state) and 'usable' DR (the 5 and a bit stops that I'm talking about).

Next trip out with my 5D, I think I'll still be taking my grad filters.

Cheers

Paul

David Luttmann
6-Sep-2007, 13:30
Paul,

When you start with a scene of high DR, you have an inherently low-contrast image to begin with. You then have the ability to adjust it afterwards to a contrast level that you find suitable. As to noise in the shadows, there is far less noise in those shadows than there is with film grain.

That said, you just did the test yourself and obtained 7.6 stops with a 12-bit system. A moment ago it was only 5 stops. Amazing how these figures change when subjected to scrutiny ;-) I find better results with Capture One, to the tune of about 0.5 stops.

That said, I've had no problem achieving 8 stops or more, which is more than I've ever obtained from ANY Fuji or Kodak transparency film.....and definitely more than 5 stops!

marschp
6-Sep-2007, 15:19
Paul,

That said, you just did the test yourself and obtained 7.6 stops with a 12-bit system. A moment ago it was only 5 stops. Amazing how these figures change when subjected to scrutiny ;-)

Yes, but my point is that the 7.6-stop image is unusable, no matter what post-processing treatment I apply. So I have to conclude that the 7.6 stops of DR that I thought I had is in reality not letting me produce a good image. On the other hand, if I had approached the shot as though my sensor could only manage 5 and a bit stops, then I would have achieved a much more satisfying image, either through the use of grads, or through the blending of several images via HDR or layering.

Here's a thought: my kind of landscape photography takes place in low-light conditions, but I'm still trying to capture a broad range of light - the unfiltered histogram would no doubt have a hump at either end of the range, with not much in the middle. Generally, I would apply 2 to 5 stops of ND grad to bring the scene within what I regard as the 'useful' DR of my sensor - i.e. 5+ stops. So let's say the typical scene I'm shooting is 7 to 10 stops - supposedly within the 5D DR claimed by DPReview. Yet the bulk of the contrast is at the extreme ends of that range - just where the sensor becomes the most non-linear. Doesn't that suggest that the sensor is least useful just where I need it most - i.e. in the areas of highlight contrast and shadow contrast - and that that's why I'm struggling to accept the 'usability' of the full DR of the sensor?

Paul

tim atherton
6-Sep-2007, 15:38
Yes, but my point is that the 7.6-stop image is unusable, no matter what post-processing treatment I apply.
Paul

the 20 stops of TMX isn't very usable either (unless you are photographing things like nuclear explosions)

The point is that it's there if you need it.

With B&W film you choose your range/development depending on the subject matter and the look you want - sometimes you compress it, sometimes you expand it, sometimes you just go for the middle of the road.

paulr
6-Sep-2007, 20:10
This thread is reminding me of how cool b+w film is. I have no particular attachment to color film, but it seems like a satisfying substitute for b+w is a long, long way off.

Joanna Carter
6-Sep-2007, 22:43
Yes, although I have taken some stunning shots in colour, I have had to work hard to capture the full DR that was present, especially where the highlight/shadow was not filterable unless I had a triangular or oval-shaped grad filter :-) On occasions like that I have resorted to taking more than one trannie and compositing the scans into one "HDR" image.

By contrast, I have also had the immense pleasure of taking a B&W shot with some 11 stops of DR, developing it at N-2, and playing with split contrast printing techniques to produce the most wonderful of images.

My problem? My eye tends to discover more colour pictures than B&W, but the B&W ones, when I find them, are by far the most satisfying.

Helen Bach
7-Sep-2007, 06:30
Yes, although I have taken some stunning shots in colour, I have had to work hard to capture the full DR that was present, especially where the highlight/shadow was not filterable unless I had a triangular or oval-shaped grad filter :-) On occasions like that I have resorted to taking more than one trannie and compositing the scans into one "HDR" image....

Slightly off topic, but why not shoot colour negative if you want a good dynamic range?

Best,
Helen

David Luttmann
7-Sep-2007, 06:34
Slightly off topic, but why not shoot colour negative if you want a good dynamic range?

Best,
Helen

Far higher visible grain in the final print when scanning negs compared to chromes. Also, it is more difficult to profile a neg film for color accuracy.

Helen Bach
7-Sep-2007, 07:03
Far higher visible grain in the final print when scanning negs compared to chromes. Also, it is more difficult to profile a neg film for color accuracy.

David,

'Far higher visible grain'? What size are you printing, and how are you scanning? Shooting 4x5 Fuji Pro 160S and Kodak 160NC I don't have a problem with graininess. Is this a real concern in LF with the current colour negative films?

Unmasked film (ie reversal) is more difficult to get as colour-accurate as masked film (ie most colour negative film). Profiling is easy in comparison to compensating for dye imperfections in the absence of the two masks, if you want real colour accuracy. I consider colour negative film to be more colour accurate than reversal film.

Best,
Helen

Bruce Watson
7-Sep-2007, 07:21
'Far higher visible grain'? What size are you printing, and how are you scanning? Shooting 4x5 Fuji Pro 160S and Kodak 160NC I don't have a problem with graininess. Is this a real concern in LF with the current colour negative films?

Unmasked film (ie reversal) is more difficult to get as colour-accurate as masked film (ie most colour negative film). Profiling is easy in comparison to compensating for dye imperfections in the absence of the two masks, if you want real colour accuracy. I consider colour negative film to be more colour accurate than reversal film.

What Helen says. I've made 11x enlargements from 5x4 Portra 160VC with no grain visible in the final print, even in the white fluffy clouds where you'd expect it. I also see the improved color accuracy. I can't come up with any reasons to use LF positive films beyond the instant gratification of a tranny on a light table. I've used negative films exclusively for five years now with no regrets.

Joanna Carter
7-Sep-2007, 07:55
Slightly off topic, but why not shoot colour negative if you want a good dynamic range?

Lack of saturation combined with masochistic tendencies ? ;)

paulr
7-Sep-2007, 08:01
Well, David is still right about the grain. It just means that you'll be able to enlarge more without seeing grain with transparency film than with a comparable neg film. It's not because of technology; it's about the nature of grain. Grain clusters are larger in denser parts of the film. In a transparency, that puts the larger grain in the shadows, where it's hidden. In a negative, that puts the larger grain in the highlights, like the lighter sky values, where it's much more obvious.

And yes, profiling and color accuracy is harder with neg film too.

That said, I much prefer color neg film. The dynamic range alone makes it the clear choice for the work I do.

nelsonfotodotcom
7-Sep-2007, 08:16
I prefer Acros in Diafine. With this combo I get pretty much all I want out of images, regardless of the complexity of scene values. The first shot illustrates the ability of this combination to render shadow detail while controlling highlights - grain, yes, but unavoidable given the light, in my opinion. I'm still quite happy with the image. The second image illustrates delicious tonality and grain when shot under diffuse (heavy clouds) light. The third image is again under diffuse conditions.

For color-to-B&W I employ my D70s. I see no point in the additional expense of color film and processing to gain a B&W conversion. I also use the D70s for my color work. At some point I need to burn off the remaining color stocks, but then I'm finished shooting color film.

These are all RB67 images, FWIW.

http://farm2.static.flickr.com/1165/1190413844_cb1ca60382.jpg

http://farm2.static.flickr.com/1111/1019701659_30da9ea46a.jpg

http://farm2.static.flickr.com/1154/1190403176_86e69e2937.jpg

Another one for shits and grins.

http://farm2.static.flickr.com/1396/1005267062_c98d7deb6d.jpg

Best,
Craig

nelsonfotodotcom
7-Sep-2007, 08:23
Examples of digital-to-B&W conversions:

http://farm2.static.flickr.com/1304/867454079_d0ef14cac4.jpg

http://farm2.static.flickr.com/1112/867453467_45e42462f7_o.jpg

Helen Bach
7-Sep-2007, 10:10
Well, David is still right about the grain. It just means that you'll be able to enlarge more without seeing grain with transparency film than with a comparable neg film. It's not because of technology; it's about the nature of grain. Grain clusters are larger in denser parts of the film. In a transparency, that puts the larger grain in the shadows, where it's hidden. In a negative, that puts the larger grain in the highlights, like the lighter sky values, where it's much more obvious.

And yes, profiling and color accuracy is harder with neg film too.

That said, I much prefer color neg film. The dynamic range alone makes it the clear choice for the work I do.

Paul,

Your comment on graininess and density is true for silver-image negative films, but not always for dye-image films. The graininess of a dye-image negative film generally decreases as the density rises, along with a loss of definition. Reversal film has other things confusing the issue, such as the shadow tones being made from the smaller, less sensitive silver halide grains. It's not a simple comparison, especially when the film is post-processed digitally.

My original comment, however, was not that there is no difference in graininess between reversal and negative film (though I would not describe negative graininess as being 'far higher' than reversal graininess, but that could be merely a matter of our different calibrations - my adverbs are calibrated in British units), only that for me at least the graininess of negative film is not a problem in LF - so I just asked whether it is a concern for anyone else; a reason not to use negative film. It's not a reason in the movie business, and we use half-frame 35 mm blown up to a greater degree than LF negatives get blown up. It also appears not to be a reason for Joanna or yourself. My original question was to Joanna, specifically because I weighed up the difficulty of getting scans of two LF negatives to merge perfectly versus the use of colour negative film (I've tried it, and I'm in no doubt about which I prefer, but other people have other priorities).

As for the difficulty of profiling and the relative significance of that difficulty, that has to be a matter of opinion. On the other hand colour accuracy is a matter of fact: colour negative film has two masks to make corrections for dye impurities, and reversal film has more-or-less the same impurities but no masks - the only compensations for dye impurities are the interimage effects. It might appear comparatively easy to make an accurate scan of a transparency, but the transparency itself may not be an accurate record of the original scene. Not everybody wants technically accurate colours, of course. They want nice, pretty colours.

Best,
Helen

David Luttmann
7-Sep-2007, 12:36
Helen,

Transparency film ALWAYS has less grain in a scan than color or B&W negative film. This is nothing new. A simple Google search or testing will prove this. A film like Astia 100 will slaughter a film like Portra 160 or Fuji Pro 160 when it comes to grain. And I see grain in Fuji NPS in a 40" print quite easily......not true with Astia. So the difference is there.

As to color profiling, I'm sorry, but the opposite is true. The masking inherent to color negative films makes them MORE difficult to profile.....not less. Once again, this is old news & I'm surprised it's being questioned at all.

Helen Bach
7-Sep-2007, 13:09
David,

I think that you have mis-read what I am trying to say.

I'll say it again: I'm not disputing, and have never disputed, the fact that reversal film appears less grainy than negative film. There are old posts from me on other forums explaining why this is so. All I was asking about is whether or not this is a real concern for some people when shooting LF. It is obviously a concern for you, but it isn't a concern for me, and not a reason to shoot reversal instead of negative. Scanning Pro 160S on an Imacon 949 at 2040 ppi does not reveal any graininess if the original has been given enough exposure. I have 100% crops of Pro 160S on the web, and there is no visible graininess. Pro 160S is an improvement over NPS in this respect.

I've also tried to differentiate between profiling and colour accuracy. If you read what I have said, it is that colour negative film is inherently more colour-accurate than colour reversal because of the masks. The so-called difficulty in profiling is a separate issue. I have tried to keep these as separate issues. Some people do believe that it is more difficult to profile colour negative film than reversal (my opinion is that the difference is not sufficiently significant to favour the use of reversal film in my case), but that does not imply that reversal has greater colour accuracy. Two different issues.

This apparent disagreement may be created by a misunderstanding about the meaning of 'profiling'. Could you give a brief summary of what you think of as profiling? For example, what do you start with - an E-6 IT8/Q60 target? Thanks.

Best,
Helen

Oren Grad
7-Sep-2007, 13:28
I've also tried to differentiate between profiling and colour accuracy. If you read what I have said, it is that colour negative film is inherently more colour-accurate than colour reversal because of the masks. The so-called difficulty in profiling is a separate issue.

Helen, what is your recommended approach to profiling color negative? The question isn't intended as a polemic - I've started tinkering with scanning color neg, and haven't yet nailed it to my satisfaction. If it matters, my current focus is on Portra 160NC and 400NC. Thanks...

David Luttmann
7-Sep-2007, 14:50
Helen,

I've used everything from the canned Imacon profiles to the IT8 as well. The problem with the IT8 targets is that they were designed in the '80s, primarily for visual use. I've been using the Hutch precision scanner target with more accurate results.

I understand what you are saying with regards to grain and color accuracy, and I don't debate the greater dynamic range of color neg film over transparencies....but that said, one cannot be accurate in stating that color neg film is easier or more accurate to profile.....it's simply not. I actually started shooting some wedding work a few years back with MF Astia because I could obtain more accurate skin tones after profiling than was possible with Fuji NPS.

But, that said, when working with landscapes and other similar subjects, highly accurate color is rarely an issue for either film type....but grain / dynamic range could very well be.

Best regards,

Helen Bach
7-Sep-2007, 16:27
Oren,

For most work I do a variation of the EK telecine setup process that I am familiar with: ie I shoot a three-step card (Kodak 'Gray Card Plus' or the recent Macbeth card - both being black, grey and white) and balance the three by adjusting the curves, then I use that as the basis for all other scans in that lighting condition and exposure for that film, after setting the black and white points using the film DMax and DMin. This takes care of the uniform combination of masks and dye imperfections. The key thing is that the connection is made between the lighting in the scene and the scan before I then go and do whatever I feel like doing.
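
A rough sketch of that three-point balancing step, for anyone who wants to see the idea in code (Python, with invented patch readings; the real workflow is done with curves in the scanning software, and this is not Kodak's actual telecine procedure):

    import numpy as np

    def three_point_balance(scan, black_rgb, grey_rgb, white_rgb,
                            targets=(0.04, 0.18, 0.90)):
        # Build a per-channel curve through the measured black, grey and white
        # patches so that all three come out neutral at the chosen target values.
        balanced = np.empty_like(scan, dtype=np.float64)
        for ch in range(3):
            anchors = [black_rgb[ch], grey_rgb[ch], white_rgb[ch]]
            balanced[..., ch] = np.interp(scan[..., ch], anchors, targets)
        return np.clip(balanced, 0.0, 1.0)

    # Invented readings pulled from the frame that includes the three-step card
    black = [0.06, 0.05, 0.07]
    grey  = [0.30, 0.26, 0.33]
    white = [0.88, 0.84, 0.91]
    scan = np.random.rand(8, 8, 3)   # stand-in for the inverted, linearized scan data
    neutral = three_point_balance(scan, black, grey, white)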

For more rigorous work I have used calibrated DSC colour bars and grey scale (http://dsclabs.com/colorbar_grayscales.htm), but that is glossy and hence requires a little more effort than the matte Gray Card Plus or Macbeth. I happen to have the DSC chart because of my cinematography - I wouldn't bother paying for one for still photography, nor would I use such a system if I wasn't already familiar with it. How often do we need colour accuracy anyway? I'm sure that others on this forum will be able to offer a more suitable method than mine. With a disciplined approach you could use inCamera or similar, rather like one would with a digital camera. I have tried inCamera, but I stuck with what I was familiar with.

Here is what I was attempting to get at, but not explaining myself very well: using a Hutch target or an IT8/Q60 is a good way of matching the scan to the transparency, but it does nothing for matching the scan with reality because the reversal film itself does not do that. That doesn't matter so much - we are used to getting transparencies looking just like we want them to, and we are not likely to want a bang-on, technically correct representation (if there is such a thing with a three-colour system) unless we are copying art or something similar. As far as skin tones go, reality is the exception rather than the norm, isn't it?

David,

"...one cannot be accurate in stating that color neg film is easier or more accurate to profile.....it's simply not..."

I hope that I haven't stated that colour negative is easier or more accurate to profile - not least because I have some fuzziness about the exact meaning of 'profile'. I have stated that colour negative is inherently capable of greater colour accuracy than reversal film. In all colour film there is a mismatch between the spectral sensitivity of the light-sensitive layers and the spectral absorption of the corresponding dyes. In colour negative film this is compensated for by the two masks.

Though colour reversal film cannot be masked in the same way as negative film without creating a colour cast, the absence of the masks can be catered for to some extent by digital post-processing if required. This is not the same as profiling using a target - it needs more than that. Even film designed solely for scanning (ie film that cannot be printed optically with any degree of colour accuracy) benefits from some degree of masking via coloured colour couplers despite the use of dedicated digital post-processing.

All this aside, we do what we are happiest with while being curious about what everyone else does.

Best,
Helen

Oren Grad
7-Sep-2007, 21:37
For most work I do a variation of the EK telecine setup process that I am familiar with: ie I shoot a three-step card (Kodak 'Gray Card Plus' or the recent Macbeth card - both being black, grey and white) and balance the three by adjusting the curves, then I use that as the basis for all other scans in that lighting condition and exposure for that film, after setting the black and white points using the film DMax and DMin. This takes care of the uniform combination of masks and dye imperfections. The key thing is that the connection is made between the lighting in the scene and the scan before I then go and do whatever I feel like doing.

Thanks, Helen. That gives me some new ideas to work with. The objective in this case is to integrate the scan into a workflow that manages color consistently between scanner, screen and printer. Concordance with the real world is a separate issue, which I deal with (or not) in other ways.

Sal Santamaura
8-Sep-2007, 08:34
Actually- of all places - I'd probably recommend Ansel Adams' book The Negative for some of this info...

Why "of all places?" It's always been one of the best places for solid technical black and white film information translated into terms that anyone willing to read and think can understand.


...all the comments on this thread - very useful, and revealing of the immense differences not just between digital and film, but between b&w and colour...

Not to mention between the perceptions of different humans. :)

roteague
8-Sep-2007, 10:22
Why "of all places?" It's always been one of the best places for solid technical black and white film information translated into terms that anyone willing to read and think can understand.

Perhaps. I found the book boring.