Racer,
Do you have numbers for the mean and standard deviation and the information source?
IC, Steve -
A Gaussian or Normal distribution (where the data conforms to a Bell Curve like IC has in his graph) will have the following numbers:
A range of +/- 1 std dev (or 1 sigma) means about 68% of all the results fall within that range.
A range of +/- 2 std dev (or 2 sigmas) means about 95% of all the results fall within that range.
A range of +/- 3 std dev (or 3 sigmas) means about 99.7% of all the results fall in that range.
If you are counting all the results on one side of the mean, like in IC's graph, you get:
1 std dev of margin => 84% above
2 std dev of margin => 98% above
3 std dev of margin => 99.9% above
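(As an illustrative aside, these percentages are plain normal-distribution figures, so anyone can check them with a few lines of Python using only the standard library:)

```python
# Coverage of a Gaussian: two-sided (+/- k sigma) and one-sided
# (everything above mean - k sigma). Standard library only.
import math

def within(k):
    """Fraction of results within +/- k std dev of the mean."""
    return math.erf(k / math.sqrt(2))

def above(k):
    """Fraction of results above (mean - k std dev)."""
    return 0.5 * (1.0 + math.erf(k / math.sqrt(2)))

for k in (1, 2, 3):
    print(f"{k} sigma: {within(k):.1%} within range, {above(k):.1%} above")
# 1 sigma: 68.3% within range, 84.1% above
# 2 sigma: 95.4% within range, 97.7% above
# 3 sigma: 99.7% within range, 99.9% above
```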
If you give yourself enough overexposure that your aim point sits 2 std dev above the underexposure cutoff, then about 98% of your exposures will not be underexposed.
The trick is being able to predict how much overexposure you need to have to make sure you meet that goal.
So this really can't be tested or determined using film tests. It can only be determined by examining how many of your own exposures succeed or fail.
Kirk - www.keyesphoto.com
I guess if one could measure how many stops off each negative is from the minimum exposure needed for a good negative, then it would be easy to do this with a little bit of simple statistics.
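A minimal sketch of that "simple statistics" idea, assuming roughly normal exposure errors (the data points and the 2 std dev safety target below are made up for illustration, not anyone's actual records):

```python
# Given measured errors for each negative, in stops relative to the
# minimum exposure needed for a good negative (hypothetical data),
# estimate the overexposure needed so ~98% of negatives are safe.
import statistics

errors = [-0.3, 0.5, 1.0, -0.8, 0.2, 0.7, -0.1, 0.4, 1.2, -0.5]  # stops

mean = statistics.mean(errors)   # average error of your technique
sd = statistics.stdev(errors)    # spread of your technique

# Aim 2 std dev above the underexposure point: ~2% of shots still fail.
extra_stops = 2 * sd - mean
print(f"mean {mean:+.2f}, std dev {sd:.2f} -> "
      f"add about {extra_stops:.1f} stops of overexposure")
```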
Kirk - www.keyesphoto.com
Again, the last model is just a model. I don't keep records to analyze my exposure info, but it would make an interesting paper if it hasn't already been done.
That statistical model was a refinement of my "Bullseye" model. The "Bullseye" model simply places the exposure latitude equally on either side of the scene brightness range: exposure is 'aimed' for the center of the H&D curve. In the more refined statistical model, exposure is 'aimed' for the peak of the bell curve. Depending on how 'good' your technique is, the bell curve can be narrow, and the resulting exposure can be less than with the "Bullseye" approach.
Of course, if you live in a laboratory, your bell curve will be so narrow as to make your EI equal the ISO with that statistical model.
Anyway, here is the introduction to the "Bullseye":
This diagram is just to get one's bearings, and shows a "Conventional" exposure based on ISO or any permutation of the Zone System. As you can see, there is almost no underexposure latitude, but there is plenty of overexposure latitude.
[Diagram: conventional exposure placement on the H&D curve]
So, here is the "Bullseye" method. Unlike the statistical model, it is easy to calculate. One can set the exposure index to put Zone V smack in the middle between the minimum point (ASA/ISO or 0.1 above film base) and the maximum point (D-max or 0.2 below D-max).
So, with this system, an ISO 400 film can be put in a non-metered classic camera, and an exposure data sheet for ISO 100 film is used to guess at exposure (i.e. "sunny 16", f/8 at 1/500th, etc.).
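As a rough illustration of that arithmetic (the log-H endpoints and the "conventional" placement below are assumed numbers for the sketch, not measured film data):

```python
# "Bullseye": aim exposure at the midpoint between the minimum point
# (0.1 above film base) and the maximum point (0.2 below D-max).
import math

log_h_min = -3.0                 # log-H at 0.1 above base+fog (assumed)
log_h_max = 0.0                  # log-H at 0.2 below D-max (assumed)
conventional = log_h_min + 1.0   # a typical metered placement (assumed)

midpoint = (log_h_min + log_h_max) / 2
shift_stops = (midpoint - conventional) / math.log10(2)  # ~0.3 log-H per stop

print(f"aim {shift_stops:.1f} stops above the conventional placement")
print(f"e.g. rate ISO 400 film at about EI {400 / 2**shift_stops:.0f}")
```

With those assumed numbers the shift comes out to about 1 2/3 stops, in the same ballpark as using the ISO 100 data sheet with ISO 400 film.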
Again, it would be a cool experiment to give a novice the ISO 100 data sheet and a camera loaded with modern ISO 400 film, then show the "best prints" to a panel. The comparison would be to an accurate exposure made with a spot meter by a "zonie", based on 0.1 above film base or on the film's ISO.
Of course the drawback with this system is that some of the negatives will be dense, which could lead to difficulty printing with some enlargers or some papers.
[Diagram: the "Bullseye" exposure placement on the H&D curve]
Thanks for the information Kirk, you shouldn't have gone to the bother. I think I wasted your time because I probably didn't state my question properly. A normal distribution curve has to relate to something. For instance, the mean Subject Luminance Range is 2.2 with a standard deviation of 0.38; this determines the shape of the curve as well as the range of the samples. I was just wondering what parameters Racer was using, is all.
I'm in agreement with Racer's slightly higher placement on the curve. The first excellent print test determined that exposures greater than the one required to reach the first excellent print produced prints indistinguishable from it. It is a sound principle of exposure. I do believe it falls under a different topic of discussion than speed determination, which was the topic I was discussing.
I have a question regarding the two graphs illustrating the principle. They demonstrate the principle well. I'm just wondering if the subject luminance range is a relative example or if it is supposed to represent realistic values? I'm asking because it appears that the range is around 0.8 or so, and it definitely falls under 1.0. It's just that the average subject luminance range is 2.2, and even if you incorporate flare, it will still be around 1.8 log-H. Zone I - VIII should be 2.1 log-H, and with flare 1.7 log-H.
Steve - so the mean LR is 2.2 and the standard deviation is 0.38. So that I can do the math in my head, let's say that the std dev = 0.4, OK?
Like IC Racer pointed out, we only need to worry about scenes that have luminance ranges wider than normal, as subjects with narrower-than-normal ranges will not be under- or overexposed.
So with a mean of 2.2, then 1 std. dev covers up to a LR of 2.6. That means that 84% of all scenes have a LR of less than 2.6.
With 2 std dev, it covers up to a LR of 3.0, and 98% of all exposures will have a LR less than 3.0.
With 3 std dev, that covers up to a LR of 3.4, and 99.9% of all exposures will have a LR less than 3.4.
So if you wanted to cover 98% of all exposures, then you need a film that has a relatively straight-line section that will take an exposure range (LR) of 3.0. And that means you need to put the "bullseye" exposure point, the middle of the exposure, at 1.5 above the minimum exposure value.
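Checking that arithmetic with a few lines (the normal-LR assumption and the mean 2.2 / std dev 0.4 figures are just the ones quoted above):

```python
# Film straight-line range needed to cover a given fraction of scenes,
# assuming scene luminance ranges (LR, log units) are roughly normal.
mean_lr, sd_lr = 2.2, 0.4

for k, coverage in ((1, "84%"), (2, "98%"), (3, "99.9%")):
    needed = mean_lr + k * sd_lr
    print(f"{k} std dev: straight-line range {needed:.1f} covers "
          f"{coverage} of scenes; bullseye at {needed / 2:.2f} above minimum")
```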
Not all films could handle this situation, and it gets complicated if one starts to do N-1 or N+1 developments, but then why would one, if they are trying to have a simple, non-complicated exposure system...
Kirk - www.keyesphoto.com
I'll give some background on what prompted me to come up with these models back in the 80s.
At the time, I was writing an image analysis program at the NIH for CAT scans. The GE scanner's 'exposure' of the patient gathered 1024 gray levels from each area of the patient, but the computer screen could only show 256 gray levels, so my gray-level software allowed many ways to view the 1024-level set. Here is a perfect example someone else wrote as a Java applet:
http://www.emory.edu/CRL/abb/WindowLevel.3/chest2.html
So, I had a mindset that each exposure I made was to include the ENTIRE subject brightness range (as projected onto the film plane) and later, in the darkroom I would decide which 'gray level window' would show in the print, by using multigrade paper. Totally analogous to the way the CT scanner gathered all the gray level data and the CRT monitor showed it.
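For anyone curious, that window/level idea reduces to a few lines (the names and ranges here are illustrative, not the original NIH code):

```python
# Map a wide raw gray-level range (e.g. 0..1023 from the scanner) into
# a narrow display range (0..255) through a chosen "window" of values.
def window_level(raw, level, width, out_max=255):
    """Linear ramp from 0 to out_max across a window centered at
    `level` with total width `width`; clip everything outside it."""
    lo, hi = level - width / 2, level + width / 2
    if raw <= lo:
        return 0
    if raw >= hi:
        return out_max
    return round((raw - lo) / (hi - lo) * out_max)

# Show only levels 256..768 of a 0..1023 scan at full display contrast:
print([window_level(v, level=512, width=512) for v in (0, 300, 512, 700, 1023)])
# -> [0, 22, 128, 221, 255]
```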
Although these technical details show the scientific basis of what is going on, the main thing FOR ME was the MINDSET that I WOULDN'T decide which values are, or are not, going to be in the print, at the time of exposure (opposite of zone system visualization). I wanted all tonal info from the scene at the time of exposure, (like a scientific instrument would record it), and later, I would decide which values will appear in the print.