Digital race is only beginning. LOL ha ha



Tin Can
31-May-2013, 11:36
Read this and weep, everybody with new expensive digital cameras.

http://petapixel.com/2013/05/31/new-camera-sensor-1000x-more-sensitive-than-current-sensors/

paulr
31-May-2013, 11:43
Why weep? If this technology makes it to market and is as good as the hype, that's great news. In the meantime I'm quite happy with today's technology.

Tin Can
31-May-2013, 11:47
A figure of speech. Yes, it is great news, but also unnecessary for my photography. It will be great for surveillance; Big Brother will love it.


Why weep? If this technology makes it to market and is as good as the hype, that's great news. In the meantime I'm quite happy with today's technology.

Drew Wiley
31-May-2013, 12:00
That's all we need ... even more efficient rigged traffic-light cameras, and even better tiny drones sent over your neighborhood by the local burglary ring. They'll probably combine the traffic camera with face-recognition software, instantly assess your net value, and then be able to adjust your ticket appropriately. They already know how to digitally alter the color of the light and automate the ticket where the video info is actually processed. But I look at the bright side too. The face- and volume-recognition technology will allow a McDonald's computer system to automatically call an ambulance right when that twelve-year-old, four-hundred-pound diabetic waddles up to the counter to order his twice-daily McMegaGreasemeal.

paulr
31-May-2013, 12:03
Depends on how these qualities manifest themselves in a final product. "Higher sensitivity" doesn't just mean low light performance. It means higher s/n ratio, which is pretty much everything as far as image quality is concerned. It means more dynamic range, greater color bit depth, and lower visible noise at every ISO. This translates into greater sharpenability, among other things.

paulr
31-May-2013, 12:04
And thank you, Drew, for providing a negative example of high s/n ratios.

bobwysiwyg
31-May-2013, 16:20
One can only imagine the file sizes!! :rolleyes:

Nathan Potter
31-May-2013, 16:33
Don't hold your breath - there is no technical meat here at all. No identification of the possible structure of a pixel, let alone the integration of the carbon matrix into a compatible CMOS-type process sequence.

Nate Potter, Austin TX.

Kirk Gittings
31-May-2013, 16:35
Why weep? If this technology makes it to market and is as good as the hype, that's great news. In the meantime I'm quite happy with today's technology.

Ditto. It's all good: the past, the present, and the future of image capture.

Bernice Loui
31-May-2013, 18:47
That news brief says much of nothing. They are still forced to obey the laws of physics as we understand them today.

Until they publish their findings and produce a viable commercial product, this news brief remains much of nothing.

If you're after light sensitivity, there are currently more than a few ways to get it. Regardless, the laws of physics as we understand them apply to anyone who endeavors into these technologies.


Bernice...

-Who remains skeptical of any new technology until proven significant by real world results.




Read this and weep, everybody with new expensive digital cameras.

http://petapixel.com/2013/05/31/new-camera-sensor-1000x-more-sensitive-than-current-sensors/

Lenny Eiger
31-May-2013, 22:12
I would only be interested if it made possible a large sensor. Otherwise, it's the same old stuff... just what we have now...

Lenny

Jac@stafford.net
31-May-2013, 22:31
If it really is a thousand times more light sensitive then a lot of ND filters will be sold.

paulr
31-May-2013, 23:26
If it really is a thousand times more light sensitive then a lot of ND filters will be sold.

No need. As always you can choose your ISO based on the amount of gain applied to the sensor signal. Or negative gain can be applied, which is what happens when you choose an ISO below the "native" ISO. 1000 times the sensitivity means ten stops increased s/n performance.
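As a quick sanity check of that stops arithmetic, here is a minimal Python sketch (nothing camera-specific, just the conversion from a sensitivity ratio to stops):

import math

# One photographic stop is a factor of 2 in light,
# so a sensitivity ratio converts to stops as log2(ratio).
def ratio_to_stops(ratio):
    return math.log2(ratio)

print(ratio_to_stops(1000))  # ~9.97, i.e. "1000x" is almost exactly ten stops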

I share Bernice's skepticism that we'll see this much real world performance (from this or from any other technology on the horizon). But I don't agree that we are currently near a physics-imposed brick wall.

Lenny's suggestion that real improvements would require a larger sensor size is incorrect. We are in the realm of diminishing returns with resolution improvements in 35mm and smaller sensors. But there's a way to go yet. And in the world of medium format, no one has come close to the higher pixel densities that would be both possible and useful. Separately from resolution, s/n performance is important to a degree that's hard to overestimate. We are still far from what's possible there.

Jac@stafford.net
1-Jun-2013, 01:22
No need. As always you can choose your ISO based on the amount of gain applied to the sensor signal. Or negative gain can be applied, which is what happens when you choose an ISO below the "native" ISO. 1000 times the sensitivity means ten stops increased s/n performance.


Thanks for that. I had not considered that with improved s/n one could reduce ISO further than I am accustomed to with my digital camera.

Joseph O'Neil
1-Jun-2013, 05:39
What bothers me more than anything else about the "digital race" is how expectations are being blown out of proportion.

I've been trying to get my teenage daughter into photography. My digital SLRs are too big, too complex for her, and they are "dad's cameras". So I bought her one of these Coolpix cameras from Nikon, fairly inexpensive, but if I had a camera like that when I was in high school I would have thought I'd died and gone to heaven. 30x zoom, image-stabilized lens, almost 2000 shots on a single 8 gig card, etc., etc. I did read the reviews, and all I see are complaints that it is too noisy, the zoom only works well in bright light, etc., etc. Compared to my old Nikon EM, sure, I got better images, but wow, I only had 36 shots and I had to pay for every roll of film to be developed. I think today, if they want to, the kids can learn faster in some ways than we did.

Well, my daughter likes it, understands it, it fits in her hand nicely, and more importantly, it is "her" camera, not dad's. :)

So I think I have a hit here. But by the standard of how fast things are changing in this "digital race", it is already out of date. That to me is sad. I still have my 30-year-old Nikon EM, and it still works, although all the film I shoot today is 4x5 or 8x10. I felt proud of my camera, looked after it, and other people would say "good for you, that camera will last you a lifetime". Well, not quite, but a long time. But I think having a sense of pride in the tools you own and use, and having that sense of pride reinforced by the community at large, helps make you a better - well, anything. A carpenter, chef, photographer - in any skill or profession, when you feel a sense of pride in your tools, when you feel you have to look after your gear and take care of it so it will take care of you, I think you become better at whatever it is you do.

But with this "digital race", IMO, you do not see that. And that to me is sad. I don't know the solution; I just think we are, as a society as a whole - not individuals here, but as a whole - becoming a bunch of spoiled, self-entitled jerks because we don't have the latest and greatest.

soapbox mode = off

Corran
1-Jun-2013, 05:48
Just FYI, somewhere I read that this "1000x more sensitivity" claim was misquoted. It's actually 1000x greater than the older graphene technology, which was apparently abysmal. So it's not "1000x better" than current cameras.

Joseph O'Neil
1-Jun-2013, 08:55
Just FYI, somewhere I read that this "1000x more sensitivity" claim was misquoted. It's actually 1000x greater than the older graphene technology, which was apparently abysmal. So it's not "1000x better" than current cameras.

Well, by that standard, the boxes of Tech Pan I have in my freezer might have "1000x more sensitivity" than the large format tintype revival prints the guy down the street from me is currently doing.
:)

Bernice Loui
1-Jun-2013, 09:13
A real innovation would be to put into mass production 5-10 micron CMOS image sensor technology on a 5" x 7" sensor, resulting in multi-GB file sizes and no less than 20 bits of contrast range.

If this 5" x 7" digital sensor is to be used for color, three will be required to achieve individual RGB image files in a color separation camera.
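Back-of-the-envelope numbers for such a sensor (a rough Python sketch; the 5 micron pitch and 20-bit depth are the figures above, the rest is plain arithmetic):

MM_PER_INCH = 25.4
pitch_mm = 0.005  # 5 micron pixel pitch

width_px = 7 * MM_PER_INCH / pitch_mm    # ~35,560 pixels across 7"
height_px = 5 * MM_PER_INCH / pitch_mm   # ~25,400 pixels across 5"
megapixels = width_px * height_px / 1e6  # ~903 MP

# One 20-bit monochrome plane, stored as 3 bytes per pixel:
gigabytes = width_px * height_px * 3 / 1e9  # ~2.7 GB per exposure, x3 for RGB
print(round(megapixels), round(gigabytes, 1))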

Until then...


Bernice



I would only be interested if it made possible a large sensor. Otherwise, it's the same old stuff... just what we have now...

Lenny

Bernice Loui
1-Jun-2013, 09:20
Indeed, there are already a good number of problems limiting current image sensor technology today that are not imposed by our current knowledge of physics.

These trade-offs are cost- and marketing-driven. In the product development process, there are always compromises made to arrive at a saleable product at a cost the market will accept.

What is good enough, what will it cost the end user, what profits can be made vs. investment vs. production cost, how can the product be made market-acceptable... enough.


Bernice





But I don't agree that we are currently near a physics-imposed brick wall.

paulr
1-Jun-2013, 10:07
Just FYI, somewhere I read that this "1000x more sensitivity" claim was misquoted. It's actually 1000x greater than the older graphene technology, which was apparently abysmal. So it's not "1000x better" than current cameras.

Ahhh, good catch. Any sense how this translates to a comparison with CMOS?

paulr
1-Jun-2013, 10:17
Indeed, there are already a good number of problems limiting current image sensor technology today that are not imposed by our current knowledge of physics.

Can you name one? And maybe point to a source?

I've seen a lot of anecdotal ones come and go. Not long ago it was presumed that 5 micron pixels would be too small and would give poor s/n and dynamic range performance. Then Nikon/Sony's 4.8 micron sensor broke all the performance records. A year later they produced a 4 micron sensor which does not seem to have introduced any compromises.

When the manufacturers stop investing in smaller pixels, all the attention will turn to better pixels: how to capture more energy, introduce less noise, accurately transfer more cycles per pixel, give better color purity. We will likely see alternatives to the Bayer sensor and Foveon, and alternatives to CMOS.

And consider that these smaller pixel pitches have not been introduced yet in the world of larger sensors. This may take a while ... the guys I know with 60 and 80 megapixel Phase One backs have long wish lists, but I almost never see the call for more pixels on those lists. And yes, everyone I know with these backs has large format film experience.

Corran
1-Jun-2013, 10:32
Ahhh, good catch. Any sense how this translates to a comparison with CMOS?

Sorry, no.

Lenny Eiger
1-Jun-2013, 11:17
Lenny's suggestion that real improvements would require a larger sensor size is incorrect. We are in the realm of diminishing returns with resolution improvements in 35mm and smaller sensors. But there's a way to go yet. And in the world of medium format, no one has come close to the higher pixel densities that would be both possible and useful. Separately from resolution, s/n performance is important to a degree that's hard to overestimate. We are still far from what's possible there.

You may think so, but I am not interested in resolution. I don't give a sh_t about sharpness, especially sharpness that a printer can't print. Any camera/lens combo these days can do sharp edges - especially a Leica mini, for example. However, I am far more interested in tonal reproduction, definition, etc. And in that arena sensor size is king. I will not believe that a tiny sensor at 35mm size can compare to the information in an 8x10's 10 inches of film. It can't compare in film and it won't do it in digital either.

Lenny

Daniel Stone
1-Jun-2013, 11:29
I see MFDB and 35mm FF files on almost EVERY job I work on (I'm a photo assistant).
In my book, MFDB files win hands-down, every time. However, for my own shooting, I still prefer to shoot film and drum scan it.

Bigger IS better, most of the time. But these little Panasonic GH3s and Fuji X-E1s are pretty capable little beasts when handled well. An ill-processed MFDB file can look like sh*t, and a well-done micro-4/3rds file can sing. Just comes down to the person at the controls ;)

Film still looks better to me, though, but convenience and speed? Digital by a large margin.

-Dan

Struan Gray
1-Jun-2013, 11:48
A few points.

First, university researchers get credit, extra grants, and general all-round kudos if articles about their work appear in the mainstream press. That provides some perverse incentives to announce research results by press conference, which is only made worse by the superficial way science is reported in most news outlets. The end result is exaggerated claims for the significance of the breakthrough, and wildly optimistic projections about potential applications. It's *all* fluff.

This is a good result, and an interesting one. That's why it's in Nature, which in spite of everything still maintains some pretty fierce bulldogs on the gate and doesn't just publish any old thing. But it isn't about to revolutionize sensor technology, not yet at any rate. One of the reasons graphene is so weird - and stable - as a physical substance is that it's pretty inert. That makes it hard to modify, or to make sensors and other electronics out of. The advance here is in the ability to adjust the properties of the graphene sheet without it breaking apart.

The hard limit to sensor performance is photon shot noise. This is noise built into the light itself, and it cannot be mitigated by anything other than collecting light for a longer time, or over a larger area. There's a lot of nonsense talked about photon shot noise, but it can be measured with current imaging sensors, separately from other sources of noise such as those introduced by the readout electronics of the sensor itself. So the limit is there, will be significant if ISOs climb much beyond 100,000 or so, and can only be combated by capturing every last photon as efficiently as possible.
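A small simulation makes the sqrt(N) behavior concrete (a sketch only; the photon counts are arbitrary):

import numpy as np

rng = np.random.default_rng(0)

# Photon arrivals are Poisson-distributed: for a mean of N photons per
# pixel the standard deviation is sqrt(N), so s/n = N / sqrt(N) = sqrt(N).
for mean_photons in (10, 1000, 100000):
    samples = rng.poisson(mean_photons, size=1_000_000)
    print(mean_photons, round(samples.mean() / samples.std(), 1))
# prints s/n of roughly 3.2, 31.6, 316.2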

This material is transparent. OK, partially transparent. You can see the scientist's face through it. Compare the visual effect with a step wedge or an ND filter in your mind's eye, and it's clear that at present it offers no threat to class-leading silicon-based sensors, which turn up to 95% of the light into signal (at some wavelengths). If the substrate were transparent, that would roughly be a four-stop ND filter. Graphene has a way to go.

Greg Miller
1-Jun-2013, 12:55
For 99+% of photographers (especially the ones Drew trashes all the time), whatever technology they are currently using is better than their artistic vision. It really doesn't matter if they are using LF, ULF, MFDB, DSLR, point & shoot,... because their vision is their main handicap, not their equipment. Better equipment or technology might make them incrementally better, but that will pale in comparison to what growth in vision would provide (or mastering the equipment they already have).

Lenny Eiger
1-Jun-2013, 13:10
I see MFDB and 35mm FF files on almost EVERY job I work on (I'm a photo assistant).
In my book, MFDB files win hands-down, every time. However, for my own shooting, I still prefer to shoot film and drum scan it.
Film still looks better to me, though, but convenience and speed? Digital by a large margin.
-Dan

There is a difference between what a commercial photographer needs and what an artist needs. I don't need convenience and speed. I use a large format camera and tripod, and it's slower. Unless you look at the goal, of course.

The aesthetic of commercial work is that it must have "impact". Others call it "pop" or snap, intensity, etc. It should shock the viewer into considering what the image has to say. The red of the Coke can should stun us into submission. If you look at journalism overall these days you will see this very graphic, intense approach as well. Everything is about shock and awe.

I understand the need for it in a commercial realm. (I have no disrespect for commercial photographers, my father was a very successful one. I grew up in his darkroom, helping him with all sorts of stuff...) Some artists like shock and awe as well. Personally, I am not that interested. I like images where I learn something, where something is so rich in understanding that I am moved. I'm interested in depth. I don't want to be yanked into submission, sold to or anything else. I want to see a work by someone who understood something at a core level and passed it along. Something I can use. It takes time to develop understanding, and to reach the core level of anything. Convenience doesn't help. Photography without understanding or without depth, that just sticks something red in the center of the frame isn't particularly interesting to me.

I have noticed that when photographs pass over the line from something a person looks at to something that someone "experiences", it's often because of an overwhelming amount of tonal information. Lots of delicious midtones. I'm looking for something subtle. I'm 60, and I already got the obvious stuff - a long time ago.

The MFDBs aren't going to do it. Primarily because the sensor size can't carry that much information. It won't compete with 4x5 or 8x10. At least not until they decide to start making larger sensors. At which point I will be happy to move....


Lenny

paulr
1-Jun-2013, 13:19
You may think so, but I am not interested in resolution. I don't give a sh_t about sharpness, especially sharpness that a printer can't print. Any camera/lens combo these days can do sharp edges - especially a Leica mini, for example. However, I am far more interested in tonal reproduction, definition, etc. And in that arena sensor size is king. I will not believe that a tiny sensor at 35mm size can compare to the information in an 8x10's 10 inches of film. It can't compare in film and it won't do it in digital either.

Lenny

Based on what? The physics disagrees with you (http://www.falklumo.com/lumolabs/articles/equivalence/).

paulr
1-Jun-2013, 13:30
Indeed, there are already a good number of problem limiting current image sensor technology today that are not limited by current knowledge of Physics.

Sorry Bernice, I misread this the first time around. We agree on this; I argued with a point you weren't making.

Lenny Eiger
1-Jun-2013, 15:01
Based on what? The physics disagrees with you (http://www.falklumo.com/lumolabs/articles/equivalence/).

No, the physics agrees with me. Once again, consider the telephone pole that has a million shades of brown. Consider what a 1/4-inch-wide swath of film can reproduce... not that much. If the 1/4 inch is on a 4x5, then consider how much info is on a full inch of film on an 8x10. A lot more shades get reproduced. Of course, you can also go backwards to 1/16 inch on medium format or 1/64 inch on 35mm.

What you are basically saying is that an image sensor the size of 35mm will do as much as an 8x10 piece of film - and that is simply not true. I don't care what the article says; I don't trust Luminous as far as I could throw it to begin with, but in this case I think the article is discussing other issues that have no bearing on this discussion. It seems to be about size difference with matching resolution. It's a little too technical for a Saturday... but it can't turn a 35mm into an 8x10.

Lenny

Nathan Potter
1-Jun-2013, 15:56
The quantum efficiency of current CMOS sensors is very high, I think overall > 90%. That means that >90% of incoming photon energy is converted to usable current. It would be impossible to increase this by three orders of magnitude. In fact it fundamentally cannot be increased significantly unless the pixel area is increased. A possible advantage of using the pure nanolayer graphene mentioned is in superior noise performance.

As Struan mentioned in his, as usual, excellent post, any limit to improved noise performance will be in the reduction of shot noise. Shot noise refers to light at very low photon flux, such that there is a statistical variation over time in the number of photons hitting a sensor. It is a critical limit that can only be mitigated by averaging a signal over time. Thus, in principle, a pixel can be sampled over a period of time and the result averaged using an abundance of sophisticated signal processing tricks, all of which consume time and bits but can be implemented in software.
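A quick simulated illustration of that averaging (a sketch with arbitrary numbers, not a model of any particular sensor):

import numpy as np

rng = np.random.default_rng(1)
mean_photons = 25  # dim exposure: a single frame has s/n = sqrt(25) = 5

# Averaging n frames of a static scene cuts shot noise by sqrt(n).
for n_frames in (1, 4, 16, 64):
    stack = rng.poisson(mean_photons, size=(n_frames, 100_000)).mean(axis=0)
    print(n_frames, round(stack.mean() / stack.std(), 1))
# s/n climbs roughly 5, 10, 20, 40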

Even if the graphene described can be made with high quantum efficiency it will be subject to the same shot noise, which is fundamental to all devices involving very low signal levels.

Another type of noise is associated with the resistance of a material, called thermal or Johnson noise. Graphene (certain structures at least) has demonstrated extremely low resistance (high conductivity), I think even quantum-well-type behavior, so this form of resistive noise contribution could be very low if such a graphene device could be made with high quantum efficiency.

I'm kind of appalled that the original paper was misquoted by implying a possible 1000X increase in detectivity using graphene over the current CMOS devices. Shame on the IEEE Spectrum. I think they'll hear a lot about this from the IEEE members.

Nate Potter, Austin TX.

Corran
1-Jun-2013, 17:24
No, the physics agrees with me.

No, I think your personal thought experiment agrees with you. That isn't physics, unless you have some proofs to back that up.

Lenny Eiger
1-Jun-2013, 23:10
No, I think your personal thought experiment agrees with you. That isn't physics, unless you have some proofs to back that up.

It is my personal thought experiment. I don't have tables to back it up. Wait a minute, I do...

The math to represent this thought experiment is simple. We are talking about the amount of bit depth. It's a simple matter to multiply the bit depth by an area of film. If we assume a square, the telephone pole that covers a full inch on the 8x10 covers roughly 1/8 of an inch - 1/64th of the area - on 35mm. That's 64 times the amount of data, or 2 to the 6th.
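The area arithmetic itself is easy to check (a sketch with nominal film dimensions; whether tonal information actually scales with area is the point under dispute in this thread):

# Nominal usable image areas in mm, ignoring lens and scanner differences.
formats = {"35mm": (24, 36), "4x5": (95, 120), "8x10": (195, 245)}

area_8x10 = 195 * 245
for name, (h, w) in formats.items():
    print(name, round(area_8x10 / (h * w), 1))
# 8x10 has ~55x the area of 35mm, in the ballpark of the 2^6 = 64 figure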

So you can snipe that I am not a physicist but this math doesn't require one.

I haven't seen anything that gives me the idea that the amount of total potential information in an 8x10 will be matched by any type of 35mm sensor. If you compare a similar-sized sliver on each piece of film you will get a close result. While lenses may differ in resolution, I haven't seen much about them differing in bit depth. I am sure there is a small difference; however, this might be offset by the lesser amount of enlarging required to make the same size print.

In addition, it appears to be physical reality. I have done plenty of experiments comparing different-sized film and digital files, and I have concluded, based on physical evidence, that there is a substantial difference. Probably about 2 to the 6th, if I had to hazard a guess.


Lenny

paulr
2-Jun-2013, 08:43
Lenny, you say you don't care about resolution, but your thought experiment speaks to nothing else. The qualities that you said you care about, like tonality, are attributable to bit depth, dynamic range, and signal/noise ratio. None of these (related) qualities is directly dependent on surface area.

All else being equal, more surface area will give you an advantage at a given print size, because the additional pixels / units of area will have the effect of oversampling, which reduces noise. But we are not talking about all else being equal. We are talking about increases in some parameters compensating for decreases in others. The white paper on camera equivalency covers this 100% accurately and is worth reading.

Imagine a comparison of 35mm film at ISO 50 and 6x4.5 film at ISO 3200. The medium format film is bigger, but would be at quite a disadvantage in terms of performance. The near-doubling of linear dimension cannot compensate for a six-stop compromise in noise and dynamic range. Based on actual measurement we could calculate how big that ISO 3200 film would have to be in order to achieve equivalence. It would be pretty big. This is the kind of dynamic we're talking about.
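A small simulation shows the oversampling effect (a sketch with arbitrary noise figures):

import numpy as np

rng = np.random.default_rng(2)
signal = 100.0
noisy = signal + rng.normal(0, 10, size=(2000, 2000))  # s/n = 10

# 2x2 downsampling averages 4 samples per output pixel,
# so noise drops by sqrt(4) = 2 and s/n doubles.
down = noisy.reshape(1000, 2, 1000, 2).mean(axis=(1, 3))
print(round(signal / noisy.std(), 1), round(signal / down.std(), 1))  # ~10, ~20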

Corran
2-Jun-2013, 09:14
Lenny, I think I've said this before, but I completely 100% agree with you, if we are talking about small film vs. big film. But the digital sensor is such a different beast I don't think you can use the same basic math / principles.

It's a similar quandary to one I've read about a million times over - the comparison of vinyl records vs. CDs. Even 24-bit waveforms cannot by definition be "stepless" like an analog medium. But if we record with enough bit depth, we can approximate so closely as to be indistinguishable. I think it's the same thing with megapixels - if we sample even a small area with more megapixels, we can get closer to the reality of the scene in tonal shading AND resolution, which are inextricably linked.

Regarding lenses, consider for a minute DxO's tests of resolution, dynamic range, etc., of the same lens on different cameras. Even really bad lenses that performed only so-so on a 12 megapixel full-frame body perform "better" on the 36 megapixel D800. I could ramble on here about Nyquist frequency and whatever, but the tests are already done and you can see for yourself that this happens in pretty much every case.

paulr
2-Jun-2013, 09:19
The quantum efficiency of current CMOS sensors is very high, I think overall > 90%. That means that >90% of incoming photon energy is converted to usable current.

Is this true? I've done some searches and can't find any numbers in that range. This site (http://www.jtwastronomy.com/products/ultimate.php) (astronomy-focused) says 25-35%, and another (a technical site) talks about new industrial/scientific sensors with QE in the 40% range. Unless this information is outdated (the articles are from the last year), it seems that with uncooled CMOS technology there's still a lot of room for improvement.

To your point, a 3X improvement would still seem unlikely. 10X would seem like sci-fi.

Nathan Potter
2-Jun-2013, 10:39
Paul, you are correct about quantum efficiency. DSLR sensors show a wide range of effective QE, from a low of only 20% to a high of around 80% (I'd guess that's in the back-illuminated variety). The D800, for instance, is shown as around 60%, higher than most. It would appear none have reached >90% QE overall yet, and may never. Higher QE can be gained by exciting more electrons per incoming photon flux, then getting the electrons to an external circuit without losing too many to recombination processes.

One could cheat by using avalanche multiplication in a high-field region of the photodiode (a process used in a vacuum photomultiplier tube), but that is a heinously noisy process, so it would require some very sophisticated noise reduction techniques.

Establishing a QE number depends on how the measurement is specified. I think if one considers limiting the QE number to only a photon flux quantity producing a certain electron carrier density in the depletion region then the issue is one of tradeoffs between the doping density and the field width for electron collection over the visible bandwidth. A lot of the loss is incurred in getting the photogenerated carriers to the external contact.

I suspect, but am not sure, that an optimization of the pixel doping density and field width for a particular wavelength could produce QE > 90%. But such would have limited functionality in current DSLR technology.

Come to think of it there might be a useful advantage in using different field widths for the R, G and B pixels, perhaps by local variation in doping density. How that could be implemented in a process line I don't know - that's a job I won't take.

Nate Potter, Austin TX.

paulr
2-Jun-2013, 12:35
Where are you finding this info on specific cameras? Just curious.

Also, what do we know about QE of film?

Lenny Eiger
2-Jun-2013, 13:13
Lenny, you say you don't care about resolution, but your thought experiment speaks to nothing else. The qualities that you said you care about, like tonality, are attributable to bit depth, dynamic range, and signal/noise ratio. None of these (related) qualities is directly dependent on surface area.

If you want to come up with another word, I'd be fine with that. However, when you have a series of tones to reproduce, it's not about line pairs/mm. It's how many of those line pairs you have to work with.


Imagine a comparison of 35mm film at ISO 50 and 6x4.5 film at ISO 3200. The medium format film is bigger, but would be at quite a disadvantage in terms of performance. This is the kind of dynamic we're talking about.

Yes, but I don't think you guys are understanding what I am after. None of the measurements that people speak about have any info in terms of how many shades of gray they can faithfully reproduce. It doesn't appear to be in the lexicon. It's not bit depth, it's bit depth x area, all other things being equal, like film, the lens resolution, etc. How smooth is the transition from one tone to the next....

If you enlarge a 35mm piece of film to 32x40, to use the extreme example, it will shred itself to pieces in a pile of grain, or scan samples, depending. The sense of smoothness of a particular surface will be lost. Some will find it pleasing, but no one can say that it faithfully reproduces the texture of something very textural, smooth, etc.

Lenny

Lenny Eiger
2-Jun-2013, 13:23
Lenny, I think I've said this before, but I completely 100% agree with you, if we are talking about small film vs. big film. But the digital sensor is such a different beast I don't think you can use the same basic math / principles.

I appreciate that they may work on different principles. However, when I get a 568 megapixel scan from an 8x10, or a 320 from a 4x5, even after you correct severely for some of the resolution differences, you still end up with a smoother image. The 16 megapixel isn't going to compare. If you halve the 320 you are still at 160, which is 10 times the info of the 16 megapixel.

I have images here that I have printed from large format negs that are so tactile you can almost touch the objects in the image. The digital files, even the best I've seen so far, don't have this quality. If you want to print contrasty, it doesn't matter. However, so far digital lacks subtlety when compared with film and a scanner.

Lenny

Corran
2-Jun-2013, 13:34
16mp is so last generation! :)

Seriously though, we have to talk about equivalents at least. Of course a good scan of 8x10 film is going to have more resolution, but how much more, and to what end? I think you'd agree that an 8x10 print from a scanned 4x5 wouldn't be any better than most DSLR files printed to the same size. How about 16x20? Or 32x40, like your example? Compared to a D800E with a 55mm Micro lens at the same size?

Here's what sticks in my craw. If you were correct, every print from an FX-sized sensor camera, be it a 12mp sensor or 36mp, would fall apart in smoothness of tones after a certain degree of enlargement. And that's simply not true, at all. I have files from a 12mp D700 and a 36mp D800E to prove it. If the size of the sensor were the only thing that determined smoothness of tones, every 35mm digital camera would look the same, regardless of megapixels!

paulr
2-Jun-2013, 14:35
Yes, but I don't think you guys are understanding what I am after. None of the measurements that people speak about have any info in terms of how many shades of gray they can faithfully reproduce. It doesn't appear to be in the lexicon. It's not bit depth, it's bit depth x area, all other things being equal, like film, the lens resolution, etc. How smooth is the transition from one tone to the next....

All of this is quantifiable with the usual metrics. People don't bother, because "numbers of shades of gray" is a colloquial idea and not of much use anymore. Surface area's got nothing to do with it. A single pixel from a modern sensor can produce, over the range of brightness available in a print, more shades of gray than the human eye can discern. Many, many more. So exact numbers aren't relevant.



If you enlarge a 35mm piece of film to 32x40, to use the extreme example, it will shred itself to pieces in a pile of grain...

Yes, that's noise. This is why s/n performance is so important. The high s/n of modern sensors is the primary reason their images look better than people coming from the film world (like me) expect at first.



... or scan samples, depending.

This, you don't ever see in a print. An over-enlarged low-noise digital image suffers from looking too smooth, not from looking broken up. You start to see a Barbie-doll version of the world. Many printmakers actually increase the maximum useful enlargement by judiciously ADDING noise. I haven't experimented with this, since it's not what I'm up to ... but it's interesting that when you enlarge beyond the point of smooth tones, a certain amount of noise is sometimes actually beneficial.

If you're looking for a large format kind of esthetic, with a sense of detail beyond what the eye can see, smooth tones, and a seemingly endless number of grays, you can achieve it with any medium. The question is: at what size enlargement? There are many, many factors here, but s/n performance, bit depth, and dynamic range ... all the stuff that organizations like DXO look at ... make a huge difference. This is why you can't make automatic assumptions about what's possible at a given size without looking at the rest of the factors.

Lenny Eiger
2-Jun-2013, 14:45
By sensor size, I mean the total number of sensor sites available to it. The film response is 64 times greater than what one can get with 35mm. If one imagines that digital sensors are equal to film (in bit depth), then we need a sensor of the same density to be much larger to accommodate more sensor sites.

If we compare 4x6 prints, there isn't any difference. However, if you want to compare something with subtle light, b&w, delicate textures, you need more info. The sensor sites of film (a clump of grain), let's say Delta, are about 10 microns in width. Across the width of an 8x10 I usually get about 26,000 of them. It translates to 568 megapixels, not corrected for lens effects vs. smaller lenses.
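Those figures are easy to recompute (a sketch assuming the ~10 micron site size above; the grain-clump size is an estimate):

MM_PER_INCH = 25.4
site_mm = 0.010  # ~10 micron film "sensor sites"

across = 10 * MM_PER_INCH / site_mm  # ~25,400 sites across the 10" dimension
total_mp = across * (8 * MM_PER_INCH / site_mm) / 1e6
print(round(across), round(total_mp))
# ~25,400 sites and ~516 MP, the same ballpark as the 26,000 / 568 MP quoted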

If you have a 30 or 40 inch print you want to make, you can see all the texture you want. If you could pack sensor sites in a chip at 5 microns, and if they were just as efficient (and we trade the inefficiency of large format lenses against the extra green in the Bayer chip, along with some other things like an IR filter, some interpolation, or other garbage they throw in) then you could possibly do the same with only half that many, around 280 or so.

My guess is that when sensors get to the 200-300 megapixel range, everyone will be happy. Or at least when they get there and get less expensive. $50K is a lot to pay for a camera if you aren't doing consistent commercial work with it. For the sensors to get to that range, I think they will need to be much larger. I understand that's difficult, but I don't want to give up the textural quality of my images.

Lenny

paulr
2-Jun-2013, 15:05
That sounds great, but it just doesn't work that way. Show me a 568 megapixel scan from an 8x10 negative, and I can downsample it to one quarter of that. In a print you will not see the difference. In most cases I could downsample by a factor of 8 and you couldn't tell. The difference has been pretty obvious when I've downsampled 16 times ... which is to say that 8x10 can do better than a 36 megapixel DSLR. But you might be surprised how big you have to print to see the difference.

I've had people send me their drum scans over the last few months and have made comparison prints and run image analytics. There is nowhere near the information in them that the sampling frequency suggests. At 100% view, you see big, mushy detail and a ton of noise. These could be considered low quality pixels. A much smaller number of high quality pixels can do the same work. All of these metrics I've been mentioning are about the quality of those pixels.

I certainly agree that medium format backs are priced out of reach of most of the people who would enjoy them. There are other ergonomic issues that make me think they're not completely ripe yet. But you cannot argue with the image quality. I've played with raw files from a Phase One IQ180, and they're the highest (technical) quality photographic images I've ever seen. The superior quality of the lenses plays into this. The backs currently have one significant technical shortcoming compared with film: dynamic range. They've gotten really good in this area ... much better than transparency film, and much better than they used to be. But they're still a couple of stops shy of most color negative film, and of course can't come close to what's possible with b+w.

This is irrelevant to a studio photographer, but for someone doing work like mine, improvements in dynamic range would be welcome.

In terms of resolution, 4x5 and 8x10, when shot at their ideal apertures (around f11 or f16 on most lenses), are better than mf digital, in a way that you can start to see in prints above 40 or 50 inches wide. But when you stop down past f22 or f32 for depth of field, diffraction becomes a remarkable equalizer.
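For reference, the standard Airy-disk arithmetic behind that equalizing effect (a sketch assuming green light; real lenses add aberrations on top):

# Diffraction-limited Airy disk diameter: d = 2.44 * wavelength * f-number.
wavelength_um = 0.55  # 550 nm, green
for f_number in (11, 16, 22, 32, 64):
    d_um = 2.44 * wavelength_um * f_number
    print(f"f/{f_number}: Airy disk ~{d_um:.0f} microns")
# ~15 um at f/11 but ~43 um at f/32 and ~86 um at f/64: past f/22-f/32
# the blur spot dwarfs any sensor's pixel pitch, hence the equalizing.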

Tin Can
2-Jun-2013, 15:56
Despite my smart-alec thread title, I find this thread very interesting, and I am certainly trying to keep up. Great discussion to listen to.

tgtaylor
2-Jun-2013, 17:41
That sounds great, but it just doesn't work that way. Show me a 568 megapixel scan from an 8x10 negative, and I can downsample it to one quarter of that. In a print you will not see the difference. In most cases I could downsample by a factor of 8 and you couldn't tell. The difference has been pretty obvious when I've downsampled 16 times ... which is to say that 8x10 can do better than a 36 megapixel DSLR. But you might be surprised how big you have to print to see the difference.


In other words, there is a limit beyond which increased resolution is meaningless and the native resolution of film far exceeds that limit.

Thomas

sanking
2-Jun-2013, 18:48
I've had people send me their drum scans over the last few months and have made comparison prints and run image analytics. There is nowhere near the information in them that the sampling frequency suggests. At 100% view, you see big, mushy detail and a ton of noise. These could be considered low quality pixels. A much smaller number of high quality pixels can do the same work. All of these metrics I've been mentioning are about the quality of those pixels.



Don't forget in making your comparisons that film is not a monolith. There is a huge difference in the "pixel quality" of a drum scan made with a high resolution fine grain B&W film compared to a high speed color film.

Sandy

Nathan Potter
2-Jun-2013, 19:29
Where are you finding this info on specific cameras? Just curious.

Also, what do we know about QE of film?

Paul, camera-specific efficiencies can be found at sensorgen.info, plus more info of course on QE by a search for "sensor quantum efficiency". See also Clarkvision.com. The sensorgen figures do not specify the measurement technique, and I know not how that data was collected. Lack of technical specifics is what fooled me in my initial quote of over 90%.

Of course there is an abundance of data on solar cell technology and efficiencies. Solar cells designed for full solar bandwidth absorption can achieve up to 30% QE using single-crystal silicon. But specially structured cells have been reported at close to 100% ("Bulk Heterojunction Solar Cells with Internal QE approaching 100%", Nature Photonics 3, pp. 293-302, 2009). Such high QE results imply that similar QE can be achieved with camera sensors as long as a fabrication sequence can be worked out.

As for the QE of film, one must differentiate between a latent image and the developed image. The QE will also depend on the grain size and whether sensitizing agents are present at the surface of the halide grain. Many variables here. Fine-grained, low-sensitivity emulsion exposures corresponding to standard latent image formation produce densities around 10^-4 to 10^-5 (W. E. K. Middleton, Jour. Opt. Soc. of America, 44, 1954). I'm guessing a bit here, but that would suggest that about all photons that are incident on a halide crystal form a silver atom, so QE would be close to 100%. Going back in my weak memory, I think I remember that a photon can actually produce up to 3 atoms of silver, so yield a multiplication even at the latent image stage. Obviously POP paper is a special coating case, but probably still near 100% QE. Of course the advantage with film comes in the multiplication of silver from development of the image, which results in a gain of 10^8 to 10^9 (Handbook of Photographic Science and Engineering, Thomas Woodlief editor, Eastman Kodak Co, 1973, p. 413). This kind of gain is the enormous advantage of film over a sensor, but with the caveat that the multiplication is restricted to only the grain where a parent silver atom is present. I suspect that the silver grain growth during development is spatially ill-defined, hence the clumping and irregular grains we see upon microscopic examination.

Nate Potter, Austin TX.

paulr
2-Jun-2013, 21:56
In other words, there is a limit beyond which increased resolution is meaningless and the native resolution of film far exceeds that limit.

Well, no. What I was suggesting is that a limit is reached at resolutions much lower than typical scanning sample frequencies, at least with big sheets of film shot at typical apertures.

The point at which increased resolution (of the film or sensor) becomes unimportant depends on many variables. Among these is the size of enlargement. If you're making contact prints, you can certainly spend your brain time on other issues.

paulr
2-Jun-2013, 22:01
Nathan, that's interesting. Why wouldn't this high QE give film better low light and noise performance? It seems these were the first areas where digital sensors showed an advantage ... astronomers turned to them before just about anyone else for this reason. This was back in the day of much less efficient sensors than what we have now.

Nathan Potter
3-Jun-2013, 09:20
Nathan, that's interesting. Why wouldn't this high QE give film better low light and noise performance? It seems these were the first areas where digital sensors showed an advantage ... astronomers turned to them before just about anyone else for this reason. This was back in the day of much less efficient sensors than what we have now.

Paul, I have to guess at your query. For low-light conditions the limit is not QE but detectivity. One encounters the shot noise limit with very dim light with a sensor that is limited by noise, but in film the limit is at the toe of the sensitometric curve, where the gamma approaches zero (near-zero contrast). I think, in effect, that the shot noise floor in sensors also limits the low-light detectivity, but one can really use noise reduction techniques to reach down into the noise mud to extract more toe gamma from the data than is possible with film.

The real advantage with a digital sensor is in the sophistication of the post image processing that can be employed. I don't know for sure, but I believe from my experience using Tmax 100 that I can squeeze a bit more full dynamic range out of the Tmax than an equivalent B&W rendition from my D800E.

I know my astronomer friends west of Austin on Cosmos Drive north of Fredericksburg all use sensors because of the post-processing flexibility and enhancements in software. That conversion to sensors was going full scale by the mid-1990s. Certainly now some of the more sophisticated cosmos viewers use cooled sensors to reduce the resistive Johnson noise, independent of the multiple sampling techniques used to reduce shot noise. Both techniques can yield a remarkable image from a dim distant object using electronic wizardry. While the QE of sensors may lag film by around a factor of two (log 0.3 in density terms), these astronomer types (amateurs) can recover image signals that are down in intensity by factors of even 100.

Nate Potter, Austin TX.

paulr
3-Jun-2013, 10:27
Interesting, Nathan. Thanks.


The real advantage with a digital sensor is in the sophistication of the post image processing that can be employed. I don't know for sure, but I believe from my experience using Tmax 100 that I can squeeze a bit more full dynamic range out of the Tmax than an equivalent B&W rendition from my D800E.

The challenge with measuring dynamic range is deciding how much noise you're willing to accept. DXOmark says the D800 has 13 stops of dynamic range. A lot! But I don't get that much. It turns out their standard is 1:1 s/n. Which looks like ass.

I get between 9 and 10 stops of useful dynamic range out of my D800. This is with all the software settings pushed as far as possible. There is some variation depending on what the shadows look like and how much noise I can tolerate there.

With Tmax 100, I get 10 stops, with normal development. I get 12 stops with N- development. That's all I've ever needed. I don't know how to compare this scientifically with the digital camera, since the noise characteristics are so different. But I do not notice the additional noise with tmx when stretching things.

However, the film is very malleable, and it would not be a difficult matter to get 15 stops or more with specialized development. The Army photographed nuclear explosions using POTA and got shadow detail in the foreground! My D800 can't do that. Not that I'm looking for the opportunity.
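The threshold-dependence of those numbers is easy to see (a sketch with hypothetical full-well and read-noise values, not measured D800 figures):

import math

full_well_e = 45000  # hypothetical full-well capacity, electrons
read_noise_e = 5     # hypothetical read noise, electrons

# "Engineering" DR counts down to s/n = 1; a stricter shadow floor costs stops.
for min_snr in (1, 4, 10):
    stops = math.log2(full_well_e / (min_snr * read_noise_e))
    print(min_snr, round(stops, 1))
# ~13.1 stops at s/n 1, ~11.1 at s/n 4, ~9.8 at s/n 10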

Lenny Eiger
3-Jun-2013, 11:01
In terms of resolution, 4x5 and 8x10, when shot at their ideal apertures (around f11 or f16 on most lenses), are better than mf digital, in a way that you can start to see in prints above 40 or 50 inches wide. But when you stop down past f22 or f32 for depth of field, diffraction becomes a remarkable equalizer.

We go back and forth on these arguments and I have to say it appears to be a total waste of time. The same arguments, from the same people are repeated. Apparently, none of us feel that we've been heard. So much so that no one moves off their position.

I have a print here at my studio that was made with an 8x10. The print is tactile to the point that you can "feel" the objects in the image and feel the cold salt spray on your skin. Many people have said this. Is it possible to reproduce this effect using a digital camera? I have no idea. I seriously doubt it, but I don't really know. Frankly, I haven't seen anything that comes close. That doesn't mean it doesn't exist. What it means is that the people making digital prints aren't interested in the same kind of printing. It is also possible that top printers aren't getting together to compare results in physical print form. While the world may be getting smaller (apparently no one has convinced the airlines of that), it still costs money to travel.

I have read a lot of these posts over time, and every once in a while I start to believe that digital capture can approximate what I am doing. Then someone brings up diffraction again and the entire argument falls apart. When I first said on this forum that diffraction doesn't exist, I was overstating it, and a lot of folks jumped on me. In the last year this topic has come up numerous times, and the general consensus has been that diffraction is a very small factor, small enough to be ignored in almost all cases. I have tested my equipment from f/22 to f/45 and saw no difference in image quality that my printer could print. At f/64 there was a slight difference in sharpness, which could be easily handled with some sharpening. Diffraction is pure c r a p. This "word on the street" that one should shoot at f/16 or f/22 is a horrible thing that someone has put on photographers; it ends up making depth of field something only older photographers could do. Theoretical and practical are far apart in this case, and the theoretical should never have made it to the word on the street. Finally someone actually came up with the idea that it's really cool that things are out of focus and depth of field is an "old idea"; bokeh came on the scene. Probably all started by someone who couldn't get their depth of field to work. (That might be overstating as well ;-) )


I've had people send me their drum scans over the last few months and have made comparison prints and run image analytics. There is nowhere near the information in them that the sampling frequency suggests.

I don't want to be disrespectful to you, but this is not the same as doing your own work, on your own drum scanner, where you can test out different methods of scanning. The words "drum scan" are often stated as if there were some kind of standard. One of my clients posted some drum scan examples here and the Tango looked like hell. I suspect it was seriously out of balance, as I have seen better scans than that from another Tango. (Apparently, it was the same scanner that Michael Reichmann used for his drum scan comparisons - and he had it specially tuned up, which puts all of his conclusions in question.) Was that the scanner that was used for your scans? Were the drum scans done with a Premier, or an ICG 380? Those are clearly the top drum scanners, with a 3 micron engine.... What was the film used? Was it done at a lab or by an experienced operator? Who was the operator? There is simply no standard for drum scans; it isn't a matter of putting film on a device and pressing a button. There are a lot of decisions that get made. What were the criteria for judging this?

There is a difference between tweaking something to get as much out of it as you can, and looking at something and saying it can't do this and it can't do that. It's easy to dismiss things, especially if you are using a monitor vs. a print.

I am not schooled as an engineer, but I am schooled as a photographer. I am a very technical fellow when it comes to the various processes. Development temperature is set by a thermometer that is accurate to 1/10 of a degree (across the entire range, not at "points"), and I am quite careful about it. Glass thermometers are a joke, including the Kodak ones. I use one of those scientific ones given to me by a member here years ago. It's just one example; I have always had very tight procedures, across whatever can be controlled. I have done tests to show the effects of different filters, all the way to the print. Another to test the different f-stops, another to test different films and developers, another to test different cameras with the same film/developer. Each one went from conception and shooting to developing, scanning, and the print, all on my top paper. Therefore I can say that with my processes and my understanding I can produce the following results, and they will differ in these ways. Someone else might be able to do more, or less. And none of us knows it all.

When people talk to me of diffraction I want to see it in the print. When they tell me a digital capture can do something I want to see it. I think too many people (not directed at any one person here) are quoting what is common knowledge and not what they themselves have tested. Especially when it comes to those white papers... with all the formulas on them.

Lenny

Struan Gray
3-Jun-2013, 12:19
Paul, Sir Geoffrey Ingram Taylor did single photon diffraction experiments in 1909 with photographic glass plates as detectors. I don't think his calibrations would pass muster today, but his conclusions are still justified by the data. But QE isn't the only figure of merit in low light imaging.

Digital sensors for technical photography and other array-sensing tasks (spectrometers, electron detectors) took over because they were linear and repeatable. Film does some unpredictable things at its tonal limits, especially at the bottom end, and CCDs and other photodiode-based arrays were just much better as measuring instruments. Science is supposed to be predictable and repeatable, so you can overcome a lower sensitivity by signal averaging (in space or time) or by doing your experiments multiple times and averaging the results.

I have done photoemission work looking at lineshapes in x-ray emissions from various chemical elements. The basic peak shows up in the first run, but the information we were actually after was well under the noise floor of a single spectrum. Average a hundred or more runs and you're able to tease them out with a bit of deconvolution magic. Similar things were done with film (astronomers used to average by stacking negatives and printing the stack) but you can trust the numbers much more with digital.

Lenny, the point I've tried to make in previous diffraction threads is that photographers don't really understand diffraction. As soon as you reach for a 'resolution' number you're stuffed, even if you appreciate the shape of the Airy function. If you actually do some convolutions you quickly find exactly the same result photographers do: diffraction blurs the image by surprisingly little. The 'theoretical' reason is that the Airy function is sharply peaked (it has what we nerds call high kurtosis) and it spreads the light much less than, say, a bell-curve Gaussian with the same total energy.
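One way to put numbers on that point (a sketch using the standard encircled-energy result for a circular aperture):

from scipy.special import j0, j1

# Fraction of an Airy pattern's energy within radius x (x in units where
# the first dark ring falls at 3.8317): E(x) = 1 - J0(x)^2 - J1(x)^2.
for label, x in (("first dark ring", 3.8317), ("second dark ring", 7.0156)):
    print(label, round(1 - j0(x) ** 2 - j1(x) ** 2, 3))
# ~0.838 and ~0.910: ~84% of the light stays inside the sharply peaked core,
# which is why diffraction blurs less than a same-width Gaussian would suggest.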

paulr
3-Jun-2013, 13:01
I have a print here at my studio that was made with an 8x10. The print is tactile to the point that you can "feel" the objects in the image and feel the cold salt spray on your skin. Many people have said this. Is it possible to reproduce this effect using a digital camera? I have no idea. I seriously doubt it, but I don't really know.

Well, as someone who's worked with LF for most of my life and with digital cameras for the last couple of years, I can say unequivocally yes. Which shouldn't be surprising. What's very surprising to me is the enlargement factors at which this is possible. It's been such a surprise that it's forced me to throw out a lot of my conventional wisdom and brush up on the theory.

I agree with Struan that diffraction is widely misunderstood by photographers. While it may be true that it's hard to understand by anyone, the effects are not so difficult to grasp. I can post MTF data that shows measured effects of diffraction by itself, and also combined with aberrations and defocus blur. This would give a sense of what effects we'd expect to be visible, and to what degree.

There are simple reasons the talk about diffraction can seem overblown: its effects are generally preferable to defocus blur, and are much more easily corrected by sharpening, especially with deconvolution algorithms. And there is usually plenty of blur from defocus and other aberrations, when using a very long lens with the wide angles of coverage required for LF. These shift the diffraction limit toward smaller apertures.



The words "drum scan" are often stated as if there were some kind of standard....

Yes, I agree completely. This is why I've been looking at as many samples as I can get my hands on. It wouldn't surprise me at all if something better were possible. It would have to be a hell of a lot better, though, to make a substantial difference in what I'm seeing. I would still welcome the opportunity to look at one of your scans, or even a 100% crop from one.

I'm happy to send you the equivalent in a digital capture.

Greg Miller
3-Jun-2013, 14:30
(What I am writing pertains to color images only.) I have shown the 100% crop here before. This is from a Velvia RVP50 image scanned on a Nikon Coolscan 4000 at 4000 DPI. It is a reflection of a sunset on a calm lake. It is full of what I would call luminance noise (whether it is from grain or dye clouds doesn't really matter). A 16-bit 35mm scan from the Coolscan at 4000 DPI yields about a 110MB file. A 16-bit image straight out of a 12mp D700 is about 68MB (about 40% smaller than the Velvia image).

Even though the D700 file is 40% smaller than the file size of the scanned Velvia image, the D700 image is much less noisy (much, much, much) and the tonal gradations are much smoother. I doubt many people would guess that without seeing it, because the file sizes and the pixel counts are so different, and to the detriment of the D700 image. Yet the 12mp D700 yields an image much superior to the 35mm Velvia image in terms of noise and tonal gradations. So it is very difficult to compare numbers (only) and come to conclusions that are real-world relevant.
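Those file sizes check out from first principles (a sketch with nominal frame dimensions; the Coolscan's actual output area is a bit smaller, hence the ~110MB above):

# Uncompressed 16-bit RGB: 3 channels x 2 bytes per pixel.
def mb(w_px, h_px):
    return w_px * h_px * 3 * 2 / 2**20

print(round(mb(5669, 3779)))  # full 36x24mm frame at 4000 DPI: ~123 MB
print(round(mb(4256, 2832)))  # Nikon D700, 12 MP: ~69 MB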

[attachment 96324]
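
The quoted sizes are easy to sanity-check from pixel counts alone. A back-of-envelope Python sketch, assuming uncompressed 16-bit RGB and nominal frame dimensions; actual files run a little smaller because scanners crop inside the nominal frame:

# Uncompressed 16-bit RGB: width * height * 3 channels * 2 bytes per channel.
def size_mb(width_px, height_px):
    return width_px * height_px * 3 * 2 / 1e6

scan_w = round(36 / 25.4 * 4000)   # 35mm frame width at 4000 dpi: ~5669 px
scan_h = round(24 / 25.4 * 4000)   # frame height: ~3780 px
print(f"4000 dpi 35mm scan: {scan_w} x {scan_h} px, ~{size_mb(scan_w, scan_h):.0f} MB")
print(f"D700 (4256 x 2832): ~{size_mb(4256, 2832):.0f} MB")

That prints about 129 MB and 72 MB, within shouting distance of the 110 MB and 68 MB figures above.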

Lenny Eiger
3-Jun-2013, 16:59
I agree with Struan that diffraction is widely misunderstood by photographers. While it may be true that it's hard to understand by anyone, the effects are not so difficult to grasp. I can post MTF data that shows measured effects of diffraction by itself, and also combined with aberrations and defocus blur. This would give a sense of what effects we'd expect to be visible, and to what degree.

But Struan is saying that the effects are minimal. I quote "diffraction blurs the image by surprisingly little", yet you want to post MTF data. I am exhausted by data. There's tons of it, much of it either wrong or spun to someone's advantage (usually a large company). Further, I don't want to compare MTF because it doesn't mean anything to me. I haven't spent all my life with it, like I have ISO (even when it was ASA) or development time. If you told me you developed 2 minutes longer at 68 degrees in D-23 I would know exactly what you meant. Looking at MTF values I would have no reference for what it means in a print. On my Sironar S they supply MTF values at only one f-stop. How am I supposed to make head or tail of it?

In fact, I happen to like depth of field. It tests well over here...


I would still welcome the opportunity to look at one of your scans, or even a 100% crop from one.
I'm happy to send you the equivalent in a digital capture.

We might be able to make this happen. Still, it isn't a file I am interested in. It's a black and white print... and part of the frustration here is that there aren't good measurement criteria to judge things by... I don't think there is any metric for smoothness other than someone's eyes. I suppose someone could analyze an area and decide how many distinct values could be found right next to each other. There are no current tools for it.
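
For what it's worth, that distinct-adjacent-values idea is simple to prototype. A crude sketch in Python, assuming an 8-bit greyscale crop loaded as a numpy array - an illustration, not a validated smoothness metric:

import numpy as np
from scipy.ndimage import generic_filter

def distinct_count(window):
    # Number of distinct tonal values in one small neighborhood
    return np.unique(window).size

def smoothness_map(gray, size=5):
    # gray: 2-D uint8 array; returns the distinct-value count per window.
    # Smooth gradations give small, steady counts; grain and noise give
    # counts that approach the window area (25 for a 5x5) and jump around.
    return generic_filter(gray, distinct_count, size=size, output=np.int32)

Run over matched crops of a drum scan and a digital capture, the two maps (or their histograms) would at least put a number on "smoothness."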

Lenny

Jac@stafford.net
3-Jun-2013, 17:38
The human eye-brain sees some things as sharper than others regardless of resolution. The eye has no concern for MTF. Some images appear sharp to our eyes more readily than others.

MTF is a good metric for aerial photography and reconnaissance, but in everyday photography it is weak, nearly meaningless for our lenses.

paulr
3-Jun-2013, 18:36
The human eye-brain sees some things as sharper than others regardless of resolution. The eye has no concern for MTF. Some images appear sharp to our eyes more readily than others.

MTF is a good metric for aerial photography and reconnaissance, but in everyday photography it is weak, nearly meaningless for our lenses.

MTF provides an almost perfect model for the way people subjectively sense sharpness, if you know how to read the charts and interpret them for particular viewing situations. Your statement that "the human eye-brain sees some things as sharper than others regardless of resolution" is precisely what MTF addresses. It shows the relationship between contrast and spatial frequencies. Our sense of sharpness is based on contrast at a very specific range of frequencies (resolutions). The science behind this is extremely mature. You would be hard pressed to find dissenting ideas among imaging scientists or perceptual psychologists.

paulr
3-Jun-2013, 18:40
We might be able to make this happen. Still, it isn't a file I am interested in. It's a black and white print... and part of the frustration here is that there aren't good measurement criteria to judge things by... I don't think there is any metric for smoothness other than someone's eyes. I suppose someone could analyze an area and decide how many distinct values could be found right next to each other. There are no current tools for it.


You don't have to measure it, you just have to look at it!

Measurements can be great for helping predict how something will look, but if you have the thing itself ...


Unfortunately all my black and white work is on film. I haven't tried to do b+w conversions from a digital capture ... it's just not what I've been up to. I can try it though and see if I come up with something worth sending.

paulr
3-Jun-2013, 19:07
http://www.paulraphaelson.com/downloads/DiffractionLimitedMTF.png

This chart shows the MTF curves of theoretically perfect lenses at different apertures. The resolution numbers that get thrown around are mostly meaningless; here's what's actually going on: contrast gets diminished as the resolution (spatial frequency) goes up. Once MTF drops to zero, there's no image, nor is there any possibility of recovering anything through sharpening. MTF below 0.1 means an extremely faint, ghostlike image. Whether or not it's recoverable, and how it will look afterwards, depends on the noise level of the image. Sharpening sharpens noise as well as detail, so what really counts is the MTF level above the noise floor.
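
These curves come from a closed-form expression, so anyone can generate their own numbers. A short Python sketch of the standard diffraction-limited MTF for a circular aperture, assuming 550 nm light:

import numpy as np

def diffraction_mtf(freq_lpmm, f_number, wavelength_mm=0.00055):
    # MTF(s) = (2/pi) * (arccos(s) - s * sqrt(1 - s^2)),
    # with s = frequency / cutoff and cutoff = 1 / (wavelength * N) in cycles/mm
    s = np.clip(freq_lpmm * wavelength_mm * f_number, 0.0, 1.0)
    return (2 / np.pi) * (np.arccos(s) - s * np.sqrt(1 - s**2))

for N in (8, 16, 22, 32, 64):
    print(f"f/{N:>2}: MTF at 50 lp/mm = {diffraction_mtf(50, N):.2f}")

That prints roughly 0.72, 0.46, 0.28, 0.05 and 0.00: the 20-30% at f22, and the total loss by f64, that the charts show.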



http://www.paulraphaelson.com/downloads/DiffractionLimitedMTF_f08.png

This chart shows the combined effect of diffraction and defocus blur, at f8. The blue, green, and purple lines show different sizes of circle of confusion, indicating different degrees of defocus. As you can see, when there's very little defocus, performance is excellent, right at the diffraction limit. When there's a lot, performance plummets. Not so surprising.



http://www.paulraphaelson.com/downloads/DiffractionLimitedMTF_f22.png

This chart shows the combined effect of diffraction and defocus blur at f22. Here things are quite different. The peak performance is much lower than above, but the worst performance is much better. The moral is what most people have already discovered: if you need depth of field, stop down. You will lose sharpness in your in-focus areas, but you will gain much more in your out-of-focus areas. Having 20-30% MTF at 50 lp/mm would yield very good results if print size were not large, or if noise levels allowed a healthy degree of sharpening.

f64, however, is a different story. I don't have charts for this, but I know from shooting film, and from looking at the math, that the effects become quite assertive beyond f32 or so for anything beyond a small enlargement factor.
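
One crude way to see why: root-sum-square the Airy diameter against a defocus blur circle. A toy model in Python; the focus error is an invented number, chosen only to make the shape of the tradeoff visible:

import math

WAVELENGTH_MM = 0.00055
DEFOCUS_BLUR_AT_F8_MM = 0.15   # hypothetical defocus blur circle at f8

for N in (8, 11, 16, 22, 32, 45, 64):
    airy_mm = 2.44 * WAVELENGTH_MM * N           # grows with the f-number
    defocus_mm = DEFOCUS_BLUR_AT_F8_MM * 8 / N   # shrinks as you stop down
    total_um = math.sqrt(airy_mm**2 + defocus_mm**2) * 1000
    print(f"f/{N:>2}: combined blur ~{total_um:5.1f} microns")

With this particular focus error the combined blur bottoms out around f32 and climbs again by f64, which is the same moral as the charts: stop down for depth of field, but past a point diffraction takes back everything you gained.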

Corran
3-Jun-2013, 19:19
Paul, doesn't lens design mitigate and/or change the way that diffraction happens? I've read multiple times in lens reviews/tests, regarding sharpness around f/22, that the center resolution is starting to show diffraction effects but the corners have sharpened up. Point being, can a lens be "specifically designed" to use optical phenomena such as diffraction to enhance sharpness - i.e. be "soft" in the f/8-f/16 range so as to specifically sharpen up at f/22 and beyond?

I've shot some film with 35mm before at f/22 and even f/16 that is clearly totally diffracted, but with 4x5 images shot at f/32 and even f/45 they seem sharper, within the range of similar enlargement, than the smaller format. Basically, looking at the same DPI scans at 100%, the larger film is less diffracted. I have heard conjectures that longer lenses, with correspondingly larger aperture openings for a given f-stop, change the nature of diffraction. Thoughts?

paulr
3-Jun-2013, 19:53
Bryan, the lens isn't changing the way diffraction happens; you're seeing the effects of two unrelated sources of degradation responding differently to aperture changes. Most good lenses are relatively free of aberrations in the center, so they behave in a way that's close to an ideal, diffraction-limited lens on axis.

The farther off axis you go, the worse the aberrations become. Stopping down, while adding diffraction, also narrows the optical path toward the optical axis, so a smaller percentage of the image-forming rays are passing through the off-axis glass.

This is why there's often no single aperture at which a lens is sharpest. There will be an aperture that's best for the center (relatively wide open) and one that's best for the edges (a stop or two or three down from there).

Diffraction depends on f-number, not focal length: the f-number is by definition the ratio of focal length to aperture diameter, so at a given f-stop a longer lens has a proportionally larger physical aperture, and the Airy disk at the image plane (diameter about 2.44 x wavelength x f-number) comes out the same size. The white paper on camera equivalence goes into this in some detail. I'm not sure what you're seeing in those scans, but looking at film in the real world means looking at so many variables at once that it would be hard to isolate diffraction from something else.

paulr
3-Jun-2013, 19:58
(What I am writing pertains to color images only) I have shown the 100% crop here before. This is from a Velvia RVP50 image scanned on a Nikon Coolscan 4000 at 4000 dpi. This is a reflection of a sunset on a calm lake. It is full of what I would call luminance noise (whether it is from grain or dye clouds doesn't really matter). A 16 bit 35mm scan from the Coolscan at 4000 dpi yields about a 110MB file. A 16 bit image straight out of a D700 12 mp camera is about 68MB (about 40% smaller than the Velvia image).

Even though the D700 image is 40% smaller than the file size of the scanned Velvia image, a D700 image is much less noisy (much, much, much) and the tonal gradations are much smoother. I suspect few people would guess that without seeing it, because the file sizes and the pixel counts are so different, and seemingly to the detriment of the D700 image. Yet the 12 mp D700 yields an image much superior to the 35mm Velvia image in terms of noise and tonal gradations. So it is very difficult to compare numbers (only) and come to conclusions that are real-world relevant.

[attachment 96324]

Yeah, this is the phenomenon that left a lot of us scratching our heads when we started working with digital capture. My scans from black and white 4x5 are 200 megapixels! I assumed that a 12 megapixel camera would barely be good enough for an 8x10 based on this. When I saw the actual results, it took a lot of research to figure out what was going on. I had never before appreciated the role that noise plays.

Tim Povlick
3-Jun-2013, 21:13
That sounds great, but it just doesn't work that way. Show me a 568 megapixel scan from an 8x10 negative, and I can downsample it to one quarter that. In a print you will not see the difference. In most cases I could downsample by a factor of 8 and you couldn't tell. The difference has been pretty obvious when I've downsampled 16 times ... which is to say that 8x10 can do better than a 36 megapixel DSLR. But you might be surprised how big you have to print to see the difference.


I respectfully have to disagree. With an 8x10 image shot with a high quality lens and printed at 44"x74", one can see people at a distance. If downsampled as you suggest and printed, the people would disappear or become indistinguishable blobs.




I've had people send me their drum scans over the last few months and have made comparison prints and run image analytics. There is nowhere near the information in them that the sampling frequency suggests. At 100% view, you see big, mushy detail and a ton of noise. These could be considered low quality pixels. A much smaller number of high quality pixels can do the same work. All of these metrics I've been mentioning are about the quality of those pixels.



Can you provide details as to the image analytics? I would test some scanned images using these.



I certainly agree that medium format backs are priced out of reach of most of the people who would enjoy them. There are other ergonomic issues that make me think they're not completely ripe yet. But you cannot argue with the image quality. I've played with raw files from a Phase One IQ180, and they're the highest (technical) quality photographic images I've ever seen. The superior quality of the lenses plays into this. The backs currently have one significant technical shortcoming compared with film: dynamic range. They've gotten really good in this area ... much better than transparency film, and much better than they used to be. But they're still a couple of stops shy of most color negative film, and of course can't come close to what's possible with b+w.

This is irrelevant to a studio photographer, but for someone doing work like mine, improvements in dynamic range would be welcome.

In terms of resolution, 4x5 and 8x10, when shot at their ideal apertures (around f11 or f16 on most lenses), are better than mf digital, in a way that you can start to see in prints above 40 or 50 inches wide. But when you stop down past f22 or f32 for depth of field, diffraction becomes a remarkable equalizer.


The IQ180 images sound yummy, can you post some examples?

barnninny
3-Jun-2013, 21:48
Paul, Sir Geoffrey Ingram Taylor did single-photon diffraction experiments in 1909

Sure, but photons are so much better today than they were a hundred years ago.

Nathan Potter
3-Jun-2013, 21:50
Paul, thanks for those plots. They seem to agree with my intuition, which makes me feel good.

There is an effect I have noticed when scanning resolution targets that has to do with the contrast at any particular spatial frequency. I've only scanned using a Nikon Coolscan 5000 and an Epson V750, both of which show the effect. This should be obvious, I think, but as the contrast at higher spatial frequencies goes down, the dynamic range available also goes down. I'm sure I see this but I don't know how to characterize it, or worse, quantify it. I think I may have mentioned that effect previously in another thread. The effect is dramatically visible by taking some spatial frequency where the contrast is very low, say 5% or less, and artificially increasing the contrast using PS. By really squeezing the contrast to a very high level one can recover a high contrast at that same spatial frequency. In effect that is taking a slice through the density range, similar to using a lith process with film.

I think from the consequences of such an exercise one can say that there is a direct relationship between the dynamic range available and the contrast at a particular spatial frequency. At low spatial frequencies high contrast is necessary to yield a long tonal range, while at high spatial frequencies high contrast is necessary to achieve smooth microcontrast in images. Maybe we already know this from subjective observation, but it is interesting to connect the phenomena to scanned resolution targets.

It could be assumed that the same phenomenon holds for digital sensors since, like scanners, they discretize the analogue image into pixels.
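
The squeeze-the-contrast experiment is easy to simulate, and it makes the noise-floor limit concrete. A Python sketch with a synthetic 5% sine target; the noise level is an arbitrary choice:

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 20 * np.pi, 4000)
clean = 0.5 + 0.025 * np.sin(x)                 # target with 5% modulation
noisy = clean + rng.normal(0, 0.005, x.size)    # add scanner/sensor noise

def amplitude_and_snr(signal):
    # Recover the sine amplitude by projection, then compare it to the residue
    centered = signal - signal.mean()
    amp = 2 * abs(np.mean(centered * np.sin(x)))
    noise = (centered - amp * np.sin(x)).std()
    return amp, amp / noise

stretched = (noisy - noisy.min()) / (noisy.max() - noisy.min())
for name, s in (("raw", noisy), ("stretched", stretched)):
    amp, snr = amplitude_and_snr(s)
    print(f"{name:>9}: modulation amplitude {amp:.3f}, SNR {snr:.1f}")

The stretch makes the modulation large and visible again, exactly as described, but the signal-to-noise ratio does not move: the recoverable contrast at a given spatial frequency is capped by the noise floor.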

Nate Potter, Austin TX.

Oren Grad
3-Jun-2013, 22:13
This chart shows the combined effect of diffraction and defocus blur...

...for an idealized lens, not a real one. The behavior of defocus blur will depend on how a lens renders focus transitions.


There are simple reasons the talk about diffraction can seem overblown: its effects are generally preferable to defocus blur, and are much more easily corrected by sharpening, especially with deconvolution algorithms.

Whether its effects are preferable to defocus blur depends on the character of the defocus blur as much as its quantity. I don't think one can generalize.

Sharpening doesn't "correct" anything. A lossy transform is a lossy transform; you can't retrieve information that is gone. And you don't get to correct diffraction in isolation from other sources of blur. The simulations constructed by even relatively sophisticated algorithms like deconvolution sharpening are based on grossly simplifying assumptions. In practice what you're actually doing when you sharpen, with whatever algorithm, is just imposing another arbitrary transformation of the original data. It may be one whose effects you find pleasing, and as such it may serve your esthetic purposes, which is fine. To my eye, the effects of sharpening algorithms are disconcerting; the last thing they do is create an impression of a restored, unvarnished original.

Oren Grad
3-Jun-2013, 22:31
At low spatial frequencies high contrast is necessary to yield a long tonal range, while at high spatial frequencies high contrast is necessary to achieve smooth microcontrast in images.

I don't understand what you mean by this. Can you rephrase?

More generally, is what you're describing any more than the effect of losing the test signal in the noise floor? At exactly what degree of modulation you lose what part of the test signal should depend on both the amplitude and the spatial frequency distribution of the noise floor, no?

Oren Grad
3-Jun-2013, 23:20
Here's some fun reading for anyone who's been following this and wants to get a better handle on different approaches to reversing blur induced by different sources, what they can and can't do, and what's required to get there from here:

Deconvolution sharpening revisited (http://www.luminous-landscape.com/forum/index.php?topic=45038.0)

Optimal capture sharpening (http://www.luminous-landscape.com/forum/index.php?topic=68089.0)

Is the problem of diffraction over-rated? (http://www.luminous-landscape.com/forum/index.php?topic=76371.0)

Struan Gray
4-Jun-2013, 00:04
The major problem with standard deconvolution de-blurring is that many photographic images are blurred by different amounts in different parts of the frame. At wider stops residual aberrations vary across the field, and at small stops diffraction can also be seen to be varying. Motion blur is only constant for long lenses - wide angles produce more blur at the edges than the centre. Straight deconvolution assumes a constant point spread function - the same blur everywhere.

The astrophotographers have an advantage in that they know what their stars are supposed to look like. They can easily extract local deconvolution kernels and stitch together a final image which is deconvolved with different kernels at different points in the frame:

http://www.princeton.edu/~rvdb/images/deconv/deconv.html
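
For the curious, "straight deconvolution" here means something like Richardson-Lucy with one global kernel. A minimal Python sketch; the Gaussian PSF is a stand-in for whatever single blur kernel you believe your system has:

import numpy as np
from scipy.signal import fftconvolve

def gaussian_psf(size=15, sigma=2.0):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

def richardson_lucy(blurred, psf, iterations=30, eps=1e-12):
    # Classic Richardson-Lucy. Note that one 'psf' is applied to the whole
    # frame: the constant point spread function assumption described above.
    estimate = np.full(blurred.shape, blurred.mean(), dtype=float)
    psf_mirror = psf[::-1, ::-1]
    for _ in range(iterations):
        reblurred = fftconvolve(estimate, psf, mode="same")
        ratio = blurred / (reblurred + eps)
        estimate = estimate * fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

The astro trick in the link is to drop that assumption: estimate a kernel per region from the stars, then deconvolve each region with its own kernel.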


Usually when things get technical someone starts to complain about hairs being split or angels dancing on pins. This is partly true, but partly a reflection of the fact that in photography the hard science is mostly done for you by the manufacturers of film, sensors and lenses. You don't need to know this stuff to take good photographs, but if you're trying to squeeze ultimate resolution out of a system, or understand the limits of your technique, it can be a help. Personally, I am often surprised at how bad photographic optics turn out to be when measured, and how good the photographs turn out to be when you damn the measurements and go out and use the gear anyway.

This is a nice look at spatially varying blur in photographic optics. Figure 3 in particular is instructive. It agrees nicely with the sorts of things Roger Cicala says on the lens rentals blog - my interpretation is that if you care about the last gasp of resolution you must look at individual lenses one by one.

http://people.csail.mit.edu/sparis/publi/2011/iccp_blur/Kee_11_Optical_Blur.pdf


Resolution, noise, dynamic range and sensitivity are all related, especially once you start trying to squeeze the last little bit of performance out of the system. I keep being tempted to switch to an entirely digital setup, but get held up by the fact that my MF and LF equipment already does what I need, and does it with grace, ease, and bags of headroom. For the size of print I make, and the sorts of subject I like to photograph, LF film just delivers, with no worries about MTFs, LSBs, FFTs or SAFs.

Struan Gray
4-Jun-2013, 00:13
Sure, but photons are so much better today than they were a hundred years ago.


In 1909 photons were only four years old. According to records kept in the Wren Library at Trinity College, Taylor wanted to perform the experiments earlier, in 1906, but was forced to wait until the photons were old enough to be out on their own.

Oren Grad
4-Jun-2013, 00:58
The major problem with standard deconvolution de-blurring is that many photographic images are blurred by different amounts in different parts of the frame. At wider stops residual aberrations vary across the field, and at small stops diffraction can also be seen to be varying. Motion blur is only constant for long lenses - wide angles produce more blur at the edges than the centre.

I'll reiterate the familiar gripe that in general you don't know the point spread function. Demos showing how you can apply a given PSF to an image and then, knowing the PSF, retrieve most of the original are fun party tricks. But usually you're stuck with an image that contains various manifestations of unsharpness, some of which are artifactual in various known and unknown ways but some of which actually belong. You can try to bootstrap a "correction" by packing a bag full of assumptions and cruising the image looking for features to which you can apply those assumptions to derive a plausible PSF to put into your deconvolution routine, and if you want to be really clever you can iterate that, but that's all voodoo. In the spirit of the paper you linked, maybe someday we'll have a way to automatically and rapidly characterize the PSF profiles of all our lens/sensor/aperture/FL combinations.

In the meantime, the distribution of sharpness in pictures is going where color has been all along, which is to say constrained to some degree in achievable fidelity by limitations of the medium but within that envelope tuned arbitrarily in post-processing to something judged most pleasing. That's as valid as any other approach if it's what you like, but it's unsatisfying if the look and feel of minimally-varnished optical projections is what one grooves on.

(No, I don't care for Howard Bond's analog USMs either.)

(Yes, film introduces its own artifacts. Pick your poison.)


I keep being tempted to switch to an entirely digital setup, but get held up by the fact that my MF and LF equipment already does what I need, and does it with grace, ease, and bags of headroom. For the size of print I make, and the sorts of subject I like to photograph, LF film just delivers, with no worries about MTFs, LSBs, FFTs or SAFs.

I keep tinkering with digital, trying to figure out whether and how I can make it behave in a way I can stomach, but get held up by the fact that my film cameras already do what I need, and do it with grace, ease, and bags of headroom. For the size of print I make, and the sorts of subjects I like to photograph, film (35, MF, LF) just delivers, with no worries about MTFs, LSBs, FFTs or SAFs. :)

Struan Gray
4-Jun-2013, 01:24
"Wat'ya doin tonite Struan?"

"Dunno. Probly jus goin' ta cruise fer kernals out by the corners."

Oren Grad
4-Jun-2013, 01:34
"Dunno. Probly jus goin' ta cruise fer kernals out by the corners."

Time for popcorn! :D

(Now where's that popcorn-munching emoticon?)

paulr
4-Jun-2013, 07:27
Whether its effects are preferable to defocus blur depends on the character of the defocus blur as much as its quantity. I don't think one can generalize.

Yes, yes, all of this is predicated on the desire to make a sharp picture. This is of course not everyone's goal all the time. If you want parts of the image defocussed, most of what I've been talking about is at best distantly relevant.


Sharpening doesn't "correct" anything. A lossy transform is a lossy transform; you can't retrieve information that is gone.

I respectfully disagree. As the charts show, until diffraction is extreme, all that's lost is contrast. Contrast can be recovered up to the point where modulation drops below the noise floor. The greater the s/n ratio, the more correction you can apply before noise becomes objectionable. This isn't just theory; I take advantage of it every day. The project I'm working on now is with a DSLR and a Schneider shift lens. I'm generally forced to use the lens at f16. I'm using a camera with a <5 micron pixel pitch, which clearly shows differences in diffraction between f8, f11, and f16.

At f16 I'm able to recover almost all of the sharpness. In other words, two images shot at different apertures and sharpened identically will look quite different from each other. But sharpened optimally (which means more sharpening, tailored to diffraction, applied to the smaller aperture image) they look close to identical. This of course applies to images without significant defocus blur. At f22 I find my abilities are much more limited.

The best tool I've found for sharpening is one that lets you blend USM and deconvolution until you find a good mix for the image at hand. Photoshop and Lightroom have rudimentary but effective tools for this. There are other third-party solutions. The weakness of every deconvolution technique I've tried is that it cannot distinguish image detail from noise, and it tends to amplify noise in worse ways than simple unsharp masking. So I only use it to any significant degree on very low noise images. This mostly precludes using it on film.
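
To make the blending concrete, a sketch along these lines in Python, where the deconvolved image comes from whatever routine you favor (for instance a Richardson-Lucy pass like the one sketched earlier in the thread):

import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(image, radius=1.0, amount=0.8):
    # Classic USM: add back a scaled copy of the high-pass detail
    blurred = gaussian_filter(image, sigma=radius)
    return image + amount * (image - blurred)

def blended_sharpen(image, deconvolved, weight=0.3):
    # 'weight' slides from pure USM (0.0) toward pure deconvolution (1.0);
    # low-noise digital files tolerate far more weight than grainy scans
    return (1 - weight) * unsharp_mask(image) + weight * deconvolved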


And you don't get to correct diffraction in isolation from other sources of blur.

I don't find that deconvolution (used sparingly) has unwanted effects on defocused (or otherwise blurred) areas of the image. The exception is corners that show a lot of astigmatism ... smeared detail tends to respond badly to every sharpening algorithm I've used, unless done very sparingly.


To my eye, the effects of sharpening algorithms are disconcerting; the last thing they do is create an impression of a restored, unvarnished original.

If sharpening creates a noticeable effect at all, I don't like it either. I'm talking about doing it well, where the effect is more like seeing a veil of haze removed.

paulr
4-Jun-2013, 07:35
For the size of print I make, and the sorts of subject I like to photograph, LF film just delivers, with no worries about MTFs, LSBs, FFTs or SAFs.

What a shame ... I was going to invite you to participate in my show where we leave the image to the viewer's imagination, but hang framed prints of the histograms and MTF charts and spectral sensitivity plots. It's the future, I'm sure.

Nathan Potter
4-Jun-2013, 08:41
I don't understand what you mean by this. Can you rephrase?

More generally, is what you're describing any more than the effect of losing the test signal in the noise floor? At exactly what degree of modulation you lose what part of the test signal should depend on both the amplitude and the spatial frequency distribution of the noise floor, no?

My wording there is incomplete. I'll rephrase: At low spatial frequencies high MTF contrast is necessary to yield a long tonal range, while at high spatial frequencies high MTF contrast is necessary to achieve smooth microcontrast in images. I think this is just obvious, so there is nothing profound in the comment. Your comment about losing the test signal in the noise is a different way of saying a similar thing, but my observations are based on observing the dynamic range at a given spatial frequency, since that is something I can see (and measure) when capturing a resolution target.

I think you are correct when saying "At exactly what degree of modulation you lose what part of the test signal should depend on both the amplitude and the spatial frequency distribution of the noise floor". I'm speaking to the issue of dynamic range preservation at all spatial frequencies of importance, though, rather than the degradation of a test signal, so I'm not sure they are quite equivalent.

Nate Potter, Austin TX.

Struan Gray
4-Jun-2013, 08:43
I'm more the empiricist type:

http://wondermark.com/789/

paulr
4-Jun-2013, 08:46
I'm more the empiricist type:

http://wondermark.com/789/

Yeah, I've been admiring your work for years.

Oren Grad
4-Jun-2013, 08:55
I respectfully disagree.... (etc.)

That's all very reasonable. The theoretical understanding is helpful in thinking through what's going on, but in the end, the proof of whether artifacts are prominent enough to be a problem comes only through making and viewing the types of pictures we like to make, and judging them through our own ways of seeing.

One of the implications of my own hang-up with sharpening takes us back to a reason why surface area still matters. If one wants to avoid sharpening, the only way to get a print that doesn't look pervasively soft is to stay within a very modest degree of physical enlargement from the original capture. In that sense, even apart from noise and DR issues, 12MP in a tiny P&S sensor does not behave the same as 12MP in, say, a D300 or a 5D classic. You need to enlarge the fuzz that much more to get a print of a given size. In fact, the smallest sensors are still so noisy that the images are usually heavily processed for that too. So in practice, if you want to work with a tiny-sensored camera, you'd better really like either in-your-face impressionism or the heavily-processed-for-both-noise-and-sharpening look. As the sensors get larger, the tradeoffs get gradually more forgiving.
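
The arithmetic makes the point starkly. Nominal sensor widths in Python; exact dimensions vary by model:

# Linear enlargement needed to reach a 16-inch-wide print from the same
# 12 MP on three sensor sizes: same pixel count, very different
# magnification of whatever fuzz and noise the capture contains.
PRINT_WIDTH_MM = 16 * 25.4

for name, sensor_width_mm in (("1/1.7-inch P&S", 7.6),
                              ("APS-C (D300)", 23.6),
                              ("full frame (5D)", 36.0)):
    print(f"{name:>15}: {PRINT_WIDTH_MM / sensor_width_mm:4.1f}x enlargement")

Roughly 53x for the tiny sensor against 17x and 11x for the larger ones, from the same pixel count.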

I do agree with you that bar-chart resolution in itself is now, in most situations, a red herring in trying to understand the fundamental ways in which film and digital capture behave differently. Similarly with respect to the capacity for tonal differentiation, which at least on the capture end is now more than enough (I'm not sure yet about output). I think the most interesting issues lie elsewhere.

Thanks for posting those MTF curves, BTW.

I hope you'll let us know if you ever reach the point of showing your current project.

Oren Grad
4-Jun-2013, 08:59
I'm more the empiricist type:

http://wondermark.com/789/

Meh... your next assignment is to delineate in this thread the meandering boundary between discussion and performance art.

Oren Grad
4-Jun-2013, 09:23
At low spatial frequencies high MTF contrast is necessary to yield a long tonal range

What do you mean by "long tonal range" in this context? Maximizing (Dmax - Dmin) or maximizing the number of tonal steps one can distinguish in between?

Nathan Potter
4-Jun-2013, 11:26
Oren, yes, absolutely maximizing Dmax - Dmin, which leads to an increased number of tonal steps in between, provided the bit depth is adequate.

Nate Potter, Austin TX.

paulr
4-Jun-2013, 11:40
Oren, yes, absolutely maximizing Dmax - Dmin, which leads to an increased number of tonal steps in between, provided the bit depth is adequate.

Nate Potter, Austin TX.

Noise seems to have a lot to do with this too. Throw some big blotches onto a smooth gradient and it doesn't look smooth anymore. The illusion now is of fewer shades of gray.

Ironically, the opposite is true as well. If you have a rare image that shows banding (either from extreme overprocessing or from applying too much gain and expansion to deep shadows), noise can mask this, making the tones look smoother than they are.
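
This is the dithering effect, and it takes only a few lines to demonstrate. A Python sketch quantizing a smooth ramp to a brutally coarse 8 levels, with and without a little noise added first; the level count and noise amplitude are arbitrary:

import numpy as np

rng = np.random.default_rng(1)
ramp = np.linspace(0.0, 1.0, 4096)
levels = 8   # coarse enough to force obvious banding

quantize = lambda a: np.round(np.clip(a, 0, 1) * (levels - 1)) / (levels - 1)
banded = quantize(ramp)
dithered = quantize(ramp + rng.normal(0, 0.5 / levels, ramp.size))

def longest_flat_run(a):
    # Length of the longest stretch of identical adjacent values (a "band")
    edges = np.flatnonzero(np.diff(a) != 0)
    bounds = np.concatenate(([-1], edges, [a.size - 1]))
    return int(np.diff(bounds).max())

print("longest band, no noise:  ", longest_flat_run(banded))    # huge plateaus
print("longest band, with noise:", longest_flat_run(dithered))  # bands dissolve

The per-pixel error is actually a touch worse with the noise; what improves is that the error no longer lines up into contours, so the eye reads the gradient as smooth.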

Struan Gray
4-Jun-2013, 12:21
Meh... your next assignment is to delineate in this thread the meandering boundary between discussion and performance art.

Oddly enough, I've been working on that recently. It's quite complex...


http://struangray.com/miscpics/nk100_3_rand_colorcombo_750.jpg (http://struangray.com/miscpics/nk100_3_rand_colorcombo.jpg)


Greg Miller
4-Jun-2013, 12:40
I do agree with you that bar-chart resolution in itself is now, in most situations, a red herring in trying to understand the fundamental ways in which film and digital capture behave differently. Similarly with respect to the capacity for tonal differentiation, which at least on the capture end is now more than enough (I'm not sure yet about output). I think the most interesting issues lie elsewhere.

That's why full system tests are more useful. Single component testing/measuring might be interesting, but in a complex system an end-to-end test is more real-world relevant. Tripod, camera, lens, shutter, film/sensor, developer/RAW converter+other image manipulator, paper/chemicals/RIP/ink/printer,...

Jac@stafford.net
4-Jun-2013, 13:34
Oddly enough, I've been working on that recently. It's quite complex...


http://struangray.com/miscpics/nk100_3_rand_colorcombo_750.jpg (http://struangray.com/miscpics/nk100_3_rand_colorcombo.jpg)


I get a 'forbidden' message for that URL.

I am wondering if it is similar to the only photo I have on the walls of my home - one by Struan Gray.

barnninny
4-Jun-2013, 19:04
In 1909 photons were only four years old. According to records kept in the Wren Library at Trinity College, Taylor wanted to perform the experiments earlier, in 1906, but was forced to wait until the photons were old enough to be out on their own.

I guess photons grew up faster, then. They had to. There were chickens to milk and cows to be picked.

Nathan Potter
4-Jun-2013, 19:28
Oddly enough, I've been working on that recently. It's quite complex...


http://struangray.com/miscpics/nk100_3_rand_colorcombo_750.jpg (http://struangray.com/miscpics/nk100_3_rand_colorcombo.jpg)


Struan, what is this? It could be an aerial view of a disturbed land form, but it's hard to imagine any activity that could produce such folly. Then again, maybe it's a view of some immiscible fluid strands trying to find their way to a solution. Dunno. Have you been reading about Jackson Pollock lately and then discovered an imitative technique? I think I prefer to use my imagination, but my granddaughter is busy trying to find her way around and out of the maze.

Splendid image.

Nate Potter, Austin TX.

Struan Gray
5-Jun-2013, 00:21
I get a 'forbidden' message for that URL.

Should work now. I've had to be quite aggressive with the hotlink protection of late. Too many leeches grabbing the images from my blog (not *my* images, of course :-).


I am wondering if it similar to the only photo I have on the walls of my home - one by Struan Gray.

Now you're making me blush. Both are informed by the same sense of curiosity and wonder, and a love of pattern.

This one is entirely synthetic, but it came about from my efforts to take LF colour images on monochrome film, so perhaps the moderators will let it stand. It has the same symmetry as a Nautilus shell, although the symmetry is stochastic rather than exact. The balance between detail and overall structure is a cheat - this is deliberately constructed to be self-similar, like a fractal but without the non-integer dimension.



Struan, what is this? Could be an aerial view of a disturbed land form but it's hard to imagine any activity that could produce such folly. Then, on the other hand, maybe it's a view of some immiscible fluid strands trying to find their way to a solution. Dunno. Have you been reading about Jackson Pollock lately then discovered an imitative technique. I think I prefer to use my imagination but my grand daughter is busy trying to find her way around and out of the maze.

It's a map of the zero crossings of the Fourier transform of a scaled set of discrete equiangular spirals, with the phase of the individual components randomised. The colour is laid on by hand (well, by Photoshop). It came about because I was tinkering with aperture masks for colour coding and started asking myself what the diffraction pattern would be for a spiral aperture.

Recipe: take an equiangular spiral, select equally-spaced points along it and some limiting number of turns around the origin. Then copy the spiral at increasing magnifications. Randomise the phases of the individual dots. Then Fourier transform (or just add the waves) and take the log of the absolute value. Colour according to taste.
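
A rough Python sketch of that recipe, for anyone who wants to play along; the spiral constants, turn counts and magnifications are guesses, and the colouring is left to taste:

import numpy as np

rng = np.random.default_rng(42)
SIZE = 1024
aperture = np.zeros((SIZE, SIZE), dtype=complex)

theta = np.linspace(0, 6 * np.pi, 400)       # equally-spaced points, three turns
base_r = 5.0 * np.exp(0.15 * theta)          # equiangular (logarithmic) spiral

for mag in (1.0, 1.7, 2.9):                  # copies at increasing magnification
    r = base_r * mag
    keep = r < SIZE / 2 - 1
    cols = (SIZE / 2 + r[keep] * np.cos(theta[keep])).astype(int)
    rows = (SIZE / 2 + r[keep] * np.sin(theta[keep])).astype(int)
    aperture[rows, cols] += np.exp(1j * rng.uniform(0, 2 * np.pi, rows.size))

# Add the waves (2-D FFT), then take the log of the absolute value;
# the zero crossings show up as the dark lines in the log-magnitude map.
pattern = np.log(np.abs(np.fft.fftshift(np.fft.fft2(aperture))) + 1e-9)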

The tension between getting lost in the wiggles, and an appreciation of the mathematics, is a big part of the attraction for me - aesthetically and intellectually.


Thanks for the appreciation, guys. When I finally turn this into a working form of LF Colour Plenoptography I'll write it up here somewhere.

sanking
5-Jun-2013, 14:38
That's why full system tests are more useful. Single component testing/measuring might be interesting, but in a complex system, and end-to-end test is more real-world relevant. Tripod, camera, lens, shutter, film/sensor, developer/RAW converter+other image manipulator, paper/chemicals/RIP/ink/printer,...

I agree, and to the full system tests you would also have to somehow figure in the real goal of the photographer, and how much equipment he/she is prepared to haul around to achieve this goal.

Sandy

Drew Wiley
5-Jun-2013, 15:51
Struan - I mistook it for a medical specimen, namely a brain slice from one of the remaining geriatric hippies still wandering our streets around here!

Struan Gray
6-Jun-2013, 07:07
That would explain the lines which just go round and round in a loop :-)