
Camera scanning: conservator's perspective



rdeloe
14-Jun-2019, 11:13
There are many threads on the forum about "DSLR scanning", also known as "camera scanning". An active recent example is this one: https://www.largeformatphotography.info/forum/showthread.php?152777-DSLR-scanning-vs-Dedicated-flatbed-scanning&p=1504274#post1504274 I was going to add this material to that thread, but decided to open a new thread because I think there may be some value for the community in knowing about the perspective of conservators (the museum crowd).

Many people on the forum, and elsewhere, have found ways of digitizing negatives and positives using cameras of some kind. Others do it with scanners. It's all good. This thread is about camera scanning, which seems to be the future of digitizing film.

In figuring out how to "camera scan" my 4x5 negatives, I've encountered lots of advice, much of it contradictory, and much of it based on the personal experience of other people who figured things out on their own too... It turns out that photographers interested in hybrid workflows (film > digital) are not the only ones thinking about this! Thanks to a tip from a fellow photographer, I discovered the conservator's perspective on all of this. Lo and behold, there are standards out there!

Long story short, I just purchased an interesting little book called DT Digitization Guide. Digitization Workflows: Transmissive. DT is a commercial outfit that sells equipment and services to the "conservation" world, in other words, museums. Their equipment is WAY outside of my budget! However, the book was only $5 USD, so it was worth it to me.

The book has two main parts: a general overview of issues relating to camera-based digitization of transmissive materials, in other words, negative and positive film; and instructions that are specific to using DT's own equipment, which includes Phase One cameras and Capture One software. I bought it for the first part, which is a nice, not too technically complicated overview of the conservator's perspective. But even the second part has good value if you're willing to extend the ideas to your own gear. There are also some useful tidbits in the appendix.

The knowledge this little book contains is probably out there on the Internet somewhere for free, but I have not encountered it before, and it's nice to have it all in one place. Here are some highlights:

Conservators have some concerns that do not necessarily overlap with those of the photographer who is shooting film with the goal of digitizing and then editing the digital file to make their own "interpretation" of what the camera recorded. Here are a few examples:

* Conservators seem to view wet scanning (fluid mounting) as a Very Bad Thing. That makes sense: you're immersing the negative or positive film in mineral spirits or some other fluid. However, for someone like me, who is shooting the film to make the digital file, preserving the negative for someone to look at in 100 years is not the priority. I wet scan for the quality improvement.
* Similarly, using tools like ICE (or just spotting in Photoshop) is a major no-no in the conservation world when the goal is to make a "Preservation Digital Object" (a faithful reproduction of the thing).
* One interesting difference between what I need and what a conservator needs is exposure. When making a "Preservation Digital Object", the conservator wants to make an image that is faithful to what the object would look like in real life, e.g., placed on a light table and viewed by someone. In contrast, I need an exposure that gives me high quality data at each pixel while preserving the full tonal range of the image. I don't care if the resulting file is remotely "faithful" to the way a viewer would see the negative on a light table.

Apart from those kinds of concerns, the basics are the same for the person in the museum who is making a Preservation Digital Object, and the photographer who is making a digital file to work on in Lightroom or Photoshop (or whatever software they use). There's lots of good basic technical advice in the book that is consistent with standards that are emerging.

Finally, the book also gets at a common topic of debate on forums, which is the resolution at which to digitize the film. I like their position: it's not about what we can do with the equipment we have; it's about what we should do for our intended purpose. For example, in the conservation world, the goal of digitization is to create "a surrogate to the original object, replacing most needs for physical access to that original object". Thus, "the selection of PPI must be based on the content of the original transmissive material." In contrast, my own goal is to balance practical considerations, such as the ability of my computer to handle the file, with "how many pixels do I need to make a high quality print?" In case you're curious, the FADGI 2016 standard for conservation digitization of 35mm, 645 and 4x5 film is 4,000 ppi at 90% sampling efficiency (which is way more than I can make or need); it's 2,000 ppi at 90% sampling efficiency for 8x10 film.
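For scale, here is what those FADGI targets imply in raw pixel counts. A quick sketch in Python; the arithmetic is mine, not the book's:

```python
# Pixel dimensions and megapixels implied by the FADGI 2016 levels cited above
for film, (w_in, h_in, ppi) in {
    "4x5 at 4000 ppi": (4, 5, 4000),
    "8x10 at 2000 ppi": (8, 10, 2000),
}.items():
    w_px, h_px = w_in * ppi, h_in * ppi
    print(f"{film}: {w_px} x {h_px} px = {w_px * h_px / 1e6:.0f} MP")
```

Interestingly, both levels land at the same roughly 320 MP per sheet: the standard halves the ppi as the film doubles in linear size.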

Rob

Peter De Smidt
14-Jun-2019, 11:32
Good stuff, Rob.

Chester McCheeserton
14-Jun-2019, 16:15
It is interesting. Those ppi numbers are exactly where most experienced scan operators would land as well.

Doubtless some conservators have extensive experience as photographers... but rarely do they actually have the experience of physically making another print or object; their equipment and methods, I'd bet, are 99% for the screen only. Another group of people who might know something about this (at least working from prints, not negatives) are the people operating the medium format copy cameras for reproduction at high-end book printers, like Meridian in Rhode Island.

One test result I posted a while back, kind of buried, but maybe relevant.
https://www.largeformatphotography.info/forum/showthread.php?146948-Practical-Resolution-discussion&p=1454940#post1454940

invisibleflash
14-Jun-2019, 18:00
Good rundown....Thanks!

Has it been determined which method produces better results? Got any photos of the conservator's setups?

rdeloe
14-Jun-2019, 18:41
Got any photos of the conservator's setups?

This link takes you to a story on the National Geographic blog. It's about digitizing "autochromes" (early colour photographs). Doug Peterson, who is the focus of the article, works for DT Cultural Heritage. He provides links in his write-up back to DT's site so you can see the gear they provide.
https://nglibrary.ngs.org/public_home/filmpreservationblog/Preservation-of-Autochromes

Peter De Smidt
14-Jun-2019, 18:44
http://www.gigamacro.com/gigapixel-macro-imaging-system/

Tin Can
14-Jun-2019, 19:42
Whoa

Bugs!


http://www.gigamacro.com/gigapixel-macro-imaging-system/

Oren Grad
15-Jun-2019, 15:35
In case you're curious, the FADGI 2016 standard for conservation digitization of 35mm, 645 and 4x5 film is 4,000 ppi at 90% sampling efficiency (which is way more than I can make or need); it's 2,000 ppi at 90% sampling efficiency for 8x10 film.

These standards are inadequate if one seeks to preserve the grain structure of the film. In fact, for ISO 320/400 films like Tri-X, 4000 ppi will produce aliased pseudograin, which increases the apparent grain size and obliterates the grain's native character.

Yes, I know that many users don't care about the grain structure for their purposes, which is fine.

I did note in the DT piece on copying autochromes that faithfully recording the "grain" structure was an explicit objective for that project. Not surprisingly, the level of resolution required even for that relatively crude medium was moderately high.

Peter De Smidt
15-Jun-2019, 16:24
I agree with Oren. Scanning at 6000 spi leads to significantly finer grain than scanning at 4000 spi with my Cezanne, but I don't make big enough prints for that to matter with large format.

rdeloe
16-Jun-2019, 14:44
I wanted to understand the point Oren was making, so I did some digging*. The results may be of interest to others, so I’ll share them here. I’m also curious if I’m correct in my understanding (so please jump in if you think I’m wrong).

First, we have to distinguish between silver particles and grain. I’ve encountered people who think their high resolution scans are capturing the silver particles in their black and white film. Silver halide particles average 0.2 to 2.0 microns, depending on the film. Assuming 1 pixel per 1 micron particle (there are 25,400 microns in an inch), we’d have to be scanning at 25,400 ppi. Double that to capture some detail in the shape of the particle. Even the best drum scanner can’t pull that off. That’s scanning electron microscope territory.

What we think of as “grain” is the clumping of the silver particles in the emulsion. Those clumps range in size from 15 to 25 microns. At the large end of this range, a scan with a resolution of 2,032 ppi will cover each 25 micron grain clump with 2 pixels. At the small end, you need 3,387 ppi to cover each 15 micron grain clump with 2 pixels.

More pixels per grain clump is better, if your objective is to record grain clumps, so if you’re scanning a fine grained film (say 15 micron grain clumps), a resolution of 5,080 ppi will allow for 3 pixels per clump. At the other end, at 6,000 ppi, you’ll be able to cover each 25 micron grain clump with around 6 pixels. So yes, it seems that 6,000 ppi not only can preserve the grain structure of most black and white films, but do it with more useful detail.

However, depending on the film and how much detail you want in recording the grain clumps, lower resolutions might do the trick too. From this perspective, it seems the people who came up with that FADGI 2016 standard of 4,000 ppi are happy having two pixels cover each 15 micron grain clump, because that only needs 3,387 ppi.
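Here is the conversion used above as a small Python sketch; it is nothing more than the arithmetic already described (25,400 being the number of microns in an inch):

```python
MICRONS_PER_INCH = 25_400

def ppi_required(feature_um, pixels_per_feature):
    # ppi needed to place a given number of pixels across a feature of known size
    return MICRONS_PER_INCH / feature_um * pixels_per_feature

print(ppi_required(25, 2))  # ~2032 ppi: 2 px per 25 micron clump
print(ppi_required(15, 2))  # ~3387 ppi: 2 px per 15 micron clump
print(ppi_required(15, 3))  # ~5080 ppi: 3 px per 15 micron clump
```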

In my own setup, I am definitely not preserving grain structure! Anything that looks like a grain clump is actually sensor noise even at my current 2,667 ppi. But that's a whole other issue.


* Very useful basic information, including information about the size of silver particles and film grain clumps, came from this document written by Tim Vitale: http://vashivisuals.com/wp-content/uploads/2017/07/2007-04-vitale-filmgrain_resolution.pdf

Tin Can
16-Jun-2019, 14:53
Like!

Thank you!

Oren Grad
16-Jun-2019, 20:03
More pixels per grain clump is better, if your objective is to record grain clumps, so if you’re scanning a fine grained film (say 15 micron grain clumps), a resolution of 5,080 ppi will allow for 3 pixels per clump. At the other end, at 6,000 ppi, you’ll be able to cover each 25 micron grain clump with around 6 pixels. So yes, it seems that 6,000 ppi not only can preserve the grain structure of most black and white films, but do it with more useful detail.

However, depending on the film and how much detail you want in recording the grain clumps, lower resolutions might do the trick too. From this perspective, it seems the people who came up with that FADGI 2016 standard of 4,000 ppi are happy having two pixels cover each 15 micron grain clump, because that only needs 3,387 ppi.

No. You are correct that we're discussing grain clumps, but you are way underestimating the resolution required to accurately "describe" film grain. 6,000 ppi gets you in the ballpark for Tri-X. It is most definitely not sufficient to "preserve the grain structure of most black and white films".

One important factor you're overlooking is the effect of discrete sampling of grainy originals. Because of that, increasing resolution can sometimes actually make a scan look worse before it starts to look better. This is where the concept of "aliasing" comes in. My point about grain aliasing was that the particular mix of ~4000 ppi and Tri-X grain is a "sour spot" where the interaction between the sampling frequency and the image structure of the film makes the scanned image look distinctly worse than the original - the grain is larger and uglier than it is in the negative. For some of us who consider the grain structure an important part of what gives Tri-X its personality, or whose conservation objectives include preserving all of those attributes that make a material contribution to the esthetic character of the film, a scan at that resolution more or less ruins it.

At substantially lower resolutions the grain mushes out entirely. Users who are comfortable with that can sharpen the resulting scanned image and be happy. At substantially higher resolutions you get into the range where there's enough descriptive power to faithfully record the fine image structure. But the FADGI standard for films up to 4x5 is just wrong for a film like Tri-X, if the objective truly is faithful reproduction.
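For anyone who would rather see the folding than take it on faith, here is a minimal one-dimensional sketch in Python/NumPy. It assumes idealized point sampling with no optical low-pass filtering, so it is a simplification of any real scanner, but it shows the mechanism: a texture component finer than the Nyquist limit of a 4000 ppi scan reappears in the samples as a much coarser pattern.

```python
import numpy as np

PITCH_UM = 25_400 / 4000   # ~6.35 micron pixel pitch at 4000 ppi
FINE_UM = 0.05             # fine simulation grid, microns per step
x_um = np.arange(0, 2000, FINE_UM)

# A texture with an 8 micron period: finer than the ~12.7 micron Nyquist
# limit of a 4000 ppi scan, so it cannot be recorded faithfully
texture = np.sin(2 * np.pi * x_um / 8.0)

# Idealized point sampling at the scanner pitch (no anti-alias filtering)
step = round(PITCH_UM / FINE_UM)
samples = texture[::step]

# The dominant period in the sampled data is the alias, not 8 microns
spectrum = np.abs(np.fft.rfft(samples - samples.mean()))
freqs = np.fft.rfftfreq(len(samples), d=step * FINE_UM)
peak = freqs[np.argmax(spectrum[1:]) + 1]
print(f"apparent period: {1 / peak:.1f} um (true period: 8.0 um)")
```

The 8 micron texture comes back looking like roughly 31 micron structure. That is the pseudograin: coarser than anything actually in the negative.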

4000 ppi is treated as something of a magic number because that's the highest resolution that's widely available in consumer-grade equipment at prices that many amateurs can afford. But the reality is that those of us whose budgets limit us to equipment of that caliber, yet who are hoping to mimic the character of our silver gelatin enlargements in our inkjet prints from film scans, basically can't have what we want. I include myself in that group, with regret.

"Scanning" with digital cameras adds the further complication that the vast majority of such devices use sensors with Bayer or X-Trans color filter arrays, which force interpolation in raw conversion and which mean that the actual resolution recorded in the capture falls substantially short of what you'd suppose just by counting pixels. Live-view focusing limitations pose further challenges in achieving even that.

Now it's plain that most folks who print from scans and who are using flatbeds, dedicated film scanners or digital cameras to capture their scans aren't trying to reproduce the fine image structure of their originals, and are comfortable with low-resolution scans that are sharpened to taste for purposes of printing. In some cases they recognize the aliased character of medium-resolution scanned grain from high-speed films, and they're OK with it. There's nothing wrong with any of that. The broader lesson I would draw here is simply that it's a mistake to try to figure out how much capture resolution is sufficient, just by counting scanner PPI or digital sensor pixels or even line pairs on a test target and then doing the arithmetic to see how that maps to prints at 300 or 360 or 720 ppi. Discrete sampling and color filter arrays do complicated things to image character and make simple arithmetic on nominal resolutions or pixel counts seriously misleading.

Instead, you try the kinds of equipment that are available at a price you can afford, look at the resulting image character and decide what you're comfortable with and what kinds of tradeoffs of cost, practicality and convenience vs ultimate image character you're prepared to make. This is, in effect, what both you and Peter have done - each of you has arrived at solutions that work for your respective purposes by applying your experience, your judgment and your common sense to the actual results you've achieved with various types of scanning equipment.

If you're responsible for teaching this stuff, stay away from the bogus arithmetic and concentrate on demonstrating how to optimally use the available equipment to acquire scans, how to process scans with skill and good judgment for printing at desired sizes, and how to evaluate the results relative to esthetic objectives, and all will be well.

rdeloe
17-Jun-2019, 06:21
Thanks for adding your perspective on this topic Oren.

You can rest easy that I'm not corrupting students with bogus math. ;) I enjoy these kinds of technical conversations because learning and growth happen at the edges of one's knowledge. You're right that faithfully reproducing the grain structure of film is not what I'm currently interested in -- for my own photography, or in my teaching.

Bernice Loui
17-Jun-2019, 07:25
Unless the total structure of the film grain (an example would be the edge effects of high acutance developers on grain edges) is completely preserved in the scan, that information will be lost in the scanning process. This is essentially adding another stage of anti-aliasing filtering during scanning, which will have an effect on the image information acquired and then converted to digits.

Once information is lost in the conversion to digits, it is not recoverable in post-processing, regardless of what any technology claims. That information can be pseudo-recreated by a system anticipating what might have been lost, but it will never be an absolutely precise replication of what was lost.

Does this make any difference in the print result? Yes or no, depending on the goals and expectations for the acquired data and what that data will be used for.


Bernice

Bernice Loui
17-Jun-2019, 07:47
One pixel per silver halide particle is not enough; no fewer than two pixels per particle would be needed, and even with two pixels per particle, significant information contained within that particle would be lost.


Bernice




Silver halide particles average 0.2 to 2.0 microns, depending on the film. Assuming 1 pixel per 1 micron particle (there are 25,400 microns in an inch), we’d have to be scanning at 25,400 ppi. Double that to capture some detail in the shape of the particle. Even the best drum scanner can’t pull that off. That’s scanning electron microscope territory.

rdeloe
17-Jun-2019, 08:42
12,903 MP of scanned information from a 4x5 negative is probably enough for me. 51,613 MP just seems like overkill. ;)
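For anyone checking the arithmetic, a quick sketch (Python, square pixels assumed):

```python
def megapixels(width_in, height_in, ppi):
    # Total pixels, in millions, for film scanned at a given resolution
    return width_in * ppi * height_in * ppi / 1e6

print(megapixels(4, 5, 25_400))  # ~12,903 MP: 1 px per 1 micron particle
print(megapixels(4, 5, 50_800))  # ~51,613 MP: 2 px per 1 micron particle
```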



One pixel per silver halide particle is not enough; no fewer than two pixels per particle would be needed, and even with two pixels per particle, significant information contained within that particle would be lost.


Bernice

Oren Grad
17-Jun-2019, 09:15
12,903 MP of scanned information from a 4x5 negative is probably enough for me. 51,613 MP just seems like overkill. ;)

Heh...

Seriously: if one insists on trying to preserve grain structure in scans, as the film gets larger the amounts of data involved indeed quickly become absolutely staggering. A purist approach is of no use when it's completely infeasible. Probably, in a few years when we have still higher-resolution capture devices and computers with still faster processors and yet more memory, we'll be able to creep a little bit further up the size + resolution scale. But still, that basic life lesson looms large: we can't always get what we want.


For example, in the conservation world, the goal of digitization is to create "a surrogate to the original object, replacing most needs for physical access to that original object".

And to close the loop on this point, it's worth clarifying that there are different conservation purposes. The standard required to preserve the source object's value as historical evidence of the way the world appeared in a certain place and a certain time, isn't so demanding as the standard required to preserve all interesting attributes of the source object itself, as an independent focus of esthetic or history-of-technology interest.

Bernice Loui
17-Jun-2019, 09:55
Harry Nyquist has spoken...

Depends on the need of what the image data will be used for.

BTW, the cinema release of Toy Story was done in digits and stored as a digital database. At some point the original database was corrupted and no longer viable. This resulted in trying to cut and paste backup database bits to recreate the complete database. To combat this problem, cinema is done on..... Film.

IMO, all this digital stuff is a disruptive technology intended and designed to produce economic results for the elite few. What would be more rational is for film and digits (Analog & Digital) to coexist based on real needs rather than perceived convenience and the perceived advantage of one clearly over the other. There are pluses and minuses to both, with neither being absolutely better than the other.


Bernice


12,903 MP of scanned information from a 4x5 negative is probably enough for me. 51,613 MP just seems like overkill. ;)

Tin Can
17-Jun-2019, 10:43
Disruptive technology indeed!

Weren't LVTs used to store bank data on film for fear of losing the cash? I can't find a reference right now.

We are closer to the stone age than the stars.

Storing Digital Data for Eternity 2015 Vint Cerf (https://www.google.com/search?rlz=1C1CHBF_enUS850US850&q=financial+data+stored+on+LVT+film&tbm=isch&source=univ&sa=X&ved=2ahUKEwig8PurhvHiAhVRMawKHRRuCXMQsAR6BAgGEAE&biw=1536&bih=760#imgrc=ysazee0wowI2jM:)

Right now I think film and paper prints will outlast any of our digital files...

rdeloe
17-Jun-2019, 13:28
And to close the loop on this point, it's worth clarifying that there are different conservation purposes. The standard required to preserve the source object's value as historical evidence of the way the world appeared in a certain place and a certain time, isn't so demanding as the standard required to preserve all interesting attributes of the source object itself, as an independent focus of esthetic or history-of-technology interest.

I didn't want to muddy the water in my original post, but the source I mentioned addresses this topic too, for those who want a brief overview. Peterson distinguishes "Object Reproduction" (Preservation Digital Object), "Content Reproduction" (easily readable, looks good) and "Speculative Artist's Intention". The last one is particularly interesting in that the conservator is trying to show the image the way the artist intended it to be viewed, which is obviously a tricky game.

Oren Grad
17-Jun-2019, 13:53
I didn't want to muddy the water in my original post, but the source I mentioned addresses this topic too, for those who want a brief overview. Peterson distinguishes "Object Reproduction" (Preservation Digital Object), "Content Reproduction" (easily readable, looks good) and "Speculative Artist's Intention". The last one is particularly interesting in that the conservator is trying to show the image the way the artist intended it to be viewed, which is obviously a tricky game.

To which I'll add that I appreciate your launching this thread. The various conservation perspectives offer a nice framework for thinking more clearly about decisions we need to make in our respective practices, and bring useful insights that are often missing from the seemingly endless wrangling, largely out of context, over input and output resolutions.

Photomagica
4-Aug-2019, 23:06
Museum professionals typically work to standards, as mentioned in this thread. Understanding the goals one is trying to achieve, and establishing a standard to meet those goals consistently, is key to efficient, high quality, professional work.
The Harvard DASCH Scanner for astronomical plates may be of interest. https://arxiv.org/ftp/astro-ph/papers/0610/0610351.pdf
The paper describes how DASCH was put together - essentially a camera scanner on steroids. This scanner was built a number of years ago and continues to be used for a great deal of critical work. For example, plates from a number of observatories were measured with DASCH to assist in improving calculations of the orbit of Pluto so the New Horizons probe would not miss.
This scanner operates at 2311.6 DPI, and after analysis and now a great deal of experience, the astronomers who designed it consider this sufficient to "capture all of the information on a photographic plate". The scanner data consistently yields a far better analysis, called a plate reduction, than a human operator working at a manual measuring engine. DASCH digitizes a 14x17 plate in 92 seconds. It has been used to digitize hundreds of thousands of plates. We may be critical of the astronomers' goal of 2311.6 DPI; however, they have proven it is sufficient for essentially all research purposes, so it has proved to be an excellent standard for astronomical plates.
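To put those figures in perspective, a quick sketch of the arithmetic (mine, not the paper's):

```python
# Scale of the DASCH figures quoted above: 14x17 inch plate, 2311.6 dpi, 92 s
W_IN, H_IN, DPI, SECONDS = 14, 17, 2311.6, 92

pixels = (W_IN * DPI) * (H_IN * DPI)
print(f"{pixels / 1e6:.0f} MP per plate")              # ~1272 MP
print(f"{pixels / SECONDS / 1e6:.1f} MP/s sustained")  # ~13.8 MP/s
```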
Bill Peters
Astronomical Consultant and Museum Professional

Pere Casals
5-Aug-2019, 03:15
The last one is particularly interesting in that the conservator is trying to show the image the way the artist intended it to be viewed, which is obviously a tricky game.

I'd like to add that scanning is only part of it; digital post-processing is also very important.

In the far future it will be possible to take a 1 MPix image for each grain :) but for now we can only have a few pixels for each large grain, and less than one pixel for a small grain.

IMHO the priority is to capture the grain structure as it would be shown in the print. Prints don't show all grains perfectly: with focus accuracy, lens/paper performance, and diffuser vs condenser enlargers, the nature of the grain in the print varies, and then we have the print enlargement effect.

In the same way, the nature of the grain in the digital image varies depending on digital processing: we will probably apply several sharpening actions, one after the other, to optimize different aspects of the image... and then there is resizing the image...



Regarding the capture, DSLR scanners are a good solution because we benefit from the expensive development behind a mass production product, but linear sensors are still a better solution... anyway, in the DIY realm, controlling a linear camera or interfacing a sensor is not as easy as triggering a DSLR.


The problem we have now is that Epson has a near monopoly for sheet film. The V800 is a 2006-era product; it has LED illumination that is probably cheaper, and we also see a better holder, but in 14 years the product has not changed substantially.

We have to be grateful that we still have a very capable device that can be purchased new and that has drivers for modern computers, but if they had competition and a larger market to allow investment, then we would see more improvements.

Tin Can
5-Aug-2019, 04:34
More data

http://tdc-www.harvard.edu/plates/

Photomagica
10-Aug-2019, 11:54
More data

http://tdc-www.harvard.edu/plates/

Good find - thanks!