A scanner does not take a tiny picture of the image. It takes samples, and those samples are not images: they are numbers, generally an RGB value.
If the sample size matches the size of a grain clump (no scanner can see actual grains), then the scanner can convert the clump to a representative number and write that number to a file.
If you scan again, and nothing moves or drifts even slightly out of alignment, you should get the same number. If you don't, how do you know whether the first number was correct or the second? Should they be averaged?
I could see a system that maps the whole image, clump by clump, keeps the map in a database, samples it over and over, and then averages. That might work, but none of the scanning software is that sophisticated. I could also see a system that deliberately maps each grain clump, samples its center, then each of its edges, and so on, creating multiple samples per clump and increasing the pixel count by a factor of 4 or 6. That might increase resolution, maybe.
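The center-plus-edges idea can be sketched in a few lines of Python. Everything here is hypothetical (the function name, the clump coordinates, the toy 5x5 "scan" of numbers); it only illustrates one clump being read at its center and four edge points, with the readings averaged into a single value:

```python
def multisample_clump(image, cx, cy, offset=1):
    """Average the center of a grain clump with 4 edge samples.

    image: 2D grid (list of rows) of scanner sample values.
    (cx, cy): clump center in sample coordinates (hypothetical map).
    """
    points = [(cx, cy), (cx - offset, cy), (cx + offset, cy),
              (cx, cy - offset), (cx, cy + offset)]
    vals = [image[y][x] for x, y in points]
    return sum(vals) / len(vals)

# toy "scan": a 5x5 patch reading 90, with the clump center reading 100
scan = [[90.0] * 5 for _ in range(5)]
scan[2][2] = 100.0
print(multisample_clump(scan, 2, 2))  # → 92.0 (one blended value per clump)
```

Note that this still writes one number per clump; the extra samples only stabilize that number, which is the whole problem.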
Drum scanners do best when the clump size and the sample size match. If they are out of alignment, the scanner samples the same clump twice, off-kilter, and writes two values that are very close together. The result is grain aliasing, which looks like partial pixel overlap, or graininess, whatever you want to call it. One also has to remember that the grain clumps are not all the same size. Maybe 70 per cent, or with luck and careful development in a good developer (not D-76 or other solvent-type developers, and not something highly over-active like Rodinal) 80 per cent, of the clumps are the same size. That means 20 to 30 per cent will be sampled improperly no matter what you do.
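The mismatch effect can be demonstrated with a toy 1D model (an idealized, perfectly regular grain pattern, which real film emphatically is not): sampling at exactly the clump pitch reads the pattern cleanly, while a pitch that is off by 10 per cent produces a slow false beat, the mottled look described above.

```python
import math

def grain(x, pitch=4.0):
    # idealized grain pattern: bright/dark alternation at the clump pitch
    return 1.0 if math.cos(2 * math.pi * x / pitch) >= 0 else 0.0

matched    = [grain(i * 4.0) for i in range(12)]  # sample pitch == clump pitch
mismatched = [grain(i * 4.4) for i in range(12)]  # pitch off by 10%

print(matched)     # every sample reads 1.0: the clump is resolved cleanly
print(mismatched)  # runs of 1s and 0s: a false low-frequency beat pattern
```

The mismatched samples do not show more grain; they show a pattern that is not in the film at all.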
No matter how many times you scan, all you can do is average the values. Averaging usually doesn't increase resolution, quite the reverse. That includes averaging via stacking.
I don't think this approach has any way of succeeding. Scan samples aren't pixels; they are numbers.
Lenny