Can multiple scans increase resolution?



Darin Boville
2-Mar-2015, 02:15
The title says it all. If we scan multiple times but bump the source slightly in some way, can we get an effective increase in resolution?

Darin

koraks
2-Mar-2015, 02:30
Theoretically, yes. In practice, I'm not aware of a very user friendly way of processing the different scans into a higher-res end result.

Peter De Smidt
2-Mar-2015, 05:23
If the system isn't repeatable and precise, then it will lead to less resolution. You can clearly see this if you use multi-pass scanning on many systems. The idea there is to increase dynamic range and lower noise, which may happen, but all too often it leads to soft scans. Multi-sample systems do better in this regard: they sample an area, take multiple readings and combine them in some way, then move to the next area.

paulr
2-Mar-2015, 07:37
I think Koraks and Peter are right. It should theoretically be possible. The equivalent gets done in video, for the purpose of enhancing detail for surveillance work, etc. But I haven't found any software that accomplishes this with still images.

Has anyone tried using a focus stacking algorithm on multiple images with the same focus?

Greg Miller
2-Mar-2015, 08:01
I think Koraks and Peter are right. It should theoretically be possible. The equivalent gets done in video, for the purpose of enhancing detail for surveillance work, etc. But I haven't found any software that accomplishes this with still images.

Has anyone tried using a focus stacking algorithm on multiple images with the same focus?

It won't work with focus stacks if the focus does not change. The software looks for overlapping areas where focus goes from in-focus to OOF, and selects the sharpest areas from overlapping images. If it does not detect a second image with sharper pixels, it will not blend.
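
For illustration, here is a minimal Python sketch of that per-pixel "sharpest source wins" selection, assuming grayscale frames as float numpy arrays (the general idea only, not the actual algorithm of any particular product):

import numpy as np
from scipy import ndimage

def focus_blend(frames):
    # Score local sharpness with a smoothed squared-Laplacian response.
    sharpness = [ndimage.uniform_filter(ndimage.laplace(f) ** 2, size=9)
                 for f in frames]
    best = np.argmax(np.stack(sharpness), axis=0)   # sharpest frame per pixel
    stack = np.stack(frames)
    return np.take_along_axis(stack, best[None], axis=0)[0]

If every frame has identical focus, the argmax is decided by noise alone, so the blend selects essentially at random and gains nothing, which is the point above.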

Will Frostmill
2-Mar-2015, 08:48
You can get an increase in real resolution if you can change the angle of reflection between scans. E.g. rotate a print with a textured surface 90 degrees with each scan, and sum or difference the layers in photoshop. I don't know how to apply this to negative scanning, unless you are dealing with a scratched negative that only shows some scratches at some angles and not others.

fishbulb
2-Mar-2015, 10:28
In this thread: http://www.dpug.org/forums/f6/aztek-dpl-experience-2314/ Tim Parkin mentions doing two scans of one negative with different settings, effectively scanning the shadows in one and the highlights in another, and then merging the two images in Photoshop to create a high-dynamic-range image. So it can be done for increased dynamic range. With Photoshop's "Auto-Align Layers" tool it would be easy to align multiple images, and then use an HDR plugin (or Photoshop's native HDR tools) to merge the layers to create the final image.

As Will says, if you want to do it to increase resolution, the images (or the scans) have to be different in some way. It's just like doing it with the "new" Olympus technology (available in Sinar digital backs for many years) to generate 40MP files with a 16MP sensor. It takes a series of images and moves the sensor in the camera around a tiny amount before each image. So each image is taken from a very slightly different position. Then the camera merges the images together to form a higher-resolution file. It's like doing panoramic stitching, but the images are 99% overlapped.

If you want to do this with a scan of a negative, supposedly it CAN be done. In this thread http://photo.net/film-and-processing-forum/00b6OH there is some evidence that multiple scans of the same negative, with the negative shifted on the flatbed each time it is scanned, can be merged to increase the total resolution. Here's another example: http://www.rangefinderforum.com/forums/showthread.php?t=130731

Here is pretty clear evidence that this can work: https://farm9.staticflickr.com/8346/8248364240_756f764c02_h.jpg The upper left is one scan. The upper right is two scans, merged. The lower left is three scans merged, and the lower right is four scans, merged.

If I were going to do this myself, I would try Will's advice and do four scans, one with the negative rotated 90 degrees each time, to change how the scanner light hits the dots on the negative. The above examples are (as far as I can tell) just moving the slide on the scanner bed. Rotating it might get a better result, in theory.

Greg Miller
2-Mar-2015, 10:42
In this thread: http://www.dpug.org/forums/f6/aztek-dpl-experience-2314/ Tim Parkin mentions doing two scans of one negative with different settings, effectively scanning the shadows in one and the highlights in another, and then merging the two images in Photoshop to create a high-dynamic-range image. So it can be done. With Photoshop's "Auto-Align Layers" tool it would be easy to align multiple images, and then use an HDR plugin (or Photoshop's native HDR tools) to merge the layers to create the final image.

This would increase dynamic range. But not resolution.

Multiple scanning of images, with exact registration, only works because it eliminates noise by averaging out the pixel information. In a perfect world, with zero noise introduced in the scanning stage, multiple scanning would have no effect.
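
To put a number on that, here is a minimal Python sketch with a hypothetical noise-free "signal" standing in for the film image: averaging N registered copies cuts the random noise by about sqrt(N) but leaves the underlying detail exactly as it was.

import numpy as np

rng = np.random.default_rng(0)
signal = rng.random((100, 100))        # stand-in for the film image
scans = [signal + rng.normal(0.0, 0.05, signal.shape) for _ in range(16)]

avg = np.mean(scans, axis=0)           # exact registration assumed

print(np.std(scans[0] - signal))       # ~0.05   (one scan's noise)
print(np.std(avg - signal))            # ~0.0125 (0.05 / sqrt(16))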

Kirk Gittings
2-Mar-2015, 10:49
There is some software for this with digital capture (should work with scans too, I think), but I don't remember the name of it. Anyone?

fishbulb
2-Mar-2015, 10:55
This would increase dynamic range. But not resolution.

Yes I agree. I edited my post to make that more clear.


There is some software for this with digital capture (should work with scans too, I think), but I don't remember the name of it. Anyone?

The main one is PhotoAcute ( http://www.photoacute.com/ ), which is available for Windows and Mac. It looks very fully featured, but is $150 for the full version. There is a free trial version as well.

Also found that Deep Sky Stacker can do it, using the NASA Drizzle algorithm (yes, that's what it's called): http://deepskystacker.free.fr/english/index.html Deep Sky Stacker is free. Looks complicated though. It is Windows only.

Also found Chasys Draw IES can do it, as described here: http://www.jpchacha.com/chasysdraw/help.php?file=artist_process_stack_sres.htm It is also free and can be downloaded here: http://www.jpchacha.com/chasysdraw/index.php It is Windows only. It looks a little easier to use, and more suited to the intended purpose, than Deep Sky Stacker.

Finally, you can do it in Photoshop with this tutorial, only published a few days ago: http://petapixel.com/2015/02/21/a-practical-guide-to-creating-superresolution-photos-with-photoshop/

Jody_S
2-Mar-2015, 10:57
Doesn't Vuescan do this as an option? I remember trying it when I was using an Arcus II, and I wasn't impressed with the results. Perhaps better tonality, but resolution was not increased and sharpness seemed to suffer. However, if tones are your thing, the sharpness can be adjusted in Photoshop as part of your normal workflow.

Emmanuel BIGLER
2-Mar-2015, 11:16
The rationale behind the Sinar scanning system is something very simple.

When analyzing an analogue image through a square slit, the blur induced by averaging the densities through this slit roughly corresponds to a cut-off period equal to the square size. Digitizing requires two samples per cut-off period, hence two samples per square size in both directions are required to extract all that is contained in the averaged image.
A hypothetical scanner with adjacent square pixels can only digitize with one sample per pixel size.
In the Sinar system, the whole pixel grid can be shifted by 1/2 pixel size in both directions, i.e. (0, 0), (0, +1/2), (+1/2, 0) and (+1/2, +1/2) in pixel units. Four passes combined together provide the two samples per pixel size in both directions and allow one to extract the whole theoretical resolution of an analogue image blurred by averaging through a square slit.

In principle, flatbed scanners can scan with double (or even higher) sampling rate in the direction of the mechanical translation, hence improving resolution by the theoretical factor of 2 in that direction only.

One of the readers of the French MF+LF forum galerie-photo.info has tested a brand-new Epson 850 flatbed with USAF 1951 targets (one from Silverfast on silver halide film, plus another from Edmund Optics, chromium on glass) and found the effective resolution to be ... worse in the direction of the mechanical scan, exactly the opposite of what should be expected. I have no explanation for this, except that the mechanical stage in the Epson flatbed is probably not precise enough to perform the half-pixel sampling procedure accurately.
The results are explained here (text is in French but the scans of the target are visible images with no language barrier ;) )
http://www.galerie-photo.info/forumgp/read.php?3,52546,57097#msg-57097
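
For what it's worth, the combination step itself is simple to sketch in Python: four scans taken at grid offsets (0, 0), (0, +1/2), (+1/2, 0) and (+1/2, +1/2) interleave into one array sampled at twice the rate in both directions (equal-sized grayscale numpy arrays assumed; which offset lands on which output phase depends on the actual shift directions).

import numpy as np

def interleave_quarter_shifts(s00, s01, s10, s11):
    # s00: unshifted; s01: +1/2 pixel in x; s10: +1/2 pixel in y;
    # s11: +1/2 pixel in both. All shapes must match.
    h, w = s00.shape
    out = np.empty((2 * h, 2 * w), dtype=s00.dtype)
    out[0::2, 0::2] = s00
    out[0::2, 1::2] = s01
    out[1::2, 0::2] = s10
    out[1::2, 1::2] = s11
    return out

Each output sample is still an average over the full pixel aperture, so this doubles the sampling rate rather than shrinking the aperture, and the gain only materializes if the mechanical shifts really are half a pixel.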

Peter De Smidt
2-Mar-2015, 11:24
This type of thing might be possible with a high end scanner, drum, flatbed, or whatever, but I'd be very surprised if this would work with a consumer flatbed.

paulr
2-Mar-2015, 11:39
I've wondered about ways to use this principle with digital cameras. With pixel pitches now below 5 microns, this would be hard to do with precision. But I've wondered if random vibration and other motions by themselves could cause shifts of a few microns, which might somehow be exploited.

toyotadesigner
2-Mar-2015, 11:45
With the Nikon LS 9000 and VueScan you can select 'Oversampling' up to 16x. At 2x the scan time already doubles, I don't want to know how long a scan with 16x oversampling would need. I've tried a 4x oversampling. Took me almost an hour for a 6x9 slide. The result: A waste of time. A better and rock solid tripod will deliver better results IMHO.

Kirk Gittings
2-Mar-2015, 11:51
Yes I agree. I edited my post to make that more clear.



The main one is PhotoAcute ( http://www.photoacute.com/ ), which is available for Windows and Mac. It looks very fully featured, but is $150 for the full version. There is a free trial version as well.

Also found that Deep Sky Stacker can do it, using the NASA Drizzle algorithm (yes, that's what it's called): http://deepskystacker.free.fr/english/index.html Deep Sky Stacker is free. Looks complicated though. It is Windows only.

Also found Chasys Draw IES can do it, as described here: http://www.jpchacha.com/chasysdraw/help.php?file=artist_process_stack_sres.htm It is also free and can be downloaded here: http://www.jpchacha.com/chasysdraw/index.php It is Windows only. It looks a little easier to use, and more suited to the intended purpose, than Deep Sky Stacker.

Yes it was PhotoAcute. I saw some images a friend did with this software and they definitely showed significant improvement.

Peter De Smidt
2-Mar-2015, 13:03
It is interesting technology. With a DSLR film scanner, it would require a lot of extra frames, and there would be no specific camera/lens profiles. If you're willing to do that, you might as well scan at a higher magnification. For instance, I normally scan at 1x. Recently, though, I tested 5x with a Nikon Measuring Microscope objective, a very high quality optic. Yes, it did give slightly better results than an Apo Rodagon D at 1x, but in my view the added complexity wasn't worth it, at least for normal scanning.

Nathan Potter
2-Mar-2015, 13:27
Using a DSLR film scanner on a rock solid mount would definitely allow precision alignment of film to camera in 1 to 2 µm shifts as Emmanuel suggests above. I use a micrometer X/Y stage calibrated in 2.5 µm increments which has a glass center area for light source projection from below. As long as the camera to film alignment is close to the 1 µm range and focus remains exact, the pixel shift technique should work. Given that one ends up with, say, four images, they must then be recombined with micron-ish accuracy to achieve a useable increase in resolution. All fairly formidable - I gave up on the task even though I was using an optical bench on a vibration isolation table.

Nate Potter, Austin TX.

fishbulb
2-Mar-2015, 13:57
Nathan & Emmanuel,

It seems like, in order to achieve real resolution increases, you would want to move the camera 0.5 pixels in between photos (like how the Sinar and Olympus systems move the sensor). Or in the case of scanning, moving the negative by exactly 0.5 pixels. If you moved it by exactly 1.0 pixels, then you would be capturing almost exactly the same data, and only reducing noise when you combine the files, right?

In practice it is unlikely that you would move the camera (or negative) by exactly a multiple of 0.5 or 1.0 pixels. Rather, it would be different every time. 0.33 pixels, 1.79 pixels, who knows. So there would be an element of luck to it. Perhaps there is a way to determine which images were moved the most closely to 0.5 pixels, and use only those?

I have a bunch of negatives in my scanning queue; I'm going to pick a few to do four 90 degree rotation scans and see what the results are. The rotation may be enough to capture real additional data, regardless of how many pixels are actually moved.
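
One possible way to check that, sketched with scikit-image's sub-pixel phase correlation (the file names are hypothetical): measure each scan's shift against a reference, and keep the scans whose fractional offset lands nearest the ideal half pixel.

import numpy as np
from skimage.io import imread
from skimage.registration import phase_cross_correlation

ref = imread("scan_0.tif", as_gray=True)
for name in ["scan_1.tif", "scan_2.tif", "scan_3.tif"]:
    img = imread(name, as_gray=True)
    # upsample_factor=100 resolves the shift to 1/100 of a pixel
    shift, error, _ = phase_cross_correlation(ref, img, upsample_factor=100)
    frac = np.abs(shift - np.round(shift))   # distance from a whole-pixel shift
    print(name, "shift:", shift, "fractional part:", frac)

# Scans with fractional offsets near 0.5 carry the most new information;
# those near 0.0 mostly duplicate the reference and only reduce noise.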

Emmanuel BIGLER
2-Mar-2015, 14:26
From toyotadesigner
With the Nikon LS 9000 and VueScan you can select 'Oversampling' up to 16x.

Thanks for the info!
I'm not familiar with the Nikon LS 9000 scanner, but I doubt that this is possible in both X and Y directions. My guess would be: oversampling, yes, but only in the direction of the mechanical translation. And doing 16x oversampling in that direction does not make sense to me; as I explained, the maximum reasonable sampling rate of an image averaged through a square slit is 2x with respect to the pixel pitch.


From fishbulb:
If you moved it by exactly 1.0 pixels, then you would be capturing almost exactly the same data, and only reducing noise when you combine the files, right?

Yes, exactly. And since resolution and noise are highly intertwined when the human eye assesses what makes a good image, multiple passes certainly improve overall scanned image quality, but not in terms of resolution, if the tiny sub-pixel displacements are not implemented with the utmost precision.

The question of "jitter" in the actual position of the sampling device is important. Clearly this is a real issue in amateur-grade flatbed scanners, at least as far as can be judged by scanning a test target on one.
I have no idea how random deviations from 1/2 pixel, e.g. anywhere between 1/3 and 2/3 pixel, will influence the final image reconstruction.

The brilliant idea of the day: testing the resolution of a drum scanner with a chromium-on-glass test target ;)

Greg Miller
2-Mar-2015, 14:31
I have a bunch of negatives in my scanning queue; I'm going to pick a few to do four 90 degree rotation scans and see what the results are. The rotation may be enough to capture real additional data, regardless of how many pixels are actually moved.

Unless you can rotate your negative exactly (and I mean exactly) 90 degrees each time, you will have to rotate by something other than 90 degrees in Photoshop to get all the pixels to line up exactly. Rotating the images will introduce interpolation, which will nullify any benefits that you hope to achieve.

Darin Boville
2-Mar-2015, 15:05
Finally, you can do it in Photoshop with this tutorial, only published a few days ago: http://petapixel.com/2015/02/21/a-practical-guide-to-creating-superresolution-photos-with-photoshop/

Outstanding. I'll give this a shot and will report back in a few days.

Thanks,

--Darin

Tin Can
2-Mar-2015, 15:35
There goes a week of experimentation/exploration.

CC will be good to have with some Ram.

:)

BetterSense
2-Mar-2015, 15:51
In theory it works and I am surprised it is not exploited more by cameras with VR features that can dither the image/sensor relationship.

It does not have to be controlled motion or registered to the pixel-level precision. It can be random motion (vibration, etc). For real resolution to be harvested, all that is required is that the motion (noise) is of at least one pixel in extent, and falls within a few other fairly generous boundary conditions as to bandwidth etc.

I do this type of thing a lot in other applications. Resolution and bandwidth (or, in photo terms, total exposure time) can be traded off according to well-established relationships. Roughly: averaging multiple samples is a waste, because more resolution is not obtained by averaging when in fact it is available, and simply adding the samples together (e.g. taking 4 samples and presuming 4 times the resolution) is over-optimistic. In theory you want to take 4^n samples, add them, then divide the result by 2^n, e.g. take 4 samples for up to 1 bit of extra resolution, or 256 samples for 4 bits more, etc. This is the theoretical limit of how much extra resolution is obtainable for each unit of bandwidth expended. Of course it is possible (noise less than one pixel/LSB, noise of the wrong frequency, etc.) to expend bandwidth and get as little as nothing for it. If there is some way to measure the result, e.g. a resolution chart or comparing an image with the real world, this can be done until the cows come home and the final result compared to the ideal. In real-world applications it is surprising how close to the ideal you can come, because usually systems have 1 LSB of noise and most natural sources of noise are broadband in nature.

Think about it...with enough time, enough samples, and by moving the camera enough, you can achieve arbitrary resolution with only one pixel, with dire costs in bandwidth, of course.
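
A toy one-dimensional illustration of that trade, in Python: with roughly one LSB of random noise acting as dither, averaging coarsely quantized samples recovers a value finer than the quantizer step, while without the noise no amount of averaging helps.

import numpy as np

rng = np.random.default_rng(1)
true_value = 0.3                 # sits between quantization levels 0 and 1
n = 4096

no_dither = np.round(np.full(n, true_value))                  # always 0
dithered = np.round(true_value + rng.uniform(-0.5, 0.5, n))   # 0s and 1s

print(no_dither.mean())   # 0.0  -- identical samples, nothing gained
print(dithered.mean())    # ~0.3 -- sub-LSB value recovered by averaging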

fishbulb
2-Mar-2015, 16:09
Unless you can rotate your negative exactly (and I mean exactly) 90 degrees each time, you will have to rotate by something other than 90 degrees in Photoshop to get all the pixels to line up exactly. Rotating the images will introduce interpolation, which will nullify any benefits that you hope to achieve.

Ah dang I didn't think of that. I'm doing this on a wet mount drum scanner, so exact 90 degree turns won't be remotely possible. A flatbed would be far more likely to be accurate.

In any case, there will likely be some small rotation adjustments when the layers are aligned in Photoshop, but minimizing the rotation needed is definitely ideal.


In theory it works and I am surprised it is not exploited more by cameras with VR features that can dither the image/sensor relationship.

Maybe Sinar had a patent on the technology? Olympus just started with the OMD EM5 II (http://petapixel.com/2015/02/05/olympus-om-d-e-m5-ii-16mp-mft-camera-can-shoot-40mp-sensor-shifting/) very recently. 16MP sensor, eight shots, creates a 40MP jpg or 64MP raw file.

In their next iteration Olympus thinks (http://petapixel.com/2015/02/14/olympus-make-40mp-sensor-shift-photos-possible-handheld-shooting/) they can make the camera capture all eight images in under 1/60th of a second, making handheld high-res capture (sort of) possible. In bright light anyway. Eight shots in 1/60th would be 1/500th apiece, assuming no lag between shots.

It would be interesting to see this technology combined with an electronic shutter like in some of the new Fuji cameras, which are capable of up to 1/32000 (yes, 1/32000). But electronic shutters have their own limitations (http://www.fujixseries.com/discussion/7686/x100t-and-132000-s-shutter-versus-nd-filter.../p1) so it still wouldn't be ideal for handheld photography.

Lenny Eiger
3-Mar-2015, 19:37
Can multiple scans increase resolution?

No.

Darin Boville
6-Mar-2015, 18:34
Testing in progress. But another question.

Will techniques such as this, in use in the new Olympus camera, "solve" the problem of legacy glass not being up to the standards of newer sensors, in terms of resolution?

--Darin

Peter De Smidt
6-Mar-2015, 19:17
Why would it? These techniques effectively act as a higher resolution sensor. If the system is already lens limited, then there shouldn't be an improvement. I'd be happy to be wrong. Older lenses, especially primes, can be outstanding performers, though.

Darin Boville
7-Mar-2015, 01:08
Why would it? These techniques effectively act as a higher resolution sensor. If the system is already lens limited, then there shouldn't be an improvement. I'd be happy to be wrong. Older lenses, especially primes, can be outstanding performers, though.

My thinking is--based on nothing!--that you are combining two sharp images, with the final image being the same size as the uncombined version. And wouldn't pixels vs. film matter in this case?

Darin

mdarnton
7-Mar-2015, 06:39
So Bettersense, you are saying that the new Olympus E-M5 II is a scam?

wombat2go
7-Mar-2015, 08:42
Here are some numbers I am putting together for scanning 6x7 C41 with a PrimeFilm120 Pro
Scanner:
optical res = 3200 dpi >> Nyquist Pitch = 16 micrometre or 63 cycles/mm

Film:
Dye clouds are stochastic in range 1.5 to 10 um
The data sheet for Ektar 100 gives an MTF of about 50 cycles/mm

Lens:
The MTF of a typical older manually focussed 6x7 camera/lens is in the range of 30 cycles/mm (33 micrometre)

I think (just opinion) that one issue is aliasing with the dye clouds, because the scanner Nyquist is in amongst the dye cloud range. When I scan 35mm C41 ISO 800 I sometimes get unpleasant "graininess", but I think that is not directly visible dye clouds; it is artifacts of aliasing.
I do not see this effect so far on properly exposed Fuji or Kodak of ISO 400, 160, or 100

I think moving the image and taking multiple scans will in some cases be effective in reducing scanner artifacts (film curve etc.), but of course it can't improve the MTF of the film or the camera system.

As to moving the image by a pixel pitch: firstly, that is difficult or impossible at the amateur level with a store-bought scanner. Secondly, image alignment software works on the image data, so it should not be necessary provided the scanner Nyquist is sufficiently above the other components.
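
For reference, the arithmetic behind the scanner numbers above, as a quick Python check:

dpi = 3200.0
samples_per_mm = dpi / 25.4                          # ~126 samples/mm
nyquist_cycles_per_mm = samples_per_mm / 2.0         # ~63 cycles/mm
nyquist_period_um = 1000.0 / nyquist_cycles_per_mm   # ~15.9 micrometres
print(samples_per_mm, nyquist_cycles_per_mm, nyquist_period_um)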

Greg Miller
7-Mar-2015, 09:14
My thinking is--based on nothing!--that you are combining two sharp images, with the final image being the same size as the uncombined version. And wouldn't pixels vs. film matter in this case?

Darin

I think there are flaws in the technique that was provided in the link. Here are the Photoshop steps that they describe (along with my comments):


Import all photos as stack of layers
Resize image to 4x resolution (200% width/height) (GM: introduces interpolation)
Auto-align layers (GM: introduces more interpolation; and auto-align uses warp so the interpolation will vary for any given area)
Average layers (GM: averaging interpolated pixels)

There's a lot of interpolation going on, so any apparent increase in resolution may look OK, but it is basically/potentially an improvement in up-res interpolation (you would need to compare it to a product like Genuine Fractals) rather than real recovered detail. Looking at their samples, I see an increase in contrast and detail, but the detail is mostly exaggerated digital sensor artifacts. I don't see that as a positive, and I have to wonder how real any improvement is at actual real-world viewing size.
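
For comparison, here is a rough Python translation of those steps (a sketch only: scikit-image's phase correlation handles pure translation, whereas Auto-Align Layers can also warp, and the file names are hypothetical). The interpolation Greg points to happens in steps 2 and 3.

import numpy as np
from skimage.io import imread
from skimage.transform import rescale
from skimage.registration import phase_cross_correlation
from scipy.ndimage import shift as subpixel_shift

# Step 1: load the burst as a stack of grayscale frames
frames = [imread(f"frame_{i}.tif", as_gray=True) for i in range(4)]

# Step 2: upsample to 200% width/height (interpolation happens here)
up = [rescale(f, 2.0, order=3) for f in frames]

# Step 3: align everything to the first frame (more interpolation here)
ref = up[0]
aligned = [ref]
for f in up[1:]:
    sh, _, _ = phase_cross_correlation(ref, f, upsample_factor=20)
    aligned.append(subpixel_shift(f, sh, order=3))

# Step 4: average the aligned layers
result = np.mean(aligned, axis=0)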

BetterSense
7-Mar-2015, 11:25
So Bettersense, you are saying that the new Olympus E-M5 II is a scam?

I have no idea. I know nothing about it or about digital photography period.

paulr
7-Mar-2015, 11:49
I played with PhotoAcute for a few minutes. In my one attempt, using images made on a tripod with various long exposures (intended for exposure blending), it gave more resolution, but also a more processed look ... like too much noise filtering. It would take a lot more experimenting to come to a real conclusion, but so far it seems like more work than it's worth.

I'm glad someone's doing something with the idea though. Maybe we'll see a bigger variety of products like this someday.

mdarnton
7-Mar-2015, 11:55
I have no idea. I know nothing about it or about digital photography period.

Well, you certainly had a lot to say about it until I asked this question! :-)

paulr
7-Mar-2015, 12:01
Here are some numbers I am putting together for scanning 6x7 C41 with a PrimeFilm120 Pro
Scanner:
optical res = 3200 dpi >> Nyquist Pitch = 16 micrometre or 63 cycles/mm

Film:
Dye clouds are stochastic in range 1.5 to 10 um
The data sheet for Ektar 100 gives an MTF of about 50 cycles/mm

Lens:
The MTF of a typical older manually focussed 6x7 camera/lens is in the range of 30 cycles/mm (33 micrometre)


Not sure what any of these numbers mean. Do you mean MTF50 at those resolutions? Or some other percentage?

It's very difficult to come up with ideal scanning resolutions based on numbers and theory, because the noise performance of the film ultimately determines the highest useful sampling frequency. There's a limit to how much MTF you can recover through sharpening when the MTF drops close to the noise floor.

Most of the high resolution film scans I've looked at are oversampled by at least a factor of two or three. This is useful for reducing noise that's introduced in the scanning process (generally not much), but it does nothing to recover more actual image information. The result is that a lot of these 300 megapixel scans are functionally about as good as 30 or 40 megapixel scans.

BetterSense
7-Mar-2015, 12:33
Well, you certainly had a lot to say about it until I asked this question! :-)

I only know theory and practice of sampling data; I do not know about commercial/consumer tools currently available or how they work or how well they work in a given application.

One of the reasons I do not do more digital imaging is that I feel digital imaging is really advancing through the creation of new tools, rather than through anything done by those wielding the tools. This is similar to the way the invention of new instruments is what really drives electronic music; the progress of imaging technology, more so than the people holding the cameras, is what drives digital imaging. The electronic musicians who are more involved in the creation of new instruments and techniques are rightly seen as the pioneers and creative core, while there are many others with ProTools on their laptops who are happy to take what's available and use it to essentially imitate that which they don't have the skill to do for real. Photography, on the other hand, does not even seem to have a group analogous to the first, except perhaps the developers of ImageMagick and CHDK. Most real tool-creation happens behind closed doors at Adobe and Nikon et al., and is not shared with the community, and this in turn is a result of the industrial-age thinking that regards programmers as engineers rather than as artists.

polyglot
7-Mar-2015, 14:38
Yes you can, using superresolution (http://en.wikipedia.org/wiki/Superresolution). It requires that you deliberately and physically jitter the position of the film in the scanner between exposures and then combine them digitally. It specifically requires that you do not align the film pixel-perfect for every exposure, because you're trying to extract sub-pixel information; if the film were perfectly aligned and there were no noise, you'd get no additional information in each scan.

You won't get more information than what is on the film though, so this is only valuable when scanning super-fine (Tech Pan etc) films on a low-res scanner. I believe PhotoAcute uses superresolution.

You can also use superresolution by taking multiple shots of the scene (multiple sheets of film). Easier to just use finer film though ;)

VueScan multiple-scan does not do this but is a means of obtaining additional dynamic range and reduced scan noise.

Darin Boville
7-Mar-2015, 23:55
Well, I'm getting mixed results. I'm using a cheap office scanner and scanning book pages for the test. The processed scan looked a little soft but the unprocessed scan exhibited significant artifacts at 100%. (Looked like JPEG artifacts, though I was saving to TIFF.) Those artifacts were gone in the processed version. Not clear if I was seeing any more resolution but the artifact reduction was significant. (I've never seen these artifacts on my Epson 700, FYI, so this may not be relevant to most people here.)

Will have to play with this more...

--Darin

Lenny Eiger
8-Mar-2015, 12:28
A scanner does not take a tiny picture of the image. These samples are not images. They are numbers, generally an RGB value.

If the sample size matches the size of a grain clump (no scanner can see actual grains) then it can convert the clump to a representative number, and write that to a file.

If you do this again and nothing moves, or it is only ever so slightly out of alignment, then you should get the same number. If you don't, then how does one know if the first number was correct, or the second one? Should they be averaged?

I could see a system that maps the whole image, clump by clump, and keeps this in a database, then samples it over and over, and then does an average. That might work, but none of the scanning software is that sophisticated. I could see a system that would deliberately map each grain clump, sample the center of it, then each of the edges, etc., and create multiple samples per clump, increasing the pixel count by 4 or 6. This would increase resolution, maybe.

Drum scanners do best when the clump size and the sample size match. If they are out of alignment, they sample the same clump twice, it's off-kilter, and two values are written that are very close, and you get grain anti-aliasing, which looks like pixel partial overlap, or grainy, whatever you want to call it. One also has to remember that the grain clumps are not all the same size; maybe 70 per cent, or with luck and development with good developers (not D-76 or other solvent-type developers, or highly over-active ones like Rodinal), 80 per cent of the grains are the same size. That means 20% will have improper samples no matter what you do.

No matter how many times you scan, all you will be able to do is average the values. Averaging usually doesn't increase resolution, quite the reverse. Including averaging with stacking.

I don't think this has any way of succeeding. Scan samples aren't pixels. They are numbers.

Lenny

fishbulb
8-Mar-2015, 12:36
Hi Lenny, recognizing that you know a lot about scanning, what are your thoughts on why it appears to work (increasing resolution with multiple scans), at least in these two examples?

http://photo.net/film-and-processing-forum/00b6OH (upper left vs. upper right in this image (http://farm9.staticflickr.com/8346/8248364240_756f764c02_h.jpg))
http://www.rangefinderforum.com/forums/showthread.php?t=130731 (this image (http://i45.tinypic.com/10fc8zk.jpg) vs. this image (http://i45.tinypic.com/otg7sz.jpg))

paulr
8-Mar-2015, 14:48
Lenny, the reason it can work is that optics degrade an image by spreading (blurring) points and lines. A binary pattern degrades into a sinusoidal function that transitions from pure white to pure black over a distance that could be many pixels, or less than one.

In either case, moving the target by sub-pixel distances will change the recorded value at that pixel location. The direction of motion can be extrapolated, and this information can be used to reconstruct that spread function on a sub-pixel scale. It's a kind of virtual oversampling.


(from Wikipedia) Both features extend over 3 pixels but in different amounts, enabling them to be localized with precision superior to pixel dimension.
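
A quick numeric sketch of that idea in Python (purely illustrative, using a Gaussian-blurred step edge): the same row of pixels records measurably different values when the edge sits at a sub-pixel offset, and that difference is the information superresolution exploits.

import numpy as np
from scipy.special import erf

def sampled_edge(offset, sigma=1.0, n=9):
    # Pixel values of a step edge at sub-pixel position `offset`,
    # blurred by a Gaussian PSF of width `sigma` (a cumulative Gaussian).
    x = np.arange(n) - n // 2
    return 0.5 * (1.0 + erf((x - offset) / (sigma * np.sqrt(2.0))))

print(sampled_edge(0.0).round(3))
print(sampled_edge(0.3).round(3))   # same pixels, shifted edge: new values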

Lenny Eiger
8-Mar-2015, 15:04
Lenny, the reason it can work is that optics degrade an image by spreading (blurring) points and lines. A binary pattern degrades into a sinusoidal function that transitions from pure white to pure black over a distance that could be many pixels, or less than one.

In either case, moving the target by sub-pixel distances will change the recorded value at that pixel location. The direction of motion can be extrapolated, and this information can be used to reconstruct that spread function on a sub-pixel scale. It's a kind of virtual oversampling.


(from Wikipedia) Both features extend over 3 pixels but in different amounts, enabling them to be localized with precision superior to pixel dimension.

I have a cold, and am having trouble putting two related words together, so excuse me if I don't follow exactly, or can't communicate what I think I see.

Looking at the graph, I can see that if you average the squares to the left and the right of A you might move it to be B. However, a scanner cannot do all of the pixel of B with the edges. You are correct that the area inside the square that B inhabits will record darker, but you won't get the edge effect. I suppose it's another way of sharpening... and Photoshop can do some pretty amazing things. So I suppose some of it is possible... but it's hard to believe you would get much, and you might get a lot of false positives...

The main point I was trying to make is that it's a number, not a pixel, that a scanner generates. It's a point on the drum (or flatbed glass), and not a part of an image. I suppose if you moved a grain clump to the center, as in B, you might get a better reading. But then you are moving it... Ayyyy, my head hurts...

Lenny

BetterSense
8-Mar-2015, 18:54
Lenny,

It's ok if you don't understand it. In some cases it is not intuitive. But your assertions that it cannot work are tiring, even aside from your meaningless comments about numbers not being pixels.

The mathematics of sampling signals are quite mature and oversampling, noise-shaping, filtering, and reconstruction algorithms that do what you say is impossible are in operation in millions of consumer devices and telecommunications technologies all over the world. Did you know that almost all CD players reproduce the 16 bit-depth PCM signal using a 1-bit DAC?

Taking many samples and averaging them can filter out noise, which can have a positive impact on reproduction, but nobody is arguing that it increases resolution. There are many other techniques to increase resolution beyond the "native" resolution of any of the components of the system though. In theory, one should be able to achieve almost arbitrarily high resolution and bit depth from e.g. a consumer flatbed scanner. The limiting factor is more likely to be noise and even that can be dealt with to some extent. Indeed, what's remarkable about high-quality scanners is not that they provide high-quality results, but that they do so quickly. Given that images on film do not move, and thus the only bandwidth constraint in scanning is the patience of the operator, these techniques should be exploited more.

Kirk Gittings
8-Mar-2015, 19:28
Taking many samples and averaging them can filter out noise, which can have a positive impact on reproduction, but nobody is arguing that it increases resolution. There are many other techniques to increase resolution beyond the "native" resolution of any of the components of the system though. In theory, one should be able to achieve almost arbitrarily high resolution and bit depth from e.g. a consumer flatbed scanner. The limiting factor is more likely to be noise and even that can be dealt with to some extent. Indeed, what's remarkable about high-quality scanners is not that they provide high-quality results, but that they do so quickly. Given that images on film do not move, and thus the only bandwidth constraint in scanning is the patience of the operator, these techniques should be exploited more.

Curious what your opinion is of what they are doing with PhotoAcute: http://www.photoacute.com/

BetterSense
8-Mar-2015, 19:45
Curious what your opinion is of what they are doing with PhotoAcute: http://www.photoacute.com/

It is impossible to say for sure since I imagine they are not publishing their algorithms, but multiple images can be combined in an oversample-y way with simple pixel math. How well it works depends on a number of boundary conditions. In a commercial product they probably expend more effort on checking those conditions and alignment etc.

Lenny Eiger
9-Mar-2015, 12:19
Lenny,
It's ok if you don't understand it. In some cases it is not intuitive. But your assertions that it cannot work are tiring, even aside from your meaningless comments about numbers not being pixels.

Why do the conversations here always have to turn into some sort of personal vendetta? Why do you have a need to tell someone their comments are meaningless? Get a life...

As to the issue at hand, I'm still listening. If this stuff was easy, I'm sure Adobe (and every digital camera manufacturer) would have done this years ago. We all would be happy for everything to have better resolution. I can see some part of it... just not all of it. I'm not stupid, and if the explanation was clear enough I could understand fully...

Lenny

djdister
9-Mar-2015, 12:25
So what everyone is talking about, I presume, is getting better resolution than the nominal max resolution of a given film scanner, right? No one is saying you can get better resolution in a digital file than what the film is capable of resolving, are they?

Kirk Gittings
9-Mar-2015, 12:30
Why do the conversations here always have to turn into some sort of personal vendetta? Why do you have a need to tell someone their comments are meaningless? Get a life...

They don't always, but yes, they do too often for sure. Part of it, I am convinced, has to do with anonymous posters and lack of consequences, but that is not always true either, as evidenced by the many anonymous people here who are civil. Even when not anonymous there is a tendency to be more acidic than if one were physically standing in front of the person. I have been guilty of this myself and am trying to think before I write, i.e. "how would I phrase this if I were face to face with this person?"... always a sobering thought.

koraks
9-Mar-2015, 12:46
So what everyone is talking about, I presume, is getting better resolution than the nominal max resolution of a given film scanner, right?
Yes, although a bit more accurately: higher resolution than the maximum real-life resolution that the scanner yields, which is usually significantly less than its nominal resolution, especially with flatbed and low-end film scanners. E.g. the Epson 4990/V7x0 boasts a nominal resolution of 3200dpi, while in real life, it resolves around 2000dpi max.


No one is saying you can get better resolution in a digital file than what the film is capable of resolving, are they?
Not that I know of, no. You can't make up information that isn't there to begin with.

paulr
9-Mar-2015, 17:57
No one is saying you can get better resolution in a digital file than what the film is capable of resolving, are they?

It also applies to a digital camera sensor. Multiple exposures, offset by sub-pixel amounts, can yield an effective resolution higher than what's possible with a single exposure.

This is what I tested. I did get an improvement in resolution, but the results of my one attempt didn't happen to look good.

Randy
9-Mar-2015, 19:16
I have a friend who shoots planetary photos with a little 1 to 2 MP webcam through his telescope. The scope is tracking, say, Saturn or Jupiter, and the program on his laptop will take numerous photos every minute, up to around 400 photos. The program then picks out the best images and then stacks them. He gets a "better" image by this method than if he just shot one picture with the webcam. I am guessing this is kind of what we are discussing here...?
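
This select-the-best-then-stack approach is known as lucky imaging, and it is close to the frame-selection side of what's being discussed. A minimal Python sketch of the idea (sharpness ranked by Laplacian variance; the real stacking tools also align each frame before averaging, which is skipped here):

import numpy as np
from scipy import ndimage

def lucky_stack(frames, keep=0.1):
    # Score each frame's sharpness by the variance of its Laplacian,
    # then average the sharpest `keep` fraction of the frames.
    scores = [ndimage.laplace(f.astype(np.float64)).var() for f in frames]
    order = np.argsort(scores)[::-1]              # sharpest first
    n_keep = max(1, int(len(frames) * keep))
    best = [frames[i].astype(np.float64) for i in order[:n_keep]]
    return np.mean(best, axis=0)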

Two of his pictures

https://dl.dropboxusercontent.com/u/52893762/jupitor.jpg

https://dl.dropboxusercontent.com/u/52893762/saturn.jpg