
View Full Version : DSLR Scanner: Stitching and Blending of Images



Peter De Smidt
24-Feb-2012, 12:46
DIYS (Do-It-Yourself Scanner, pronounced like "dice"): Stitching and Blending of Images Thread

Frank Petronio started this project by suggesting that someone come up with an affordable, contemporary drum scanner, as there is currently a huge gap in price and quality between consumer and professional scanners. Domaz suggested using APS-C sensors to take samples of the film, similar to what Gigapan does with large stitched mosaic images. This led to talk of making a copy-stand scanning system using a DSLR, a light source, and a movable negative stage. Both horizontal and vertical prototypes have been made, or are in the process of being made.

The original thread (http://www.largeformatphotography.info/forum/showthread.php?84769-Making-a-scanner-with-a-DSLR) has become very long and unwieldy. As a result, I’m creating some new specialized threads for future project development.

The new build threads are:
Camera Supports and Positioning (http://www.largeformatphotography.info/forum/newthread.php?do=newthread&f=7),
Lenses (http://www.largeformatphotography.info/forum/newthread.php?do=newthread&f=7),
Negative Stages (http://www.largeformatphotography.info/forum/newthread.php?do=newthread&f=7),
Light Sources (http://www.largeformatphotography.info/forum/showthread.php?87536-DSLR-Scanner-Light-Sources),
Stitching and Blending of Images (http://www.largeformatphotography.info/forum/newthread.php?do=newthread&f=7),
Cameras and Camera Control Software (http://www.largeformatphotography.info/forum/newthread.php?do=newthread&f=7),
Workflow (http://www.largeformatphotography.info/forum/newthread.php?do=newthread&f=7).

These threads are only for positive contributions to development in the area in question. The project may not succeed, but we're going to find that out by trying it. But we are not unkind. As the original thread showed, some people have an overpowering urge to say negative things about the project, so I've created a thread just for that purpose. Please post your negative comments about the project here (http://www.largeformatphotography.info/forum/newthread.php?do=newthread&f=17).

I would like to thank everyone who makes, or has made, a positive contribution to this project!

I'll be summarizing the posts from the original thread about stitching and blending of images here soon.

Peter De Smidt
24-Feb-2012, 13:11
A nice article on stitching is: here (http://www.maa.org/mathhorizons/MH-Sept2011_MathPhotography.pdf).

buggz
24-Feb-2012, 13:13
Thanks for this.

Peter De Smidt
24-Feb-2012, 17:33
I've done a little stitching before, but I'm by no means an expert. Daniel Moore's help has been invaluable.

There are a number of ways to assemble all of the images of partial areas of the film.

1) Manual arrangement using photo-editing software. For example, in Photoshop one can put a piece in the Difference blending mode and move it with arrow-key nudges; when the overlap area goes black, it's perfectly registered with the layer below it. Once it's aligned, you can run Photoshop's Auto-Blend Layers feature to even out the tones. This works very well, and it introduces very little of the distortion that stitching software can add, but it's a PITA for a large number of samples. (If you're not using a large number of samples, this option has a lot of appeal.)

The following image was put together using method 1.

http://i955.photobucket.com/albums/ae37/peterdesmidt/Light_House_2nd_Manual.jpg
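Peter's Difference-mode trick can be sketched numerically: registration is simply the nudge that minimizes the absolute difference between the overlapping crops. A hypothetical numpy illustration (the function name and synthetic test data are invented for demonstration, not part of anyone's workflow):

```python
import numpy as np

def best_nudge(base, tile, max_shift=3):
    """Try every small (dy, dx) nudge and keep the one that minimizes
    the mean absolute difference -- the numeric analogue of watching
    Photoshop's Difference blend mode go black."""
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(tile, dy, axis=0), dx, axis=1)
            err = np.mean(np.abs(base.astype(float) - shifted.astype(float)))
            if err < best_err:
                best_err, best = err, (dy, dx)
    return best

# synthetic check: a patch deliberately offset by (-1, 2) should need
# a (1, -2) nudge to register again
rng = np.random.default_rng(0)
base = rng.random((64, 64))
tile = np.roll(np.roll(base, -1, axis=0), 2, axis=1)
print(best_nudge(base, tile))  # (1, -2)
```

The brute-force search stands in for the arrow keys; real tiles would use only the overlap region rather than the whole frame.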

2) Stitch the files together using:

a) Photoshop Photomerge.
b) Microsoft ICE (Image Composite Editor). http://research.microsoft.com/en-us/um/redmond/groups/ivm/ice/

The following image was stitched with Microsoft ICE's "structured panorama" feature:

http://i955.photobucket.com/albums/ae37/peterdesmidt/Micro_Ice_Lighthouse.jpg

I expect the problem resulted from one row of the samples overlapping the lower row much more than the other samples did.

c) PTGui. www.ptgui.com

Daniel did a nice stitch of the lighthouse with PTGui. See: http://www.largeformatphotography.info/forum/showthread.php?84769-Making-a-scanner-with-a-DSLR/page51, post #506.

d) Autopano Pro or Autopano Giga. www.kolor.com
e) Hugin. hugin.sourceforge.net
f) DoubleTake for Mac OS X. http://echoone.com/doubletake/
g) PTAssembler. http://www.tawbaware.com/ptasmblr.htm
h) Imagestack. http://code.google.com/p/imagestack/
i) Explorable Microscopy. http://www.explorablemicroscopy.org/ There's lots of interesting stuff here.
j) Panorama Tools. http://wiki.panotools.org/

Some comments on the options:
1) Most of us already have it. It does a very good job with blending.
2)
a) Photomerge doesn't do well with a complex stitch.
b) ICE is very fast and simple, but it's not really editable, so if you get a stitch with a missing or misplaced part, there's little you can do. It's free.
c) PTGui is a very feature-rich program. Like most stitching software, it assumes rotational camera motion, whereas we would prefer linear. Changing the lens characteristics as per Daniel Moore's suggestion helps. (He suggests listing the lens used as a 1000mm one.) It costs $110.
d) These are also very feature-rich. Giga doesn't allow an align-to-grid function without particular robotic heads.
e) It's free! But as far as I can tell, one has to enter control points manually, which is a huge PITA. It's also been a bit unstable on my Windows 7 64-bit machine, the only program with that problem.

I haven't investigated the other options yet. So far, the results suggest that stitching files covering a piece of film with detail throughout isn't hard, but doing so with film that has large areas without detail is much more challenging. We've been able to get some good results, but it takes quite a bit of work.
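For what it's worth, the linear-motion case that most panorama software only approximates has a well-known direct solution for pure translation: phase correlation. This is a sketch of the underlying idea in numpy only, not how any of the programs listed above actually work internally:

```python
import numpy as np

def phase_correlate(a, b):
    """Estimate the integer (dy, dx) shift such that rolling `a` by
    (dy, dx) gives `b`, via the Fourier shift theorem."""
    R = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)
    R /= np.abs(R) + 1e-12              # keep phase information only
    peak = np.fft.ifft2(R).real
    dy, dx = np.unravel_index(np.argmax(peak), peak.shape)
    h, w = a.shape                      # map wrapped indices to signed shifts
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)

rng = np.random.default_rng(1)
a = rng.random((32, 32))
b = np.roll(a, (5, -4), axis=(0, 1))    # simulate a translated tile
print(phase_correlate(a, b))            # (5, -4)
```

On real overlapping tiles you would run this on just the shared strip; it tolerates blur well, which fits Daniel's observation that even soft negatives register.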

Daniel Moore
24-Feb-2012, 19:37
I threw Peter's lighthouse images into the experienced and capable hands of the PTGui Support Google Group (http://groups.google.com/group/ptgui?hl=en), simultaneously requesting that fine image-nudging control be added to PTGui. Two experts there, Erik Krause and John Houghton, both offered that the "featureless" areas, as I took to referring to the sky areas, in fact contained enough markings to place control points. Their results bore this out handily. So simply because a sky area doesn't stitch automatically, one can always go in at 100% zoom and add points as needed to get things to align.

For the sake of demonstration, here's a result of John Houghton's stitch efforts done in PTGui.

[attachment 68912: John Houghton's PTGui stitch]

It's apparent in this image that Peter's light source is doing a very respectable job and that PTGui seems to be blending images nicely. Also worth noting, the original negative Peter provided was a long exposure in windy conditions - manually adding control points is therefore not dependent on critically sharp images, but rather critically sharp DSLR captures at the scanning stage. In theory, a perfect stitch could be made from the blurriest of images using only defects in the film emulsion, so dust and hair are your friends.

Peter De Smidt
24-Feb-2012, 19:56
Daniel, that does look very good. Were there any other tips besides manually adding some control points?

Daniel Moore
24-Feb-2012, 20:48
Here's one of Erik Krause's replies:

Manual fine tuning will never be as precise as control-point-driven stitching. The reason is that a correction on one side will make alignment on the other side worse.

Although http://www.ptgui.com/support.html#5_6 works, it's better to use viewpoint correction for this kind of stitching, since this is not a panorama but a mosaic shot from different viewpoints (same as shooting a long mural).

See the thread "Some nice big mosaics" for details: https://groups.google.com/d/topic/ptgui/klQ0W2cGJzA/discussion

It appears the viewpoint correction in the Pro version is advised. I'll download a trial, as I don't have that version, and see if it helps.

matthewu
24-Feb-2012, 22:43
Are the source files still available? I tried http://dl.dropbox.com/u/3595413/Light%20house%202nd%20try.rar but it no longer seems valid.

Thanks.

Peter De Smidt
25-Feb-2012, 02:46
Hm. That's funny. They seem to be gone. I didn't remove them. In any case, I re-uploaded them. Try: http://dl.dropbox.com/u/3595413/Light%20house%202nd%20try.rar

Peter De Smidt
25-Feb-2012, 22:21
I modified my negative guide so that I could take 25 samples of equal size on the 6x7cm negative, and I ran them through ICE, Giga, and PTGui. Please remember that I'm not a stitching expert.

http://i955.photobucket.com/albums/ae37/peterdesmidt/3rd_try_Lighthouse_Ice_structured_planar1.jpg

http://i955.photobucket.com/albums/ae37/peterdesmidt/3rd_try_autopano_giga.jpg

http://i955.photobucket.com/albums/ae37/peterdesmidt/3rd_Lighthouse_ptgui.jpg

Microsoft ICE was by far the fastest. This was a structured panorama with "planar motion 1" positioning, an option for stitches based on changes in camera position, as opposed to stitches based on rotating the camera. Obviously there were a few stitching flaws, but the blending appears to be the best of the three. It wouldn't be hard to take this file into Photoshop, load the files for the misaligned pieces, align them, and use Auto-Blend Layers. You could do all of that in less time than the other two methods took. (Although I saved a template in PTGui; it'll be interesting to see how that works on my next 6x7cm negative scan.)

It wasn't hard getting PTGui and Giga to give a pretty good stitch, but it took the manual addition of quite a few control points.

My conclusion is that although there are some issues, stitching will work, even with difficult negatives. It should be a breeze with shots that have overall detail, or with shots using many fewer samples.

Obviously, this area could use more work, but now it's a question of refinement and not feasibility.

I'm going to turn my attention to other matters, such as building a better negative carrier, and then I'll start testing lenses. (I've also started to look into the automation question.)

I'll be happy if I never see that negative again.

Nathan Potter
26-Feb-2012, 21:20
I know nothing about stitching frames together but am wondering about a couple of things.

Can one use some dedicated fiducials, perhaps stuck on the glass film mounting plate - tiny enough so they can be cloned out in Photoshop in the assembled image?

Could the use of such fiducials reduce the overlap required from frame to frame - say to less than 10% of frame length and width?

Nate Potter, Austin TX.

Peter De Smidt
26-Feb-2012, 22:24
I've read an account where someone did exactly that, i.e., used fiducials, and it was effective. 10% overlap should be OK in that situation. The sticking point might be how well the software can blend the tiles tonally with so little overlap. Just for reference, my last series had 24.6% overlap vertically and 20% horizontally.

Software that allows homographic positioning seems like a better way forward than using software that expects frames to be captured through rotation. Hopefully, PTgui Pro's multiple viewpoint feature is another way of saying 'homographic'.
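The relationship between field size, overlap fraction, and capture count is simple arithmetic. A back-of-the-envelope sketch; the negative and field dimensions below are made-up illustrative numbers, not anyone's actual rig:

```python
import math

def tiles_along(neg_mm, field_mm, overlap):
    """Capture positions needed along one axis: after the first tile,
    each additional tile advances by field * (1 - overlap)."""
    if field_mm >= neg_mm:
        return 1
    step = field_mm * (1.0 - overlap)
    return math.ceil((neg_mm - field_mm) / step) + 1

# hypothetical: 56x70 mm (6x7) negative, 15x22.5 mm field, 20% overlap
rows = tiles_along(56, 15, 0.20)
cols = tiles_along(70, 22.5, 0.20)
print(rows, cols, rows * cols)   # 5 4 20
```

Dropping the overlap toward 10% shrinks the grid accordingly, which is why the fiducial idea is attractive.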

Peter De Smidt
27-Feb-2012, 21:35
I just played around a bit in Photoshop with the lighthouse files. Photomerge, in "reposition only" mode, did a good job of merging the bottom two rows individually, and then it merged them together. It didn't work on the other areas, but I manually brought them in a row at a time, and using Difference mode it was quite easy to put the tiles in place. After that, I ran Auto-Blend Layers, and the file looked good. There's no need for these files to be warped to fit together; repositioning is fine.

rdenney
27-Feb-2012, 23:39
That's exactly the approach I was planning as a first effort. I hope it's practical in use, because I can't even afford a hundred bucks for new stitching software.

But I may bring in my computer imaging professor buddy at this point, and see if he's willing to assign a project to one of his classes. If we could write an appropriate set of user needs and requirements, there might be a GNU solution that is possible. More later, if there's anything positive to report.

Rick "who just paid bills--ugly" Denney

Daniel Moore
28-Feb-2012, 02:08
Alignment without correction of any kind hadn't struck me as a possible solution: a lens with no distortion? I'm not knocking it, far from it; this is very good news. Better than one could have hoped for.

Peter De Smidt
28-Feb-2012, 02:49
I'll have to test the idea on a number of other negatives. Another option is to use ICE and then manually add and edit only the problem areas. There shouldn't be very many.

Was there any benefit to using multiple viewpoints in PTgui Pro?

Peter De Smidt
28-Feb-2012, 03:53
Here's another option if you have Photoshop CS5: http://blogs.adobe.com/jnack/2010/05/optional_plug_ins_available_for_photoshop.html Download and install the Photomerge Interactive Layout plug-in. This brings back the "interactive" option in the Photomerge dialogue, which lets you drag images into place. You can choose reposition-only, and there's a snap-to feature, which works pretty well. I just did this, and I was able to get a very nice stitch. The downside is that it is slow to load with 25 images, as in go-get-a-sandwich slow. It might be quicker to do a row at a time and then use the automatic reposition to merge the rows. I'll give that a try.

Daniel Moore
28-Feb-2012, 07:08
Have yet to try viewpoints in the pro version..

rdenney
2-Mar-2012, 10:10
I have been in touch with a friend of mine who teaches computer science at a university, and who has assigned our problem to one of his students to see if there is a configuration of ImageJ that can work, and maybe even make the problem easier. To facilitate that effort, and to correct a large deficiency in the needs and requirements I posted here (http://www.largeformatphotography.info/forum/showthread.php?84769-Making-a-scanner-with-a-DSLR&p=847363&viewfull=1#post847363), I'll attempt some needs and requirements for the stitching portion of this effort in this post. This is a quick pass at a bit of systems engineering and may miss some critical point, but I think all such documentation helps keep us from getting distracted, particularly as we seek assistance from people who haven't been following along.

User Needs

The user will make photographs of the negative being scanned in a tiled arrangement. These photographs may be stored as 8-bit JPEG or as 8- or 16-bit TIFF, in RGB or monochrome. The photographs will provide the overlap needed to support accurate stitching. The position of each tile will be no more precise than +/- 2 mm, however. The planarity and parallelism between the negative and the DSLR's sensor plane are covered by other requirements, which, if fulfilled, should ensure lateral and geometric distortion of less than one pixel over the length or height of each tile.

The user will take the resulting tiled images and run them through a batch process to correct for systematic variation in the illumination provided by the light source. This batch process may use a calibration photo, made by the user with no negative in the stage, as a means of recording light-source variability. The tile images will be exposed so that the histogram falls entirely within the dynamic range of the camera, and all tiles will use identical exposures.
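The illumination-correction batch step described above is essentially flat-field division: normalize the blank-stage calibration frame to unit mean, then divide each tile by it. A minimal numpy sketch with synthetic data (the function name and falloff model are invented for illustration):

```python
import numpy as np

def flat_field(tile, calibration, eps=1e-6):
    """Divide out the light source's systematic falloff, as recorded by
    a calibration shot taken with no negative in the stage."""
    gain = calibration / (calibration.mean() + eps)
    return tile / (gain + eps)

# synthetic falloff: brightest in the center, dimmer toward the edges
y, x = np.mgrid[0:64, 0:64]
falloff = 1.0 - 0.3 * ((x - 32.0)**2 + (y - 32.0)**2) / (2 * 32.0**2)
tile = 0.5 * falloff                  # a flat gray scene as the camera records it
corrected = flat_field(tile, falloff)
print(float(corrected.std()) < 1e-3)  # the corrected field is flat again
```

Because every tile shares one exposure and one calibration frame, the same gain map is applied across the whole batch, so tiles match tonally before stitching even begins.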

After this batch process to correct for systematic variation in illumination, the tiles will be assembled by the user in the stitching software. The user will drag and drop image tiles into an assembly. The user prefers to create a single assembly for all the tiles, rather than creating intermediate sub-assemblies. After the arrangement of the tiles has been assembled, the stitching software will perform the stitch. If the stitching algorithm is unable to find the correct stitch line, the user will use manual location methods to precisely locate that tile with respect to its neighbors by moving the misaligned tile or group of tiles. The movements may be large--some multiple of the tile size--or as little as one pixel. The user will need to look at various parts of the stitch area of that tile at 100% actual pixels, and be able to quickly refer to different portions of the stitch line, when performing this manual positioning. The user would prefer some aid in highlighting when the correct stitch has been found, such as some display that shows the one image subtracted from the other or some other mechanism.

The user may also name the tiled files in such a way that the position of each tile can be determined from the file name. The user will then identify a range of files to the stitching-software interface to define the assembly arrangement. Using this approach, however, the user will still perform manual alignments in case an automatic stitch cannot be found.

Once the user is satisfied with the position of all the tiles, the user will direct the stitching software to complete the assembly.

The user will use a DSLR to scan up to 8x10 film. The user may use magnifications of 2:1, and may do so using as small as an APS-C sensor DSLR.

As a bonus, the user may incorporate the stitching process into a larger process that directs the movements of the negative stage, etc. Integration with these other processes will be done by others. The integrator may use some form of scripting control to direct the stitching software as an embedded process, though the user will still provide manual positioning as necessary and confirm the final assembly.

Requirements

-The stitching system shall accommodate a range of input files
.... -The stitching system shall accommodate tile files in 8-bit JPEG.
.... -The stitching system shall accommodate tile files in 8-bit TIFF.
.... -The stitching system shall accommodate tile files in 16-bit TIFF.
.... -The stitching system shall accommodate RGB tile files.
.... -The stitching system shall accommodate monochrome tile files.
-The stitching system shall be able to assemble at least 240 image tiles.
-The stitching system shall provide a GUI for graphically arranging the tiles.
-The stitching system shall provide a batch assembly method using file names.
-The stitching system shall position each tile within the assembly by evaluating overlapping image information.
-The stitching system shall complete the entire assembly regardless of the inability to locate one or more individual tiles.
.... -The stitching software shall position tiles that cannot be located algorithmically at approximate positions, according to the arrangement plan.
-The stitching system shall identify tiles that could not be assembled algorithmically.
-The stitching system shall keep each tile separated until the assembly has been confirmed by the user.
-The stitching system shall allow manual positioning of any tile or group of tiles.
.... -The manual positioning shall accommodate dragging the tile anywhere within the entire assembly.
.... -The manual positioning shall accommodate precise single-pixel movements.
.... -The stitching system shall provide a display of overlapping tiles during manual movements
........ -The display shall provide up to six locations for viewing overlapping tiles during positioning
........ -The display shall highlight the overlapped area in such a way as to clearly identify when an accurate overlap has been obtained
........ -The display shall be zoomable from the entire assembly down to 100% (one file pixel equals one screen pixel)
-The stitching system shall allow the user to confirm that all tiles are correctly positioned.
-The stitching system shall create a TIFF file of the final assembly
.... -The final TIFF shall be 48-bit RGB for RGB tiles
.... -The final TIFF shall be 16-bit monochrome for monochrome tiles

Bonus:
-The stitching system shall provide script control.
.... -The script control shall define the assembly of tiles
.... -The script control shall provide manual positioning
.... -The script control shall allow the user to confirm the positioning of all tiles before final assembly

(Edit: The filtering of a few spaces by the forum software really does make clean formatting of lists like this far more difficult.)

Rick "respectfully submitted" Denney

marfa boomboom tx
2-Mar-2012, 10:29
ImageJ: patchwork
Detailed explanation of the purpose of patchwork_ program

When we want to examine an object (e.g. cell) with a microscope, we
prefer to zoom in in certain areas of this object to get better
resolution, but we also want to have the whole picture. So we create
a mosaic by capturing parts of the object and getting images that
overlap (we need the overlap to reconstruct the object again).
Reconstructing the whole image from some smaller ones is the purpose
of this program.

from this: http://rsbweb.nih.gov/ij/plugins/patchwork.html

HtH with the homework:)

Peter De Smidt
2-Mar-2012, 11:03
Rick, that looks good.

I'm hoping to be able to try out the Linos lens today.

rdenney
2-Mar-2012, 11:11
When we want to examine an object (e.g. cell) with a microscope, we
prefer to zoom in in certain areas of this object to get better
resolution, but we also want to have the whole picture. So we create
a mosaic by capturing parts of the object and getting images that
overlap (we need the overlap to reconstruct the object again).
Reconstructing the whole image from some smaller ones is the purpose
of this program.

That is actually the principal area my friend has been working in, so he's already thickly involved in that topic.

Rick "who has seen GoogleMaps used to display microscope cross-sections of detached retinas and macular degeneration made by this particular expert" Denney

marfa boomboom tx
2-Mar-2012, 11:40
That is actually the principal area my friend has been working in, so he's already thickly involved in that topic.



And the link I provided is to software that anyone can download and use... to do just what it says. Also, since it is source code, mods can be made. The point, and my hope, was that anyone wishing to build is able to build onto something existing. This was a pointer to an existing instance: ready to use, to modify, to enhance. If your friend's students are building something, then hopefully they know about this already.

Nathan Potter
2-Mar-2012, 11:59
Rick, in your first paragraph you point out the requirement of 1-pixel distortion per tile for successful stitching. Using a 7000-pixel-wide full-frame sensor, that would be 0.014% distortion along an edge. This seems an impossible requirement, even if we consider that we try to match up two distorted edges with identical degrees of distortion at opposite sides of the frame. Most macro lenses, even the best, might keep distortion down to 0.5%, which would be 30+ pixels out of rectilinear along an edge. Using an auto-fit algorithm and assuming both opposite edges of the image have nearly identical distortion (not possible), I suspect there would still be up to several pixels of mismatch along an edge (and not equal in X and Y). I suspect your 1-pixel requirement might need to be up to 10 pixels.

I am woefully inexperienced in stitching techniques but this makes sense to me. What thinkest thou?

Excellent to lay down some preliminary criteria for someone to work with.

Nate Potter, Austin TX.
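Nathan's figures are easy to verify with a quick sanity check of the arithmetic (the 7000-pixel width is his illustrative number):

```python
sensor_px = 7000                 # assumed full-frame width in pixels

# one pixel of edge distortion, as a percentage of frame width
one_px_pct = 1 / sensor_px * 100
print(round(one_px_pct, 3))      # 0.014 (%)

# a good macro lens at 0.5% distortion, expressed in pixels on that edge
print(0.005 * sensor_px)         # 35.0 pixels out of rectilinear
```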

rdenney
2-Mar-2012, 12:14
Well, that was referring to the previous requirements I'd written which also required an 8.something-micron sensel. I was aiming at a 13-MP camera as the minimum camera.

So, writing the requirement to keep the distortion within any given number of pixels is a mistake--it's tainted by design and depends on a particular technology solution.

The Canon compact macro is tested at distortion better than 0.5%, but even with my 5D and its big sensels, that will be more than one pixel.

What I should have said is that we won't expect the stitching software to perform any corrections of geometric distortion. If any such corrections need to be made, they can be made in the pre-stitch batch process where I would also correct for falloff.

But that brings to mind something I didn't put in the requirements at all, which will be needed to correct these little errors: blending. But I know little about blending or what has been done, and so others need to add to what I've written about what the user will do for blending and therefore what is required of the software. What we don't want is for blending to reduce resolution, but I frankly have never used layer blending in Photoshop and don't know how it works.

Rick "needs are a model of what we do, not a prescription" Denney

Ben Syverson
8-Mar-2012, 21:35
You will never, ever see an error of one pixel. Such errors would only occur in the blended region, and would not interfere in any way with the stitching process. Visually they would simply be imperceptible.

The stitching software can perform geometric corrections quite easily, but with well corrected macro lenses, it's not likely to be an issue. Any software written or modified for this project needs to be fault-tolerant enough to deal with relatively large errors in alignment and correlation.

rdenney
9-Mar-2012, 07:08
Ben, this is the interface of machine and software. The question is: how much precision can we expect from the machine, in order to minimize manipulations of the image in software? I know you say that these manipulations will be invisible, but at some point they will become visible, so there is a question of where to draw the line. It seems to me we can optimize the balance between machine precision and required software manipulation at the point where the cost of the former and the amount of the latter reach a joint minimum. There is no sense in asking the software to make manipulations when the cost of preventing the need for those manipulations is fairly minor anyway. Otherwise, how much fault tolerance do we need? Can we have enough so that a guy can plug any old lens and extension tubes onto his DSLR and hand-hold the camera for a series of images across the negative? Sure, I'm taking your statement to an extreme you did not intend, but if we don't establish boundary conditions, somebody will do just that.

Once I'm done with assembly, I'll be able to characterize what I think is a reasonable expectation in terms of machine precision. Peter could already do so. I doubt either of us are in the micron range, but we have taken inexpensive construction methods pretty far for all that.

So, we won't see an error of one pixel. Will we see an error of 10 pixels? 100 pixels? What will it cost to keep the error below 100 pixels? (Probably we just need a tripod, a light box, and a roll of adhesive tape.) We know that it will cost a lot to keep the error down to one pixel: the lenses are not that good, and none of us is likely to be able to construct a machine to that level of precision. But the important questions are not at 1 pixel or 100 pixels, but rather in the 10-pixel range. We don't know the answers to those questions yet.

One thing I do know: We will not achieve, using home building, what we aim for. So, we establish a high standard, just as we do with photography.

Rick "who has done a few panoramic stitches and is familiar with the effects of manipulations" Denney

Peter De Smidt
9-Mar-2012, 08:22
It will be interesting to see whether distortion correction is needed. No doubt this will depend on the lens, but with the lighthouse picture and the 55mm Micro it wasn't needed.

From perusing various forums, Microsoft ICE works well with 20-25% overlap for a normal panorama and down to 5% overlap with a machine-based structured panorama, such as a Gigapan provides. Since the distortion in our pictures is probably going to be even less, perhaps even less overlap would be OK. On the face of it, the less blending and geometric adjustment the software does, while still giving an acceptable result, the better.

Ben Syverson
9-Mar-2012, 08:33
Rick, I see your point, but I don't think we need to start with a theoretical ideal and work backward to achieving it. I think what we need are more empirical tests of the kind that Peter is doing. That's how we'll determine our required specifications and margins of error.

So far, it looks like off-the-shelf software can do the job, but it may help to have an application-specific version that's more suited to our purposes.

Peter De Smidt
9-Mar-2012, 08:39
Is anyone here proficient with Photoshop scripting? Could a script be used to roughly position the tiles? If one only had to select a layer, change the blending mode to difference, and then tap the arrow keys a few times to get alignment, well, that wouldn't be too hard to do. I'd prefer it to manually adding control points in pano software.

marfa boomboom tx
9-Mar-2012, 08:44
From perusing various forums, Microsoft ICE works well with 20-25% overlap for a normal panorama and down to 5% overlap with a machine-based structured panorama, such as a Gigapan provides. Since the distortion in our pictures is probably going to be even less, perhaps even less overlap would be OK. On the face of it, the less blending and geometric adjustment the software does, while still giving an acceptable result, the better.

ICE 'requires' at least 1% overlap... just the way the code was written. MS has another tool that does "just tiling"... if interested, seek HDMAKE.

marfa boomboom tx
9-Mar-2012, 08:51
Rick, I see your point, but I don't think we need to start with a theoretical ideal and work backward to achieving it. I think what we need are more empirical tests of the kind that Peter is doing. That's how we'll determine our required specifications and margins of error.

So far, it looks like off-the-shelf software can do the job, but it may help to have an application-specific version that's more suited to our purposes.

I agree that OTS will work well for most users... the app-specific (i.e. domain-bound) software is very easy IF you control the "analysis" stage (capture) (S1).

With knowns at S1, the S2 (synthesis) stage is much easier... 15 images can be aligned and stacked in less than 5 minutes on a small-memory machine.

But algorithms such as SIFT and SURF make the real-world maker limitations at S1 somewhat easier to handle at S2.

This link : http://www.chrisevansdev.com/computer-vision-opensurf.html

for the files.

& OFF

Ben Syverson
9-Mar-2012, 09:57
Yes... you would need alignment algorithms regardless. There's just no getting around that. Or, put another way, there's no point spending tens of thousands of dollars on a 1500 pound XYZ stage that can position the camera to within a few microns when the software can do the same job on images that are "in the ballpark." We all have computers, and we may as well have them do some work.

Peter De Smidt
9-Mar-2012, 10:39
Is the only non-compressed output option with hdmake and hdview an 8-bit-per-channel png file?

Peter De Smidt
9-Mar-2012, 10:51
Wouldn't this work for rough positioning? Make a script that will:
1) Create a canvas big enough for all of the pieces,
2) Place the first file on the canvas and move it to a specific location,
3) Repeat for all of the other tiles.
4) The tiles could then be manually moved or perhaps auto-aligned.
5) Run Auto-Blend Layers.
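Steps 1-3 above amount to pasting tiles onto a canvas at their nominal grid positions. A minimal numpy sketch (grid geometry and names are hypothetical), leaving the fine alignment and blending of steps 4-5 to Photoshop:

```python
import numpy as np

def rough_assemble(tiles, grid, tile_h, tile_w, overlap=0.2):
    """Steps 1-3: make a canvas big enough for everything and drop each
    tile at its nominal row/column position. Later tiles simply
    overwrite the overlap; no alignment or blending is attempted."""
    rows, cols = grid
    step_y = int(tile_h * (1 - overlap))
    step_x = int(tile_w * (1 - overlap))
    canvas = np.zeros((step_y * (rows - 1) + tile_h,
                       step_x * (cols - 1) + tile_w), dtype=tiles[0].dtype)
    for i, tile in enumerate(tiles):
        r, c = divmod(i, cols)
        canvas[r * step_y:r * step_y + tile_h,
               c * step_x:c * step_x + tile_w] = tile
    return canvas

# hypothetical 2x3 grid of 100x100 tiles, 20% overlap
tiles = [np.full((100, 100), i, dtype=np.uint8) for i in range(6)]
out = rough_assemble(tiles, (2, 3), 100, 100)
print(out.shape)   # (180, 260)
```

In a Photoshop script the equivalent would be layers rather than array slices, with each layer kept separate for the manual nudging of step 4.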

Ben Syverson
9-Mar-2012, 11:06
That's the general idea, but the challenge is the automation and accuracy of steps #2, 4 and 5.

Peter De Smidt
9-Mar-2012, 11:21
Well, I only plan on investigating this method for images like the lighthouse, where there are large areas of low detail. I'm pretty confident that the various other solutions will work with more detailed negatives. I'm not looking for the placement to be exact.

With the lighthouse picture, the stitching programs worked, with a lot of manual editing, but the smoothness was not up to the Cezanne scan or my manual placement in Photoshop. Of course, this might have been due to the various settings I used.

peter ramm
9-Mar-2012, 14:14
Rick, distortion and blending are both issues. Re distortion, the eye will not usually see it that much in a single field, but if it repeats across fields it will be much more evident - just like the background illumination errors. Best to use lenses with very low distortion. That said, a bit of distortion won't bother the tiling algorithms all that much. They pick a best fit, usually using some sort of autocorrelation function. That doesn't do any geometric corrections - just a fit - so "best" is not "perfect". The more the distortion, the more likely the best fit is to leave a few edges dangling here and there.
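The "best fit, usually using some sort of autocorrelation function" can be illustrated with phase correlation, a standard FFT-based relative of the correlation fit described above. This is an illustrative sketch, not the code of any particular stitching package; as noted, it finds a best-fit translation only, with no geometric correction:

```python
import numpy as np

def phase_correlate(a, b):
    """Estimate the integer (dy, dx) by which image a is shifted
    relative to b, i.e. a ~ np.roll(b, (dy, dx), axis=(0, 1)).

    The cross-power spectrum of two shifted copies of the same
    scene is a pure phase ramp; its inverse FFT is a sharp peak
    at the shift. No geometric correction is applied -- it is a
    fit, not a warp, so "best" is not "perfect".
    """
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    F /= np.abs(F) + 1e-12                     # keep only the phase
    corr = np.fft.ifft2(F).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap large indices back to negative shifts
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return dy, dx

# demo: a random "negative" patch shifted by (3, 5) pixels
rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, (3, 5), axis=(0, 1))
shift = phase_correlate(shifted, img)          # recovers (3, 5)
```

On a nearly featureless tile the correlation peak flattens out, which is exactly why low-detail areas give these algorithms trouble.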

Movement precision is another matter. You want errors to be random as opposed to consistent. A consistent error (e.g. 2% too much movement in X) is additive and will eat into overlap. Random errors don't add.
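The difference is easy to demonstrate numerically. In this sketch the 2% figure comes from the example above, while the 20-tile row length and the step size are arbitrary demo values:

```python
import numpy as np

rng = np.random.default_rng(42)
n_tiles = 20
step = 100.0              # nominal movement per tile, arbitrary units

# consistent error: every move is 2% too long -- errors add linearly
systematic_pos = np.cumsum(np.full(n_tiles, step * 1.02))
systematic_err = systematic_pos - np.arange(1, n_tiles + 1) * step
final_systematic = systematic_err[-1]          # exactly 40.0 units

# random error: each move off by a zero-mean, 2%-sigma amount --
# the accumulated error grows only like sqrt(n_tiles)
random_moves = step * (1 + rng.normal(0, 0.02, n_tiles))
random_pos = np.cumsum(random_moves)
random_err = random_pos - np.arange(1, n_tiles + 1) * step
final_random = abs(random_err[-1])
```

After 20 tiles the consistent 2% overshoot has eaten 40 units of intended overlap, while the random errors of the same magnitude largely cancel.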

If you get the background illumination and its corrections optimized, blending should be a very minor issue. We avoided it because it changes the data. Blends are OK for stuff you put on show, but our clients were obsessive about that sort of thing. Opal glass does a fine job of diffusing, though you need a fairly strong light behind it.

http://www.edmundoptics.com/products/displayproduct.cfm?productid=1935

Here's a pic of a tiling system doing its thing on the microscope. This was with a motor stage, but you could also move manually using any calibration markings. The DSLR user interface should be pretty similar to make it all as easy as possible.

Peter De Smidt
9-Mar-2012, 15:27
Here (http://www.ps-scripts.com/bb/viewtopic.php?f=9&t=4680&p=21278&sid=a1612da7a50f9aadfedf4809fbddd473#p21278) you'll find a script by Paul MR that'll do what I'm talking about. The script is designed to place the files left-to-right, top-down, or top-down, left-to-right, so it won't work with a zig-zag (serpentine) approach.

rdenney
9-Mar-2012, 15:35
Yeah, my buddy was doing similar constructions of microscopic views of retinal detachments and macular degeneration.

My first light-source attempt will be aiming a slide projector through the negative. It will not be fully collimated (though it is a halogen lamp with a dichroic reflector through a condenser, and then through a lens, so it should be somewhat collimated). It should be pretty even. We'll see. But if I have to go diffused, it's still a good light source--very bright. I'd love to be able to use shutter speeds in the 1/1000 to 1/4000 second range.

My stage will be manually moved in x and y (normal to the lens axis). I can guarantee random errors, heh. I am certainly depending on software to accurately find the overlap.

In terms of the z axis, I'm hoping my apparatus will keep the film plano-parallel to the sensor within mils if not microns over the required range of motion. My slides have that level of movement precision; I just need to install them accurately. I'm pondering ways of testing against my previously stated requirements to ensure alignment. I know I can get alignment to within a few thousandths of an inch just using precision machinist squares and drafting tools. Peter's apparatus should also maintain that alignment, but his (wood) system will suffer dimensional instability due to humidity changes, and my metal system might see changes due to thermal expansion, though I've tried to design it so that these will not add to the problem.

Lens distortion is easy to correct during raw conversion, as are some other faults, including lateral color. It will be a challenge to find the right parameters, but that can be done. Still, the fewer corrections one has to make, the less the loss. The loss may or may not be significant, but we can agree that prevention beats the cure.
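As an illustration of the kind of correction a raw converter applies, here is a sketch of a one-coefficient radial distortion remap. The single-term model and nearest-neighbour sampling are deliberate simplifications; real converters use more coefficients and interpolate between source pixels, and that interpolation is exactly the loss being discussed:

```python
import numpy as np

def undistort(img, k1):
    """Resample img through a one-term radial distortion model.

    For every pixel of the corrected output we compute where it
    came from in the distorted source (r_src = r * (1 + k1 * r^2),
    with r normalized to the half-diagonal) and take the nearest
    source pixel. k1 < 0 corrects pincushion, k1 > 0 corrects
    barrel, under this simplified convention.
    """
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    norm = np.hypot(cy, cx)                    # half-diagonal
    y, x = np.mgrid[0:h, 0:w]
    dy, dx = (y - cy) / norm, (x - cx) / norm
    scale = 1.0 + k1 * (dy * dy + dx * dx)
    sy = np.clip(np.rint(cy + dy * scale * norm), 0, h - 1).astype(int)
    sx = np.clip(np.rint(cx + dx * scale * norm), 0, w - 1).astype(int)
    return img[sy, sx]

# with k1 = 0 the remap is the identity: no pixels move, no loss
img = np.arange(25.0).reshape(5, 5)
out = undistort(img, 0.0)
```

Note that the image center never moves regardless of k1; the displacement, and hence the resampling loss, grows toward the corners, which is why low-distortion lenses lose less.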

Rick "maybe working on it a bit this weekend" Denney

John NYC
10-Mar-2012, 19:53
Not sure if this is the right thread, but I'd like to chime in that whatever you guys come up with, I'd LOVE for it to do 11x14. Not being able to scan 11x14 easily in my house is probably the #1 reason I have not gotten into it yet. My wallet will thank you if you ignore the request I just wrote. :)

Peter De Smidt
10-Mar-2012, 20:08
Hi John,

There's no reason, other than available space, that this type of system couldn't work with 11x14. You probably wouldn't need to work at 1:1. You could save space by having the camera move instead of the negative, but that would be more complicated to do.

Daniel Moore
10-Mar-2012, 20:14
On the note of distortion: simply image a very detailed slide, let software like PTGui optimize for the lens distortion parameters, keep tabs on the outliers (control point errors to be culled), and re-optimize the stitch until you have a control point error of 1 or even less. That will characterize a lens more than well enough for ultra-accurate stitching. Those parameters can then be saved and reused, obviating the need for any future lens distortion optimization. What's left is simply to align and blend. It seems prudent to let the software take care of distortion and not give it much more thought.

rdenney
10-Mar-2012, 20:27
It seems prudent to let the software take care of distortion and not give it much more thought.

You're right. I'll use a fisheye lens and let the software do the work.

Of course, that's an example of reductio ad absurdum, but the point is that the less distortion there is to begin with, the fewer the pixels that have to be moved (stretched, smeared, averaged, interpolated, etc.) by software.

I've completed my assembly (no light source yet), and have been measuring it with a machinist's square (ground to <16 microns accuracy over 6"), and with a bit of adjustment, I believe my system is square and parallel within one or two thousandths of an inch. It required buying the right sort of stuff, but beyond that it wasn't that hard. I put a clear ruler in place of the negative and moved it throughout its range--no change in focus, no wandering off line, etc.

Rick "film at 11, probably in the first thread" Denney

Daniel Moore
10-Mar-2012, 20:47
Note to rdenney: while stitching with fisheye lens captures is possible, it is not advised. I should have been more thorough in my response; my apologies.

Peter De Smidt
10-Mar-2012, 21:54
You're right. I'll use a fisheye lens and let the software do the work.
Of course, that's an example of reductio ad absurdum, but the point is that the less distortion there is to begin with, the fewer the pixels that have to be moved (stretched, smeared, averaged, interpolated, etc.) by software.

And that's an example of a strawman argument, as Daniel was not suggesting otherwise. His suggestion wasn't very different from your suggestion of correcting distortion in raw software. We all agree that we should work at minimizing problems through good hardware design, while making whatever improvements are practical through software.

rdenney
10-Mar-2012, 22:27
And that's an example of a strawman argument, as Daniel was not suggesting otherwise. His suggestion wasn't very different from your suggestion of correcting distortion in raw software. We all agree that we should work at minimizing problems through good hardware design, while making whatever improvements are practical through software.

Hey, I said it was reductio ad absurdum.

So far, we've had a train of people saying something that sounds like, don't worry about the mechanics, you can fix it in software--you'll never see the effects.

But there is obviously a point at which the effects will be visible, as the reductio ad absurdum is intended to illustrate. (That is NOT a strawman--it is a completely legitimate form of argumentation.) Do we know where that limit is? I rather doubt it. But I certainly don't want to find it by accident, negating all my expense and construction effort and having to begin again with a different precision requirement. When pixels have to be moved, there is loss. Less loss is better.

I feel a bit like you feel, Peter, when people question whether this project is worth doing. This is now the third or fourth time I've felt the need to defend mechanical and optical precision so that we can minimize software-based corrections.

Rick "especially since mechanical and optical precision just isn't that hard to achieve" Denney

Daniel Moore
10-Mar-2012, 22:53
Rick, you'll use the best lens you can. You'll then have to correct its distortion. No defense of measures of precision is necessary. It's inescapable that you'll need some lens correction, and this is the one thing that weighs the least in this project since the software is that good, so that part's really 'done' in my opinion. Bigger fish to grill.

A further note, as it occurs to me: obscuring detail by blending at the seams could substitute for lens correction. Letting the seams go and slapping some paint on 'em is another alternative, but hardly a substitute for pixel-accurate (or better) seams.

rdenney
11-Mar-2012, 00:04
Rick, you'll use the best lens you can. You'll then have to correct its distortion. No defense of measures of precision is necessary. It's inescapable that you'll need some lens correction, and this is the one thing that weighs the least in this project since the software is that good, so that part's really 'done' in my opinion. Bigger fish to grill.

A further note, as it occurs to me: obscuring detail by blending at the seams could substitute for lens correction. Letting the seams go and slapping some paint on 'em is another alternative, but hardly a substitute for pixel-accurate (or better) seams.

Daniel, you are right, of course. I did not connect your post to my statements two pages back, but I see that connection now.

I have actually followed your process using not PTGui but Panavue, which allows the placement of control points. I did iterate on lens correction until I could make my control points align perfectly for a panorama that was based on a Canon 20-35 zoom, which suffers from fairly significant barrel distortion on the wide end. Those nine images were a lot of work! Of course, for that one, I was also bending it into a cylindrical projection, and it was not helped by the fact that I did not rotate the camera on its nodal axis--a troublesome error. I thought it was worth the effort.

Nisqually Valley from Mount Rainier (http://www.rickdenney.com/images/ranier-ridge-panorama-lores.jpg), linked only because it is not large format.

I have also attempted simple positional stitches by scanning both ends of a 6x12 film in my Nikon scanner's glass holder. I never got a clean stitch, simply because I had to turn the film around and one image was always slightly rotated from the other one. The small rotational correction, when the software could find it (I was using Photomerge which doesn't have control points--my version of Panavue is obsolete on my current machine), always took the edge off parts of the image. I went back to the glassless carrier which allows the two scans to be made without repositioning the negative, and the machine precision was sufficient to support a pixel-perfect stitch with no blending. That seems a bit more like what we are hoping for here, except, as you say, lens distortion is the one major difference. That is part of what is driving me to attempt high levels of mechanical precision, though.

http://www.rickdenney.com/images/japns_maple_scan0015_lr.jpg

Rick "who's done just enough stitching to know how hard it is to do well" Denney

MYKO
27-Mar-2012, 03:57
"especially since mechanical and optical precision just isn't that hard to achieve" Denney

Hi Denney, I've been using stitching for years for studio product work and interiors. Using a 4x5 camera and a stitching adapter plate I get a 4-up matrix of images that overlap by 6mm. I focus on the groundglass of my 4x5 and it's simple and repeatable. The 4x5 camera eliminates the distortion problem, as the camera remains fixed and the capture unit (DSLR or digital back) is what moves. The images ALWAYS line up. Working with a Canon 5D (Mk1) in the studio I churn out 43MP files from a "chip" that is 42 x 66mm in size.

I work in PTGui (pro version) and have it set up to "remember" my stitch pattern, helping with areas of soft focus or low detail. I recently had a project photographing some 30x40 watercolors for use on billboards; essentially the camera was used as a "scanner" and the results were exceptional. I shot a 9-up pattern with a 33MP digital back that covered almost the entire 4x5 glass. The final "shots" after stitching were over 250 MP, and the client called to ask how I did it.

Daniel Moore
18-Sep-2013, 15:02
I have good news for DSLR-scanner stitching work, especially regarding the stitching of featureless areas. With a system that has excellent repeatability, one can stitch an image that contains lots of detail throughout, hopefully arriving at a very low control-point error value, and save that run as a template in PTGui. From there you can open another set of images captured with the same settings and choose 'apply template'. No control points need be generated for the new set. Choose the 'create panorama' button and you're done.

I played with this a bit and tried to break things, and couldn't. If the replacement images are rotated or offset from those used to make the template you'll get a funky product showing as much, but the stitched output can only be broken by scanning with different stepping-distance values or by repeatability errors in the scanner's precision.

For a while I tried to use PTGui's 'replace images' feature, but this was troublesome and in fact the wrong approach. Templates were designed to do just this, and in practice they work beautifully. I would still advise generating control points for image sets that aren't prone to orphaned images due to featureless areas; for one thing, it could result in better seams along picture elements.
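The template idea is independent of PTGui and can be sketched generically: solve the tile placements once on a detail-rich target, persist them, and reapply them to later captures without generating any control points. The Python sketch below is illustrative only; the function names and the JSON layout are invented, not PTGui's file format:

```python
import json
import os
import tempfile
import numpy as np

def save_template(offsets, path):
    """Persist per-tile (dy, dx) placements solved on a detailed target."""
    with open(path, "w") as f:
        json.dump([[int(dy), int(dx)] for dy, dx in offsets], f)

def apply_template(tiles, path, canvas_shape):
    """Place a new set of tiles using previously saved offsets --
    the same idea as reusing a solved stitch as a template: no
    alignment is attempted on the new, possibly featureless images."""
    with open(path) as f:
        offsets = json.load(f)
    canvas = np.zeros(canvas_shape)
    for tile, (dy, dx) in zip(tiles, offsets):
        h, w = tile.shape
        canvas[dy:dy + h, dx:dx + w] = tile
    return canvas

# demo: two 3x3 tiles whose placements were "solved" once and reused
offsets = [(0, 0), (0, 2)]
path = os.path.join(tempfile.gettempdir(), "stitch_template.json")
save_template(offsets, path)
tiles = [np.ones((3, 3)), np.full((3, 3), 2.0)]
canvas = apply_template(tiles, path, canvas_shape=(3, 5))
```

This only works because the stage's repeatability guarantees the new tiles land where the template expects them, which is exactly the condition described above.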

Cesar Barreto
30-Sep-2013, 03:41
I've been doing some stitching and blending with PS, but as I'm almost always working from B&W negatives, I'm not really pleased that the grayscale frames are converted to RGB during the process, making the files much bigger and the computing much slower than necessary. Could those of you who use PTGui tell me whether it behaves the same way, or does it preserve grayscale files?
Thanks in advance.

Daniel Moore
1-Oct-2013, 14:01
PTGui always creates RGB output.
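Since the output is always RGB, a small post-processing step can collapse a stitched B&W negative back to a single channel and recover the file-size cost Cesar describes. A sketch with NumPy (the tolerance check is my own addition; when the three channels are identical, keeping one of them is lossless):

```python
import numpy as np

def collapse_to_gray(rgb, tol=0):
    """Reduce an RGB stitch of a B&W negative to one channel.

    If the three channels agree (within tol), just keep one channel:
    lossless, and a third of the in-memory size. Otherwise fall back
    to averaging, so stray colored pixels aren't silently discarded.
    """
    rgb = np.asarray(rgb, dtype=np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    spread = np.maximum(np.abs(r - g), np.abs(g - b))
    if spread.max() <= tol:
        return r.copy()
    return rgb.mean(axis=-1)

# demo: a fake 4x4 stitch whose three channels are identical
gray = np.arange(16.0).reshape(4, 4)
rgb = np.stack([gray, gray, gray], axis=-1)
out = collapse_to_gray(rgb)                    # one channel, same values
```

A nonzero `tol` is useful when blending has introduced tiny per-channel rounding differences at the seams.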