Resizing After Scanning Black & White Negative



IanBarber
1-Jan-2019, 08:49
Just looking for clarification that the method I am currently using for resizing after scanning with an Epson V800 scanner is considered optimal.

Step 1
Scan at 2400 ppi and bring the file into Photoshop.
Note: I have recently started to scan at 4800, wondering if this will get me closer to the scanner's optimal resolution of 2400.

[attachment 185937]

Step 2
Uncheck the Resample checkbox and change the Resolution to match the printer (360 for the Epson R3880).

[attachment 185938]

Step 3
Recheck the Resample checkbox, enter the desired output size, and choose Bicubic Sharper (reduction) as the resample method.

[attachment 185939]
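
For reference, the arithmetic behind these three steps works out like this (a minimal sketch; the 4x5 negative and the 16x20 target are example numbers chosen just for illustration):

#include <cstdio>

int main()
{
    // Step 1: scan a 4x5" negative at 2400 ppi (example numbers).
    const double film_w_in = 4.0, film_h_in = 5.0;
    const int scan_ppi = 2400;
    const int px_w = int(film_w_in * scan_ppi);   // 9600 px
    const int px_h = int(film_h_in * scan_ppi);   // 12000 px

    // Step 2: Resample unchecked, Resolution set to 360 ppi.
    // The pixel count is untouched; only the implied print size changes.
    const int print_ppi = 360;
    std::printf("Unresampled print size: %.2f x %.2f in\n",
                px_w / double(print_ppi), px_h / double(print_ppi)); // 26.67 x 33.33

    // Step 3: Resample checked, desired output size entered (say 16x20").
    // Now pixels are discarded and Bicubic Sharper does the interpolation.
    const int out_w = 16 * print_ppi;  // 5760 px
    const int out_h = 20 * print_ppi;  // 7200 px
    std::printf("Resampled: %d x %d px (%.0f%% of original width)\n",
                out_w, out_h, 100.0 * out_w / px_w); // 60%
    return 0;
}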

mdarnton
1-Jan-2019, 08:52
Maybe someone will respond to this: I'm not sure that it's such a bad idea to let the printer handle resampling rather than do it yourself. It should know what it wants. That's not an informed comment, so I welcome qualified answers!

Alan9940
1-Jan-2019, 11:46
Ian, that's a fine method to down-sample an image file. It's best if you don't let the printer driver resample the image in any way. FWIW, the method I established years ago is:

1. Test to determine the optimal optical resolution for your particular scanner; it's not always what the manufacturer claims as the max optical resolution (see the sketch at the end of this post for the conversion arithmetic).

2. Scan all film at your pre-determined optimal scanner resolution, thereby creating a master file that is never modified (think of it as a raw file).

3. Edit as needed, then retarget the master for whatever output is required.

Note: Bicubic Sharper may not always be the best choice when down-sampling. Depending on the image, a simple Bicubic will provide better tonal transitions.
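
If it helps with step 1, the usual way to turn a measured test-target figure into an effective scanning resolution is plain Nyquist arithmetic: two pixels per line pair, 25.4 mm per inch. A minimal sketch, with a made-up measurement of 46 lp/mm standing in for whatever your own target test shows:

#include <cstdio>

// Effective ppi from a measured resolution-target figure. The 46 lp/mm
// input below is a hypothetical example, not a measured V800 value.
double effective_ppi(double lp_per_mm)
{
    return lp_per_mm * 2.0 * 25.4;  // 2 px per line pair, 25.4 mm per inch
}

int main()
{
    std::printf("~%.0f ppi\n", effective_ppi(46.0));  // ~2337 ppi
    // If the unit only resolves ~2337 ppi, scanning far above that mostly
    // adds file size rather than detail, which is the point of step 1.
    return 0;
}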

faberryman
1-Jan-2019, 11:55
What method does PS use for downsampling when Auto is selected?

Alan9940
1-Jan-2019, 13:30
What method does PS use for downsampling when Auto is selected?

Don't know for sure, but I'd assume Bicubic Sharper, since that algorithm was Adobe's recommendation for reducing image size back when it was added to the Image Size dialog.

rdeloe
1-Jan-2019, 14:02
Jeff Schewe, author of The Digital Print, recommends preparing the file at the resolution needed by the printer rather than letting the printer driver do the resizing; this includes resizing through the print dialog box. For downsampling, he suggests Bicubic Smoother. For upsizing a lot, he recommends Preserve Details. For small amounts up or down, he recommends using the Automatic setting and letting Photoshop decide. In his testing with Epson printers, he claims to be able to see the difference between 720 ppi and 360 ppi, so his advice is: don't throw away the extra pixels.

I use Lightroom, which takes care of up- and down-sizing "under the hood". I'm creating a TIFF at export, which I send to QuadToneRIP. In my experience it does a good job. There are also specialized applications out there that claim to have various kinds of secret sauce to produce better results.
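
A sketch of that decision tree as I've summarized it above (this is the thread's reading of Schewe's advice, not Adobe documentation; the 0.8 and 1.25 cutoffs for a "small amount" are made-up thresholds):

#include <cstdio>

// Pick a resampling method from source and destination pixel counts,
// following the recommendations described in the post above.
const char* pick_resample_method(int src_px, int dst_px)
{
    const double ratio = double(dst_px) / double(src_px);
    if (ratio >= 0.8 && ratio <= 1.25) return "Automatic";        // small change: let Ps decide
    if (ratio < 1.0)                   return "Bicubic Smoother"; // downsampling
    return "Preserve Details";                                    // upsizing a lot
}

int main()
{
    std::printf("%s\n", pick_resample_method(9600, 5760)); // Bicubic Smoother
    std::printf("%s\n", pick_resample_method(5760, 9600)); // Preserve Details
    std::printf("%s\n", pick_resample_method(7200, 7000)); // Automatic
    return 0;
}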

Pere Casals
2-Jan-2019, 06:11
What method does PS use for downsampling when Auto is selected?

This is undisclosed, I think: "Automatic: Photoshop chooses the resampling method based on the document type and whether the document is scaling up or down." (Document type? Different for indexed color, etc.?)

A good guess is that it depends on whether we are increasing or decreasing the pixel count: Bicubic Smoother for enlargement, Bicubic Sharper for reduction. I also suspect they use plain Bicubic (smoother gradients) if the new size is close to the original.

After any resize, a sharpening should follow... but IMHO the sharpening built into the Ps resizing algorithms is conservative, so as not to destroy anything. My view is that we should still try a final manual sharpening just before moving from 16 bits/channel to 8 bits/channel for the image release.
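
To make the order-of-operations point concrete, a toy illustration (my own sketch, nothing thread-specific) of why the last sharpening pass belongs before the 16-to-8-bit conversion:

#include <cstdio>
#include <cstdint>

// The 16->8 bit conversion collapses 65536 levels into 256, so subtle local
// contrast that a sharpen could still amplify gets rounded away afterward.
uint8_t to8(uint16_t v)
{
    return uint8_t((v * 255 + 32767) / 65535);  // rounded rescale
}

int main()
{
    // Two 16-bit neighbors across a faint edge, ~0.27% of the range apart:
    uint16_t a = 32768, b = 32948;
    std::printf("16-bit: %u vs %u -> 8-bit: %u vs %u\n",
                unsigned(a), unsigned(b), unsigned(to8(a)), unsigned(to8(b)));
    // Both land on 128 in 8-bit; the micro-contrast is gone, so sharpen
    // while still at 16 bits/channel, then convert for release.
    return 0;
}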

Pere Casals
2-Jan-2019, 06:15
I would point out that the Nik Collection has a nice sharpening tool. It allows you to pick control points in the image and adjust settings for those points to teach the tool what you want, so many times you can craft an advanced custom sharpening in a very straightforward way:

https://luminous-landscape.com/3-step-sharpening-workflow-using-nik-collection-optimal-sharpness/
https://web.archive.org/web/20190102130752/https://luminous-landscape.com/3-step-sharpening-workflow-using-nik-collection-optimal-sharpness/

Peter De Smidt
2-Jan-2019, 11:47
As others have said, minimize what is thrown away. If your scan is big enough for 720 dpi at the printing size, then use that. If the size is between 720 and 360, then there are three choices, all of which you can test for yourself: upsize to 720, send the file straight to the printer, or downsize to 360. Give it a whirl. Each method will require different levels of sharpening.
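
With made-up numbers, the three options look like this (a sketch; 9600 px across a 20-inch print is just an example):

#include <cstdio>

int main()
{
    const int scan_px = 9600;      // pixels across the long dimension
    const double print_in = 20.0;  // print size in inches
    const double native_ppi = scan_px / print_in;  // 480 ppi
    std::printf("native: %.0f ppi\n", native_ppi);

    if (native_ppi > 360.0 && native_ppi < 720.0) {
        // The three candidates, each needing its own sharpening level:
        std::printf("a) upsize to 720 ppi: %d px\n", int(720 * print_in));   // 14400
        std::printf("b) send as-is at %.0f ppi\n", native_ppi);
        std::printf("c) downsize to 360 ppi: %d px\n", int(360 * print_in)); // 7200
    }
    return 0;
}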

Regarding interpolation, I'd use Bicubic for making things smaller. Bicubic Sharper simply adds sharpening, and I'd rather sharpen separately. For enlargement, the best built-in method in Photoshop is Preserve Details. Some versions of Lanczos are better, but they involve 3rd-party software. The best that I've used is Topaz Gigapixel AI. I just had a 36"x48" print made from a 24 MP digital file, and Topaz did the best job. I tested all of the methods mentioned, as well as Genuine Fractals.

Pere Casals
2-Jan-2019, 12:18
The best that I've used is Topaz Gigapixel AI.

I guess this one falls into the convolutional neural network category: https://en.wikipedia.org/wiki/Comparison_gallery_of_image_scaling_algorithms

IanBarber
2-Jan-2019, 13:58
How did Genuine Fractals compare to Gigapixel AI, Peter?

Peter De Smidt
2-Jan-2019, 14:12
With my image, Gigapixel AI was better.

Alan9940
2-Jan-2019, 16:17
With my image, Gigapixel AI was better.

In all my tests comparing Gigapixel to Genuine Fractals, GF was better. I will say that Gigapixel provided an acceptable result with no effort, but since I've been using GF for nearly 20 years I know how to get the best out of it.

Peter De Smidt
2-Jan-2019, 16:29
That likely explains it, Alan. I've only used GF to test that one image.

Steven Ruttenberg
9-Jan-2019, 19:18
I scan at 6000 dpi (part of the reason is that it's a multiple of 300, since I use an iPF6400). I found the scans at 2400-4200 to be, well, not that great. From 4200 to about 5000/5500 they are much improved. At 6000, the sharpness, etc. of the image is far greater than at 2400 or 4800. I may downsize the scan for editing, to 5700 I think it is (it might be 5400, not sure), so that I can use ACR at times. But typically I edit at 6000 dpi, and my Mac Pro can handle it. I then downsize to the print size I want, sharpen if needed, then save the file as the print master for that size. Note: before I downsize, I flatten the file first. Also, I save the full-size file as a working copy from which I make all my prints. There is a method that Ken Lee outlines on his website for "turbo" charging PS that I played with, and it seems to work. It allows you to work on a small file at greater speed and then, when done, create the full-size file at whatever dpi you started with.

You also use the same method for downsizing that I do, except I choose smoother gradients, as I am sharpening after resizing anyway, virtually as my last step.

The above works for me and is not intended to start a scanner war.

Also, I posted all of my resolution testing a while back, with links to the scanned targets at all the various resolutions from 2100 to 6300. I don't use 6300 because it generates a file too big for the TIFF format. And all my files are raw, linear scans.
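
To put numbers on the "multiple of 300" point (a minimal sketch):

#include <cstdio>

// If the scan ppi divides evenly by the printer ppi, the reduction is an
// exact integer ratio (6000 -> 300 is 20:1), which keeps the resampling
// geometry simple.
int main()
{
    const int scan_ppi = 6000;
    const int printer_ppi = 300;  // iPF6400
    if (scan_ppi % printer_ppi == 0)
        std::printf("integer reduction: %d:1\n", scan_ppi / printer_ppi);
    else
        std::printf("non-integer ratio: %.3f:1\n", double(scan_ppi) / printer_ppi);
    return 0;
}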

Steven Ruttenberg
9-Jan-2019, 19:22
This is undisclosed, I think: "Automatic: Photoshop chooses the resampling method based on the document type and whether the document is scaling up or down." (Document type? Different for indexed color, etc.?)

A good guess is that it depends on whether we are increasing or decreasing the pixel count: Bicubic Smoother for enlargement, Bicubic Sharper for reduction. I also suspect they use plain Bicubic (smoother gradients) if the new size is close to the original.

After any resize, a sharpening should follow... but IMHO the sharpening built into the Ps resizing algorithms is conservative, so as not to destroy anything. My view is that we should still try a final manual sharpening just before moving from 16 bits/channel to 8 bits/channel for the image release.

Photoshop allows you to choose the downsampling scheme you prefer. I prefer smoother gradients, and that has worked sizing files from 6000 down to 300 with final sharpening after. Works very well. I used GF and tried Nik, but I don't like having yet another program to deal with. Photoshop, if you are experienced with all the methods available, provides very good results. I also sharpen selectively and at a level that is almost not detectable (my scan files can pretty much stand on their own with no sharpening).

Pere Casals
10-Jan-2019, 05:52
(my scan files can pretty much stand on their own with no sharpening)

Yes... nothing like having "native sharpness"!!!

Perhaps we should look a bit in the rear-view mirror.

I like the sharpening algorithm used by Sally Mann for her impressive prints. You place an 8x10 collodion plate in the enlarger's carrier... then we execute the algorithm:


/////////////////////////////////////////////
// GNU General Public License
/////////////////////////////////////////////

#include "enlarger.h"

int main()
{
    Load_Default_Gear();

    if (!sharp_negative()) return GO_TO_SHOT;

    for (int i = 0; i < MAX_FOCUS_OPS; i++) {

        CImage* pImg = pEnlarger->Focus_Procedure(the_negative, loupe); // pointer pImg owned by CEnlarger instance

        if (Check_Sharpness(pImg)) {
            Print_Nice_Image(pImg);
            return YOU_HAVE_A_NICE_PRINT;
        }
    }

    return GO_ALIGN_ENLARGER;
}

jp
10-Jan-2019, 08:36
Change the units from inches to pixels and don't resample at scan time.
Deal with pixels and not inches until you're ready to print, then consider them both.

Steven Ruttenberg
10-Jan-2019, 08:38
Yes... nothing like having "native sharpness"!!!

Perhaps we should look a bit in the rear-view mirror.

I like the sharpening algorithm used by Sally Mann for her impressive prints. You place an 8x10 collodion plate in the enlarger's carrier... then we execute the algorithm:


/////////////////////////////////////////////
// GNU General Public License
/////////////////////////////////////////////

#include "enlarger.h"

int main()
{
    Load_Default_Gear();

    if (!sharp_negative()) return GO_TO_SHOT;

    for (int i = 0; i < MAX_FOCUS_OPS; i++) {

        CImage* pImg = pEnlarger->Focus_Procedure(the_negative, loupe); // pointer pImg owned by CEnlarger instance

        if (Check_Sharpness(pImg)) {
            Print_Nice_Image(pImg);
            return YOU_HAVE_A_NICE_PRINT;
        }
    }

    return GO_ALIGN_ENLARGER;
}

I will have to give it a try!

Alan9940
10-Jan-2019, 12:35
There is a method that Ken Lee outlines on his website for "turbo" charging PS that I played with, and it seems to work. It allows you to work on a small file at greater speed and then, when done, create the full-size file at whatever dpi you started with.


Don't know about Mr. Lee, but this sounds a lot like the "Guide File Workflow" that West Coast Imaging outlined some 10-15 years ago. The only real restriction to working this way is that all layers must be adjustment layers; you cannot have any pixel-based layers. Once you've finished your edit, you simply copy all the adjustment layers to the full-resolution file, then complete it for final output. I would guess that nowadays, with all the desktop computing power we enjoy, this workflow is not needed so much.

Steven Ruttenberg
10-Jan-2019, 14:07
Don't know about Mr. Lee, but this sounds a lot like the "Guide File Workflow" that West Coast Imaging outlined some 10-15 years ago. The only real restriction to working this way is that all layers must be adjustment layers; you cannot have any pixel-based layers. Once you've finished your edit, you simply copy all the adjustment layers to the full-resolution file, then complete it for final output. I would guess that nowadays, with all the desktop computing power we enjoy, this workflow is not needed so much.

I just remember getting the info from his site. He may have provided a link to a YouTube video (I think he did). Yes, you do need to use only adjustment layers. As for computing power, I have a 2012 Mac Pro with 32 GB of memory and 20 TB of disk storage. My color files will get upwards of 60 GB+. Yes, I spend a lot of time on a file when I find an image I like. My Mac handles it okay; it just takes a while writing and loading the file. I hope to upgrade in the next year or so to the latest and greatest Mac Pro. But even then, I would still use the method outlined above to speed up the actual editing.

Any pixel-level fixes needed at that point could be done to the file and recorded as an Action; that way you could redo your pixel adjustment/fix steps after creating the full-resolution image if you so desired.

john_ackbar
27-May-2019, 08:08
The best that I've used is Topaz Gigapixel AI.

A friend told me about Topaz as well, but I could never get it to work on my computer; I think my graphics card is a bit outdated, or it's some other bug. I know this site synchronet.me that claims to use AI. I don't really understand the tech behind it; sometimes the results are OK, sometimes it's meh, but it's been pretty helpful with large format printing. In case it comes in handy for anyone else.

pepeguitarra
27-May-2019, 08:34
Just a side comment to myself (and others). If taking an analog picture, scanning it, Photoshopping it, and manipulating it to create a digital print is so complicated, and yields a sub-par digital copy, wouldn't it be better to use a DSLR to take the photo? Wouldn't it be easier to produce an analog print with dodge and burn tools? That is really a personal choice, but those were my considerations when I went full analog.

Peter De Smidt
27-May-2019, 09:09
If taking an analog picture, scanning it, Photoshopping it, and manipulating it to create a digital print is so complicated, and yields a sub-par digital copy, wouldn't it be better to use a DSLR to take the photo?

You use the phrase "sub-par digital copy." That implies a distaste for any hybrid workflow, and that's a perfectly fine subjective choice, but I know a bunch of very experienced traditional black and white printers who think that the hybrid workflow gives them better, for their own definitions of 'better', prints than they created in the darkroom. Why throw shade on them? Why not just do what you prefer, and let others do the same?

pepeguitarra
27-May-2019, 09:34
... Why not just do what you prefer, and let others do the same?

Isn't that the current case?

Peter De Smidt
27-May-2019, 09:39
Sure, but why diss ("sub-par digital copy") what other people do?

pepeguitarra
27-May-2019, 10:03
Sure, but why diss ("sub-par digital copy") what other people do?

My digitized scans have always been sub-par. I refuse to spend hours on a computer dealing with Photoshop.

Sasquatchian
27-May-2019, 10:55
Getting back to the original question about downsampling: the method used really depends on the image and on how much resampling is taking place. Adobe's Automatic uses a non-disclosed interpolation that varies depending on the amount of resampling. The trouble with this is that you never know exactly what is happening, and you're trusting Adobe to make the best decision for you. The idea of Bicubic Sharper is not a bad one for downsampling, but the problem here is that it applies an equal amount of unknown sharpening to your entire image, which may or may not be optimal, especially if you have any diagonal lines in your image; you're very likely to get stairstepping in them. For most images, standard Bicubic is best, with locally applied sharpening afterward, either with USM or with FocusMagic, always applied on a duplicated layer and then painted back in where needed. With some subjects (those that are prone to moiré), it's often helpful to use Bicubic Smoother on the downsizing, going against what Adobe would consider standard practice.

It's always helpful to use the preview box and scroll through the options while in Image Size. After enough images, you'll pretty much know what will work best for you.

One other area to consider is using Free Transform to straighten out your slightly crooked scans. You have the same interpolation options in the Options Bar of Free Transform as you do in Image Size, but with no preview. Here is an area where you can get weird Bicubic hatching artifacts - the same ones we used to see on scans rotated very slightly in scanning software, but now induced simply by rotating using standard Bicubic. Instead - and I've learned this through testing many drum-scanned images that needed only a couple tenths of a degree of rotation - using Bicubic Smoother completely fixes the hatching artifact. This is something I figured out several years ago and for some reason have not gotten the samples together to send to Chris Cox at Adobe, but now you all know how to deal with it.

Maybe this is not an issue with Epson scans, as they are not all that sharp, but it is certainly a problem with all the drum scans I make. It's almost as if the pattern of film grain in the scan, when slightly rotated, creates a large cross-hatch pattern which is clearly visible at 100% viewing, and more so when viewing individual RGB channels, but it only happens with very slight rotations. Go figure. Actually, I remember seeing those in commercial scans back in the mid-'90s, but no one then had any clue as to what caused it.
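
For a sense of scale, here's roughly how much pixel movement a "tiny" rotation involves (a sketch with illustrative numbers; a point x pixels from the rotation center moves by about x times the angle in radians):

#include <cstdio>

int main()
{
    const double pi = 3.14159265358979;
    const double deg = 0.2;                   // a typical mounting error
    const double theta = deg * pi / 180.0;    // ~0.00349 rad
    const double width_px = 10000.0;          // e.g. a wide drum scan
    std::printf("edge shift: ~%.0f px\n", width_px * theta);  // ~35 px
    // Even a 0.2 degree correction resamples essentially every pixel,
    // which is why the choice of interpolator shows up in the grain.
    return 0;
}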

Peter De Smidt
27-May-2019, 11:01
Bringing up rotation is a good point. I highly recommend taking the time to scan the negative so that no rotation will be needed. Rotation other than 90° or 180° will always lead to a loss in quality. Make a guide to ensure the negative is lined up perfectly. If it's not, take the time to re-do it before scanning.

Sasquatchian
27-May-2019, 11:16
While that's a great idea and good practice, in the real world, no matter how carefully you mount your film on the drum, it's VERY difficult to get it perfectly straight, and I've been doing this for over twenty years. You're often off 0.1 or 0.2 degrees, which is pretty hard to see when lining up with the grid on the mounting station. And if you're scanning 4x5, the film can also be slightly crooked in the film holder, to complicate things further. Trust me, when I get 10 frames mounted up on a drum and one or two of them are off a teeny tiny bit, there's no way in hell I'm going to pull them all off and start over. To quote Dana Carvey: "It ain't gonna happen."

Peter De Smidt
27-May-2019, 11:51
I believe you. I've never used a drum scanner. For my flatbed, I've machined a negative holder/system that makes getting and keeping the negative aligned properly easy.

SergeyT
30-May-2019, 20:08
0.1, 0.4, 0.5 degrees of rotation make no difference in "quality", especially when we instruct the scanner to overscan.
Mounting 1 or 2 pieces instead of 10 makes it simpler on both the human and the machine (both need breaks) and helps to balance the drum better.

Sasquatchian
30-May-2019, 21:41
There's never been a balance problem on the 4-inch-diameter Howtek drums with one or thirty pieces of film mounted. I'm far more likely to mount up to ten 35mm frames if I have that many to scan, and to be honest, with only an inch and a half on the long side to line up, 35mm frames are more likely to be a little crooked. If you're mounting a six-frame strip, that's a much different proposition and easy to get straight.

If you're scanning on the large 8-inch drum on something like a Howtek 7500, which I've done quite a bit, it helps either to mount a dummy piece of film on the opposite side of the drum or to use the speed clamp function in the control panel to slow the rotational speed of the drum. And yes, fractions of a degree make no visible impact on the file.