Is there any way to not lose detail when resizing for web?



ae5x
3-Nov-2006, 17:03
Hello,

I'm new to both the format and scanning and am disappointed in my lack of ability to retain detail in my transparencies when resizing them for the web. Is the best way to scan a 4x5 tranny for posting online to scan at 72 dpi, or to scan at a higher res for more detail and then downsize?

Specifics of what I'm trying to get are here:

http://www.ae5x.com/gallery.htm

I've seen small, web-sized images on numerous sites, so I know it can be done.....but how to do it?!

Thanks,

John H

Sheldon N
3-Nov-2006, 23:54
I find that a good amount of sharpening goes a long way in prepping a web-sized image. You might try oversharpening the original-sized file before downsizing: perhaps a local-contrast-style sharpening of 15-20, 50-100, 0 followed by a strong edge sharpening of 150, 3, 1. Then downsize the file to your intended size, and do one final light sharpening of 50-75, 0.3-0.5, 1 for the end result.
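
(For anyone who'd rather script this than click through Photoshop, here's a rough sketch of the same recipe in Python with the Pillow library. The filenames and the exact numbers, picked from the ranges above, are placeholders, and note that Pillow's UnsharpMask takes radius/percent/threshold where Photoshop's dialog reads amount/radius/threshold.)

from PIL import Image, ImageFilter

img = Image.open("scan.tif").convert("RGB")

# 1. Local-contrast sharpening at full size: low amount, very large radius.
img = img.filter(ImageFilter.UnsharpMask(radius=60, percent=18, threshold=0))

# 2. Strong edge sharpening, still at full size.
img = img.filter(ImageFilter.UnsharpMask(radius=3, percent=150, threshold=1))

# 3. Downsize to the intended web dimensions (Pillow 9.1+ naming).
img.thumbnail((700, 700), Image.Resampling.LANCZOS)

# 4. One final light sharpening pass at the output size.
img = img.filter(ImageFilter.UnsharpMask(radius=0.4, percent=60, threshold=1))

img.save("web.jpg", "JPEG", quality=80)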

All of this is somewhat subject to your taste, and it is possible to overdo it. Experiment to see what you like.

Lazybones
4-Nov-2006, 00:04
Yes, sharpening is correct.

Stewart Skelt
4-Nov-2006, 03:31
My standard Photoshop workflow (which I got from someone on usenet) is to downsample 50% at a time, preceding each downsampling with an unsharp mask at 98%, radius 0.7, threshold 2. After downsampling to the final desired size, I may or may not apply the same USM settings, depending on how it looks. Of course you can adjust these settings to suit your own taste, but the principle is the same.
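
(A sketch of this stepwise halving in Python/Pillow, for anyone who wants to script it; the 700-pixel target long edge is an assumed example.)

from PIL import Image, ImageFilter

img = Image.open("scan.tif").convert("RGB")
target_long_edge = 700

# Photoshop's USM 98%, radius 0.7, threshold 2 maps to these Pillow arguments.
usm = ImageFilter.UnsharpMask(radius=0.7, percent=98, threshold=2)

# Halve repeatedly, sharpening before each 50% step, while the image
# is still more than twice the target size.
while max(img.size) > 2 * target_long_edge:
    img = img.filter(usm)
    img = img.resize((img.width // 2, img.height // 2),
                     Image.Resampling.BICUBIC)

# Last step: sharpen, then resize straight to the target long edge.
scale = target_long_edge / max(img.size)
img = img.filter(usm)
img = img.resize((max(1, round(img.width * scale)),
                  max(1, round(img.height * scale))),
                 Image.Resampling.BICUBIC)

img = img.filter(usm)  # the optional final pass - apply or skip by eye
img.save("web.jpg", "JPEG", quality=80)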

robc
4-Nov-2006, 05:21
Here's a suggestion.

Post another version of the cutout on your website, in TIFF or BMP format, that has not been altered from the original scan in any way. People can then download it, apply their preferred method of downsizing, and post the resulting JPEG along with their method, so we can all see which one works best.

Specify the final size people should downsize to. For example, your current cutout's long side is 676 pixels, so you could specify that the long side must be downsized to 100, or whatever number you want, so that everyone participating produces an image of the same size.

Those posting an image should give their procedure and the jpeg compression parameters they use when saving for web.


And I forgot to say: do the scan at the scanner's hardware resolution.

Frank Petronio
4-Nov-2006, 06:10
Why don't you find an example of a web image that has plenty of detail? When you find one, do it that way. You can't get finer detail than the number of pixels on your display, so expecting to see what you see with a 4x loupe is (umm, how do I say this nicely?) absurd.

The type of image you are trying to compress into a jpg is very detailed and complex, which means it will produce a much larger file than a simple image with lots of smooth tones (like a sky). Once your image is more than 100 kilobytes or so, it may not be worth trying to "share" it with people, as they will only be frustrated with your "too big for my bandwidth" images.

In most cases people oversharpen and oversaturate as the last step before "Save for Web" (not "Save As") in Photoshop v.7 to CS3. That's because jpgs do tend to mush things together in the compression process. But each image is different, so you have to work interactively with each one to balance detail against file size. Don't be afraid of using a "low quality" setting for a smooth, simple image if you can get away with it -- your audience will appreciate the faster download time.

As you build your web galleries, aim to keep the images consistent (fix either the width or the height to a set number of pixels) and aim to keep the finished jpg under 50 kb, or 100 kb (depending on your ego, really -- viewers want you to use faster-loading, smaller file sizes).
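
(To automate that detail-versus-file-size balancing, here's a rough Python/Pillow sketch that walks the JPEG quality down until the file fits a byte budget; the 50 KB budget and the quality range are assumptions to tune.)

import io
from PIL import Image

def save_under_budget(img, path, max_bytes=50_000):
    # Try progressively lower JPEG quality until the encoded size fits.
    for quality in range(85, 30, -5):
        buf = io.BytesIO()
        img.save(buf, "JPEG", quality=quality, optimize=True)
        if buf.tell() <= max_bytes:
            with open(path, "wb") as f:
                f.write(buf.getvalue())
            return quality
    raise ValueError("Too detailed for the budget - resize smaller first.")

save_under_budget(Image.open("web_master.tif").convert("RGB"), "web.jpg")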

BTW, you don't scan "at 72 dpi" or whatever - that's a meaningless number. What matters is the pixel dimensions, width and height... when someone says "scanned at 72 dpi", it means nothing to us.

Ralph Barker
4-Nov-2006, 07:58
FWIW, my usual workflow for Web display is similar to what Stewart described above. I scan to the size of the largest digital print I might want to make, such that the output resolution will be about 300 DPI, and then downsample in steps of no more than about 50% (using the pixel dimension). I use a lighter touch of unsharp masking at each step, however: 50%, with a radius of 0.4 and a threshold of 0. Those settings seem to work well for "final" images of around 500-600 pixels per side, which is convenient, as Photoshop keeps the last filter used, with its parameters, at the top of the Filter menu. For larger images early in the workflow, it can be applied several times, if desired.

Extremely detailed subjects, such as the little flowers in the sample, will generate downsized JPEGs of considerable size, as mentioned. The detail contained in the large file is often wasted, as it can't really be seen at the display size. Thus, it may be better in some cases to lose the detail in the original scan (i.e., scan at a lower initial resolution), so the eventual file size is smaller and within acceptable parameters for Web download times. "Acceptable" used to be around 65KB, but with many people now on high-speed connections, the definition of "acceptable" might now be closer to 120KB, or even a bit larger. I'd lean toward tailoring that to one's individual target market - smaller files for "consumers", who are likely to have slower connections.

As to the "72 DPI thing", DPI only matters at the input and paper output stages. Scanner software (at least the scanner software I've used) can use DPI as a parameter to determine the scanning resolution, and printers can use it to determine the size of the print. Once scanned, however, it's the pixel count that matters for monitor display, as Frank mentioned. Different digital image editors may, however, use DPI in various misleading ways as a parameter for resizing, thus perpetuating the confusion. The DPI setting is carried in the file header data for eventual use in printing, but doesn't affect how the image is displayed on the monitor. The display is controlled by the pixel count, the monitor's resolution setting (e.g. 1024x768), and the non-adjustable dot pitch of the individual monitor.
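
(A quick way to convince yourself of this, sketched with Python/Pillow: the two saves below contain identical pixels and will display at exactly the same size in a browser; only the print-size hint in the file header differs.)

from PIL import Image

img = Image.open("web.jpg")
print(img.size)              # pixel dimensions - what the monitor display uses
print(img.info.get("dpi"))   # header metadata - what a printer would use

img.save("web_72.jpg", dpi=(72, 72))     # identical pixels, 72 DPI header
img.save("web_300.jpg", dpi=(300, 300))  # identical pixels, 300 DPI header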

Jeffrey Sipress
4-Nov-2006, 09:34
If you don't care to go through all the iterations of downsizing 5% at a time, which maintains the best quality, just use Fred Miranda's WebPresenterPro plugin for PS. It automates the process and saves a heck of a lot of time. It can sharpen, too, if you wish. All images on my site are done that way.

David A. Goldfarb
4-Nov-2006, 09:41
I downsize 50% at a time, sharpening at each stage in decreasing amounts.

My USM settings are a radius of 0.5 pixels and a threshold of 3 levels; depending on the quality of the scan and the size of the original, I might start anywhere from 60-120%, and I usually end up at 20-40% for the final web-sized version.

robc
4-Nov-2006, 10:14
"I've seen small, web-sized images on numerous sites, so I know it can be done"

How do you know? You never saw the original-size image, so you don't know what detail was there to begin with. You like what you see and are making a quantum leap to assuming that the images are detailed and that they contain the same detail the original contained.

This is a frequent assumption, and it kind of proves the point that fine detail is not nearly as important as people would have you believe.

Frank Petronio
4-Nov-2006, 11:56
I used to advocate downsizing by a factor of 2 and doing it in multiple steps, etc.

At least until I compared several random files done that way and also done by simply resizing once and running a CS2 "Smart Sharpen" filter at what looked good at 100%.

I've been doing digital stuff since forever. I can't tell a difference. Photoshop is really, really smart software.
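
(For comparison with the stepped loops above, the one-step path in a Python/Pillow sketch; Pillow has no Smart Sharpen, so a single Lanczos resize plus one plain UnsharpMask pass stands in, with placeholder settings to judge at 100% view.)

from PIL import Image, ImageFilter

img = Image.open("scan.tif").convert("RGB")
img.thumbnail((700, 700), Image.Resampling.LANCZOS)  # one high-quality resize
img = img.filter(ImageFilter.UnsharpMask(radius=0.6, percent=120, threshold=2))
img.save("web.jpg", "JPEG", quality=80)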

Paul Schilliger
4-Nov-2006, 12:18
In Photoshop, you can choose which algorithm will process the image when you change its size. Adobe recommends "bicubic" as producing the best quality. This is true when the image is made larger, or if it contains patterns, in order to avoid creating artifacts; the images produced in bicubic mode are smoother. But when it comes to downsampling to very small images like the ones used for the web, this mode makes a mush of the image, and no amount of unsharp masking can restore the details. That's why I always use the "bilinear" mode to reduce an image. This mode simply strips the unnecessary pixel lines from the image, and the original sharpness is preserved. Then a 0.2 or 0.3 radius at an amount of 200, or even less, will sharpen it quite well (with the threshold always set to 0). This works well for me; others have their own ways of doing it, which seem fine too.
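
(Sketched in Python/Pillow for anyone who wants to compare the two modes side by side; the 400-pixel output width and the file names are assumptions.)

from PIL import Image, ImageFilter

src = Image.open("scan.tif").convert("RGB")
size = (400, round(400 * src.height / src.width))  # keep the aspect ratio

for name, method in [("bicubic", Image.Resampling.BICUBIC),
                     ("bilinear", Image.Resampling.BILINEAR)]:
    small = src.resize(size, method)
    # The light USM pass described above: radius 0.2-0.3, amount ~200, threshold 0.
    small = small.filter(ImageFilter.UnsharpMask(radius=0.3, percent=200,
                                                 threshold=0))
    small.save("compare_" + name + ".jpg", "JPEG", quality=85)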

Sheldon N
4-Nov-2006, 12:36
"..u could also buy..Ftp..space,we provide our own customer's,then u could keep high mb files,as good as original for people to see..but i know there are places to rent ftp.space....?we have a tera-byte,of space that setup..cost.$25,000..but you could also set up old computer,and dedicate that as being ftp,with some storage space of your own..."

Is something going to be done about his spam?

Michael Gordon
4-Nov-2006, 14:21
I just assume my jpegs are going to suck and live with it. The best jpegs I can create will never hold a candle to even my tiniest prints, and trying to get even close is an exercise in futility. My best-looking jpegs are minimally 1000 pixels wide and at max jpeg quality, which makes them unbearable to download for dialup users (between 600k and 1mb -- but I will not put 1000px jpegs on my site). Any jpeg under 200k is going to look compromised. I'd get used to it, unless it's jpegs you actually want to sell.

David_Senesac
24-Nov-2006, 20:52
I'd echo what Michael just posted haha. The web levels the playing field between we large format photographers and even someone with a crummy $39.95 one-megapixel digital camera. The result of downsizing compression on each image varies with strong graphic images suffering less while images with fine subtle tones and detail sometimes resulting in an image any web audience is likely to consider mediocre. Accordingly on my own web site, I've gone to some length to give viewers some indication of how detailed my images really are. Just like you did on this thread, for each of my marketed images, I selected one or two appropriate locations on the frame that the public can view that shows a small crop of the image at the same size as my standard print sizes for a given image. Thus if I have say a 30x37.5 inch print, I'll select square foreground and background locations each 900x900 pixels, downsize by one-third that will then display an approximate 3 inch by 3 inch section of the print at the same size of the actual print given typical monitor dot pitches. The one-third downsizing does reduce sharpness, however that is necessary given the ratio of typical monitor dot pitch of 90 RGB phosphor pixels per inch versus my 304.8 ppi printing pitch else the 3x magnified monitor display will tend to look less sharp. To see what I do check out any of the images on my home page image index below then on the sub-page for specific images the crop viewing link is just below the display image. ...David