
View Full Version : Tutorial: Illustrated Guide to B&W scan & processing



buze
23-Oct-2006, 08:00
I've made a small guide on how I scan & process my images. I think I describe a couple of "tricks" that I haven't seen anywhere else, and I describe my way of keeping the maximum quality while keeping the resources used reasonable.

The tutorial uses a 6x6 neg, but of course it is applicable to large format! :D

Hope you enjoy, feel free to comment.

http://oomz.net/bw_workflow/

Frank R
23-Oct-2006, 09:44
Thanks for posting this; I appreciate how long it takes to write and post these things.

I scanned through and read a bit. I have it bookmarked for later reference.

Saulius
23-Oct-2006, 22:27
Thanks for posting it and sharing your knowledge. When I get some time I will give your workflow a try and see how it works for me. I'm still digesting all that you wrote, but right now just a couple of questions.
You say to scan as a JPEG at 8 bit instead of as a 16-bit TIFF. Is your recommendation only to make the file size smaller? Smaller files are easier to work with, but what about a loss of quality in the image? It's my understanding that with 16-bit images you will have more tones, which gives you more room to make tonal adjustments and also helps lessen the chance of posterization. Also, it's my understanding that saving scanned files in Photoshop format is best, then TIFF. These are lossless formats, while JPEG loses info when compressing files, which can also lead to digital artifacts. What are your thoughts on this? If you or anyone else thinks my assumptions are wrong, by all means please explain why, as I am no expert and am always trying to learn more.

I will have to try your technique of scanning at 4800 dpi and then downsizing to 2400. In my own tests with my scanner I've found that I lose a lot of sharpness by scanning at 4800. My scanner produces the sharpest scans at 2100 dpi. I'm not sure whether I'll get enough of a reduction in noise that way to make it worth losing so much sharpness. I also have the software program SilverFast, which allows me to do multisampling and in effect does what your method does. I can scan at my sharpest dpi of 2100 and, if need be, multi-scan 2, 4, 8 or 16 times to help eliminate noise. But again there is a loss of sharpness, and that sharpness is a big reason why I shoot large format. Have you used SilverFast multi-sampling? Any thoughts on this?

Again thanks for your efforts and please don't take my questions the wrong way. I'm just looking for the best methods to scan and work my images so I do appreciate your sharing your knowledge. All the best.

Saulius
23-Oct-2006, 22:44
Ok, I guess I got ahead of myself. After some closer reading I see that later in your workflow you convert the image from 8 bits to 16 bits. Sorry, but I did say I was still digesting the info. :) However, I'm not too familiar with converting from 8 bits to 16 bits. Is it not better to scan at 16 bits at the outset instead of converting later on?
Ok, I'll stop asking questions now until I've actually tried out your method. After further reading it does sound promising.

buze
24-Oct-2006, 00:02
Thank you for your comments,

Just realize that we will be using FOUR 8-bit samples from the original image at 4800 dpi to make each pixel of the final 2400 dpi 16-bit image.
Not only that, but we will have 1/4 of the noise level, since we are blending 4 values that are very close together.
This system is far superior to doing a 4x "multiscan" of the same area, since the film can move while being scanned (lamp heat); so all you do when you "multiscan" is risk a softer scan in exchange for less noise.
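
As a rough illustration of the idea (this is not the tutorial's actual Photoshop steps; the array size, the noise level and the 8-to-16-bit scale factor below are made up for the example), here is a small numpy sketch of the 2x2 block averaging:

```python
# Sketch only: a synthetic "4800 dpi" 8-bit patch with random scanner noise,
# averaged 2x2 into a half-size image and stored with 16-bit precision.
import numpy as np

rng = np.random.default_rng(1)

# pretend this is an 8-bit greyscale scan at 4800 dpi: a flat patch plus noise
scan_4800 = np.clip(rng.normal(128, 4, (2000, 2000)), 0, 255).astype(np.uint8)

# group the pixels into 2x2 blocks and average each block -> one "2400 dpi" pixel
h, w = scan_4800.shape
blocks = scan_4800.reshape(h // 2, 2, w // 2, 2).astype(np.float64)
scan_2400 = blocks.mean(axis=(1, 3))

# keep the fractional part of the average by storing the result as 16-bit
scan_2400_16bit = np.round(scan_2400 * 257).astype(np.uint16)   # 255 * 257 = 65535

# averaging 4 independent noisy samples cuts the noise variance by about 4x
# (the standard deviation by about 2x)
print("noise std at 4800 dpi:        ", scan_4800.std())
print("noise std after 2x2 averaging:", scan_2400.std())
```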

Also remember that Photoshop "16 bits" is really only "15 bits", so all in all the difference in precision from the source file is a lot smaller than it appears. In fact, in my experience, I end up with better gradation AND a lot less noise by doing this downsampling than by using a "native" 16-bit TIFF.

JPEG high quality (the maximum quality setting) has virtually no artifacts to speak of (in the EPSON software; it is not true for VueScan, which is pretty lame in that regard), and since the image is going to be processed AND resized, such artifacts are, in my experience, irrelevant and invisible in the resulting image.

/If/ I could use 16-bit JPEG2000, I would; but I would rather use high-quality JPEG than waste my disk space on TIFF files.

Also remember that to see any artifact in a print of a file that size, you would have to use a pretty good loupe and look at a 20x20 inch print with it...

The idea of using "4800 dpi" is just to be able to oversample by a factor of 2 from the resolution you want at the end; these values would be different for different scanners, I'm sure...

Hope this helps...

Ken Lee
24-Oct-2006, 07:00
Really nice.

Hail to the folders from the 1950s! (I have a 6x6 and 6x9... love 'em)

Hail to Windsor, and to England in general. Heavenly for photography.

Larry Gebhardt
24-Oct-2006, 07:20
Saving in JPG to start with doesn't buy you anything besides a reduction in disk space used. Once opened in Photoshop, the image dimensions are what determine memory requirements. If you eventually delete the initial scan, then I see no need to bother with JPG.

I am not a digital math expert, but as I understand it:
Scanning in 16-bit mode will give you better highlights (since you are coming from a negative), even with your averaging technique. The reason is that in the initial 8-bit scan, the highest zones your highlights (shadows in the neg) use are allocated relatively few tonal values. I suspect that your 4-pixel averaging won't overcome the major rounding that will happen.

On an aesthetic note, I think your image could use a strong curve adjustment, on the order of input 189, output 144. Of course this is just my taste, and if you like your image a bit flatter that is fine. However, I think such an adjustment may show up some problems in the highlights with your technique.

Ed Richards
24-Oct-2006, 07:21
> (In the EPSON software, it is not true for VueScan that is pretty lame in that regard),

If you are scanning with the Epson software, you are not getting all of the dynamic range out of your negative, which would explain why you do not see any benefit to 16-bit scans (really 12 bit). You get much better dynamic range with VueScan or SilverFast, and by staying at 16 bit through editing. The oversampling is a good idea, and it is what I use, but SilverFast's multisampling with alignment works most of the time.

David Luttmann
24-Oct-2006, 08:00
Ok. Where to start? First of all, converting from 8 bit to 16 bit after the fact does nothing. Second, you'll notice the missing spikes in your levels....you are aware of what that means, aren't you? I just finished doing some comparisons working with 16 bit and 8 bit, and I can tell you that the difference is noticeable in the smoothness of sky tonality, depending on how much in the way of adjustments you've made.

Third, converting to JPG saves you no time: when PS opens the JPG, it decompresses it and it becomes the same size as a TIF file. Thus, if you have a desire to stick with 8-bit files, you should at least use the TIF format. Sharpening files that use any JPG compression will show artifacts in photos with high-contrast transition lines....such as the top of a forest joining the sky. As well, if you save the master scan as a JPG and then save your completed, worked-on file as a JPG....you are running the compression process multiple times. Sorry, but this is a no-no!

Your tutorial is well done....however, disk space is cheap. Stick with 16 bit to start and convert to 8 bit later. There are enough problems with tonality in the Epson scanners that we don't need to throw away data to save $.05 of hard drive space. And finally, don't use JPG as your storage format.

Kirk Gittings
24-Oct-2006, 08:06
I don't have time to dig into this now, but this is not a workflow I would recommend for high quality work for many of the reasons Ed and David stated above.

tim atherton
24-Oct-2006, 08:21
I'll add in on size: some of your choices are false economies.

Most people working in LF end up with far larger files than from your 6x6 negs. Working with image file sizes from 90 to 250 to 800MB+ shouldn't really be a major problem. If you are scanning 8x10 you are going to get big files.

Also, the same as Dave said for using JPEG - if you are re-saving to JPEG, you are probably losing data each time. And not starting in 16 bit (or even 15 or 12 bit) but converting 8 to 16 bit later on is also not really doing what you seem to think it is.

That said, some of the other stuff you detail certainly looks worth a try!

buze
24-Oct-2006, 09:10
I just can't believe it! I am a DSP Engineer; I worked in signal processing software for many, many years. I /dig/ signal processing.

I'm not going to start arguing about points that obviously you don't understand.
I'd like the moderators to remove this thread; I see no point in trying to educate such a group of obviously highly competent software engineers about how signal is processed from source to bus to destination with the minimum of aliasing while keeping resources in check.

Ron Marshall
24-Oct-2006, 09:37
I just can't believe it! I am a DSP Engineer; I worked in signal processing software for many, many years. I /dig/ signal processing.

I'm not going to start arguing about points that obviously you don't understand.
I'd like the moderators to remove this thread; I see no point in trying to educate such a group of obviously highly competent software engineers about how signal is processed from source to bus to destination with the minimum of aliasing while keeping resources in check.

Don't get your knickers in a twist. Simply explain in what way you consider the posters to be incorrect.

If you indeed have a better understanding of the process then please correct any misconceptions.

David Luttmann
24-Oct-2006, 09:42
I just can't believe it! I am a DSP Engineer; I worked in signal processing software for many, many years. I /dig/ signal processing.

I'm not going to start arguing about points that obviously you don't understand.
I'd like the moderators to remove this thread; I see no point in trying to educate such a group of obviously highly competent software engineers about how signal is processed from source to bus to destination with the minimum of aliasing while keeping resources in check.

Yes,

I suggest this be deleted as well....nothing like someone without an understanding of basic principles trying to educate others with garbage information.

I suggest you do some reading on basic scanning and Photoshop use.

Kirk Gittings
24-Oct-2006, 09:50
Michel (Buze),

You may have been a software engineer, but many of us here are actual living, breathing amateur and professional photographers with a ton of experience scanning. I am always interested in new ideas, but don't expect your ideas to be accepted on faith. Not on this forum.

Marcus Carlsson
24-Oct-2006, 10:14
Too bad that this thread turned out this way. I believe there is no such thing as a golden way to produce the perfect scan, only different ways to produce a good scan.

It's always good when someone says how he or she does it; then a debate on the good and bad points starts and everyone learns something.

Therefore I hope that the readers of this thread try to discuss the best way to scan instead of ruining it for the rest of us.

/ Marcus

Larry Gebhardt
24-Oct-2006, 11:32
I just can't believe it! I am a DSP Engineer; I worked in signal processing software for many, many years. I /dig/ signal processing.

I'm not going to start arguing about points that obviously you don't understand.
I'd like the moderators to remove this thread; I see no point in trying to educate such a group of obviously highly competent software engineers about how signal is processed from source to bus to destination with the minimum of aliasing while keeping resources in check.

Educate me. I don't claim to understand it all. I would like to learn. I can see how some of your methods would work, oversampling and then averaging to reduce noise, for example. However, I don't see how the same technique gets you all of the benefits of higher-bit scanning, even if you are storing the average in a larger space.

So don't ask to have the thread removed; just explain where we are wrong, and accept that people may disagree.

David Luttmann
24-Oct-2006, 12:00
Educate me. I don't claim to understand it all. I would like to learn. I can see how some of your methods would work, oversampling and then averaging to reduce noise, for example. However, I don't see how the same technique gets you all of the benefits of higher-bit scanning, even if you are storing the average in a larger space.

So don't ask to have the thread removed; just explain where we are wrong, and accept that people may disagree.

The point is that oversampling and averaging to reduce noise works in 8 bit....but it would be even better starting with 16-bit uncompressed files. I think many of us have been doing this long enough to realize that you don't start with 8-bit files and use JPGs as masters. Why? Because the final result isn't as good. I've seen the difference in print, and no "DSP" expert will convince me that what I've seen to be true doesn't exist.

buze
24-Oct-2006, 17:42
You might be a photographer, but you are not a DSP engineer. This discussion sounds like a painter explaining to a weaver how to make canvas. You might THINK you know, but you don't.

Just research "noise-to-signal ratio" a bit and ponder "bus width versus signal source"... Surely such pompously competent people who can "not recommend this workflow" will know all about this kind of stuff.

Note that in my tutorial I pointed out that /if/ I could use JPEG2k to store compressed 16-bit sources, I would use it. I wouldn't like to throw away signal.
I think you guys are still back in the 1990s, when JPEG encoders were primitive. MODERN encoders are not; I would defy you to see any artifact in a clean "source" image compressed at, say, 95%+ JPEG quality.

/If/ you recompress a JPEG the quality plummets, but on a clean source image the return is still fantastic. In my tutorial the image is processed AND resized before doing the "final" JPEG; that makes the "danger" of overcompressed macroblock artifacts pretty much as low as in the original image.

Oh, and the "16 bits" of your scanner is bullshit. The whole Dmax is 16 bits /for its total exposure range/, but as soon as you move the black/white point you eat into that. A normal negative will use about 1/2 of that range, up to 2/3ish on a contrasty neg; just the base "color" will eat into that anyway. So if you get away with 12 bits of precision, you are a lucky person. Note that THIS will also give you exactly the same signal/noise ratio as in 8 bits; you just get better "precision" on the noise.

Jay DeFehr
24-Oct-2006, 20:42
Buze,

Thank you so much for posting your workflow; it has improved the quality of my scans enormously.

Jay

Tim Lookingbill
25-Oct-2006, 08:29
I don't scan B/W or large format. I'm just an ex-graphic artist/prepress technician turned amateur photographer who likes to shoot consumer 35mm color negatives with a 1995 Minolta P&S and scan them on an Epson 4870 just to practice polishing turds.

Pardon my ignorance, but I can't understand why such lengthy instructions are needed for scanning a one-channel/grayscale image. This isn't the only site where well-meaning individuals have written a detailed workflow on this subject. I respect the dedication behind the work put into it, though.

I just have to scratch my head and ask...Do pro photographers really have this much trouble scanning B/W? Couldn't a simple curve pull out all the detail and tonality in such a simple capture?

David Luttmann
25-Oct-2006, 08:51
You might be a photographer, but you are not a DSP engineer. This discussion sounds like a painter explaining to a weaver how to make canvas. You might THINK you know, but you don't.

Just research "noise-to-signal ratio" a bit and ponder "bus width versus signal source"... Surely such pompously competent people who can "not recommend this workflow" will know all about this kind of stuff.

Note that in my tutorial I pointed out that /if/ I could use JPEG2k to store compressed 16-bit sources, I would use it. I wouldn't like to throw away signal.
I think you guys are still back in the 1990s, when JPEG encoders were primitive. MODERN encoders are not; I would defy you to see any artifact in a clean "source" image compressed at, say, 95%+ JPEG quality.

/If/ you recompress a JPEG the quality plummets, but on a clean source image the return is still fantastic. In my tutorial the image is processed AND resized before doing the "final" JPEG; that makes the "danger" of overcompressed macroblock artifacts pretty much as low as in the original image.

Oh, and the "16 bits" of your scanner is bullshit. The whole Dmax is 16 bits /for its total exposure range/, but as soon as you move the black/white point you eat into that. A normal negative will use about 1/2 of that range, up to 2/3ish on a contrasty neg; just the base "color" will eat into that anyway. So if you get away with 12 bits of precision, you are a lucky person. Note that THIS will also give you exactly the same signal/noise ratio as in 8 bits; you just get better "precision" on the noise.

Of course, if you scan in 8 bit, you throw away any extra precision you may have hoped for by using a higher bit depth. Compressing levels with a 16-bit master file is ALWAYS better than with an 8-bit one.

If this method works for you....then great. But please, don't tell me that you "defy" anyone to see the difference. I tried your workflow and compared it to a 16-bit scan and TIF method, and found better results using 16 bit. Sorry, but you're wrong. All the engineering in the world cannot tell me that I can't see the difference when I can.

I will not change my workflow and reduce quality to please some arrogant DSP engineer who is obviously well outside his realm when it comes to high-quality scanning and printing.

Go blow your attitude somewhere else until you actually learn to work with what you see as opposed to what you think "should" be correct. No one here will bow down to your "DSP" expertise!

robc
25-Oct-2006, 10:12
You might be a photographer, but you are not a DSP engineer. This discussion sounds like a painter explaining to a weaver how to make canvas. You might THINK you know, but you don't.

Just research "noise-to-signal ratio" a bit and ponder "bus width versus signal source"... Surely such pompously competent people who can "not recommend this workflow" will know all about this kind of stuff.

Note that in my tutorial I pointed out that /if/ I could use JPEG2k to store compressed 16-bit sources, I would use it. I wouldn't like to throw away signal.
I think you guys are still back in the 1990s, when JPEG encoders were primitive. MODERN encoders are not; I would defy you to see any artifact in a clean "source" image compressed at, say, 95%+ JPEG quality.

/If/ you recompress a JPEG the quality plummets, but on a clean source image the return is still fantastic. In my tutorial the image is processed AND resized before doing the "final" JPEG; that makes the "danger" of overcompressed macroblock artifacts pretty much as low as in the original image.

Oh, and the "16 bits" of your scanner is bullshit. The whole Dmax is 16 bits /for its total exposure range/, but as soon as you move the black/white point you eat into that. A normal negative will use about 1/2 of that range, up to 2/3ish on a contrasty neg; just the base "color" will eat into that anyway. So if you get away with 12 bits of precision, you are a lucky person. Note that THIS will also give you exactly the same signal/noise ratio as in 8 bits; you just get better "precision" on the noise.

You may be a DSP engineer but the major flaw in your whole idea is that noise is something that can be controlled with the usual digital image processing software that you are using. By which I mean, the only place where noise is a consideration is at the point of scan, before the signal is converted to digital. Once digitised, noise is irrelevant in the analogue terms you seem to be thinking in. The digitised values are not affected by noise. So all you have is an image which has captured noise in it. And you have no way of controlling the noise the scanner produces (except by producing negs of suitable quality).

You should read this (http://www.largeformatphotography.info/forum/showpost.php?p=187913&postcount=48) which will show you that your theory of removing noise only achieves two things. It reduces image sharpness by averaging 4 pixels into one and introduces resizing artefacts which are not the same as noise. They are calculated digital values which are wrong.

So if you want the best quality possible from your kit, you will scan at hardware resolution so as not to allow hardware or software resizing during the scan. Then, instead of downsizing, you should print at a higher dpi, which will have the same visible effect as removing scan noise by downsampling but without introducing downsizing artefacts. This in turn means less final sharpening is required, which reduces final image artefacts, which are there even if you can't see them.

Anything else is just an overly complicated method of reducing file size.

sanking
25-Oct-2006, 11:19
You may be a DSP engineer but the major flaw in your whole idea is that noise is something that can be controlled with the usual digital image processing software that you are using.

What is a DSP engineer?

Sandy King

robc
25-Oct-2006, 11:25
What is a DSP engineer?

Sandy King

It's what BUZE calls himself, and I think it means Digital Signal Processing Engineer (but I could be wrong).

sanking
25-Oct-2006, 12:51
It's what BUZE calls himself, and I think it means Digital Signal Processing Engineer (but I could be wrong).

Interesting. Is this a degree option at some universities?

Sandy King

Christopher Perez
25-Oct-2006, 13:06
No degree, as such. Think Electrical Engineering Bachelor of Science.

It means he understands, from a practical design engineering standpoint, what effect bus widths, signal processing, and data accuracy have on the scanning process.


Interesting. Is this a degree option at some universities?

Sandy King

Ed Richards
25-Oct-2006, 13:20
> You should read this which will show you that your theory of removing noise only achieves two things. It reduces image sharpness by averaging 4 pixels into one and introduces resizing artefacts which are not the same as noise. They are calculated digital values which are wrong.

I do not think that resizing a GIF tells us anything about oversampling to control noise. Scanning at 4800 dpi and then downsampling is a reasonable way to average data to reduce noise. You start with more data points and average them. With the GIF you start with the minimum data for the image, then reduce it. Downsampling only works when you start with more data than you need for the final image. Since GIFs do not have any excess data, all resizing a GIF tells us is that if it is downsampled to less resolution than is needed to define the image, the image breaks down.

robc
25-Oct-2006, 14:16
The fact that the image is a GIF is irrelevant. I've done this with a TIF and the same thing happens. I think you are wrong.

Ed Richards
25-Oct-2006, 14:25
> I've done this with a TIF and the same thing happens.

Unless the TIFF has excess data points, it will degrade. Unless the TIFF was generated by a random process, like scanning, it is not going to have excess data points. A scan at 4800 downsampled to 2400 or 1800 has excess data points, and what is being lost is noise, not real information.

robc
25-Oct-2006, 21:25
Perhaps you would care to define for the benefit of the list just exactly what an excess data point is and also what makes scanning a random process.

robc
26-Oct-2006, 02:35
Perhaps you would care to define for the benefit of the list just exactly what an excess data point is and also what makes scanning a random process.

To add: the fact that the GIF is not random is purposely designed to emphasize the destructive process of downsizing. Just because a landscape image is relatively random doesn't mean that the destructive process of downsizing isn't taking place. It is, but it is much less evident to the eye than with a highly structured image.

So arguing against the validity of this is the same as saying it doesn't matter if fine detail is altered by the downsizing process. Well, that's fine, except that if that's the case, then why bother with such a long drawn-out process to achieve fine detail? I guess it depends on whether you are trying to achieve apparent fine detail or as close to real fine detail as possible.

Larry Gebhardt
26-Oct-2006, 05:34
Rob, scanning at 4800 dpi and then averaging the 4 pixels should give you a 2400 dpi scan that has less noise than one scanned directly at 2400. Assuming the noise is random (which it must be to be noise), taking 4 samples and averaging will result in a more uniform signal than one sample. I have done this experiment with both resizing and multi-sampling on a drum scanner. It does work very well.

Also, the Epson scanner buze is using isn't capable of capturing much detail above 2400 dpi, so scanning at 4800 dpi is only giving him 4 samples for each real pixel. I would argue that on my 4870 the real resolution is around 1600 dpi, so you could really get about 9 samples to average.

The problem with buze's method isn't the noise reduction, but rather the claim that averaging 8-bit pixels when resizing will give you 16-bit precision. In an ideal world it might, but the initial 8-bit conversion will round all pixel values to one of very few values in the dark blacks (whites in negative film). Averaging won't bring back the subtle differences if all the neighboring pixels were rounded to the same value as well.
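
A quick synthetic test of both sides of this point (the gradient, the noise levels and the sample counts below are my own invented numbers, not anything from the tutorial): when the scanner noise is larger than one 8-bit step, averaging four 8-bit samples does recover sub-step gradation, but when the noise is well below one step, all four samples round to the same value and the averaging cannot bring the gradation back, while a 16-bit capture keeps it.

```python
# Sketch only: compare "average of four noisy 8-bit samples" against
# "one noisy 16-bit sample" for a very subtle tonal gradient.
import numpy as np

rng = np.random.default_rng(0)

n = 100_000
true = np.linspace(0.70, 0.71, n)     # a subtle gradient on a 0..1 scale

def quantize(x, bits):
    """Round a 0..1 signal to the given bit depth, returned on the 0..1 scale."""
    levels = 2 ** bits - 1
    return np.round(np.clip(x, 0.0, 1.0) * levels) / levels

def rms(err):
    return np.sqrt(np.mean(err ** 2))

for noise in (2.0 / 255, 0.05 / 255):            # ~2 LSB of 8-bit noise vs. almost none
    # four noisy samples per output pixel, each quantized to 8 bits, then averaged
    avg_8bit = quantize(true + rng.normal(0.0, noise, (4, n)), 8).mean(axis=0)
    # one noisy sample per output pixel, quantized to 16 bits
    one_16bit = quantize(true + rng.normal(0.0, noise, n), 16)

    print(f"noise = {noise * 255:.2f} of an 8-bit step")
    print(f"  RMS error, average of four 8-bit samples: {rms(avg_8bit - true):.2e}")
    print(f"  RMS error, single 16-bit sample:          {rms(one_16bit - true):.2e}")
```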

buze
26-Oct-2006, 05:56
But scanning at 16 bits is not going to give you 16 bits anyway. As I explained, you never reach 16 bits, because for that you would need an image that "covers" the complete Dmax of the scanner, and most of the time you use just about half. There goes one bit.
Then that remaining signal is "stretched" to fit the 16-bit space of the file, and Photoshop removes another bit when opening it (because it is actually 15 bits internally). There goes another one...

My method gives you 10 bits of clean signal, for a file that is massively smaller than your noisy 14-bit one. I'm just saying that the loss of quality incurred by using JPEG is acceptable to me, /especially/ since my method further down ensures minimum loss of signal and banding (something that most people using complex curves don't).
And again, this is a detail; if the software supported JPEG 2000, I would keep the "16 bits" and make this whole argument irrelevant.

The "interesting" bits of my tutorial are not about the source files anyway.

David Luttmann
26-Oct-2006, 07:44
But scanning at 16 bits is not going to give you 16 bits anyway. As I explained, you never reach 16 bits, because for that you would need an image that "covers" the complete Dmax of the scanner, and most of the time you use just about half. There goes one bit.
Then that remaining signal is "stretched" to fit the 16-bit space of the file, and Photoshop removes another bit when opening it (because it is actually 15 bits internally). There goes another one...

My method gives you 10 bits of clean signal, for a file that is massively smaller than your noisy 14-bit one. I'm just saying that the loss of quality incurred by using JPEG is acceptable to me, /especially/ since my method further down ensures minimum loss of signal and banding (something that most people using complex curves don't).
And again, this is a detail; if the software supported JPEG 2000, I would keep the "16 bits" and make this whole argument irrelevant.

The "interesting" bits of my tutorial are not about the source files anyway.


You keep missing an important point....while you won't get true 16 bit data from the scan, you will get more than you will from an 8 bit scan. Thus, using your procedure with the 16 bit scanner setting will yield you more usable data. As well, saving to jpg is a non-starter....the results aren't as good....period!

The interesting points in your tutorial fall by the wayside with your erroneous capture and filing methods.

tim atherton
26-Oct-2006, 07:52
I'm just saying that the loss of quality incurred by using JPEG is acceptable to me, /especially/ since my method further down ensures minimum loss of signal and banding (something that most people using complex curves don't).
And again, this is a detail; if the software supported JPEG 2000, I would keep the "16 bits" and make this whole argument irrelevant.


So, for example, how often do you make 50" or 60" prints?

robc
26-Oct-2006, 09:11
Rob, scanning at 4800 dpi and then averaging the 4 pixels should give you a 2400 dpi scan that has less noise than one scanned directly at 2400. Assuming the noise is random (which it must be to be noise), taking 4 samples and averaging will result in a more uniform signal than one sample. I have done this experiment with both resizing and multi-sampling on a drum scanner. It does work very well.


I've written this before on this list but I'll do it just one more time.
Downsampling to reduce noise does average, or perhaps a better term would be smooth out, the noise. But in doing so it also reduces sharpness. Reducing sharpness equates to losing detail. Get it? It loses detail as well as smoothing, i.e. smoothing equals a reduction in fine detail, period.

Now most people consider that 360 dpi in the print is all you need, because of the human eye's inability to resolve detail past approx 7 line pairs per millimetre, and that is with very high contrast line pairs. What I am saying is that instead of downsizing to reduce noise (which also loses sharpness and introduces downsizing artefacts) so that you can print at 360 dpi, you don't downsize and you print at 720 dpi instead. Result: no loss of sharpness through downsizing, and no downsizing artefacts introduced to the image. AND because the noisy pixels are now printed so close together, a group of 4 pixels which would have been averaged by your downsize will now be one line pair which cannot be resolved by the human eye at 720 dpi. That means it will be blurred, which means to all intents and purposes it is averaged. Get it? Downsizing is unnecessary to achieve removal of noise. Get it?

And you have not introduced any other aliasing artefacts or softening of the print to get to that point. And you can forget all the crap about signal-to-noise ratios, in the knowledge that there really isn't any need to worry about it if you print at a high enough dpi to render it irrelevant. And you won't need as much final sharpening in the print, which also means fewer artefacts. It's the simplest workflow possible, and the only limiting factor is whether you have a PC capable of processing the files. If not, then get one.
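
(For reference, the arithmetic behind that 720 dpi suggestion, taking robc's ~7 line pairs per millimetre figure for the eye: 720 dpi is about 28 pixels per millimetre, i.e. roughly 14 line pairs per millimetre, so adjacent noisy pixels on the print are about twice as fine as what the eye is said to resolve and simply blur together.)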

robc
26-Oct-2006, 09:27
My method gives you 10 bits of clean signal


You're talking complete bollocks. The only signal is happening inside the scanner. Once that signal is converted in the analogue-to-digital converter inside the scanner, the term "signal" is no longer valid. You're stuck in analogue mode. Convert yourself to digital mode, where you will understand that digital bits are no longer susceptible to noise or signal variations. If they were, then all computers would fail.

David Luttmann
26-Oct-2006, 09:59
You're talking complete bollocks. The only signal is happening inside the scanner. Once that signal is converted in the analogue-to-digital converter inside the scanner, the term "signal" is no longer valid. You're stuck in analogue mode. Convert yourself to digital mode, where you will understand that digital bits are no longer susceptible to noise or signal variations. If they were, then all computers would fail.

Rob,

He doesn't get it because he has remained on the theoretical side, as opposed to working to obtain high-quality output in large prints. Theory and reality don't often mesh here. That is why certain sharpening methods that work best in theory don't give us the best results in printing....ditto with interpolation algorithms. Until he actually compares prints at large sizes like I have, his theory of what is best holds no water with me....especially since I've compared in the past, and what he states about scanning bit depth & file compression is plain wrong....and visible!

Ed Richards
26-Oct-2006, 10:08
> Downsampling to reduce noise does average, or perhaps a better term would be smooth out, the noise. But in doing so it also reduces sharpness.

Only if you had 4800 real dpi that you were averaging. With these scanners you have about 1800 real DPI, so that when you average the data from 4800 to 2400, you are not losing detail because there was no real 4800 DPI (or, really, 2400 DPI) detail there in the first place.

David Luttmann
26-Oct-2006, 10:15
> Downsampling to reduce noise does average, or perhaps a better term would be smooth out, the noise. But in doing so it also reduces sharpness.

Only if you had 4800 real dpi that you were averaging. With these scanners you have about 1800 real DPI, so that when you average the data from 4800 to 2400, you are not losing detail because there was no real 4800 DPI (or, really, 2400 DPI) detail there in the first place.

Ed,

You still lose detail, as the downsampling process is not perfect....you therefore introduce errors in that process that do impact detail and apparent sharpness.

I would have had less of an issue with this whole process had he been working with 16 bit tiff files. Once I saw he was 8 bit scanning and saving his master file in jpg, he lost all credibility with me.

paulr
26-Oct-2006, 10:51
Ed,

You still lose detail, as the downsampling process is not perfect....you therefore introduce errors in that process that do impact detail and apparent sharpness.

I would have had less of an issue with this whole process had he been working with 16 bit tiff files. Once I saw he was 8 bit scanning and saving his master file in jpg, he lost all credibility with me.

I find that with an Epson-type scanner (one with an optical resolution around half the actual sampling frequency), I get small but occasionally noticeable reductions in noise from downsampling: scanning at 4800, then downsampling to 2400 (which is still slightly above the optical resolution under ideal circumstances). I don't see any reduction in sharpness from this method. I wouldn't expect to, since it's really just averaging four oversampled pixels. Scanning at 2400 ppi is much more flawed--the scanner simply throws out every other row and every other scan line.

I agree that there's no good reason to use JPEG compression. It might be that the highest JPEG setting is actually lossless, but then its benefits would be no different from LZW (and it adds the considerable disadvantage of 8-bit-only encoding).

On general principle it's foolish to throw out any bit depth early in the game. That said, there are in fact benefits to increasing the bit depth of a file before any image processing. It's not about creating information that's not there; it's about making a file that's more resistant to degradation from the processing algorithms. You can demonstrate this yourself. Take two copies of any 8-bit image file. Convert one to 16 bits. Then abuse both files by repeatedly increasing and decreasing the contrast. Look at the images (and the histograms) afterwards.
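
Here is a rough numpy rendering of that experiment (the grey ramp, the contrast factor and the repeat count are invented for the example, and the contrast is lowered before it is raised back so that nothing clips at the ends; paulr describes doing it on any image in Photoshop):

```python
# Sketch only: run the same contrast round trip on an 8-bit and a 16-bit copy
# of a grey ramp, rounding to the working bit depth after every step.
import numpy as np

def adjust_contrast(x, factor, bits):
    """Scale values about mid-grey by `factor`, then round back to `bits` of precision."""
    levels = 2 ** bits - 1
    mid = levels / 2.0
    return np.clip(np.round((x - mid) * factor + mid), 0, levels)

ramp8 = np.arange(256, dtype=np.float64)    # an 8-bit grey ramp, one of every tone
ramp16 = ramp8 * 257.0                      # the same ramp promoted to 16-bit code values

for _ in range(10):
    # lower the contrast, then raise it back; the repeated rounding does the damage
    ramp8 = adjust_contrast(adjust_contrast(ramp8, 1 / 1.4, 8), 1.4, 8)
    ramp16 = adjust_contrast(adjust_contrast(ramp16, 1 / 1.4, 16), 1.4, 16)

print("distinct tones left, 8-bit pipeline :", np.unique(ramp8).size)    # well under 256
print("distinct tones left, 16-bit pipeline:", np.unique(ramp16).size)   # still about 256
```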

This principle is well known in audio. Most digital audio workstation software actually processes the signal in a 32-bit space, even though the files themselves are typically 16 or 24 bit. The idea is that all the processing artifacts end up several decimal places farther out, where they are harmlessly truncated once the file is downsampled to its final bit depth.
In real life, you should never see such damage. There's rarely a reason to subject a file to more than one or two total adjustments (everyone's using adjustment layers, right?). And there's also little reason to start with anything besides a 16-bit file ... even if it's only a 15-bit file in real life.

David Luttmann
26-Oct-2006, 11:23
True Paul,

Converting from 8 to 16 bit doesn't add anything, but it does allow for less error across multiple corrections to an image. Of course, starting off with more information from an original 16-bit scan is preferable.

The JPG format is still lossy at the highest quality setting and thus is not an option for high-quality work.

I agree with the audio example. We get fewer linearity errors when using a 24-bit master and then downsampling to 16 bit....just like we'd have fewer errors starting with a 16-bit scan and working down to 8 bit.....it appears Buze just doesn't understand this.

However, whenever I do any serious listening to jazz, etc., it's an LP on the VPI....not a CD ;-) I know.....odd for a truly digital guy like me.

paulr
26-Oct-2006, 12:00
However, whenever I do any serious listening to jazz, etc., it's an LP on the VPI....not a CD ;-) I know.....odd for a truly digital guy like me.

It's true, and I don't think it's so mysterious. Both the CD and the LP are quirky media with their own fingerprints. A producer I used to work for gave me a pretty succinct impression ... "an LP sounds like an LP, a CD sounds like a CD, and neither one sounds anything at all like the master tape."

David Luttmann
26-Oct-2006, 14:31
It's true, and I don't think it's so mysterious. Both the CD and the LP are quirky media with their own fingerprints. A producer I used to work for gave me a pretty succinct impression ... "an LP sounds like an LP, a CD sounds like a CD, and neither one sounds anything at all like the master tape."

Amen!