
Thread: Signal processing on digital images (warning, esoteric!)

  1. #1

    Signal processing on digital images (warning, esoteric!)

    Having played with unsharp masking, I wonder if there is another way to sharpen detail. Once an image is digitized, we know the sampling rate and the Nyquist spatial frequency. From published material from film manufacturers, we know the power spectrum of the film's response, the MTF. Assuming the film induces no phase shift, why not deconvolve the film's MTF from the image: do a 2-D FFT on the image, divide out the film's MTF, then inverse transform. Assuming a good-quality scan with low noise, that should restore the image that the lens produced in the film plane. Has anybody tried this? Am I missing something obvious?
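    For anyone who wants to try it, here is a minimal sketch of the naive version of the idea (numpy assumed; the function name is mine, and the MTF is assumed to already be sampled onto the same frequency grid as the image's FFT):

    ```python
    import numpy as np

    def deconvolve_mtf(image, mtf, eps=1e-3):
        """Naive MTF deconvolution: divide the image's 2-D spectrum by the
        film's MTF and transform back.

        `mtf` must be real, non-negative, and sampled on the frequency grid
        that np.fft.fft2 produces (zero frequency at the corner).
        `eps` clamps the divisor so near-zero MTF values don't blow up.
        """
        spectrum = np.fft.fft2(image)
        restored = spectrum / np.maximum(mtf, eps)
        return np.real(np.fft.ifft2(restored))
    ```

    In the noise-free case this exactly undoes a blur whose transfer function is the MTF; real scans are less forgiving, for the reasons discussed in the replies.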

  2. #2

    Signal processing on digital images (warning, esoteric!)

    The traditional way to approach this is to determine the impulse response of the system (lens plus film plus scanner taken together) and derive a convolution mask which inverts it as nearly as possible, such that a step function in the subject is reproduced as a step function in the postprocessed data.
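    As a concrete illustration of the convolution-mask route (numpy assumed, names mine) — note the 3x3 kernel below is a generic sharpening mask, not one actually derived from a measured impulse response:

    ```python
    import numpy as np

    def convolve2d(img, kernel):
        """Direct 'same'-size 2-D filtering with edge replication.
        (This computes correlation; for the symmetric kernels used here
        that is identical to convolution.)"""
        kh, kw = kernel.shape
        padded = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)), mode="edge")
        out = np.zeros(img.shape, dtype=float)
        for i in range(kh):
            for j in range(kw):
                out += kernel[i, j] * padded[i:i + img.shape[0], j:j + img.shape[1]]
        return out

    # A generic 3x3 sharpening mask: coefficients sum to 1, so flat areas
    # pass through unchanged while edges are boosted.
    sharpen = np.array([[ 0., -1.,  0.],
                        [-1.,  5., -1.],
                        [ 0., -1.,  0.]])
    ```

    Because the coefficients sum to one, uniform tones are left alone; a step in the input comes out steeper (with some overshoot), which is the sense in which such a mask approximately inverts a mild blur.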

    I suspect the reasons why the approach you describe generally isn't used are twofold:

    1. The processing overhead for 2-D FFTs is non-trivial, particularly compared to that required for a simple convolution. You might be able to get around that by employing a blocked approach (along the lines of the 8x8 DCT operation that forms the heart of JPEG) but then you'd risk discontinuities at the block boundaries. There are a fair number of recent papers in the literature which deal with the computational complexity of the various FFT algorithms, including one from a few years back which estimated that 40% of all computational cycles on all Cray Research machines in the installed base at that time were being spent on FFTs.

    2. Convolution-based approaches already do a very good job of extracting all of the image detail available, so that's not the issue here. The real problem is noise rejection: we want to sharpen details adequately without amplifying noise, or creating moire patterns in cases where the original isn't contone. Given that noise from scanning in particular is vastly more likely to repeat at fixed frequencies than image details are, it's not clear that going into the frequency domain and boosting FFT terms is really what you want to do! (An exception: if you know the frequency of the noise you want to suppress, then you might see some benefit from attenuating the corresponding terms of an FFT on the data; I say "might" because I've tried that approach in the context of moire suppression and haven't had a whole lot of luck.)
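    For that exception — attenuating a noise component whose frequency you already know — a simple FFT notch can be sketched like this (numpy assumed; the function and its parameters are mine, and as noted above, mileage varies):

    ```python
    import numpy as np

    def notch_filter(image, freq, radius=1.0):
        """Zero a small neighbourhood of the 2-D spectrum around a known
        noise frequency `freq = (u, v)` (in FFT bin units) and around its
        conjugate mirror, so the output stays real."""
        spec = np.fft.fft2(image)
        h, w = image.shape
        yy, xx = np.ogrid[:h, :w]
        u, v = freq
        for cu, cv in ((u % h, v % w), ((-u) % h, (-v) % w)):
            # wrap-around (circular) distance to the notch centre
            dy = np.minimum(np.abs(yy - cu), h - np.abs(yy - cu))
            dx = np.minimum(np.abs(xx - cv), w - np.abs(xx - cv))
            spec[dy ** 2 + dx ** 2 <= radius ** 2] = 0.0
        return np.real(np.fft.ifft2(spec))
    ```

    This kills a pure sinusoidal pattern at the notched frequency, but any image detail sharing that frequency goes with it — which is one reason the approach disappoints on real moire.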

    Let us know how it goes ;-)

    -- Patrick Chase

  3. #3


    Signal processing on digital images (warning, esoteric!)

    This will work, but you have to remember that with modern lenses and good technique it is the film's MTF which limits the total MTF of the system. Even if you are careful to avoid dividing by zero, you often end up simply amplifying the noise floor as Patrick mentioned.
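    One standard way to divide out the MTF without amplifying the noise floor is Wiener-style regularization: instead of 1/MTF, apply MTF/(MTF² + NSR), where NSR is an assumed noise-to-signal power ratio. A sketch (numpy assumed, names mine):

    ```python
    import numpy as np

    def wiener_deconvolve(image, mtf, nsr=0.01):
        """Wiener-style restoration.  Where the MTF is large the filter
        approximates 1/MTF; where the MTF is small the gain rolls back
        toward zero instead of amplifying the noise floor.
        `nsr` is an assumed flat noise-to-signal power ratio."""
        spectrum = np.fft.fft2(image)
        filt = mtf / (mtf ** 2 + nsr)
        return np.real(np.fft.ifft2(spectrum * filt))
    ```

    The gain is bounded at 1/(2·sqrt(NSR)), so a division-by-near-zero can never occur; the price is that detail at frequencies where the MTF has collapsed is simply not recovered.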

    Another problem is that unlike the point response function of lenses, which is nice and smooth at their resolution limit, the MTF of film does some funky things as you approach spatial frequencies corresponding to the grain size. The human eye is good at averaging the grain out and ignoring it, but including the effects of grain in a deconvolution algorithm takes some care.

    Another way of looking at the grain problem is to remember that the MTF of film varies with the contrast level, because subtle tonal variations can only be expressed with a larger number of grains. Thus the MTF depends on the amplitude of the spatial wave you are trying to capture - which knocks Fourier's theorem into a cocked hat.

    So in practice, although deconvolving the film's MTF has a justification grounded in physics, it doesn't have many real advantages over other, arbitrarily chosen sharpening algorithms, at least for general imagery.

