Recent advancement in image resolution enhancement



Patrick Gauthier
12-Feb-2018, 13:46
I read this article (https://arxiv.org/abs/1612.07919) a little while ago and thought some might be interested in advancements in digital image enhancement technology that (fingers crossed) may become available/applicable for photographers down the road. The context of their objectives is somewhat different from the typical photographer's, yet there seems to be much overlap in the overall goal: better post-processing methods for enhancing image resolution.

https://arxiv.org/abs/1612.07919

Leigh
12-Feb-2018, 13:49
It all depends on what you mean by "enhancing image resolution".
You can increase the pixel density so larger prints look "normal" rather than pixelated.

But there's no valid way to add fine detail that's not in the original digital image.

- Leigh

Jac@stafford.net
12-Feb-2018, 13:54
I read this article (https://arxiv.org/abs/1612.07919) a little while ago and thought some might be interested in advancements in digital image enhancement technology that (fingers crossed) may become available/applicable for photographers down the road. The context of their objectives is somewhat different from the typical photographer's, yet there seems to be much overlap in the overall goal: better post-processing methods for enhancing image resolution.

https://arxiv.org/abs/1612.07919

Cruise more academic literature and look for common proposals. The abstract posted is rather empty, but most abstracts are click-bait for scholars and hackers. In the end they are selling something, you know.

Patrick Gauthier
12-Feb-2018, 13:55
You can try it out for yourself! However, their code downsizes your image to make low-res examples (i.e., to see if their method can produce something similar to the original, ground-truth image). If you're savvy in Python, maybe you can try it on full-res images.

https://github.com/shakeh3r/Enhancenet
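
If anyone wants to see the shape of that evaluation loop, here is a minimal sketch in Python -- not their actual script; the file names, 4x scale factor, and PSNR scoring are my own assumptions:

import numpy as np
from PIL import Image

SCALE = 4  # hypothetical scale factor; the paper tests several

# Build the low-res test input the way the evaluation does:
# bicubic downsampling of a ground-truth original.
gt = Image.open("ground_truth.png").convert("RGB")
w, h = (gt.size[0] // SCALE) * SCALE, (gt.size[1] // SCALE) * SCALE
gt = gt.crop((0, 0, w, h))  # crop so dimensions divide evenly by SCALE
low = gt.resize((w // SCALE, h // SCALE), Image.BICUBIC)
low.save("input_lowres.png")  # feed this file to the super-resolution code

# Once the model writes "output_sr.png", score it against the original.
def psnr(a, b):
    """Peak signal-to-noise ratio (dB) between two same-size 8-bit images."""
    diff = np.asarray(a, dtype=np.float64) - np.asarray(b, dtype=np.float64)
    return 10 * np.log10(255.0 ** 2 / np.mean(diff ** 2))

sr = Image.open("output_sr.png").convert("RGB")
print("PSNR vs. ground truth: %.2f dB" % psnr(gt, sr))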

Patrick Gauthier
12-Feb-2018, 13:57
It all depends on what you mean by "enhancing image resolution".
You can increase the pixel density so larger prints look "normal" rather than pixelated.

But there's no valid way to add fine detail that's not in the original digital image.

- Leigh

I would say it all depends on what you mean by "valid" :) The rest is rather objectively defined in the article.

consummate_fritterer
12-Feb-2018, 14:06
No offense intended to anyone. I know some software developers hype their wares far too much.

I once worked with a guy who believed a PS plug-in developer's claim that a six-pixel image of a small bird could be made to look reasonably like that bird using their plug-in. Apparently, many people bought into it.

After taking a closer look, I have to admit their results are better than I expected. Still, what is not there cannot be put back with any automated process.

mmerig
12-Feb-2018, 17:36
It all depends on what you mean by "enhancing image resolution".
You can increase the pixel density so larger prints look "normal" rather than pixelated.

But there's no valid way to add fine detail that's not in the original digital image.

- Leigh

+1

The article is interesting, but the comparison of the grasshopper images says it all. Basically, their test is to down-res an image and see if their algorithm can up-res it to match the original (the ground-truth image). As they explain, their method tends to add things that were not there in the original, and loses things that were (as do other methods). To me, that is not a valid (true) outcome, even though their method does work better than the others.

Patrick Gauthier
12-Feb-2018, 20:28
+1

The article is interesting, but the comparison of the grasshopper images says it all. Basically, their test is to down-res an image and see if their algorithm can up-res it to match the original (the ground-truth image). As they explain, their method tends to add things that were not there in the original, and loses things that were (as do other methods). To me, that is not a valid (true) outcome, even though their method does work better than the others.

The results are comical, but when you see the file they had to work with, it's pretty impressive. Figure 7 shows some issues with the texturing: smooth features adjacent to patterned features are problematic.

Still, it's pretty amazing how they can generate novel information that renders an otherwise blurry image startlingly similar to the ground-truth one. I'm very curious what it will do to an already somewhat acceptable image (for example, a 4x5 negative with botched focus that could still be acceptable for an 8x10 print) in terms of increasing potential/acceptable print size. My thinking is that while the image examples they provide in the paper as a whole leave much to be desired, perhaps for very large images (e.g., LF scans) the egregious artifacts in their final product would be less noticeable.

I'm going to try with a scan of an out of focus negative to see what happens. Will post results here.

mmerig
13-Feb-2018, 10:38
Your test would be more practical for most photographers than theirs. Who would down-sample an image and then try to reconstruct it using a complex algorithm, instead of just using the original, better image? The bicubic smoothing used for down-sampling may give a different set of errors than a low-resolution image obtained in practice (low pixel count, blur from motion, out of focus, etc.). But bicubic smoothing provided a consistent, well-understood starting point for their tests, of course.

Although their method can give better-looking images, it fails from a forensic standpoint without ancillary information (e.g., Figure 7). It's basically pearls on a pig. A lot of the utility of their method depends on whether the objective is a more truthful image or something that just looks better somehow.

Regarding your test with an out-of-focus image, it would be better to also have a well-focused image, identical in every other way, to compare the improved out-of-focus one to. Also, a scan of an analog enlargement of a section of the in-focus photo, rather than an initial zoomed-in scan, would minimize degradation during digital conversion of the true image. The same could be done for the out-of-focus image. The file sizes and computation times would be smaller too.

Despite my negativity about the algorithm, I look forward to your comparisons, and thanks for letting us know about their work and what you plan on doing.

Nodda Duma
13-Feb-2018, 19:26
If you consider that the "downsampling operator" is analogous to the optical point spread function and the Nyquist frequency of the imaging system, then you'll understand why this technique and others like it are not only theoretically possible, but practically possible as well.
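
In rough code terms, that degradation model looks like this (a sketch only -- numpy/scipy, with a Gaussian standing in for the real PSF):

import numpy as np
from scipy.ndimage import gaussian_filter

def degrade(scene, scale=4, psf_sigma=1.5):
    """Model a real imaging chain: blur by a point spread function,
    then sample on a grid too coarse for the remaining detail
    (i.e., below Nyquist). The decimation step is the paper's
    "downsampling operator"."""
    blurred = gaussian_filter(scene.astype(np.float64), sigma=psf_sigma)
    return blurred[::scale, ::scale]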

This isn't the only approach, nor are they working in a vacuum. Super-resolution techniques have been actively employed for over a decade, seeing first widespread use in smart phone cameras.

So yeah, it is a reality. Their research, like almost all research, is a small slice of ongoing incremental advancement in a field of which the public consciousness is only dimly aware. So I have to chuckle at naysaying that sounds akin to explaining why this internet thing will never take off.

If you think this is amazing technology, look at actual hot optical-engineering research topics such as computational optics, plastic GRIN lens printing, graphene detector research, or laminated infrared optics.

mmerig
13-Feb-2018, 23:27
If you consider that the "downsampling operator" is analogous to the optical point spread function and the Nyquist frequency of the imaging system, then you'll understand why this technique and others like it are not only theoretically possible, but practically possible as well.

This isn't the only approach, nor are they working in a vacuum. Super-resolution techniques have been actively employed for over a decade, seeing first widespread use in smart phone cameras.

So yeah, it is a reality. Their research, like almost all research, is a small slice of ongoing incremental advancement in a field of which the public consciousness is only dimly aware. So I have to chuckle at naysaying that sounds akin to explaining why this internet thing will never take off.

If you think this is amazing technology, look at actual hot optical-engineering research topics such as computational optics, plastic GRIN lens printing, graphene detector research, or laminated infrared optics.

Is anyone saying that the technique is not possible, or that it is an isolated effort? I think the naysaying is about how well it works, not whether it exists. In the context of this specific work, down-sampling is not very relevant to the objectives. What they are really after is accurate up-sampling or, more aptly, interpolation of estimated, missing information. In a practical application, one would start with an image of deficient resolution and try to fill in more pixels using information from neighboring pixels. I read the paper, and down-sampling and the Nyquist frequency hardly get at what they are doing. My take-away is that their main contribution was getting away from the pixel-wise mean-squared-error approach that is commonly used but does not yield very accurate results. Their method is quite impressive, but still, I wonder if it would be good enough in, say, a court of law where someone's life depended on a truthful image reconstruction.
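
A toy illustration of why that pixel-wise approach misleads -- on a fine texture, per-pixel error rewards a safe blur over sharp detail that is merely misaligned (numpy; the numbers are contrived for the example):

import numpy as np

x = np.tile([0.0, 1.0], 64)        # "ground truth": fine stripes
shifted = np.roll(x, 1)            # sharp texture, one pixel off
flat = np.full_like(x, x.mean())   # featureless blur

mse = lambda a, b: np.mean((a - b) ** 2)
print(mse(x, shifted))  # 1.0  -- misaligned detail is punished hard
print(mse(x, flat))     # 0.25 -- blur scores better, which is why
                        # MSE-trained reconstructions look soft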

The research is a reality, but the image reconstructions do not match reality, although they come close sometimes.

Nodda Duma
14-Feb-2018, 03:37
Like most papers on image enhancement going back 30 years or more, the authors start with a control image (commonly Lena's photo but not here), replicate the image degradation of real systems, and then run the degraded image through their algorithm to show how well they can replicate reality.

I find it humorous that folks get hung up on the image degradation aspect which isn't the interesting or even meaningful part of the paper at all.

Patrick Gauthier
14-Feb-2018, 09:40
Thanks for adding some depth behind this area of research, Nodda. Regardless of the necessary and healthy levels of skepticism, I suspect/hope most who work in the digital medium see it as an exciting possibility. What I appreciate about the present article is their openness in including scripts, so that readers can a) validate the authors' results themselves, and b) apply the program for their own purposes. The scripts they provide are elegant in their simplicity, although I don't normally write in Python; in comparison to R, they are rather concise.

Thanks mmerig for the suggested additional iterations for my tests. Sadly, it could be a month or two before I can complete this.

mmerig
15-Feb-2018, 16:28
Like most papers on image enhancement going back 30 years or more, the authors start with a control image (commonly Lena's photo but not here), replicate the image degradation of real systems, and then run the degraded image through their algorithm to show how well they can replicate reality.

I find it humorous that folks get hung up on the image degradation aspect which isn't the interesting or even meaningful part of the paper at all.

If the downsampling is not meaningful (and I think it is, for reasons stated earlier), then why use it? That's a rhetorical question, like my other one about "Who would downgrade an image and then try to fix it". Sorry that my facetious nature did not come through in my message, or maybe that is why Nodda Duma found it funny. Also, I know Nodda Duma is a lens designer, and I am not.

Sure, this image enhancement stuff is interesting, can be useful, and there is probably big money in it if Adobe, the FBI, etc. like it, but as others mentioned, it's the validity aspect that can be overstated.

In practical situations, a directionally uniform downsampling method like bicubic may not mimic real-world image degradation from camera movement, or from selective focus due to lack of depth of field or a focus mistake, where different parts of the image would need more enhancement than others (or none) -- common things that people may want to fix, rather than a bicubic-downsampled image. Surveillance images are an obvious example: they can be blurry from low resolution, signal noise, and subject movement. Maybe research has dealt with this in a big way, but given your message, it sounds like the "control image" approach has been standard practice for decades. It makes sense to use it as they do, but has anyone stepped out of this box to address some of these other practical image problems?
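
For instance, even a crude horizontal motion blur behaves nothing like a bicubic resample, since it smears along one direction only. A sketch, assuming numpy and scipy:

import numpy as np
from scipy.ndimage import convolve

def motion_blur(img, length=9):
    """Smear each pixel along one axis, as camera shake would --
    a directionally non-uniform degradation, unlike bicubic."""
    kernel = np.full((1, length), 1.0 / length)  # horizontal streak
    return convolve(img.astype(np.float64), kernel, mode="nearest")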

Or does bicubic smoothing mimic every kind of image degradation out there, and my concerns about its applicability are misplaced? (This one is sort-of rhetorical.)

Thanks for putting up with me!

Nodda Duma
15-Feb-2018, 18:24
Bicubic sampling / smoothing isn't in the same category. That's an interpolative method.

Super-resolution techniques take advantage of the fact that real-world image degradations (think MTF) are predictable and can be removed by deconvolution. The methods in that category don't add what isn't there or guess via interpolation; they extract what can be extracted from the available information.
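
A standard example from that category is Richardson-Lucy deconvolution, which redistributes recorded light according to a known PSF rather than inventing detail. A minimal sketch with scikit-image, using a Gaussian as a stand-in for a measured PSF:

import numpy as np
from scipy.ndimage import gaussian_filter
from skimage import restoration

# A known, predictable degradation: blur by a (here, Gaussian) PSF.
psf = np.zeros((9, 9))
psf[4, 4] = 1.0
psf = gaussian_filter(psf, sigma=1.5)
psf /= psf.sum()

scene = np.random.default_rng(0).random((128, 128))  # stand-in image
blurred = gaussian_filter(scene, sigma=1.5)

# Deconvolution extracts detail the PSF spread out; it only
# redistributes information the sensor actually recorded.
restored = restoration.richardson_lucy(blurred, psf, 30)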

Beyond that I can't dig further...it starts to become black magic even for me. The only other thing I can say is: You can see proof of the effectiveness of this type of approach in the imagery of the iPhone and other smart phones.

Jim Andrada
19-Feb-2018, 17:59
Maybe that's why I'm not crazy about iPhone imagery:cool:

I know it's sacrilegious to say so, but I do use a DSLR quite a bit. For some things it's great; for others, meh! So for the others I use film. My biggest "gripe" with the Canon 5D III is that things are too "clean". It's as if a little digital elf (not the Digital Elph, which was a nice little pocket camera) gets in there and scrubs away the faintest bit of noise -- you know, the stuff that makes an image look alive. At least to me.

Steven Ruttenberg
13-Mar-2018, 12:33
I look at it this way: you can't fix a bad image, whether the problem is focus, composition, etc. If it isn't right to begin with, no program or plug-in will make it so. There are advantages when starting with a good image, since you are already starting with something good that you can possibly make better. But start with crap and all you get is a polished turd.

Still interesting though.

Jac@stafford.net
13-Mar-2018, 14:27
The authors of the article are not proposing that they can make high-resolution images from low-resolution ones. Their underlying effort (their career thesis) leans toward describing how humans manage to recognize images through our low-resolution eyes and, most important, through the hugely complex brain processing we still do not understand, which makes recognition, for better or worse, usually adequate. The eye/brain is not a camera.

Their paper is only a small part of their effort.

Give the authors a break. Read up.

Leigh
13-Mar-2018, 17:48
I would say it all depends on what you mean by "valid" :) The rest is rather objectively defined in the article.
Valid detail is detail that existed in the original subject.

One example might be the lug nuts on the wheels of a car.

Take an image of that car from a mile away using a wide angle lens.

If you can enhance that image to the point of seeing the lug nuts and determining their angle of rotation, then you have recovered "valid detail". If not, you're just fantasizing.

- Leigh