
More in the digital revolution



Jim Graves
22-Jun-2011, 17:18
As digital increases its resolution and expands its use of computer enhancements and LF-type adjustments... this latest development has me a little flummoxed. I shoot a lot of soft focus and narrow depth of field... check out this website (be sure to click on various areas of the photos): Link (http://www.lytro.com/picture_gallery) ... waddyathink?

Peter De Smidt
22-Jun-2011, 18:07
According to the CEO's dissertation, the trade-off with a plenoptic camera versus a regular digital camera is that the plenoptic camera requires more sensor sites than the regular camera to achieve the same image resolution (see the rough arithmetic sketched below). (There is also a micro lens array in front of the sensor, which'll degrade the scene information.) The added directional information is what allows the software algorithms to vary the focus of the image file.

If that's right, there are a number of downsides to the technology.
1) With the same sensor you could've had a regular digital camera with significantly higher resolution.
2) The increase in required photo sites probably means that a given photo site will be smaller than a site in a regular digital camera that gives the same output file resolution. Smaller photo sites tend to be less sensitive and noisier than larger ones.
3) There's going to be a lot of computation going on in the camera, which'll require more processing power, with the concomitant power requirements, and more memory.
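
A rough back-of-the-envelope version of that resolution trade-off, assuming (purely for illustration; these are not Lytro's numbers) a 16 MP sensor and a 10x10 patch of photosites behind each microlens:

```python
# Back-of-the-envelope resolution trade-off for a plenoptic camera.
# All numbers are assumptions for illustration, not Lytro specifications.

sensor_pixels = 16_000_000    # hypothetical 16 MP sensor
angular_samples = 10 * 10     # assumed 10x10 photosites behind each microlens

# Each microlens becomes one output pixel, so spatial resolution drops
# by roughly the number of angular samples it covers.
output_pixels = sensor_pixels // angular_samples
print(f"Output image: ~{output_pixels / 1e6:.2f} MP "
      f"(vs. {sensor_pixels / 1e6:.0f} MP from a conventional sensor)")
```

With those made-up numbers the refocusable output is around 0.16 MP, which is the kind of penalty points 1 and 2 are describing.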

Despite that, it is a neat idea. If the technology can be realized effectively in a fairly low-cost and compact device, it could be quite popular with advanced snap shooters. (Regular snap shooters won't want to have to adjust every file.) More 'serious' photographers will probably not value the added versatility enough to put up with the lower resolution, lower dynamic range, and higher noise. (This is similar to the problem that many enthusiasts have with Foveon-equipped cameras.)

It sure would be a great teaching tool though.

Peter De Smidt
22-Jun-2011, 18:21
Here's another possibility. In a plenoptic camera, the imaging plane is not the photo sensors but the array of micro lenses that lie between the sensor and the camera lens. Each micro lens can send information to a number of photo sites. This is what allows the capture of direction information about the light rays. Well, instead of more direction info, the micro lens could focus the light on three light sensors, one sensitive to red, one to green, and one to blue. In effect this would allow point sampling of each color, although the number of points would be 1/3 the number of photo sites on the sensor.

It's true that increased processing and other techniques might minimize the noise, sensitivity, and dynamic range problems of smaller photo sites, but then it's likely that those same techniques could be applied to imaging sensors with larger photo sites. Thus, it'll always be a trade-off: focus flexibility or color accuracy versus resolution, noise, and dynamic range.
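
A tiny sketch of that color-sampling idea, just to make the bookkeeping concrete (the 1-D readout layout and the numbers are assumptions, not a real sensor format):

```python
import numpy as np

# Sketch of the idea above: each microlens feeds three dedicated photosites
# (R, G, B), so every output pixel is point-sampled in all three colors,
# at 1/3 the photosite count and with no demosaicing/interpolation.

n_microlenses = 4
raw = np.arange(n_microlenses * 3)      # stand-in readout: R, G, B, R, G, B, ...

rgb = raw.reshape(n_microlenses, 3)     # one full-color sample per microlens
print(rgb)                              # each row is one output pixel's (R, G, B)
```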

Brian C. Miller
22-Jun-2011, 20:23
International Business Times: Lytro: Photography May Be Seeing Its Future

From the dissertation:
To record the light field inside the camera, digital light field photography uses a microlens array in front of the photosensor. Each microlens covers a small array of photosensor pixels. The microlens separates the light that strikes it into a tiny image on this array, forming a miniature picture of the incident lighting. This samples the light field inside the camera in a single photographic exposure. A microlens should be thought of as an output image pixel, and a photosensor pixel value should be thought of as one of the many light rays that contribute to that output image pixel.

To process final photographs from the recorded light field, digital light field photography uses ray-tracing techniques. The idea is to imagine a camera configured as desired, and trace the recorded light rays through its optics to its imaging plane. Summing the light rays in this imaginary image produces the desired photograph. This ray-tracing framework provides a general mechanism for handling the undesired non-convergence of rays that is central to the focus problem. What is required is imagining a camera in which the rays converge as desired in order to drive the final image computation.

It seems that the microlens array captures multiple focus points on the digital sensor, and then the software reconstructs the desired focus point for the photograph.
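
A minimal sketch of that "sum the rays for an imaginary camera" step, in the common shift-and-add form: each sub-aperture view (one per angular sample) is shifted in proportion to its angular offset, then all views are averaged. The (U, V, H, W) array layout, the integer-pixel shifts, and the toy data are assumptions for illustration, not Lytro's actual file format or algorithm.

```python
import numpy as np

def refocus(lightfield, slope):
    """Synthetic refocus by shift-and-add.

    lightfield: array of shape (U, V, H, W), i.e. one HxW sub-aperture
    image per angular sample (u, v). 'slope' selects the focal plane:
    each view is shifted in proportion to its (u, v) offset and the views
    are averaged, so rays from the chosen depth line up (sharp) while
    rays from other depths smear out (blur).
    """
    U, V, H, W = lightfield.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            du = int(round(slope * (u - U // 2)))
            dv = int(round(slope * (v - V // 2)))
            # integer-pixel shifts for brevity; real code would interpolate
            out += np.roll(lightfield[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)

# Toy example: 5x5 angular samples of a 64x64 scene, refocused two ways.
lf = np.random.rand(5, 5, 64, 64)
near = refocus(lf, slope=1.0)
far = refocus(lf, slope=-1.0)
```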

I was browsing B&H looking at the P&S cameras after a conversation with a coworker. There are 16Mp cameras starting at $125. What to do with all of that excess resolution? Well, Lytro has the answer. Capture multiple focus points, and then synthesize the picture.

Stephen Berg
23-Jun-2011, 10:39
It seems to me that this technology would also allow "virtual" (after capture) camera movements.

Steve Smith
27-Jun-2011, 00:19
A solution to a problem which doesn't exist.


Steve.

paulr
28-Jun-2011, 08:33
More likely a solution to problems that have yet to be imagined. I'm guessing it won't be that interesting to snap shooters (their cameras take care of focus for them already, with adequate reliability, and they don't like mucking around with post processing). But it will open up possibilities for people who like to experiment with the rendering of space, and god knows what else.

Mike Anderson
19-Oct-2011, 22:28
Well they look cool and different:

http://news.cnet.com/8301-30685_3-20122734-264/lytro-unveils-radical-new-camera-design/

...Mike

Jay DeFehr
20-Oct-2011, 10:00
There's an old saying that inventors don't know what the invention is. Inventors provide capabilities and users define the applications. Making creative use of these new capabilities might require more mental flexibility than some enjoy, but that's OK; we only need a few users to exploit the technology in creative ways that will, in retrospect, seem obvious.

Brian C. Miller
20-Oct-2011, 10:15
Here's a couple of uses: Focus near and far, with the middle out of focus. Make spasmodic videos of things jumping in and out of focus.

Mike Anderson
20-Oct-2011, 12:13
Here's a couple of uses: Focus near and far, with the middle out of focus. ...

Does that thing allow multiple planes of focus (in other words can everything be in focus)?

...Mike

Brian C. Miller
20-Oct-2011, 13:20
What the camera constructs is a digital file, with a number of focal planes (Lytro science link (https://www.lytro.com/science_inside)).

Without the advertising hyperbole, the lens is a normal zoom lens. The sensor simultaneously captures a number of focal planes, so different objects are in focus in each plane. The images can be combined into one scene, as shown on the Lytro blog (http://blog.lytro.com/), so it appears as if everything is in focus. The price isn't too bad, $400 to $500 depending on finish.
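
For the "combine into one scene" step, here is a minimal sketch of one way it could be done: keep, at each pixel, the refocused rendering with the highest local contrast (a simple gradient-based focus measure). This illustrates the general idea only, not how Lytro's own software works.

```python
import numpy as np

def all_in_focus(stack):
    """Merge refocused renderings (shape (N, H, W)) into one image by
    keeping, per pixel, the rendering with the highest local contrast."""
    gy, gx = np.gradient(stack, axis=(1, 2))     # gradients within each image
    sharpness = gx**2 + gy**2                    # simple focus measure
    best = np.argmax(sharpness, axis=0)          # (H, W) map of sharpest index
    h_idx, w_idx = np.indices(best.shape)
    return stack[best, h_idx, w_idx]

# Toy usage: four refocused renderings of a 64x64 scene.
stack = np.random.rand(4, 64, 64)
composite = all_in_focus(stack)
```

In practice the focus measure would be smoothed before the argmax so the selection doesn't flicker from pixel to pixel, but the idea is the same.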

Mike Anderson
20-Oct-2011, 14:19
...The images can be combined into one scene, as shown on the Lytro blog (http://blog.lytro.com/), so it appears as if everything is in focus...

I would have thought they'd emphasize that point more, for example with a single button in their demos to bring everything into focus. An obvious post processing tool would be an editor with a focus brush and an unfocus brush. But I don't think they're targeting an artistic post processing market (the images, once made static, are probably low res); they seem to be emphasizing simplicity and no shutter lag. And they're emphasizing the "living pictures" concept, which doesn't seem that exciting to me.

...Mike

QT Luong
20-Oct-2011, 14:32
The final (native) resolution of the images is, at best, 1080 pixels (square).

See an analysis I wrote a few months ago: http://bit.ly/mKKIJe

Brian C. Miller
20-Oct-2011, 15:44
An obvious post processing tool would be an editor with a focus brush and an unfocus brush.

Give it a little time, and I'm sure that Adobe and Gimp will read the files and have the brush. Yes, the camera is a bit of a gimmick, but I think that people might have a bit of fun with it.

Nathan Potter
20-Oct-2011, 17:15
I assume that this technology is similar to that discussed previously in a thread here.
Out-of-focus blur patterns are sampled with multiple pixels that can discriminate the various angles of the incoming ray bundle; software analysis can then reconstruct the point of best focus for that blur. This reconstruction is then integrated over the whole frame, yielding the various planes of best focus. If these planes are superimposed on one another, one could have a final image with infinite DOF, I suppose. Very ingenious, of course, but it takes a lot of computing power and sophisticated algorithms. As QT points out, there will be a sacrifice in resolution as a function of how many pixels are needed to reconstruct the points of best focus.

The loss of resolution is intriguing, because the blur-pattern sampling can use the same pixels multiple times, depending on the nature of the algorithm. Thus the resolution loss could be greatly minimized, it seems. Not quite sure how directionality is determined at each pixel.
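
If I'm reading the dissertation excerpt earlier in the thread correctly, the directionality comes from which photosite under a given microlens the ray lands on: pixels near the edge of the patch see the main lens from a steeper angle than pixels near the center. A toy sketch of that geometry, with made-up pitch, gap, and pixel-size numbers:

```python
import numpy as np

# Assumed geometry, illustrative numbers only.
microlens_pitch = 10      # assumed: 10 pixels across, behind each microlens
gap = 25e-6               # assumed microlens-to-sensor spacing (m)
pixel_size = 1.4e-6       # assumed pixel pitch (m)

def ray_angle(pixel_index):
    """Incoming ray angle (radians) for a pixel under one microlens,
    measured from the microlens axis."""
    offset = (pixel_index - microlens_pitch / 2 + 0.5) * pixel_size
    return np.arctan2(offset, gap)

for i in range(microlens_pitch):
    print(f"pixel {i}: {np.degrees(ray_angle(i)):+5.1f} degrees")
```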

At first cut this seems like a gimmick that would excite the digi freaks.

Nate Potter, Austin TX.

Robert Hughes
21-Oct-2011, 05:47
Why would you want everything in focus? Phone cameras do that already.

Nathan Potter
21-Oct-2011, 12:21
Why would you want everything in focus? Phone cameras do that already.

Most phone camera images I've seen seem to have nothing in focus. Drives me nuts watching some of these; they are so bad. Of course this is mostly poor resolution - and things are getting better.

Nate Potter, Austin TX.

E. von Hoegh
21-Oct-2011, 13:51
Here's a couple of uses: Focus near and far, with the middle out of focus. Make spasmodic videos of things jumping in and out of focus.

That will go well with the fashion of jump cutting every 15 seconds. (vomiting smiley)

DrTang
26-Oct-2011, 11:59
A solution to a problem which doesn't exist.


Steve.


I solve their "problem" by focusing.

Ben Syverson
27-Oct-2011, 20:59
Anyone who has ever photographed kids should be able to see the appeal of the Lytro... You can capture just the right moment, and then focus later. Kids are the ultimate test of any autofocus system.

But the potential of the Lytro goes beyond "focus after the fact." Because the camera captures a lightfield, you could simulate any lens with incredible accuracy. So you could shoot once, and then decide later if you want the look of a Petzval or a Summicron. It wouldn't just be a filter—you could actually model the way each lens interacts with the lightfield.

You could also do weird things, like bend the DOF range nonlinearly. So you could photograph people at a table sitting near and far, and make all of them in focus—but still throw the background out of focus.

Not only that, but you could shoot a scenic shot from behind a chain link fence or a net, and then remove the obstruction with one click. That's because lightfield cameras can see "around" foreground objects to some degree.
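
A toy numerical sketch of why that works (everything here is made up: the angular sampling, the "fence" as a single black column, and the assumption that the background is already registered across views): when the sub-aperture views are aligned on the background and averaged, the thin occluder lands in a different place in each view and gets diluted instead of blocking the scene.

```python
import numpy as np

U = V = 7                       # assumed 7x7 angular samples
H = W = 48
background = np.random.rand(H, W)

views = np.empty((U, V, H, W))
for u in range(U):
    for v in range(V):
        view = background.copy()            # background registered across views
        col = W // 2 + 2 * (v - V // 2)     # thin "fence wire" shifts with viewpoint
        view[:, col] = 0.0                  # occluder blacks out one column
        views[u, v] = view

synthetic = views.mean(axis=(0, 1))         # average all views, focused on background
print("occlusion error, single view:", abs(views[0, 0] - background).mean())
print("occlusion error, composite  :", abs(synthetic - background).mean())
```

The composite's error is a small fraction of the single view's, which is the "seeing around the fence" effect in miniature.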

The big test will be resolution. They haven't specified anything yet, so I doubt this camera makes a file big enough for a 4x6" print.

I'm a little surprised they didn't go after video first, since video resolutions are so much lower. The biggest challenge in the video world today is getting that shallow DOF effect everyone likes, but with good autofocus. Lytro's tech sidesteps that problem neatly. They could use face detection to perfectly rack focus—and if you want to adjust the focus on the computer, you could.