I am very familiar with the technique in Photoshop that you mentioned. Perhaps my explanation is not making sense, or my choice of terms is too confusing.
Let us suppose someone is photographing an urban scene with several buildings. Some objects/structures in the scene are 10m closer than the plane of sharpest focus, others 20m or more farther from it. Using whatever defocus method I might choose with a view camera, or a large-aperture lens on another camera, the plane of focus will be sharp, with any objects away from that plane being defocused. The amount of defocus varies across the frame: objects at distances near the plane of sharpest focus are only slightly defocused, while objects farther (physically in the scene) from the plane of sharpest focus appear even more defocused. Also, objects closer to the camera will have a different defocus rendering (in-camera) than objects farther from the camera and farther from the plane of focus, though this effect depends upon how the lens renders the scene.
Compare this to trying to replicate it in software. There is no way in Photoshop to tell the filter that one object is 10m from the camera and another is 50m from the camera (note: distances chosen at random to illustrate the point). Photoshop treats every object in the scene as being at exactly the same distance as every other object in that scene. To be very fair about this, the average viewer would likely not know the difference between software defocus and in-camera defocus rendering, and many would find the software-manipulated image acceptable.
Just to use another example, suppose you photographed a fence at an angle to the camera. Let us suppose the farther end of the fence is on the left of the frame, and the closer end is on the right. Then let us suppose that we choose our plane of focus to be near the Golden Section line, with the rest becoming defocused. The part of the fence closer to us, closer to the plane of focus, and on the right side of our image will be less defocused than the farther fence posts that fall on the left side of our scene. Suppose I took another shot, but at a smaller aperture, so that nearly all the fence was in focus (or I could have used swing to get all the fence in focus). To get this second full-focus shot to look like the selective-focus shot, I would start by blurring the image away from where I remembered my point of focus. Then I could use many different methods to refine that blur, like creating a mask that fades the effect gradually over the length of the fence. Quite likely I could create a convincing software rendering of the second image that comes near the original selective-focus shot done in-camera; the problem is that I would know the difference when viewing a print of either image, and many creative professionals and photographers I know would also be able to tell the difference. That's a problem when you do this for a living, and one that many others will find unacceptable.
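To make the fence example concrete, here is a minimal sketch (in Python with Pillow and NumPy, standing in for a Photoshop gradient mask) of the basic technique being discussed: blending a sharp frame with a blurred copy through a horizontal gradient, so blur opacity ramps up from the near (right) end toward the far (left) end. The function name and parameters are my own invention, not anything from Photoshop.

```python
import numpy as np
from PIL import Image, ImageFilter

def graded_blur(img, max_radius=8.0):
    """Blend a sharp and a blurred copy of `img` through a horizontal
    gradient mask, so the blur fades in from right (near, sharp) to
    left (far, defocused). A rough sketch of the transition-mask idea;
    it only varies the *opacity* of one blur, not the blur diameter
    itself, which is part of why it differs from in-camera defocus."""
    w, h = img.size
    blurred = img.filter(ImageFilter.GaussianBlur(max_radius))
    # Mask ramps from 255 at the left edge (fully blurred copy shows)
    # down to 0 at the right edge (fully sharp copy shows).
    ramp = np.tile(np.linspace(255, 0, w).astype(np.uint8), (h, 1))
    mask = Image.fromarray(ramp, mode="L")
    return Image.composite(blurred, img, mask)
```

Note the limitation the post is pointing at: a single blended blur never changes the size of the blur circle across the frame, only how strongly one fixed blur shows through.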
Photoshop is a wonderful tool, and I have nearly 15 years using it on a daily basis. I know what many cameras and lenses can do, and have shot wide open on numerous cameras and various lenses (including large format wide open). Yes, you can try to fake it, and maybe be happy with the software result, but I guarantee you someone will spot the Photoshop, and their opinion of the image will be lowered. Yes, you can fake defocus, and fool many, many people, but sooner or later you will run into someone who knows exactly what manipulation was done to the image. In my opinion, when you take away the magic of an image, you have diminished its value.
I hope I have explained this better. It is the various distances of objects in a scene that alter the effects of defocus rendering, and software cannot know all those distances. To tie this into the Petzval-type lenses, many of those defocus along a curved plane of focus, which further complicates trying to replicate the look in software.
Gordon Moat Photography
Last edited by Gordon Moat; 17-Aug-2009 at 00:06. Reason: clarity
The "Lens Blur" effect mimics the center portion of a perfect apochromatic copy lens. Any real lens, especially a Petzval, will progressively deteriorate toward the edge of the frame. By the very edge, you're looking at swirl (coma), falloff, mechanical vignetting, and a million intangibles.
Throwing Lens Blur on the background based on a gradient will look exactly like what it is: fake. Google "fake tilt shift" to get a sense of what this looks like.
Not a substitute, but you can fool the suckers by rocking dual layers of radial blur, each masked and painted out to reveal whatever sharpness you want. I tried it once farting around, below, but won't admit to pawning it off anywhere. In any case it kind of looks old.
It's like when people add fake 4x5 black borders to their point and shoot pictures. To each his own, but this kind of thing has always struck me as tacky at best and disingenuous at worst.
With the proper use of selections, masking, gradients and differing blurs you can certainly mimic what I have seen here made with the old lenses. How does it stack up against the real thing? I don't know because I have not seen the real thing up close and personal. I suspect that judicious use of the available tools will get you very, very close. I don't see this as a plug-in that someone will run in Photoshop...it would seem to be a one-off effort.
Thanks for the various responses. Gordon, I do understand planes of focus, circles of confusion, the effect of distance from the plane of focus on the size of the circles of confusion and hence on the degree of "defocus." To others who responded by saying Photoshop couldn't "know" how far away various objects were, Photoshop doesn't need to know the actual distances any more than a lens "knows" actual distances. Photoshop "knows" what I tell it about distances when I create the transition mask. And finally, and most importantly, I didn't say that one could duplicate a lens effect in Photoshop. In fact I said the opposite, that I didn't know whether the effect duplicated a Petzval lens or not. The only point of my earlier message was to question Gordon's statement that Photoshop can't replicate the gradual increase in the "defocus" areas that takes place with a lens as objects in the scene become more distant.
I've attached two photographs. The one on the left is the original (revised to meet the size requirements of this forum), in which everything from the front to the top of the hill is pretty much sharp. The second shows the use of a transition mask to make a gradual transition from sharp to unsharp. I don't know how obvious the transition will be given the file size needed to post photographs here but in the originals on my monitor it's very obvious that the "defocus" area begins approximately an inch up from the bottom of the screen and gradually becomes more "defocused" as the distance from the bottom of the screen increases up to the top of the hill in the background.
Just in case it needs repeating - I don't claim that the photograph on the right duplicates the image that would have been made by a lens focused at the near and opened up, only that it's possible to create a gradual transition from "sharp" to "unsharp" in Photoshop similar to what would have been done with a lens. And with a little more time and effort, mainly by spending more time experimenting with different transition masks, I think the effect can be pretty close.
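The transition-mask approach described above can get closer to a genuinely gradual increase in defocus if several blur levels are stacked, each revealed by its own band of the mask, rather than one blur faded in by opacity. Here is a hedged sketch of that idea in Python with Pillow/NumPy; the step count, radii, and function name are arbitrary choices of mine, not anyone's actual workflow.

```python
import numpy as np
from PIL import Image, ImageFilter

def transition_blur(img, steps=4, max_radius=12.0):
    """Approximate a gradual sharp-to-unsharp transition (bottom of the
    frame sharp, top most defocused) by compositing several blur levels,
    each admitted by its own band of a vertical gradient mask. More
    steps means the blur *radius* itself steps up across the frame,
    closer to how defocus grows with distance from the focus plane."""
    w, h = img.size
    # Vertical ramp: 0.0 at the bottom (sharp) to 1.0 at the top.
    ramp = np.linspace(1.0, 0.0, h).reshape(h, 1).repeat(w, axis=1)
    out = img.copy()
    for i in range(1, steps + 1):
        radius = max_radius * i / steps
        layer = img.filter(ImageFilter.GaussianBlur(radius))
        # This band is opaque where the ramp calls for level i or more,
        # with a soft edge so adjacent bands blend into each other.
        band = np.clip((ramp - (i - 1) / steps) * steps, 0.0, 1.0)
        mask = Image.fromarray((band * 255).astype(np.uint8), mode="L")
        out = Image.composite(layer, out, mask)
    return out
```

This is still a guess made with a hand-painted ramp; as the thread notes, it only works to the extent the mask happens to track the real distances in the scene.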
Before you criticize someone, walk a mile in their shoes. That way when you do criticize them you'll be
a mile away and you'll have their shoes.
Right, but first you have to generate (or worse yet, capture) the depth map, and then you have to know what to do with it. Applying blur based on z-depth is nowhere near enough to get you to the "petzval look."
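To illustrate the point about z-depth blur being insufficient, here is a minimal sketch (Python with Pillow/NumPy; the function and its parameters are hypothetical) of the naive depth-map approach: quantize distance-from-focus into bands, blur the frame once per band, and pick each pixel from its band. Even done correctly, every blur circle stays round and uniform, with none of the swirl, coma, falloff, or field curvature a Petzval produces.

```python
import numpy as np
from PIL import Image, ImageFilter

def depth_blur(img, depth, focus=0.0, max_radius=10.0, bands=5):
    """Naive z-depth defocus. `depth` is a float array in [0, 1] with
    the same height and width as `img`; blur radius grows with
    |depth - focus|. Quantizing into bands keeps it to a few full-frame
    Gaussian blurs instead of a per-pixel variable kernel."""
    sharp = img.convert("RGB")
    arr = np.asarray(sharp, dtype=np.uint8)
    dist = np.clip(np.abs(np.asarray(depth, dtype=float) - focus), 0.0, 1.0)
    band = np.minimum((dist * bands).astype(int), bands - 1)
    out = arr.copy()
    for b in range(1, bands):
        radius = max_radius * b / (bands - 1)
        layer = np.asarray(sharp.filter(ImageFilter.GaussianBlur(radius)))
        sel = band == b
        out[sel] = layer[sel]  # pixels in this band take this blur level
    return Image.fromarray(out)
```

Even this much assumes you already have a credible depth map, which, as noted, is the harder half of the problem.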