
Depth of field feature in real time

Posted: Fri Apr 06, 2007 2:33 pm
by alexxx_95
Hello everybody,

My wish would be to have a feature like Multilight, but for depth of field:

to be able to change the DOF during or after the render.
It would be a very, very useful feature and it would considerably reduce setup time.

Thanks NL

regards

Alexxx

Posted: Fri Apr 06, 2007 2:43 pm
by deadalvs
as discussed in many previous threads: not possible

Posted: Fri Apr 06, 2007 2:57 pm
by alexxx_95
hello deadalvs,

ok, sorry, I didn't see those topics before.... I'll pay more attention next time.

Alexxx

Posted: Fri Apr 06, 2007 3:07 pm
by -Adrian
A quick OpenGL preview feature would be very useful, though.

If your software supports it and you own Photoshop CS2

Posted: Fri Apr 06, 2007 3:57 pm
by Peder
If your 3D software supports creating depth maps (Maxwell's own feature is broken), you can do it. Check out the tutorial I posted on the FZ forum. The same should be possible in other software, no?

http://www.formz.com/forum2/messages/14 ... 1173903168

Peder

Posted: Fri Apr 06, 2007 4:44 pm
by deadalvs
that tutorial looks interesting.

it just seems to me that real DOF algorithms are a «little» more complex than this workflow.

Posted: Fri Apr 06, 2007 4:46 pm
by deadalvs
-Adrian wrote: A quick OpenGL preview feature would be very useful, though.
that is a good point. i think that red/blue cursor in studio is a little in need of attention.
but how to work this out in the host 3d app ...

Re: If your software supports it and you own Photoshop CS2

Posted: Fri Apr 06, 2007 5:15 pm
by rivoli
Peder wrote: If your 3D soft supports creating depthmaps (Maxwells feature is broken) you can do it.
well, as already said, that's how you would apply a post blur in ps, but it has very little to do with real-time dof.

Out of curiosity

Posted: Sat Apr 07, 2007 12:27 am
by Peder
Out of curiosity, what is it about the Photoshop technique that you consider as "having little to do with real time dof", or, alternatively, that makes "real DOF algorithms a «little» more complex"?

Peder

Posted: Sat Apr 07, 2007 12:50 am
by rivoli
in ps you adjust the quantity of blur with parameters that are meaningless in maxwell, starting from a Z channel which can be arbitrarily mapped in 3d space to the radius of the actual blur. in maxwell you get different depth of field if you, for example, widen or narrow the camera aperture (both before and beyond the actual focal point). you'd have a hard time finding consistent values switching between the two. and a true 3d dof such as maxwell's will give bokeh effects that lens blur can't generate.
and it wouldn't really be real time, would it? not only would you have to switch to another app, you'd actually have to render an output, which kinda contradicts the whole idea of a preview before or while you're rendering.
just my $0.02 :D
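The aperture point can be put in one formula: under a thin-lens model the blur-circle diameter scales with the aperture diameter f/N, so changing the f-stop rescales the blur everywhere at once, before and beyond the focal point, which a hand-tuned Photoshop radius doesn't track. A small sketch (all parameter values invented, distances in metres):

```python
import math

def coc_diameter(d, focus=2.0, f=0.05, N=2.8):
    """Thin-lens circle of confusion (metres) for a point at distance d,
    with focus distance `focus`, focal length `f` and f-number `N`."""
    A = f / N                                    # aperture diameter
    return A * (f / (focus - f)) * abs(d - focus) / d

# A point in the focal plane has zero blur:
print(coc_diameter(2.0))                         # -> 0.0

# Opening up from f/2.8 to f/1.4 doubles the blur circle of every
# out-of-focus point at once -- one knob, the whole image changes:
print(coc_diameter(4.0, N=1.4) / coc_diameter(4.0, N=2.8))   # -> 2.0
```

There is no single Photoshop "radius" that corresponds to this; the mapping from Z value to blur radius changes with every aperture or focus tweak.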

Posted: Sat Apr 07, 2007 5:21 am
by michaelplogue
I've actually given this subject some thought in the past, and I believe that it could be possible, although difficult. What got me thinking about this was an article I saw way-back-when about this camera prototype:

http://graphics.stanford.edu/papers/lfcamera/


To do this in a renderer, you would need a much more robust z-depth channel: more than just a 256-level greyscale image. There's no reason whatsoever that you couldn't use the full spectrum of color to create a very detailed depth channel. In addition, you would need an MXI-type format that contains image data without any sort of lens distortion (i.e. a non-DOF image). With these two 'channels' you could - in theory - have interactive sliders that select your focal depth (based on a specific color value from your z-depth channel) and the amount of blur/bokeh (f-stop, sort of), and have the program create your DOF as a post effect (like glare).

Just like you can now with the multilight system, you would be able to create an animation from a single MXI where the focal distance and/or f-stop(blur) would change.

Piece of cake! :P
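The slider idea above could be sketched roughly like this, assuming a float depth pass in metres and a DOF-free beauty pass (function and parameter names are invented, and the naive per-pixel gather is far too slow for production, but it shows the data flow):

```python
import numpy as np

def refocus(img, depth, focus, f=0.05, N=2.8, px_per_m=2000.0):
    """Toy post-process DOF: blur each pixel by its thin-lens circle
    of confusion. `focus` is the interactive slider value.

    img   : (H, W) float beauty pass rendered WITHOUT in-camera DOF
    depth : (H, W) float depth in metres (not an 8-bit greyscale map)
    """
    A = f / N                                      # aperture diameter
    coc = A * (f / (focus - f)) * np.abs(depth - focus) / depth
    radius = (coc * px_per_m / 2.0).astype(int)    # blur radius, pixels
    H, W = img.shape
    out = np.empty_like(img)
    for y in range(H):
        for x in range(W):
            r = radius[y, x]
            patch = img[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
            out[y, x] = patch.mean()               # box blur of size r
    return out
```

Dragging `focus` or `N` just re-runs the blur over the stored passes, exactly like re-weighting lights from a single MXI with Multilight.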

Posted: Fri Apr 13, 2007 10:09 pm
by droid42
michaelplogue wrote: To do this in a renderer, you would need a much more robust z-depth channel that's more than just a 256 level greyscale image. There's no reason whatsoever that you couldn't use the full spectrum of color to create a very detailed depth channel. In addition, you would need an MXI-type format that contains image data that does not include any sort of lens distortion (ie a non-DOF image). With these two 'channels' you - in theory - could have interactive sliders that would select your focal depth (based on a specific color value from your z-depth channel) and blur/bokeh effects amount (f-stop sort of), and have the program create your DOF effects as a post effect (like glare).

Just like you can now with the multilight system, you would be able to create an animation from a single MXI where the focal distance and/or f-stop(blur) would change.


What you describe is indeed very elegant :)

The main problem is that the result would be slightly biased at best, or downright wrong at worst. With a physically-based camera model (or even a simple thin-lens model) it's possible to partially see behind blurred foreground objects. This information would not be present in the non-DOF render so the post-render tweaked version would be no better than a photoshopped DOF (although it would be MUCH easier to use).
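The occlusion problem can be shown with a toy 1-D example (all numbers invented): a flat, DOF-free render simply never stores the background colour hidden behind a foreground object, so no amount of post-blurring of stored pixels can reproduce what a wide-aperture lens actually sees around the occluder's edge:

```python
import numpy as np

# 1-D toy scene: an occluder (colour 1.0) covers pixels 0..4;
# the background behind the whole frame varies from 0.0 to 0.9.
far = np.arange(10) * 0.1                # true background colours
occluded = np.zeros(10, dtype=bool)
occluded[:5] = True

flat = np.where(occluded, 1.0, far)      # what a no-DOF render records

# With a wide aperture, say half the lens rays at pixel 2 swing around
# the occluder's edge and see the background directly behind it:
true_dof_px2 = 0.5 * 1.0 + 0.5 * far[2]  # mixes both layers -> 0.6

# Post-blur can only average stored pixels, and every stored pixel
# under the occluder is 1.0 -- far[2] was never written to the image:
post_blur_px2 = flat[0:5].mean()         # -> 1.0, background is lost
```

This is the bias: the post effect is only exact where nothing is hidden, which is why it ends up no better than a Photoshopped DOF.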

Ian.