Any features you'd like to see implemented into Maxwell?
By alexxx_95
#219735
Hello everybody,

So my wish would be to have a feature like Multilight, but for depth of field: being able to change the DOF during or after the render.

It would be a very, very useful feature, and it would considerably reduce setup time.

Thanks NL

regards

Alexxx
By deadalvs
#219737
as discussed in many previous threads: not possible
By alexxx_95
#219738
hello deadalvs,

OK, sorry, I hadn't seen those topics before... I will be more attentive next time.

Alexxx
By -Adrian
#219739
A quick OpenGL preview feature would be very useful, though.
By deadalvs
#219751
that tutorial looks interesting.

it just seems to me that the real DOF algorithms are a «little» more complex than this workflow.
By deadalvs
#219753
-Adrian wrote: A quick OpenGL preview feature would be very useful, though.
that is a good point. i think that red/blue cursor in Studio is a little in need of attention.
but how would you work this out in the host 3d app ...
By Peder
#219800
Out of curiosity, what is it about the Photoshop technique that you consider as "having little to do with real time dof", or alternatively as showing that "real DOF algorithms are a «little» more complex"?

Peder
By rivoli
#219804
in ps you adjust the amount of blur with parameters that are meaningless in maxwell, starting from a Z channel which can be arbitrarily mapped in 3d space to the radius of the actual blur. in maxwell you get a different depth of field if you, for example, widen or narrow the camera aperture (both in front of and beyond the actual focal point). you'd have a hard time finding consistent values switching between the two. and a true 3d dof such as maxwell's will give bokeh effects that lens blur can't generate.
and it wouldn't really be real time, would it? not only would you have to switch to another app, you'd actually have to render an output, which kinda contradicts the whole idea of a preview before or while you're rendering.
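
for what it's worth, the aperture dependence falls straight out of the standard thin-lens circle-of-confusion formula. a tiny sketch (generic optics, not Maxwell's internal model; names and numbers are just illustrative):

Code: Select all
def coc_diameter_mm(focal_mm, f_number, focus_mm, subject_mm):
    """thin-lens circle of confusion (diameter on the sensor, in mm)"""
    aperture = focal_mm / f_number  # wider aperture <-> smaller f-number
    return abs(aperture * focal_mm * (subject_mm - focus_mm)
               / (subject_mm * (focus_mm - focal_mm)))

# 50 mm lens focused at 2 m, subject at 4 m: halving the f-number
# doubles the blur -- a relationship a fixed Z-to-blur-radius mapping
# in photoshop knows nothing about.
print(coc_diameter_mm(50, 2.8, 2000, 4000))  # ~0.229
print(coc_diameter_mm(50, 5.6, 2000, 4000))  # ~0.114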
just my 0,02 cents :D
By michaelplogue
#219824
I've actually given this subject some thought in the past, and I believe that it could be possible, although difficult. What got me thinking about this was an article I saw way-back-when about this camera prototype:

http://graphics.stanford.edu/papers/lfcamera/


To do this in a renderer, you would need a much more robust z-depth channel that's more than just a 256-level greyscale image. There's no reason whatsoever that you couldn't use the full spectrum of color to create a very detailed depth channel. In addition, you would need an MXI-type format that contains image data without any sort of lens distortion (i.e. a non-DOF image). With these two 'channels' you could, in theory, have interactive sliders that would select your focal depth (based on a specific color value from your z-depth channel) and the blur/bokeh amount (something like an f-stop), and have the program create your DOF effects as a post effect (like glare).

Just like you can now with the multilight system, you would be able to create an animation from a single MXI where the focal distance and/or f-stop (blur) would change.
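
Here's a rough sketch of what that post effect could look like, assuming a full-float z-depth pass and a DOF-free beauty pass (all names and the blur heuristic here are hypothetical, nothing official):

Code: Select all
import numpy as np
from scipy.ndimage import gaussian_filter

def refocus(sharp_rgb, depth, focus_dist, f_stop, max_sigma=8.0, levels=5):
    """Fake DOF from a sharp (H, W, 3) image plus a float (H, W) depth pass."""
    # per-pixel blur amount: grows with distance from the focal plane
    # and with aperture size (1 / f_stop) -- a thin-lens-ish heuristic
    coc = np.abs(depth - focus_dist) / np.maximum(depth, 1e-6) / f_stop
    sigma = np.clip(coc / (coc.max() + 1e-9), 0.0, 1.0) * max_sigma

    # precompute a small stack of blurred copies, then lerp between the
    # two nearest levels per pixel -- this is what makes refocusing cheap
    sigmas = np.linspace(0.0, max_sigma, levels)
    stack = np.stack([gaussian_filter(sharp_rgb, sigma=(s, s, 0)) for s in sigmas])

    idx = sigma / max_sigma * (levels - 1)
    lo = np.floor(idx).astype(int)
    hi = np.minimum(lo + 1, levels - 1)
    t = (idx - lo)[..., None]
    rows, cols = np.mgrid[0:depth.shape[0], 0:depth.shape[1]]
    return (1 - t) * stack[lo, rows, cols] + t * stack[hi, rows, cols]

An interactive version would precompute the blurred stack once per MXI and only redo the per-pixel lerp when the sliders move, which is what would make it feel like multilight.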

Piece of cake! :P
By droid42
#221009
michaelplogue wrote: To do this in a renderer, you would need a much more robust z-depth channel that's more than just a 256-level greyscale image. [...]


What you describe is indeed very elegant :)

The main problem is that the result would be slightly biased at best, or downright wrong at worst. With a physically-based camera model (or even a simple thin-lens model) it's possible to partially see behind blurred foreground objects. That information would not be present in the non-DOF render, so the post-render tweaked version would be no better than a photoshopped DOF (although it would be MUCH easier to use).
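
A toy 2D example makes the missing information concrete (numbers are made up: the lens spans the aperture at z=0, an opaque card covers x in [0, 1] at z=1, and a background point at (0, 3) hides exactly behind the card's edge as seen from the lens centre):

Code: Select all
import numpy as np

lens_x = np.linspace(-0.5, 0.5, 11)  # sample points across the aperture
bg_x, bg_z = 0.0, 3.0                # background point, straight ahead
card_z = 1.0                         # card plane

def blocked(lx):
    # x-coordinate where the ray from lens sample (lx, 0) toward the
    # background point crosses the card's plane
    x_at_card = lx + (bg_x - lx) * (card_z / bg_z)
    return 0.0 <= x_at_card <= 1.0

visible = sum(not blocked(lx) for lx in lens_x)
print(f"{visible}/{len(lens_x)} lens samples see the background point")
# -> 5/11 here: nearly half the aperture looks 'around' the card edge,
# so the true DOF render mixes in background light that a pinhole
# (non-DOF) image never recorded; no post slider can reconstruct it.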

Ian.
