Any features you'd like to see implemented into Maxwell?
By a.behrens
#376487
In addition: LuxRender has an "importance brush". While LuxRender does its render job, I can paint with that brush over the image and mark areas which are very important and shall be rendered more intensively.

Such a brush would be nice for MR too.
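
Neither Maxwell's nor LuxRender's internals are documented here, so the following is only a rough Python sketch of the idea: a painted grayscale importance map is turned into per-tile sample weights, so tiles under brighter brush strokes get more samples per pass. The tile size, baseline and function name are made up for illustration, not any real Maxwell or LuxRender API.

```python
import numpy as np

def tile_sample_budget(importance_map, samples_per_pass, tile=32, baseline=0.25):
    """Turn a painted grayscale importance map (values 0..1) into per-tile sample counts.

    `baseline` keeps every tile sampled a little so unpainted areas still converge.
    Names and parameters are illustrative only.
    """
    h, w = importance_map.shape
    ty, tx = h // tile, w // tile
    # Average brush intensity per tile.
    tiles = importance_map[:ty * tile, :tx * tile].reshape(ty, tile, tx, tile).mean(axis=(1, 3))
    weights = baseline + (1.0 - baseline) * tiles
    weights /= weights.sum()
    # Distribute the pass's total sample budget proportionally to the brush strokes.
    return np.round(weights * samples_per_pass * ty * tx).astype(int)

# Example: a 512x512 map with one painted square; painted tiles get roughly 4x the samples.
imp = np.zeros((512, 512))
imp[192:320, 192:320] = 1.0
print(tile_sample_budget(imp, samples_per_pass=64).max())
```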
By Ha_Loe
#378841
The basic point of unbiased rendering is that the light sources throw light into the scene and the camera detects it, just as in the real world. Balancing in an unbiased engine can only happen by balancing sources, not screen regions.

If there were a simple and reliable way to trace back from a screen pixel to the light source and add importance to that region, then biased engines would be perfect and we wouldn't need the costly unbiased approach at all.

As much as I would second that wish, I doubt that can ever happen without making Maxwell just another biased engine.

Since you mentioned LuxRender: its website even states that it's not a strictly unbiased engine. I don't know how they differentiate or combine both approaches, but the importance brush is clearly a biased concept.
By jfrancis
#378847
Ha_Loe wrote: If there were a simple and reliable way to trace back from a screen pixel to the light source and add importance to that region, then biased engines would be perfect and we wouldn't need the costly unbiased approach at all.

You could compare differences between successive updates. If the difference is less than some amount then the region is 'rendered enough.'
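
A minimal sketch of that idea, assuming the same frame can be exported as linear float buffers at two successive sampling levels (the tile size and threshold below are arbitrary):

```python
import numpy as np

def converged_tiles(sl_prev, sl_next, tile=32, threshold=0.01):
    """Flag tiles whose mean absolute change between two sampling levels is
    below `threshold` as 'rendered enough'.

    sl_prev, sl_next: HxWx3 linear float buffers of the same frame at SL n and SL n+1.
    Returns a boolean (H//tile, W//tile) mask.
    """
    diff = np.abs(sl_next - sl_prev).mean(axis=2)            # per-pixel change
    h, w = diff.shape
    ty, tx = h // tile, w // tile
    per_tile = diff[:ty * tile, :tx * tile].reshape(ty, tile, tx, tile).mean(axis=(1, 3))
    return per_tile < threshold

# Tiles flagged True would get no (or fewer) further samples -- which is exactly
# where the 'is it still unbiased?' discussion starts.
```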

If I submit a render to Maxwell and I see by inspection that the overall difference between SL 9 and SL 10 is visually small enough that I accept SL 10 as 'finished,' but I notice there is some unacceptable noise in one small region, and I rectangle-select that region in, say, Maya, and resubmit the subregion to render to SL 15, and I composite the two together, is what I have done still 'unbiased', given that I interfered?
By Ha_Loe
#378886
jfrancis wrote: You could compare differences between successive updates. If the difference is less than some amount then the region is 'rendered enough.'

If I submit a render to Maxwell and I see by inspection that the overall difference between SL 9 and SL 10 is visually small enough that I accept SL 10 as 'finished,'
In general a good idea, and a stopping condition that Maxwell should offer as a third option besides SL and time, but...

Your idea works great for simple and smooth materials. But how would you define "rendered enough" for an area that looks like static noise? How about a smooth white surface that begins to show subtle caustics after about SL 30 but was "visually smooth" before? How about flaky candy car paint?

Maxwell is "unbiased" because all it does is collect light that was somehow thrown into the scene. You just don't care which object bounced the light of which source onto that specific pixel (or region) on the film. That's fine and models the way real-world photography works. The only problem is that real-world cameras get to insane SLs in microseconds... ;)

In order to adjust for those special cases, the render engine would need information about what material and object was rendered at that specific pixel. At that point you would start to "bias" your render. You'd trace from the film pixel via the scene geometry towards the light sources. At first that's more work to do; then you can save time by deciding whether or not to trace similar paths around the current one. While you're at it, you can cut some corners, because now you "know" that certain areas of the source emit rays that never reach the film, so you can skip those rays... except you cut a corner too short and there was a possible path after three more bounces, but you skipped those... If you now just throw a few rays into the scene to get a rough feel (or map) of how light bounces in the scene and then trace the film pixel by pixel, you've invented a scanline renderer with a GI pass.

In short: if you cut corners, you might lose information at the corner. That's okay for most cases and is the reason why (and how) biased renderers work, but it's also the reason why certain effects need to be faked.
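
To make the "cut a corner too short" point concrete, here is a toy Python sketch (not Maxwell code) of a classic biased shortcut: skipping light sources whose direct contribution at a shading point falls below a threshold. The skipped light might still matter through later bounces, and that is exactly the information you lose.

```python
import numpy as np

def direct_light(point, normal, lights, cutoff=1e-3):
    """Toy direct-lighting estimate at a surface point from a list of point lights.

    lights: iterable of (position, intensity) tuples.
    `cutoff` is the biased shortcut: lights whose estimated contribution falls
    below it are skipped entirely. A skipped light could still be significant
    after a few more bounces (say, through a caustic), but we never find out.
    """
    point, normal = np.asarray(point, float), np.asarray(normal, float)
    total = 0.0
    for pos, intensity in lights:
        to_light = np.asarray(pos, float) - point
        dist2 = float(to_light @ to_light)
        cos_term = max(0.0, float(normal @ to_light) / np.sqrt(dist2))
        contribution = intensity * cos_term / dist2
        if contribution < cutoff:      # <-- the corner being cut
            continue
        total += contribution
    return total

# The distant light contributes ~6e-5 here and gets culled, even though another
# bounce might have made it visible in the final image.
lights = [((0, 5, 0), 100.0), ((200, 5, 0), 100.0)]
print(direct_light((0, 0, 0), (0, 1, 0), lights))
```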
jfrancis wrote: but I notice there is some unacceptable noise in one small region, and I rectangle-select that region in, say, Maya, and resubmit the subregion to render to SL 15, and I composite the two together, is what I have done still 'unbiased', given that I interfered?
I always wanted to compare render performance of region vs. full format. Of course you will save time because you will have fewer memory operations for fewer pixels. But the amount of traced "photons" for a given SL should be the same, which should mean about the same calculation time except for gathering in the camera and writing the "film" bitmap.
That's a guess on my part, but if you render a 4k image and a 1k image for about the same time, then scale the 4k down to 1k, you should have about the same noise levels in both 1k images.
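
That guess is easy to sanity-check with synthetic Monte Carlo noise (this just simulates a flat patch whose noise falls off like 1/sqrt(samples); it does not use real MXI data):

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_render(h, w, samples):
    """Fake a flat grey render whose noise std scales like 1/sqrt(samples)."""
    return 0.5 + rng.normal(0.0, 0.2 / np.sqrt(samples), size=(h, w))

# Same total "photon" budget: 16x the pixels means 1/16th the samples per pixel.
img_1k = noisy_render(1024, 1024, samples=160)
img_4k = noisy_render(4096, 4096, samples=10)

# Box-downscale 4k -> 1k by averaging 4x4 pixel blocks.
down = img_4k.reshape(1024, 4, 1024, 4).mean(axis=(1, 3))

print(img_1k.std(), down.std())   # both come out around 0.2 / sqrt(160) ~= 0.016
```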

Then again, compositing requires some additional judgement on your part. The noisy area doesn't just stop at the edge of your region but generally blends into its surroundings. And "visually acceptable" will usually mean acceptable in an 8-bit representation; there will still be a lot of noise if you adjust the exposure of the underlying 32-bit data. So you need to visually blend the region and the lower-SL image, probably taking object boundaries or other visual context into account.

In short: yes, that would be a biased approach.
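
The 8-bit point can be illustrated with synthetic data: noise that quantises away at the display exposure comes straight back once you push the exposure of the underlying 32-bit floats. Exposure, gamma and noise level here are made-up values.

```python
import numpy as np

rng = np.random.default_rng(1)

# A dark linear 32-bit patch with mild render noise.
linear = 0.02 + rng.normal(0.0, 0.001, size=(256, 256))

def to_8bit(img, exposure=1.0, gamma=2.2):
    """Crude display transform: exposure, gamma, quantise to 8-bit levels."""
    return np.floor(np.clip(img * exposure, 0.0, 1.0) ** (1.0 / gamma) * 255.0)

at_display = to_8bit(linear)                 # noise is about one 8-bit step: looks clean
pushed     = to_8bit(linear, exposure=16.0)  # +4 stops: several steps of visible grain

print(at_display.std(), pushed.std())
```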
By jfrancis
#378888
A subregion renders faster. I haven't tried to figure out why, but I use them. So I guess my maxwell renders are biased.
By numerobis
#378889
corona
http://www.youtube.com/watch?v=ceOJTE5Kvls#t=167

v-ray
http://www.youtube.com/watch?v=dtxQ_gDNttw#t=1130

It would be helpful if you could at least specify multiple regions in one render instead of only one. I don't know how render region works internally and whether this would be possible, but then you could select all critical regions at once and run them in ONE second render.
Or it would be nice if you could at least load a full-frame mxi, specify a region and get the two combined automatically, as it works in Corona - but maybe with a soft transition and no hard edge... :wink:
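
That combination can be approximated offline today if both renders can be exported as linear float images of the same resolution (for instance via EXR); the rectangles and feather width below are made-up values:

```python
import numpy as np

def soft_region_mask(h, w, regions, feather=24):
    """Build a 0..1 mask that is 1 inside each (x0, y0, x1, y1) rectangle and
    fades out over roughly `feather` pixels, so several regions blend at once."""
    mask = np.zeros((h, w), dtype=np.float32)
    for x0, y0, x1, y1 in regions:
        mask[y0:y1, x0:x1] = 1.0
    # Cheap feather: a separable box blur repeated a few times gives a soft edge.
    kernel = np.ones(feather) / feather
    for _ in range(3):
        mask = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, mask)
        mask = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, mask)
    return np.clip(mask, 0.0, 1.0)

def combine(full_frame, region_render, regions):
    """Blend the higher-SL region render over the full frame in linear float space.

    `region_render` is assumed to be padded to the same size as `full_frame`
    (e.g. the re-rendered rectangles pasted into a full-size canvas)."""
    mask = soft_region_mask(*full_frame.shape[:2], regions)[..., None]
    return full_frame * (1.0 - mask) + region_render * mask

# e.g. combine(full_sl10, region_sl15, regions=[(100, 100, 400, 300), (600, 50, 900, 250)])
```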
By eric nixon
#378918
Just put a back plane in front of your cam with certain quads deleted, then you get multiple region renders... slightly quicker than doing them one by one... I'm not really kidding, you could do that if you really need to.

I guess you'll need to crop the pieces with a soft edge, also needed because the plane will be out of focus.

I always do region renders when dealing with the combination of a deadline + a problematic rendering, or of course a client asking for tiny changes. That way I can give each region as long as it needs, and when I place the 'revisions' in PS (using a soft edge) they ALWAYS match the render underneath. So even if it's not 'unbiased', the results look identical... apart from them being 'revised', obviously.

The soft edge might show slightly less noise if the render still has some noise.
By numerobis
#378930
eric nixon wrote: Just put a back plane in front of your cam with certain quads deleted, then you get multiple region renders... slightly quicker than doing them one by one... I'm not really kidding, you could do that if you really need to.
Really?!? I think I have to try it... but I would still prefer a more "integrated" solution ;)
