jfrancis wrote:
You could compare differences between successive updates. If the difference is less than some amount then the region is 'rendered enough.'
If I submit a render to Maxwell and I see by inspection that the overall difference between SL 9 and SL 10 is visually small enough that I accept SL 10 as 'finished,'
In general that's a good idea, and an option Maxwell should offer as a third stopping criterion besides SL and time, but...
Your idea works great for simple, smooth materials. But how would you define "rendered enough" for a screen showing static noise? How about a smooth white surface that only begins to show subtle caustics after about SL 30 but was "visually smooth" before? How about flaky candy car paint?
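For illustration, here's a minimal sketch of that stopping criterion (nothing Maxwell-specific; the function name and threshold are my own, assuming float RGB buffers from two successive passes):

```python
import numpy as np

def converged(prev_pass: np.ndarray, curr_pass: np.ndarray,
              threshold: float = 1e-3) -> bool:
    """prev_pass, curr_pass: float RGB buffers (H, W, 3) at SL n and SL n+1."""
    # Mean absolute per-pixel change between the two passes.
    return np.abs(curr_pass - prev_pass).mean() < threshold
```

Note how it fails in exactly the ways described above: the smooth white surface barely changes per pass before SL 30, so the test fires long before the caustics show up, and averaging the difference over the whole frame hides one small noisy region.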
Maxwell is "unbiased" because all it does is collect light that was somehow thrown into the scene. You just don't care about which object bounced the light of which source to that specific pixel (or region) on the film. That's fine and models the way real-world photography works. The only problem is that real-world cameras get to insane SLs in microseconds...
In order to adjust for those special cases, the render engine would need information about which material and object was rendered at that specific pixel. At that point, you would start to "bias" your render. You'd trace from the film pixel via the scene geometry towards the light sources. At first that's more work to do. Then you can save time by deciding whether or not to trace similar paths around the current one. While you're at it, you can cut some corners, because now you "know" that certain areas of the source emit rays that never reach the film, so you can skip those rays... except you cut a corner too short and there was a possible path after three more bounces, but you skipped those. And if you now just throw a few rays into the scene to get a rough feel (a map) of how light bounces in the scene, then trace the film pixel by pixel, you've invented a scanline renderer with a GI pass.
In short: if you cut corners, you might lose information at the corner. That's okay for most cases, and it is the reason why (and how) biased renderers work, but it's also the reason why certain effects need to be faked.
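That "lost information at the corner" argument can be shown with a toy Monte Carlo estimate (my own example, not how Maxwell works internally): if you skip samples whose contribution looks negligible and don't reweight the survivors, the expected value of the estimator shifts, no matter how many samples you throw at it:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(1_000_000)
fx = x ** 2                       # toy "light transport" integrand on [0, 1]
true_value = 1.0 / 3.0            # analytic integral of x^2 over [0, 1]

# Unbiased: average every sample; the expectation equals the true integral.
unbiased = fx.mean()

# Biased "corner cutting": drop samples that contribute less than 0.1
# without reweighting the rest -- the dim paths (the three-more-bounces
# case above) are simply gone from the estimate.
biased = fx[fx >= 0.1].sum() / len(x)

print(true_value, unbiased, biased)  # biased lands systematically low
```

More samples shrink the noise of both estimates, but only the unbiased one converges to the true value; the biased one converges to a slightly wrong value, which is why the skipped effects have to be faked instead.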
jfrancis wrote: but I notice there is some unacceptable noise in one small region, and I rectangle-select that region in, say, Maya, and resubmit the subregion to render to SL 15, and I composite the two together. Is what I have done still 'unbiased', or have I biased it because I interfered?
I always wanted to compare render performance of region rendering vs. full format. Of course you will save time, because you will have fewer memory operations for fewer pixels. But the number of traced "photons" for a given SL should be the same, which should mean about the same calculation time except for the gathering in the camera and writing the "film" bitmap.
That's a guess on my part, but if you render a 4k image and a 1k image for about the same time, then scale the 4k down to 1k, you should have about the same noise levels in both 1k images.
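That guess is easy to sanity-check with synthetic noise (assuming per-pixel noise is roughly uncorrelated and its variance scales with 1/samples):

```python
import numpy as np

rng = np.random.default_rng(1)

# 1k render: N samples/pixel -> noise sigma 0.05 (made-up numbers).
# 4k render in the same time: N/16 samples/pixel -> sigma 4x higher.
img_1k = 0.5 + rng.normal(0.0, 0.05, size=(1024, 1024))
img_4k = 0.5 + rng.normal(0.0, 0.20, size=(4096, 4096))

# Downscale 4k -> 1k by averaging 4x4 blocks; averaging 16 pixels
# divides sigma by 4 again.
img_4k_down = img_4k.reshape(1024, 4, 1024, 4).mean(axis=(1, 3))

print(img_1k.std(), img_4k_down.std())  # ~0.05 in both 1k images
```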
Then again, compositing requires some additional work on your part. The noisy area doesn't just stop at the edge of your region; it generally blends into its surroundings. Also, "visually acceptable" will mean acceptable in an 8-bit representation: there will still be a lot of noise if you adjust the exposure of the underlying 32-bit data. So you need to visually blend the region into the lower-SL image, probably taking object boundaries or other visual context into account.
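One way to do that blend (my own sketch, not a Maxwell or Maya feature): composite the high-SL region over the low-SL frame through a feathered mask, in 32-bit float, before any tonemapping:

```python
import numpy as np

def feathered_composite(base, region, rect, feather=16):
    """Blend `region` (high-SL re-render, same size as `base`) over
    `base` (low-SL frame) inside rect = (y0, y1, x0, x1), with a soft
    edge so the change in noise level doesn't read as a seam.
    Both inputs are 32-bit float (H, W, 3) buffers."""
    y0, y1, x0, x1 = rect
    mask = np.zeros(base.shape[:2], dtype=np.float32)
    mask[y0:y1, x0:x1] = 1.0
    # Repeated small box blurs approximate a Gaussian falloff at the
    # rectangle edge (assumes the rect sits away from the image border).
    for _ in range(feather):
        mask = (mask + np.roll(mask, 1, 0) + np.roll(mask, -1, 0)
                     + np.roll(mask, 1, 1) + np.roll(mask, -1, 1)) / 5.0
    m = mask[..., None]
    return base * (1.0 - m) + region * m
```

A smarter mask would follow object boundaries (e.g. from an object-ID pass) instead of a blurred rectangle, which is the "other visual context" mentioned above.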
In short: yes, that would be a biased approach.