All posts relating to Maxwell Render 1.x
By kirkt
#123147
Here is a simple example of averaging 10 images with Gaussian noise (15% monochromatic) to get a single image. The original image is a simple grayscale image of concentric circles:

Image

This is an example of noise applied:

Image

And this is the arithmetic mean of the 10 noisy images:

Image

FYI - the original, noise-free target image had noise added and the noisy image was saved. Repeat 10 times, each time starting from a fresh copy of the noise-free original, so you end up with 10 uniquely noisy images (i.e., independent pseudo-random noise).
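
For anyone who wants to reproduce the noisy copies outside of Photoshop, here is a minimal Python/NumPy sketch of the same step - the file names are placeholders, and I'm treating the 15% monochromatic Gaussian setting as a standard deviation of 15% of full scale, which may not match Photoshop's internal scaling exactly:

import numpy as np
from PIL import Image

# Load the noise-free 8-bit grayscale target (file name is a placeholder).
target = np.asarray(Image.open("target.png").convert("L"), dtype=np.float64)

# Add fresh, independent Gaussian noise to a clean copy of the target 10 times
# and save each noisy version.
sigma = 0.15 * 255.0
for n in range(10):
    noisy = target + np.random.normal(0.0, sigma, size=target.shape)
    Image.fromarray(np.clip(noisy, 0, 255).astype(np.uint8)).save(f"pic{n + 1}.png")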

Here is the screen shot of the PS CS2 noise filter:

Image

This demo is done on an 8-bit grayscale image - 24-bit RGB gets hairier because of color shifts, etc. that require more sophisticated reconstruction techniques, but you get the idea.

Averaging done in "pcomb" app of RADIANCE (http://www.radiance-online.org/), according to the input:

pcomb -e 'lo = (li(1)+li(2)+li(3)+li(4)+li(5)+li(6)+li(7)+li(8)+li(9)+li(10))/10' pic1 pic2 pic3 pic4 pic5 pic6 pic7 pic8 pic9 pic10 > avg

where lo is the output brightness, li(n) is the input brightness for each image (n), picn is the image name for each of the 10 noisy images, and avg is the output image.
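
If you don't have RADIANCE handy, the same arithmetic mean can be taken with a few lines of Python/NumPy - again, the pic1...pic10 and avg file names are just placeholders for your own noisy frames:

import numpy as np
from PIL import Image

# Load the 10 noisy frames and take the per-pixel arithmetic mean,
# mirroring the pcomb expression lo = (li(1)+...+li(10))/10.
frames = [np.asarray(Image.open(f"pic{n + 1}.png"), dtype=np.float64)
          for n in range(10)]
avg = np.mean(frames, axis=0)

Image.fromarray(np.clip(avg, 0, 255).astype(np.uint8)).save("avg.png")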

By kirkt
#123148
Here are grayscale profiles along a horizontal line across the middle of the target:

Original Target Image
Image

Typical Noisy Image
Image

Average of 10 Noisy Images
Image

[/nerd_alert]
By kirkt
#123150
@Ernest Burden

Well - if there is noise in a "perfect" image your brain will perceive and possibly blur it, depending upon any number of factors, so the definition of a "perfect" rendering becomes important. If you are using Maxwell, you are probably trying to get something "photoreal" and aesthetically pleasing, as opposed to something quantitatively correct (I am assuming here). So, what is perfect or correct?

Averaging is used to extract signal from noise, and there are trade-offs that go along with the various techniques. If you are trying to measure the radiance or some other physical parameter at some pixel in your rendering, then knowing the limits and error of the reconstruction technique at that pixel is probably important. If you are trying to get something that looks nice, a trade-off between speed and "accuracy" - whatever you define as accuracy - may be a tolerable balance to strike. An image with no noise may look artificial, so noise may actually be desirable. I guess it depends on your goal.
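
To put a rough number on that trade-off: for independent, zero-mean noise, averaging N frames cuts the noise standard deviation by a factor of about sqrt(N), so the 10-frame average above buys roughly a 3x improvement. A quick sanity check in Python/NumPy with synthetic data - nothing Maxwell-specific, just the statistics of averaging:

import numpy as np

rng = np.random.default_rng(0)

signal = 100.0   # the "true" pixel value
sigma = 20.0     # per-frame noise standard deviation
N = 10           # number of frames averaged

# Simulate many pixels: N noisy measurements each, then average over frames.
frames = signal + rng.normal(0.0, sigma, size=(N, 100000))
averaged = frames.mean(axis=0)

print(round(frames.std(), 1))    # ~20.0 : single-frame noise level
print(round(averaged.std(), 1))  # ~6.3  : roughly 20 / sqrt(10)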

Perhaps distributed rendering will bring a set of configurable filters or reconstruction techniques that can be applied to a cache of the raw distributed data, so that several different reconstructions can be compiled from a single cached data set, according to the desires of the user. :D

rock on.
By kingpin
#123157
Sorry... but isn't Maxwell an unbiased renderer?

My impression was that if you render a scene for the same length of time, the renderer will produce identical images. And for cooperative rendering, I thought (guessed) that it would render different paths (or samples or whatever) and combine them to make a single image.

If this is the case, I can somewhat understand how cooperative rendering works (though at the same time I doubt it will produce a "linear" improvement, because of how Maxwell renders the scene)...

What I mean is... cooperative rendering is not just about adding or averaging multiple "random(biased)" images...

Cheers.
By aitraaz
#123161
yo Ernest!! Maybe take a look at this, good bathroom reading:

http://rivit.cs.byu.edu/a3dg/publicatio ... torial.pdf

A tad complicated (I don't understand a word of it), but it discusses a lot of these issues, especially the Metropolis sampling method (which can be independent of the path-tracing part - e.g., not something tied exclusively to GI rendering), histograms, random averaging of samples, sample distribution, path mutations, etc.

It might not give a clear-cut answer to what you want to know, but as a general blueprint for how Maxwell works it should be pretty valid, from a theoretical point of view. Quite complex, very interesting in any case...
:)
By Frances
#123162
Ah, the images were resized in the html doc. So at least the aliasing can be explained. The lighting still emphasizes everything I find so disappointing about the new render engine.
By Kabe
#123163
Ernest, I think you really should think of Maxwell as a CCD element.

The noise of a CCD depends on both random noise and light.
The noise of M~R depends on the random "color" of the rays.

In both cases you get some information, but some noise as well, and in both cases the true solution takes *very* long to render. We are interested in good-enough solutions, not in the perfect one. If you are really exact, then what you see is based on nothing more than an approximation as well.

See "Frame averaging" at this link:
http://www.bensoftware.com/btvpro.html
Averaging produces a much better image than single frame, because it
just takes more samples.

Back to M~R: Cooperative rendering just means the following trivial fact:

Instead of taking 100 samples/pixel on one machine, you could also take 10 samples/pixel on each of 10 machines and average the pixel value over machines instead of averaging over time. The only thing to consider is that the sampling should be random to avoid biasing.
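
As a toy illustration of that point - this is just the arithmetic of averaging in Python/NumPy, not a claim about Maxwell's internals - with equal sample counts per machine, averaging the per-machine averages gives exactly the same result as pooling all of the samples on one box:

import numpy as np

rng = np.random.default_rng(1)

# 100 random samples for one pixel, split as 10 samples on each of 10 machines.
samples = rng.random(100)
per_machine = samples.reshape(10, 10)

one_machine = samples.mean()                    # 100 samples on one box
ten_machines = per_machine.mean(axis=1).mean()  # average of the 10 per-machine averages

print(np.isclose(one_machine, ten_machines))    # True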

It's quite a simple fact and not worth an E-Mail that states that "time barriers are broken and are no longer an issue".

Kabe
By b-kandor
#123165
it just doesn't smell right to mix random pixels into an image to make it 'better'.
Remember that Maxwell is not 'mixing' the pixels of images together at all. It is mathematically combining the electromagnetic calculations from the various co-op machines; after those are binned together, the RGB image is extracted.

(not that I actually know anything about it....!)
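
In the same speculative spirit, here's a purely illustrative Python/NumPy sketch of that distinction - a guess at the general shape of such a pipeline, not a description of Maxwell's actual code: the per-machine data is pooled in a linear, pre-display space and only converted to an 8-bit RGB image once, at the very end, rather than averaging already tone-mapped pixels. The merge_coop_buffers function and its weighting scheme are hypothetical.

import numpy as np

def merge_coop_buffers(buffers, sample_counts):
    """Hypothetical merge: weight each machine's linear radiance buffer by its
    sample count, then convert the pooled result to a display image only once."""
    buffers = np.asarray(buffers, dtype=np.float64)   # shape (machines, H, W, 3), linear
    weights = np.asarray(sample_counts, dtype=np.float64)
    pooled = np.tensordot(weights, buffers, axes=1) / weights.sum()

    # Naive clamp + gamma, applied only after the merge (a stand-in for
    # whatever tone mapping the real engine does).
    display = np.clip(pooled, 0.0, 1.0) ** (1.0 / 2.2)
    return (display * 255.0).astype(np.uint8)

# Example: three machines with unequal sample counts, tiny 2x2 "images".
bufs = np.random.rand(3, 2, 2, 3)
img = merge_coop_buffers(bufs, sample_counts=[50, 100, 75])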
By kirkt
#123166
Ernest Burden wrote: What's ironic here is that I add noise to my work. In high-end audio, dither is added to avoid crossover errors in digital signals, for example. I'm aware of all that, and I appreciate your examples and explanations.

It's this:
Maxwell’s method of calculation always converges to the correct solution without introducing artifacts due to the fact that it is an unbiased renderer.
What we're saying is we will take those correct, artifact-free portions of a render and introduce errors, just to get a nice image faster. Sounds biased to me.
I have no real knowledge of MR's render engine, just of averaging techniques for increasing the S/N ratio. I guess it all comes down to how much "error" is tolerable, or even necessary, to achieve the final image.
By bathsheba
#123175
paxreid wrote: The assumption by NL that the 'problem is solved' by the customers shelling out an additional 4k-6k to make a rendering farm is a slap in the face. Very disappointing.
I agree -- this is bad news. I will never buy a render farm to use Maxwell.

-Sheba
By Mihai
#123180
......................and just how much faster did people expect it to get??

If you're not in a hurry, then you can already build two X2 machines for a total of 4 cores, and render your images at quite high rez overnight.

They just showed you an improved version of cooperative rendering, very useful to many people. It's not useful for you, fine, understood LOUD AND CLEAR :roll: :roll: :roll:

How the hell did you conclude from those images, that Maxwell is actually adding noise into an image, and that added noise will work to hide the "bad" noise. WHAT THE HELL ARE YOU TALKING ABOUT?????????????

This has nothing to do with pixel blurring/averaging.........

6 pages of this crap, unbelievable....
By Frances
#123193
baboule wrote: I agree on the total lack of sexiness of this image and the previous ones in the corridor...
I suppose that it really is relevant, technically speaking!?
What? Having the sample image look like something worth achieving? Yes. That would be relevant. I personally am happy that cooperative rendering is working again. But this drab, dreary image, where the only discernible light is some odd burned-out places, is just plain depressing.
By Mihai
#123194
Because test scenes are not meant to be sexy, but to test as many features as possible under certain conditions.

Is there anything else we should complain about besides the preview images? Was the font OK in the email? Was the logo too big/not big enough?