All posts relating to Maxwell Render 1.x
By lllab
#154045
Thanks Thomas, so it shows V1 is quite a lot faster :-)

That's also my feeling after working with it for some time now.

Thomas, or someone from NL: do you know if those patterns at the earlier sampling levels (in already nice-looking images) could possibly be changed to a more natural noise? I think that would make Maxwell even faster, as the "perceptual" look of the image would look fine even earlier than the 4x speedup Thomas measured. It would be a cool thing to speed up the render.

Great test :-)

cheers
Stefan
By Thomas An.
#154048
Voidmonster wrote:...and I'm basing my understanding on the method that LZH uses, not the patent-free version in PNG -- the compression algorithm is building a library of similar parts in the image. Entropy is somewhat artificially reduced by the patterning of the noise, which is quite regular.
Thank you for taking the time to review these results!

About the patterning issue: This is what the image looked like initially.
Image

About the compression algorithm (the following is not my text. It is a quote):
  • PNG uses a non-patented lossless data compression method known as deflation. This method is combined with prediction, where for each image line, a filter method is chosen that predicts the color of each pixel based on the colors of previous pixels and subtracts the predicted color of the pixel from the actual color. An image line filtered in this way is often more compressible than the raw image line would be, especially if it is similar to the line above (since deflate has no understanding that an image is a 2D entity, and instead just sees the image data as a stream of bytes).
It is history aware, so patterning (if any) could come into effect. I will investigate alternate (less efficient) compression methods and see what we get.
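To make the filtering idea in that quote concrete, here is a minimal sketch (my own hypothetical Python, not libpng or Maxwell code) of the PNG "Sub" filter followed by deflate: on a smooth gradient the filtered row compresses to almost nothing, while on random noise the filter buys nothing.

```python
# Hypothetical illustration of the PNG "Sub" filter described in the
# quote: each byte is replaced by its difference from the byte to its
# left, and the filtered stream is then deflated.
import random
import zlib

def sub_filter(row: bytes) -> bytes:
    """PNG 'Sub' filter: delta against the previous byte, modulo 256."""
    prev, out = 0, bytearray()
    for b in row:
        out.append((b - prev) & 0xFF)
        prev = b
    return bytes(out)

random.seed(1)
gradient = bytes(range(256)) * 16                          # smooth ramp: tiny deltas
noise = bytes(random.randrange(256) for _ in range(4096))  # unpredictable deltas

for name, row in (("gradient", gradient), ("noise", noise)):
    raw = len(zlib.compress(row, 6))
    filtered = len(zlib.compress(sub_filter(row), 6))
    print(f"{name}: raw={raw} B, sub-filtered={filtered} B")
```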
I think you'd get a more solid entropy number from a straight RLE compression.
I will give it a try. Thanks.
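If it helps, a straight RLE measure could look something like this minimal sketch (a hypothetical helper, not an existing tool). Since RLE has no history window, a repeating pattern cannot be folded into a dictionary the way deflate does it.

```python
def rle_size(data: bytes) -> int:
    """Byte count of a naive (value, run-length) encoding, with runs
    capped at 255 so each run fits in two bytes."""
    if not data:
        return 0
    runs, run_len = 1, 1
    for a, b in zip(data, data[1:]):
        if a == b and run_len < 255:
            run_len += 1
        else:
            runs += 1
            run_len = 1
    return runs * 2

# Usage (hypothetical file names): compare the raw pixel streams of the
# beta and V1.0 renders at equal render times.
# print(rle_size(open("frame_beta.raw", "rb").read()))
# print(rle_size(open("frame_v1.raw", "rb").read()))
```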
Also, Thomas, why didn't you save the images in BMP to begin with and then convert to PNG later?
What would be the point of this extra step? PNG is lossless anyway; the files can be converted to BMP now if needed, but what would be the point in that?
The part where you determined the level of compression that Maxwell is using seemed a bit needless to me. :)
It is a crucial step. The same PNG at level 9 compresses significantly more (a data bias caused by this aspect, if untreated, would not be acceptable for this test).
Minor technical quibbles aside, this is a really fantastic bit of data. It's tests like this that make this forum truly great.

Despite the monumental levels of bickering. :)
Last edited by Thomas An. on Mon May 15, 2006 10:19 am, edited 2 times in total.
By Thomas An.
#154049
lllab wrote:Thanks Thomas, so it shows V1 is quite a lot faster :-)

That's also my feeling after working with it for some time now.

Thomas, or someone from NL: do you know if those patterns at the earlier sampling levels (in already nice-looking images) could possibly be changed to a more natural noise? I think that would make Maxwell even faster, as the "perceptual" look of the image would look fine even earlier than the 4x speedup Thomas measured. It would be a cool thing to speed up the render.

Great test :-)

cheers
Stefan
Hi lllab,

Thank you for the comments, but I don't actually work for NL (never did). I used to be a moderator a while back, but not any longer. Sorry.
By lllab
#154051
I see,

but thanks for the test anyway :-)

cheers
Stefan
By Thomas An.
#154052
markps wrote:Other renderers generate noiseless images in seconds. The lack of noise doesn't translate into image quality.
It does in Maxwell.

The assumption you make is not germane to this test since the comparison is between versions of the same renderer.
Again, the idea that Maxwell is unbiased is not the same as saying that every pixel will have the correct color and tone every time a pixel is placed on the framebuffer. Maybe different versions of the algorithm create a different quality of noise.
Quality of noise is not as important as you think it is. The bottom line is that the beta engine has an entropy asymptote much higher than the V1.0 engine's, no matter how you slice it.
One thing to pay attention to is that Maxwell V1 generates pattern-based noise in its early render-time iterations instead of pseudo-random noise. That proves that the way it generates noise has changed. And traces of that noise pattern could have been carried further into the render even though they are not visible.
I will investigate the patterning issue.
By Thomas An.
#154055
markps wrote:Well, does it mean that the perceivable noise is reduced 4 times faster or that the same amount of "information" on the noisy areas is reached 4 times faster?
An attempt was made to keep the image factors very similar so that the only significant variable is the noise.

Under these conditions the amount of information is a function of the noise: more noise means more information has to be held.
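As a quick illustration of that point (my own toy example, not the actual test script): adding stronger noise to a flat grey buffer makes its losslessly compressed size grow, so compressed size can stand in for the amount of information.

```python
import random
import zlib

random.seed(0)
flat = [128] * 65536  # a flat mid-grey "image", one byte per pixel

# Higher noise amplitude -> less predictable bytes -> larger deflate output.
for amplitude in (0, 4, 16, 64):
    noisy = bytes(
        min(255, max(0, p + random.randint(-amplitude, amplitude)))
        for p in flat
    )
    print(f"noise +/-{amplitude:>2}: {len(zlib.compress(noisy, 6))} bytes")
```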
By markps
#154056
I'm not talking about the pattern but about the algorithm itself. Couldn't the new algorithm be generating more closely matched pixels, raising the entropy while at the same time producing a better-quality image with the same amount of noise?

1k 0% noise. (PNG 6)
Image

13k 50% noise low contrast. (good quality noise) (PNG 6)
Image

13k 50% noise high contrast. (bad quality noise) (PNG 6)
Image

You could be measuring the noise without taking into account the quality of the picture.

Isn't it the same as measuring the alignment of water molecules by measuring the amount of H2O without taking into account the temperature of the water?
Last edited by markps on Mon May 15, 2006 10:32 am, edited 1 time in total.
By Voidmonster
#154058
Thomas An. wrote: Thank you for taking the time to review these results!
I'm happy to! This is useful info.
About the patterning issue: This is what the image looked like initially.
Image

About the compression algorithm (the following is not my text. It is a quote):
  • PNG uses a non-patented lossless data compression method known as deflation. This method is combined with prediction, where for each image line, a filter method is chosen that predicts the color of each pixel based on the colors of previous pixels and subtracts the predicted color of the pixel from the actual color. An image line filtered in this way is often more compressible than the raw image line would be, especially if it is similar to the line above (since deflate has no understanding that an image is a 2D entity, and instead just sees the image data as a stream of bytes).
It is history aware, so patterning (if any) could come into effect. I will investigate alternate (less efficient) compression methods and see what we get.
Yeah, I suspect that compression is generating slightly off results for V1 due to the regular patterns. I could easily be wrong, however!
Also, Thomas, why didn't you save the images in BMP to begin with and then convert to PNG later?
What would be the point of this extra step? PNG is lossless anyway; I can convert it to BMP now if needed, but I do not see the point in that.
The part where you determined the level of compression that Maxwell is using seemed a bit needless to me. :)
It is a crucial step. The same PNG at level 9 compresses significantly more (a data bias caused by this aspect, if untreated, would not be acceptable for this test).
Your test required that the images get compressed. If you'd saved in BMP to begin with, you wouldn't have needed to determine the compression mode used and thus would have cut one step out -- you had to compress them anyway. :)
By Thomas An.
#154060
Voidmonster wrote: Your test required that the images get compressed. If you'd saved in BMP to begin with, you wouldn't have needed to determine the compression mode used and thus would have cut one step out -- you had to compress them anyway. :)
I see :)

Well, I feel it would take much more disc space to store all those BMPs, and it would also take a lot more steps to batch-process the images into a compressed form (out of their archived folders) than to do a 10-second step of determining the compression level. That step was so trivial it hardly registered time-wise :)
By Voidmonster
#154064
Thomas An. wrote:
Voidmonster wrote: Your test required that the images get compressed. If you'd saved in BMP to begin with, you wouldn't have needed to determine the compression mode used and thus would have cut one step out -- you had to compress them anyway. :)
I see :)

Well, I feel it would take much more disc space to store all those BMPs, and it would also take a lot more steps to batch-process the images into a compressed form (out of their archived folders) than to do a 10-second step of determining the compression level. That step was so trivial it hardly registered time-wise :)
Duh! I'd managed not to figure out why you'd done that particular step, and somehow, in my mind, you were batch-processing all the pictures anyway. What you did makes perfect sense now. Carry on! :roll:
By Thomas An.
#154069
markps wrote:I'm not talking about the pattern but about the algorithm itself. Couldn't the new algorithm be generating more closely matched pixels, raising the entropy while at the same time producing a better-quality image with the same amount of noise?
....

13k 50% noise low contrast. (good quality noise) (PNG 6)
Image
The above image has a histogram mean value of 126
13k 50% noise high contrast. (bad quality noise) (PNG 6)
Image
The above image has a histogram mean value of 190

However, the histogram values have been explicitly taken into account in the test, and they were ensured to be nearly identical (±1 point).
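For reference, the brightness check could be reproduced with something like this sketch (hypothetical file names; the actual tooling used in the test was not posted):

```python
from PIL import Image
import numpy as np

def histogram_mean(path: str) -> float:
    """Mean pixel value of an image converted to 8-bit greyscale."""
    return float(np.asarray(Image.open(path).convert("L")).mean())

m_beta = histogram_mean("render_beta.png")  # hypothetical file name
m_v1 = histogram_mean("render_v1.png")      # hypothetical file name
# Only compare compressed sizes when overall exposure matches closely.
assert abs(m_beta - m_v1) <= 1.0, "histogram means differ by more than 1"
```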
By markps
#154119
As I understand it, the histogram will give you an average brightness for the entire image and will not indicate the noise contrast on a patch of the image, right? This could still be throwing the data off. (OK, this is my last try :o before I give in to your awesome test :o )

You are only seeing this big difference in the two images I posted because of the high contrast of the noise I created; in a really complex image this would not indicate that the perceived noise is any better at a certain point relative to the total noise measured.

For example, the histogram here is closer to the second image's, but it doesn't indicate an image with better perceived noise.
Image


What do you think?
Thomas An. wrote:
markps wrote:I'm not talking about the pattern but about the algorithm itself. Couldn't the new algorithm be generating more closely matched pixels, raising the entropy while at the same time producing a better-quality image with the same amount of noise?
....

13k 50% noise low contrast. (good quality noise) (PNG 6)
Image
The above image has a histogram mean value of 126
13k 50% noise high contrast. (bad quality noise) (PNG 6)
Image
The above image has a histogram mean value of 190

However, the histogram values have been explicitly taken into account in the test, and they were ensured to be nearly identical (±1 point).
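For what it's worth, the global-versus-local distinction markps is raising can be shown with a small sketch (my own illustration): two noise fields can share a histogram mean while a per-patch standard deviation separates them clearly.

```python
import numpy as np

rng = np.random.default_rng(0)
low = np.clip(128 + rng.normal(0, 8, (256, 256)), 0, 255)    # gentle noise
high = np.clip(128 + rng.normal(0, 60, (256, 256)), 0, 255)  # harsh noise

def mean_patch_std(img: np.ndarray, patch: int = 16) -> float:
    """Average standard deviation over non-overlapping square patches."""
    h, w = img.shape
    tiles = img[: h - h % patch, : w - w % patch]
    tiles = tiles.reshape(h // patch, patch, w // patch, patch)
    return float(tiles.std(axis=(1, 3)).mean())

# Both images have a global mean near 128, but very different local noise.
for name, img in (("low contrast", low), ("high contrast", high)):
    print(f"{name}: mean={img.mean():.1f}, patch std={mean_patch_std(img):.1f}")
```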
By Frances
#154152
That's quite a lot of work you've done, Thomas. I admire your dedication. You've certainly proven that metal and caustic rendering has significantly improved with V1. 8)

I hope you or some other knowledgeable a-teamer will consider doing such in-depth research and a demonstration for interior scenes with indirect lighting and dielectric objects.
