- Sun May 14, 2006 9:04 pm
#153887
TEST RESULTS:
____________________________________________________
INTRODUCTION:
- This is an attempt to quantitatively measure grain levels of Maxwell output images. The method avoids using the human eye (and its inherent inaccuracies) for image comparison and instead uses an objective, repeatable, indirect-grain-sensing technique. (This technique was initially hinted at here: http://www.maxwellrender.com/forum/viewtopic.php?t=1895 )
A scene was chosen and rendered (using settings as identical as possible) in both Maxwell beta1.2.2a and Maxwell V1.0. The output image entropy levels were logged over time for each render and then compared.
Two metrics were extracted from this test:
a) Absolute Entropy Speed Ratio (mxclV1.0 / mxcl1.2.2a)
b) Apparent Entropy Speed Ratio (mxclV1.0 / mxcl1.2.2a)
The absolute entropy ratio is interesting but not a very useful quantity, since it detects entropy levels in ways the human eye cannot. The most useful quantity in this test is the apparent entropy ratio, which describes grain the way most users would perceive it.
The absolute entropy speed ratio was found to be ABESR=23.10
The apparent entropy speed ratio was found to be APESR=4.15.
Since the APESR metric is what would be detectable by most users, we can say that Maxwell 1.0 clears noise 4.15 times faster (415%) than Maxwell beta1.2.2a (for a particular scene involving a studio setup and a substantial presence of direct/indirect caustics).
- There was a need for a fairly reliable method of detecting grain within an image. You can think of grain as image information entropy (http://en.wikipedia.org/wiki/Information_entropy). In that context, the cleanest possible image will have a pure state of least entropy (the asymptote at infinite Maxwell time) and the grainiest image will have the most entropy (the beginning of Maxwell time). In other words, entropy is the level of image pixel disorder (compared to its ideal pure state of crystal clarity).
A method of detecting image grain is by use of lossless image compression algorithms. In this particular case we chose to piggyback on the PNG compression algorithm to indirectly detect noise levels.
Since this is a computational method (it excludes the human factor) it is considered fairly reliable and the results are repeatable.
PNG uses a non-patented lossless data compression method known as deflation. This method is combined with prediction, where for each image line a filter method is chosen that predicts the color of each pixel based on the colors of previous pixels and subtracts the predicted color from the actual color. An image line filtered in this way is often more compressible than the raw image line would be, especially if it is similar to the line above (since deflate has no understanding that an image is a 2D entity, and instead just sees the image data as a stream of bytes).
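To make the idea concrete, here is a minimal sketch of how such an indirect measurement could be scripted (this assumes Python with the Pillow imaging library; the file names are placeholders, not the files used in this test):
[code]
# Minimal sketch: use the size of a losslessly compressed PNG as an indirect
# measure of image grain. A grainier image compresses worse, so a larger
# PNG means more residual noise in the render.
import io
from PIL import Image

def png_entropy_proxy(path, compress_level=6):
    """Return the PNG-compressed byte size of the image at 'path'."""
    img = Image.open(path).convert("RGB")
    buf = io.BytesIO()
    img.save(buf, format="PNG", compress_level=compress_level)
    return buf.tell()

# Hypothetical usage: compare two renders of the same scene at different times.
# print(png_entropy_proxy("render_1h.png"), png_entropy_proxy("render_24h.png"))
[/code]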
A compression algorithm works by detecting redundancy in the data and then summarizing that redundant data into a more compact representation.
Think of an image as a grid of pixels. An uncompressed format (such as BMP or uncompressed TIFF) will store the color information for each and every pixel in the grid. Therefore any image will always hold the same amount of information, regardless of complexity, as long as the resolution is the same.
For example:
An uncompressed 200x200 image will hold roughly the same amount of information (120,054 bytes: 200 x 200 pixels x 3 bytes per pixel, plus the 54-byte BMP header) regardless of content.
Pure image 01 - 200x200 BMP
Pure image 02 - 200x200 BMP
Compression algorithm theory can get complex. In its most simplified form (such as a run-length method, http://en.wikipedia.org/wiki/Run-length_encoding) you can think of image compression as a way to describe clusters of identical data in a non-redundant way. For example, instead of storing information for each and every one of the 25 pixels (Fg. 1 in the following schematic), a compressed image would store information for only one pixel and how many times it needs to repeat (Fg. 2). Therefore, if the entire image consists of a single color, it has very high redundancy and will result in the greatest possible image compression.
If a second color is introduced (Fg. 3), then the redundancy is slightly broken and now information is needed for two pixels instead.
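As an illustration only (this is a toy example, not the actual PNG deflate algorithm), a run-length encoder in Python might look like this; the pixel values are made-up placeholders:
[code]
# Toy run-length encoder: identical neighbouring pixels collapse into
# (value, count) pairs, so a uniform image compresses extremely well and
# any noise breaks the redundancy.
def rle_encode(pixels):
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1
        else:
            runs.append([p, 1])
    return runs

solid = ["red"] * 25                  # Fg. 2: one run describes all 25 pixels
mixed = ["red"] * 20 + ["blue"] * 5   # Fg. 3: a second color adds a second run
print(rle_encode(solid))              # [['red', 25]]
print(rle_encode(mixed))              # [['red', 20], ['blue', 5]]
[/code]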
For those further interested in how image compression works, here are some references on the subject:
http://computer.howstuffworks.com/file-compression1.htm
http://en.wikipedia.org/wiki/Image_compression
http://www.libpng.org/pub/png/
http://www.w3.org/TR/PNG-Compression.html
http://en.wikipedia.org/wiki/LZ77_and_L ... orithms%29
To illustrate the compressive behavior, a small sample image was generated and then noise was added in post. This breaks the uniformity of the image, forcing the compression algorithm to hold more data.
Solid color results in the highest compression (total image size 700 bytes).
Same image, but with random noise added. The randomness renders the compression algorithm less effective, forcing it to store more data (in this case 41 times more, with a total image size of 28828 bytes).
Further example: - Pure image (858 bytes) ----------> With noise (59386 bytes)
...
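The same effect can be reproduced with a short script (a sketch assuming Python/Pillow; the exact byte counts will differ from the figures above since they depend on the encoder settings):
[code]
# Generate a solid-color image and a noisy copy, then compare their
# PNG-compressed sizes. The noise destroys redundancy and inflates the file.
import io
import random
from PIL import Image

def png_size(img):
    buf = io.BytesIO()
    img.save(buf, format="PNG", compress_level=6)
    return buf.tell()

solid = Image.new("RGB", (200, 200), (128, 96, 64))

noisy = solid.copy()
px = noisy.load()
rng = random.Random(0)
w, h = noisy.size
for y in range(h):
    for x in range(w):
        r, g, b = px[x, y]
        n = rng.randint(-40, 40)  # random brightness offset per pixel
        px[x, y] = (max(0, min(255, r + n)),
                    max(0, min(255, g + n)),
                    max(0, min(255, b + n)))

print("solid:", png_size(solid), "bytes")
print("noisy:", png_size(noisy), "bytes")  # many times larger
[/code]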
- For the compression method of grain sensing to work effectively it would be best to use images where no textures are involved (and the algorithm is left to work on the pure lighting solution).
- Since caustics seem to be a large contributor to image grain for product shots, the chosen scene would need to be heavily reliant on direct/indirect caustics.
- Finally, a scene that is publicly available was chosen so that the results can be verified / repeated / peer reviewed by others as well. The chosen scene can be found here:
http://forums.cgsociety.org/showthread.php?t=328799
- The system used was a low/mid-end workstation with a single AMD 64 3800 processor.
The scene was allowed to render with beta1.2.2a for a period of 144 hours (8663 min). The output image information was collected at regular intervals into the following table (a sketch of how this collection could be automated follows the table).
Beta1.2.2a data table
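As a side note, the periodic collection can be automated with something like the following sketch (the backup file naming is a hypothetical placeholder; it assumes the renderer writes numbered PNG backups into a folder):
[code]
# Walk a folder of periodic backup images and record (file name, byte size)
# rows, which is the entropy-vs-time data used in the tables here.
import glob
import os

def collect_entropy_log(folder):
    rows = []
    for path in sorted(glob.glob(os.path.join(folder, "backup_*.png"))):
        rows.append((os.path.basename(path), os.path.getsize(path)))
    return rows

# for name, size in collect_entropy_log("beta122a_backups"):
#     print(name, size)
[/code]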
At this point it became important to consider the main factors that could influence the behavior of PNG compression.
a) Compression level is affected by texture detail. This factor was eliminated by defaulting to a scene that contains no textures at all.
b) Compression level is affected by the overall brightness of the image. In particular, the exact same image will show higher entropy if rendered brighter and lower entropy if rendered darker.
Therefore, in the next step it is important to use Maxwell Studio to match the overall illumination of the Beta1.2.2a render and to use materials that are as close to the beta as possible (given that the beta was using a different material system). The images do not need to be virtually identical; it will suffice if the median and mean histogram values are the same (see the sketch after the histograms below).
Preliminary tests were done with Maxwell 1.0 and it was possible to achieve a median and mean value match in the histogram. (The V1.0 render was allowed to be slightly brighter, giving it a small disadvantage.)
Histogram for Beta1.2.2a image
Histogram for V1.0 image
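Here is a rough sketch of how the brightness match could be checked numerically (assuming Python/Pillow; the file names are placeholders):
[code]
# Compute the mean and median of the luminance histogram of an image so the
# overall brightness of the two renders can be matched before comparing
# their PNG-compressed sizes.
from PIL import Image

def histogram_stats(path):
    img = Image.open(path).convert("L")   # luminance channel
    hist = img.histogram()                # 256 bins of pixel counts
    total = sum(hist)
    mean = sum(i * c for i, c in enumerate(hist)) / float(total)
    running = 0
    median = 0
    for i, c in enumerate(hist):          # first bin passing half the pixels
        running += c
        if running >= total / 2.0:
            median = i
            break
    return mean, median

# print(histogram_stats("beta122a_output.png"))
# print(histogram_stats("v10_output.png"))   # should roughly match
[/code]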
Once the proper settings were established the V1.0 render was allowed to proceed for 78 hours. The data were collected at regular intervals and the following table was produced.
V1.0 data table
- The data from both renders were plotted in graphical form.
The lowest entropy level (image size) that was achieved by beta1.2.2a during the 144 hours was 492115 bytes.
The resulting Beta1.2.2a image was this:
Output Image 01
At this point it was important to establish whether both Beta and V1.0 were actually using the same compression level when saving their PNG data. Since PNG is lossless, the beta image was opened in an external program and re-saved at various PNG compression levels until the re-saved output size was the same as the original. From this process it was determined that Maxwell Beta was using PNG compression level 6. The same process was applied to Maxwell V1.0 and it was determined that its compression level was also PNG-6.
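A sketch of this check in script form (assuming Python/Pillow; note that a different PNG encoder may not reproduce the size exactly even at the correct level, since filter choices can differ):
[code]
# Re-save a lossless PNG at every zlib compression level (0-9) and report
# which level reproduces the original file size.
import io
import os
from PIL import Image

def matching_compress_level(path):
    original_size = os.path.getsize(path)
    img = Image.open(path)
    for level in range(10):
        buf = io.BytesIO()
        img.save(buf, format="PNG", compress_level=level)
        if buf.tell() == original_size:
            return level
    return None  # no exact match (encoder made different filter choices)

# print(matching_compress_level("beta_output.png"))  # level 6 in this test
[/code]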
Now proceeding with the test:
The archived data from the Maxwell 1.0 render were searched to find an image of the same entropy level as "Output Image 01". The corresponding V1.0 image of the same overall entropy level was found to be a 6 hr 15 min render (375 min), which is backup index 69 in the above-mentioned data file.
Output Image 02
This gives a raw (or absolute) entropy reduction speed ratio ABESR = 8663 min / 375 min = 23.10.
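For reference, the lookup and the ratio amount to nothing more than the following sketch (the V1.0 log rows would come from its data table above; only the two numbers actually quoted in this post are filled in):
[code]
# Find the first V1.0 backup whose PNG size drops to the entropy level the
# beta render reached, then take the ratio of render times.
def first_at_or_below(log, target_size):
    """log: list of (minutes, png_size_bytes) rows sorted by render time."""
    for minutes, size in log:
        if size <= target_size:
            return minutes
    return None

beta_minutes, beta_size = 8663, 492115   # beta1.2.2a end point (from its table)
# v10_log = [...]                        # rows of the V1.0 data table
# t_match = first_at_or_below(v10_log, beta_size)   # 375 min in this test
# print(beta_minutes / float(t_match))              # ABESR = 8663/375 = 23.10
[/code]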
However, looking at these two images we perceive more noise in the V1.0 image, even though the PNG algorithm tells us that their overall data redundancy is the same. It is possible that the clean areas in the beta image are not as perfectly clean as in the V1.0 image, but that difference is not noticeable to the eye.
At this point we notice that most of the noise differences occur in the metallic parts (the rings and the coins), particularly the backs of the rings and the top of the right-hand coin.
These noisy areas were selected, as seen in the following images, to form three "Noise Isolates":
- Noise Isolate A - combined areas (for beta1.2.2a). Entropy level 102398 bytes.
- Noise Isolate B (for Beta1.2.2a). Entropy level 50776 bytes.
- Noise Isolate C (for Beta1.2.2a). Entropy level 19013 bytes.
The archived V1.0 images were then searched for the frame whose noise isolates come closest to these levels (a sketch of the per-region measurement follows this list). The closest match was found to be Archive Index 114 of the V1.0 render. The noise isolates for that image were:
- Noise Isolate A - combined areas (for V1.0 archive image 114). Entropy level 100570 bytes.
- Noise Isolate B (for V1.0 archive image114). Entropy level 50462 bytes.
- Noise Isolate C (for V1.0 archive image114). Entropy level 19014 bytes.
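A sketch of how such per-region measurements could be scripted (assuming Python/Pillow; the crop boxes below are made-up placeholders, not the regions used in this test):
[code]
# Crop the same rectangular regions out of both renders and compare their
# PNG-compressed sizes, giving a per-region "noise isolate" entropy level.
import io
from PIL import Image

def isolate_entropy(path, box, compress_level=6):
    """box = (left, upper, right, lower) in pixels."""
    region = Image.open(path).convert("RGB").crop(box)
    buf = io.BytesIO()
    region.save(buf, format="PNG", compress_level=compress_level)
    return buf.tell()

# Hypothetical regions (backs of the rings, top of the right-hand coin):
# for box in [(120, 80, 320, 200), (360, 60, 460, 140)]:
#     print(isolate_entropy("beta122a_output.png", box),
#           isolate_entropy("v10_image114.png", box))
[/code]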
The resulting V1.0 image that has the same "apparent" entropy level (meaning the roughest region is as rough as the one in beta1.2.2a) is archive image 114 at 34 hr 45 min (2085 min).
Output image 03
- This gives an "apparent" entropy reduction speed ratio APESR=4.15.
In other words a V1.0 render will resolve the roughest caustic parts to the same entropy level as a corresponding beta1.2.2a image at 4.15 times faster than Beta1.2.2a.
The following graph summarizes the test results.