Any features you'd like to see implemented into Maxwell?
#382089
Wish
A third layer blending option based on viewing angle

-------------
** Edit - 8/27/14 **
-------------

Abridged Background
Real world "micro-roughness" creates variable, blended reflections that Maxwell doesn't currently handle. Maxwell can make a specific viewing angle look correct, but not all viewing angles with the same material.

Read on for a more in-depth discussion and an explanation of how layer blending by viewing angle should be able to resolve this.

Also, there's a great example of a "micro-roughness" workaround by Eric Nixon:

http://www.maxwellrender.com/forum/view ... 11#p382311

-------------
** End Edit - 8/27/14 **
-------------


---------------------------------------------------------------------------------

Full Background
I've been doing some extensive research into Maxwell's material system in an effort to customize the Reflectance curve, specifically for Specular type reflections.

Mihai's Maxwell version 1 post about the material system is great, and is actually still quite relevant in version 3. Great job Mihai!

http://www.maxwellrender.com/forum/view ... eflectance

Mihai sets the stage by defining the nature of light. Further down the thread, he poses a very interesting question about what defines the difference between diffusely reflected light and specularly reflected light. From what I can tell based on my personal research and experience, the answer is simply whatever tool you're using to render your color information.

The real world doesn't distinguish between diffuse and specular light. It's all just reflected light. In order to mimic this phenomenon in Maxwell, customizing the reflectance curve is essential. Fresnel curves driven exclusively by a single, static Nd value or by complex IOR data actually oversimplify the situation, because they're applied uniformly across the entire surface as if the object were perfectly smooth. They cover a huge range of realistic possibilities, but like all things, in certain situations they just fall apart.

Theoretical Flaw:
In any given render, if the intended Fresnel effect is supposed to be consistent with the mathematical model (a single Fresnel curve driven by a single, static Nd value), then adjusting the roughness value works. However, due to microscopic surface variations in the real world, most objects don't actually follow a Snell-driven Fresnel effect based on a single IOR to 100% accuracy. It's close enough to be believable, and qualifies as "physically accurate", but only if certain assumptions are made and only if the surface properties aren't affected by microscopic variations. Snell's law assumes that the incoming light hitting a surface is an infinitesimally thin ray striking a single point, not something spread out over a real world surface. When you aggregate that over a real surface with microscopic variations and observe the Fresnel effect, it doesn't always behave as the single-IOR curve predicts. To simulate this with Maxwell's existing shading model, you would have to start introducing extremely high resolution maps for normal/bump effects and tweak the anisotropy settings.
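To illustrate the point, here's a rough Python sketch. It uses Schlick's approximation as a stand-in for the full Fresnel equations and a crude random tilt of the surface normal as a stand-in for a real microfacet distribution (both are my own simplifications, not anything Maxwell does internally). The aggregated curve flattens out compared to the ideal single-Nd curve, which is exactly the kind of deviation I'm describing:

```python
import numpy as np

def fresnel_schlick(cos_theta, nd):
    """Schlick's approximation of Fresnel reflectance for a single IOR (Nd)."""
    f0 = ((nd - 1.0) / (nd + 1.0)) ** 2
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

def aggregated_fresnel(view_angle_deg, nd, normal_jitter_deg=15.0, samples=10000):
    """Average the per-microfacet Fresnel response over a surface whose
    microscopic normals are randomly tilted away from the geometric normal."""
    rng = np.random.default_rng(0)
    theta = np.radians(view_angle_deg)
    tilt = np.radians(rng.normal(0.0, normal_jitter_deg, samples))  # crude 1D microfacet model
    cos_local = np.clip(np.cos(theta + tilt), 0.0, 1.0)
    return fresnel_schlick(cos_local, nd).mean()

for angle in (0, 30, 60, 80, 89):
    ideal = fresnel_schlick(np.cos(np.radians(angle)), 1.5)
    rough = aggregated_fresnel(angle, 1.5)
    print(f"{angle:2d} deg  perfectly smooth: {ideal:.3f}  micro-rough aggregate: {rough:.3f}")
```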

Proposed Solution
To avoid getting Maxwell hung up on all the finite, granular, messy details that make up microscopic surface variations, what if we compromise by not totally redoing the existing multi-layered BSDF system, but instead adding another layer blending option that's driven by viewing angle? It would also need some kind of "overlap by X degrees" setting where the actual blending occurs, otherwise you might end up with unrealistic reflectance shifts that create hard lines in the render. This would use all of the existing functionality of the current material system, but give the end user the artistic freedom to decide where on their surfaces that shading model applies.
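Here's a minimal sketch of what I mean by the "overlap by X degrees" blend, using a smoothstep ramp between two layers (the function names, the 45 degree switch point and the 10 degree overlap are just illustrative, not anything from Maxwell):

```python
def smoothstep(edge0, edge1, x):
    """Hermite ramp from 0 to 1 between edge0 and edge1."""
    t = min(max((x - edge0) / (edge1 - edge0), 0.0), 1.0)
    return t * t * (3.0 - 2.0 * t)

def layer_weights(view_angle_deg, switch_angle_deg=45.0, overlap_deg=10.0):
    """Return (weight of layer A, weight of layer B) for a viewing angle.
    Layer A dominates below the switch angle, layer B above it, and the two
    cross-fade over 'overlap_deg' centered on the switch angle."""
    w_b = smoothstep(switch_angle_deg - overlap_deg / 2.0,
                     switch_angle_deg + overlap_deg / 2.0,
                     view_angle_deg)
    return 1.0 - w_b, w_b

# The two weights always sum to 1, so the blend can't add energy or leave hard lines.
for angle in (0, 40, 45, 50, 90):
    print(angle, layer_weights(angle))
```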

I've seen other suggestions about adding a custom reflectance input, or making a fall-off procedural map for use in the roughness parameter, but that would actually disrupt the existing Maxwell material methodology due to the roughness value having an effect on the BSDF 0 and 90 degree color blending. Maxwell can already determine viewing angles of surfaces quite efficiently (it has to, based on the current BSDF implementation), and even lets you output that info by using the Fresnel channel. Blending layers based on the viewing angle could actually be achieved with minimal disruption to the existing system. This would just be another form of "weighting" layers. You would still have the inefficient result of having to calculate every BSDF in every layer for the entire object and then return their blended values to the image buffer, but Maxwell already does that now anyway. Currently, if you have two layers set to "Normal" blending and the top one isn't cut by an alpha channel, the bottom one is never visible, which means that Maxwell calculated the effects of the bottom layer but only really stored the effects of the top layer due to the blending calculations.

Current Problematic Situation
This specifically came to mind as I was trying to create a plastic material with some odd properties. Basically, between 0 and 10 degrees, there are a lot of anisotropic, rough specular reflections (roughness around 50). From 10 to 45 degrees, the surface has almost no specular reflections (roughness 95+). From 45 to 85 degrees, the roughness decreases to around 30 with less anisotropy, and from 85 to 90 degrees, the roughness drops really low, perhaps around 2 or 3, with isotropic reflections.
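To make that concrete, here's how I'd describe the same surface as a viewing-angle lookup (the numbers are just the illustrative values from the paragraph above, nothing measured):

```python
# Piecewise description of the plastic sample, straight from the paragraph above.
# Each entry: (angle range in degrees, roughness, anisotropy behaviour).
ANGLE_BANDS = [
    ((0, 10),  50, "anisotropic"),   # rough, anisotropic specular reflections
    ((10, 45), 95, "negligible"),    # almost no specular reflection
    ((45, 85), 30, "less aniso"),    # clearer reflections, less anisotropy
    ((85, 90),  2, "isotropic"),     # near-mirror, isotropic
]

def target_surface_properties(view_angle_deg):
    """Look up the roughness/anisotropy the surface should show at this viewing angle."""
    for (lo, hi), roughness, aniso in ANGLE_BANDS:
        if lo <= view_angle_deg <= hi:
            return roughness, aniso
    raise ValueError("viewing angle must be between 0 and 90 degrees")

print(target_surface_properties(5))    # (50, 'anisotropic')
print(target_surface_properties(88))   # (2, 'isotropic')
```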

***
NOTE: I will be happy to send a physical sample of this material to anyone on the NL team who wants to see this firsthand. Just PM me with an address and I'll FedEx one to you. If you want several sent to one office, just specify that. I tried taking pictures of it, but I couldn't clearly capture what I'm talking about with my iPhone. It's pretty obvious once you hold this and move it around. You can tell it's got some odd reflective properties that are probably just exaggerations of similar effects that happen on nearly every other surface.
***

I have yet to figure out how to achieve this in Maxwell version 3. Since the color (and therefore brightness) of specular reflections is only multiplied by the Nd-blended BSDF 0 and 90 degree colors, and turning their clarity up or down is exclusive to the roughness parameter, I can't intentionally assign them different clarities or intensities at different viewing angles. I think the layer blending approach would be easy to understand and hard to mess up, because the risk of accidentally "over brightening" surfaces by stacking too many additive layers and introducing huge amounts of noise into the render wouldn't get any more complicated, or at least wouldn't be much different than it is right now.

Current Workaround
The current workaround would be to render the scene once for each specular reflection variation, and then assemble the renders back together in post using the Fresnel channel's output. I simply don't have the time or resources to do that. My current projects (which are animations) are taking between 2 and 3 days just to render. I'm pretty sure that most of that is due to the use of complex IOR files in my materials. If I could replicate the complex IOR effects with multiple BSDF's in the ranges of viewing angles where they occur, I would probably reduce my render times anyway, because I could bypass any need to compensate for transmission. This of course would be dependent on the ability to blend layers by viewing angle.
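For what it's worth, the post assembly itself is straightforward once you have the Fresnel channel. Here's a rough sketch of compositing two hypothetical renders with it, assuming float RGB images and an EXR-capable imageio plugin (the file names are made up):

```python
import numpy as np
import imageio.v3 as iio  # assumes an imageio plugin with EXR support is installed

# Hypothetical inputs: one render per specular variation, plus the Fresnel channel.
render_facing  = iio.imread("render_rough_facing.exr").astype(np.float32)   # tuned for the near-0 degree look
render_grazing = iio.imread("render_sharp_grazing.exr").astype(np.float32)  # tuned for the near-90 degree look
fresnel        = iio.imread("fresnel_channel.exr").astype(np.float32)       # 0..1, higher at grazing angles

# Use the Fresnel channel as a per-pixel blend mask between the two renders.
if fresnel.ndim == 3:
    fresnel = fresnel[..., :1]  # collapse to a single-channel mask that broadcasts over RGB
composite = render_facing * (1.0 - fresnel) + render_grazing * fresnel

iio.imwrite("composite.exr", composite)
```

Doing this for every specular variation, on every frame of an animation, is exactly the bookkeeping I'd rather the material system handled for me.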


Thanks Next Limit!
Last edited by zparrish on Wed Aug 27, 2014 5:15 pm, edited 1 time in total.
#382104
Here's an example of how I achieved this in Mental Ray. Basically, I stacked Falloff maps in the Reflection Color Map input so I could slice up the contribution based on viewing angle into more than two options:

[Image: stacked Falloff maps in the Reflection Color Map input]

It rendered like this on the top side of the center gasket (the brown piece between the aluminum retainers):

[Image: Mental Ray render showing the center gasket between the aluminum retainers]

If you were to orbit around that gasket in real life, its 0 to ~60 degree reflectivity behaves almost like velvet. However, once you go from 60 to 90 degrees, it slowly transforms into a very sharp, faded mirror reflection.

The Mental Ray version actually looks and acts like the real material. It also only touches the reflective component of the shading model, which isn't really what I'm wishing for here, since that approach would overcomplicate Maxwell's fairly straightforward material system.

Also, here's my current attempt at a Maxwell-translated version of the scene. I did make some UVW and material adjustments, so it's not a literal translation. Even the lighting solution was completely redone:

[Image: Maxwell render of the translated scene]

The initial problems with this one are probably not very obvious without holding a physical sample in your hands, but it looks more like plastic and less like the odd reflective appearance of the real gasket's surface.
#382132
I couldn't find the time to read all that.. but I very much agree with the idea of having a 'viewing angle' type of weightmap.

This would be very useful for micro-roughness, but also for thin-sss such as curtains or sheer tights where we want to perceive the thickness of the cloth at grazing angles (currently thin-sss has no rendered thickness).

Whilst it may be possible to code this function in an automatic, unbiased way, personally I would prefer to have a manual option for artistic control (just a fresnel-type weightmap would be perfect).
#382318
Hey Eric,
I wanted to link this request back to your excellent post :D involving a similar workaround:

http://www.maxwellrender.com/forum/view ... 11#p382311

-------------------------------------------------------------------------------------------

Continuing with the general discussion, I mentioned in the original post that another advantage to having layers blended by viewing angle is that it helps to maintain energy conservation. Maxwell already does this within each BSDF, as well as within any layer containing multiple BSDF's because it averages out their weights to a cumulative total of 100, if I understand the Maxwell docs correctly (http://support.nextlimit.com/display/mx ... ding+BSDFs). Additive layer blending can create unnatural surface brightness when the combined weight of layers exceeds 100 (http://support.nextlimit.com/display/mx ... ing+Layers).

I personally find it very difficult to distribute diffuse reflections (base color) and direct reflections (specular reflections) across multiple BSDF's and layers simultaneously, given the current blending and weight systems. If the material is just 2 BSDF's in the same layer, using identical "BSDF Properties", but having different "Surface Properties", it's very simple and works beautifully. In fact, it even handles double reflections. As soon as you start mixing different "BSDF Properties", regardless of whether they're on the same layer or not, it gets extremely hard to maintain energy conservation. It seems like most people fake the specular component with unsaturated and pure values, then reduce their weights to keep them from going out of control and breaking physics.

If layers were blended by viewing angle, then contributions from multiple layers could be sliced up so they don't add to each other, effectively preventing any violation of the conservation of energy rules. It would be comparable to having a falloff curve in each layer's alpha mask and thus throttling each layer's contribution. The only downside to having a falloff map in each layer's alpha mask is that you would need to make sure each falloff gracefully hands off the visibility to the next layer. It's exactly what I had to do with the Mental Ray falloff maps shown in my screenshots. If the layer blending by viewing angle were more like a global override for the entire MXM material, it could look and function similar to Photoshop's Gradient Editor slider. The length of the slider would represent 0 to 90 degrees. Each swatch on the top of the slider would represent the interface point between layers (50% <-> 50% blend). The smaller, diamond shaped sliders would control exactly where the blend starts (100% <-> 0%) and where the blend ends (0% <-> 100%).

[Image: Photoshop-style Gradient Editor slider mockup]

If there were more layers, there would be blend points between each of them.
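Here's a minimal sketch of how those interface swatches and diamond sliders could translate into per-layer weights (the function and parameter names are mine, and the interface angles just reuse the plastic example from earlier). Because every weight is derived from the same ordered set of ramps, the weights always sum to 1 no matter how many layers there are:

```python
def ramp(angle, start, end):
    """Linear ramp from 0 (at 'start' degrees) to 1 (at 'end' degrees)."""
    return min(max((angle - start) / (end - start), 0.0), 1.0)

def layer_weights(view_angle_deg, interfaces):
    """One weight per layer from gradient-editor style interfaces.

    'interfaces' is an ordered list of (blend_start_deg, blend_end_deg) pairs,
    one per boundary between consecutive layers -- i.e. one swatch plus its two
    diamond sliders. Returns len(interfaces) + 1 weights that sum to 1."""
    ramps = [ramp(view_angle_deg, s, e) for s, e in interfaces]
    weights = [1.0 - ramps[0]]
    weights += [ramps[i] - ramps[i + 1] for i in range(len(ramps) - 1)]
    weights.append(ramps[-1])
    return weights

# Four layers with blend regions around 10, 45 and 85 degrees (illustrative values).
interfaces = [(8, 12), (40, 50), (83, 87)]
for angle in (0, 11, 30, 45, 70, 86, 90):
    print(angle, [round(w, 2) for w in layer_weights(angle, interfaces)])
```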

The next problem that would arise is trying to stack different materials using the layer system. For that, we would need to be able to toggle which layers are part of the viewing angle control and which ones aren't. It might also be advantageous to allow multiple viewing angle layer groups; that way you could stack multiple micro-roughness surfaces on top of one another (like paint over plastic). You could also get into a nested layer setup, but that's currently beyond Maxwell's material system. At that point, it would almost make more sense to go to a node based material system, rather than a linear stack.
#382325
Again I struggle to understand all that writing, but my overall impression is that you are not thinking correctly about Maxwell materials. The thing is, you need to forget your old approach that was applicable to Mental Ray. When I started with Maxwell, I also tried to fake things using the tricks I was familiar with from biased rendering, but it's best to start over and think about the parameters as they relate to the behaviour of real-world materials.

re. energy conservation - Maxwell handles that automatically, so no need to worry about that (of course I may have misunderstood your point).

re. node based materials, that functionality should/could belong to the host software, and is only really needed for interactive materials where the appearance needs to be modified according to other variables in the scene.
#382351
Hey Eric,
I apologize for the excessive text. I know it's a lot, especially with some of the huge images in the middle. I just like to make sure I completely explain my points. If it helps to understand my post and responses, I'm really not much of an artist. I'm more of a scientist trying to use Maxwell to synthesize photography. That's actually why I dropped Mental Ray and picked up Maxwell, because Maxwell just seems less "gimmicky" and more intentionally direct.

I did actually have significant trouble when first starting out with Maxwell's material system. The official Maxwell Render YouTube channel really helped a lot with the series on the BSDF (which I highly recommend to anyone who hasn't seen it). After going through those videos, I'm fairly confident that I understand how Maxwell's material system deals with the known science of light, and subsequently where it oversimplifies and restricts photorealism. I don't know exactly which BRDF and BTDF models Maxwell uses to create its BSDF, but I have discovered that it's not critical to know it to that level (at least not yet :D ).

The Mental Ray example was really just to show that the reflectance curve could be controlled based on viewing angle. The need to do so is irrespective of the rendering engine, because in real life, this effect is caused by things that wouldn't make sense to duplicate in a computer generated environment (microroughness using real geometry). That's why there are shortcuts to achieve the same results. I've abandoned nearly all of my Mental Ray workflows and tricks, especially when working with Maxwell. The things I still apply are shared with nearly all rendering and compositing applications, primarily the math behind compositing color values in RGB.

Of all the ways that a reflectance curve could be customized in Maxwell, I'm suggesting that this one is the least destructive and the most compatible with Maxwell's commitment to physically accurate rendering.
Eric wrote: re. energy conservation - Maxwell handles that automatically, so no need to worry about that (of course I may have misunderstood your point).
It is actually possible to completely break energy conservation with the Maxwell material system, so it is a valid concern. Additive layer blending mode makes that possible. Here's an example that shows this very thing, as well as the MXM file to replicate it and the Cornell Box based scene I used it in:

http://csimages.c-sgroup.com/external_f ... stment.exr

http://csimages.c-sgroup.com/external_f ... terial.mxm

http://csimages.c-sgroup.com/external_f ... terial.mxs

I had to reduce the exposure by 3 to make this visible in Photoshop, but if you color pick the light in the top of the scene, and then color pick its reflection on the sphere, you'll notice that the reflection actually has higher color values than the light itself. That shouldn't be possible, due to the conservation of energy.

The material is just a plain BSDF within a layer. I then copied that layer 9 more times (so there are 10 layers) and set them all to Additive mode. If I kept duplicating the layers, the reflection on the sphere would only get brighter. It may be possible (I didn't test this) to add enough layers together that the sphere actually becomes a light source, amplifying any light that contacts it.

Alternatively, if you only have one layer with the same simple BSDF, the color values make more sense. The reflection on the sphere is significantly less intense than the direct intensity of the light.

http://csimages.c-sgroup.com/external_f ... stment.exr
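The arithmetic behind that result is simple enough to sketch (toy numbers below, not Maxwell's internal math): with Additive blending each duplicate layer contributes its full reflectance again, while weights normalized to a cumulative 100% can never exceed the incoming energy:

```python
incoming = 1.0           # radiance arriving from the light (normalized)
layer_reflectance = 0.6  # fraction of the incoming light one copy of the layer reflects

def additive_stack(n_layers):
    """Additive blending: every duplicate layer adds its full contribution again."""
    return n_layers * layer_reflectance * incoming

def normalized_stack(n_layers):
    """Weights normalized to a cumulative 100%: duplicating the layer changes nothing."""
    return sum((1.0 / n_layers) * layer_reflectance * incoming for _ in range(n_layers))

for n in (1, 2, 10):
    print(f"{n:2d} layer(s)  additive: {additive_stack(n):.1f}  normalized: {normalized_stack(n):.1f}")

# With 10 additive copies the sphere "reflects" 6x more energy than ever arrived,
# which is the brighter-than-the-light reading described above.
```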
Eric wrote: re. node based materials, that functionality should/could belong to the host software, and is only really needed for interactive materials where the appearance needs to be modified according to other variables in the scene.
For a system that would allow layers within layers, I just think a node based system would make it easier to understand for an end user. But that's just my opinion. I think ultimately what I'm trying to address with this feature request would promote interactive materials, regardless of the scene setup. Once you start adding control to materials, like customizing the reflectance curve based on another variable (in this case the viewing angle), you effectively get an interactive material. To me, if the material is dynamic enough to look real in all lighting conditions and from all perspectives, then it would qualify as an interactive material. This is just the kind of material that I need to render my animations in a photorealistic way.

I'm also a programmer, so I might just have to start looking into Maxwell's SDK and try to create a proof of concept for this. I have no idea how much of the engine is exposed in the SDK though, which is why I haven't really pursued it yet.