#396099
Hey, sorry for the delay.
I do not have any particular technique.

To simplify, one PBR material roughly equals one layer in Maxwell.
So it is impossible to preview (and generate) an object precisely in Substance Designer or Painter,
especially if each object contains several materials with different layers.

For Painter with a single mesh and one PBR material, it's even more complicated,
because the result is one mxm that includes several materials (so a lot of layers)
and, to make matters worse, a complex mask system for each material.

It is best to work with separate objects to simplify the process.

Some of my work with Substance Designer:
> the cannon is made with different mxm files
> the unicorn is made with a single mxm (five different materials with a mask system)

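The single-mxm approach described above (one mesh, several materials selected by masks) usually starts from a color ID map exported from Painter. As a minimal sketch, here is one way to split such an ID map into per-material grayscale masks that could then weight each layer of a multi-material mxm. The material names and ID colors below are assumptions for illustration; a real ID map uses whatever colors were assigned in Painter.

```python
import numpy as np

# Hypothetical material -> ID-map color assignments (RGB).
ID_COLORS = {
    "steel":   (255, 0, 0),
    "leather": (0, 255, 0),
}

def masks_from_id_map(rgb: np.ndarray, tol: int = 8) -> dict:
    """Split a (H, W, 3) uint8 color ID map into {material: (H, W) uint8 mask}.

    A pixel belongs to a material if every channel is within `tol`
    of that material's ID color (tolerance absorbs compression noise).
    """
    rgb = rgb.astype(np.int16)  # avoid uint8 wraparound when subtracting
    masks = {}
    for name, color in ID_COLORS.items():
        close = np.all(np.abs(rgb - np.array(color)) <= tol, axis=-1)
        masks[name] = (close * 255).astype(np.uint8)
    return masks
```

Each resulting mask is white where its material lives and black elsewhere, which is exactly the weight-map shape a layered material system expects.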
#396111
The more I work with Substance, Arnold and Maxwell, the more I wonder why Maxwell doesn't have a separate control for metalness. It could even be a toggle on a BSDF. This would make sense for a variety of reasons. I know that Maxwell isn't focused on VFX, but a Substance-style workflow (including Substance scans, similar to V-Ray scans) really is the future of this stuff (at least as I see it).

Would it not make sense? What am I missing?
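For context on what a "metalness control" typically does under the hood: in the metal/rough PBR model, metalness is a per-pixel blend between a dielectric response (dark specular around F0 ≈ 0.04, colored diffuse) and a metallic response (colored specular, no diffuse). A minimal sketch of the standard conversion, not anything Maxwell-specific:

```python
# F0 reflectance of a generic dielectric (IOR ~1.5); the usual PBR constant.
F0_DIELECTRIC = 0.04

def metal_rough_to_spec_gloss(base_color, metalness):
    """Convert one metal/rough texel to spec/gloss diffuse and specular colors.

    base_color: (r, g, b) in 0..1; metalness: scalar in 0..1.
    Metals get no diffuse and take their color from specular;
    dielectrics keep their diffuse color and a constant gray specular.
    """
    diffuse = tuple(c * (1.0 - metalness) for c in base_color)
    specular = tuple(F0_DIELECTRIC * (1.0 - metalness) + c * metalness
                     for c in base_color)
    return diffuse, specular
```

This is also why a metalness toggle maps naturally onto a two-layer material: a dielectric layer and a metal layer, with the metalness map as the blend weight.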
#396462
It is a different render system, that's all.
Substance has two main PBR workflows: metal/rough (good for Disney's Principled shader) and spec/gloss (good for V-Ray).
The output is just texture maps with different naming conventions.
The PBR workflow is primarily for games, where there is an overhead on the number of textures.
There are lots of game workflows where you swizzle different channels within a texture to drive different shader parameters, like roughness or specular value. In Maxwell there are fewer constraints on the number of textures or their resolution, so it is better and easier to keep different materials separate. It is great that a game-engine texture generator can output maps that Maxwell can use, but expecting Maxwell to perform like a game-engine renderer is self-defeating. Imagine trying to drive Maxwell's complex IOR with an image map, and how you would go about making such an image!
PBR just means physically based, or, if you use Disney's Principled shader, physically plausible.
Maxwell's premise is simulation; if you compare it to something like Ansys, you would not want your FEA simulations driven by game-engine physics.
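The channel "swizzling" described above can be made concrete. One widely used convention (glTF's ORM layout, used here as an example) packs occlusion, roughness and metalness into the R, G and B channels of a single texture to save samplers in a game engine, whereas an offline renderer would simply read three separate grayscale maps:

```python
import numpy as np

def pack_orm(occlusion, roughness, metalness):
    """Pack three (H, W) grayscale maps into one (H, W, 3) ORM texture.

    R = ambient occlusion, G = roughness, B = metalness (glTF convention).
    """
    return np.stack([occlusion, roughness, metalness], axis=-1)

def unpack_orm(orm):
    """Split a packed (H, W, 3) ORM texture back into three grayscale maps."""
    return orm[..., 0], orm[..., 1], orm[..., 2]
```

The packing is lossless either way; the disagreement in the thread is only about whether the renderer should be expected to consume the packed form.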
#396472
Brian Buxton wrote:
Tue Feb 06, 2018 10:26 am
It is a different render system, that's all. […]
I completely agree on using separate layers to create specific material attributes when creating complex material structures. Thinking of substance as only a "game engine texture generator" is, however, a bit shortsighted in my opinion, especially given how quickly it's being adopted by the VFX industry. No, procedurals alone aren't enough, but the mask generators alone are worth the price of admission.

I wrote this in another thread, but I think it's appropriate here as well:
There are two ways to think of rendering and I find that most working with Maxwell are limited to one way...

The first is to make something that is believably perfect. We all do this sometimes, but for people doing product rendering, or even arch viz, the goal is to create a material that seems real but that is perfect. I've worked as a commercial photographer and that is often the goal of commercial retouching, to correct all the tiny, barely noticeable errors so that the photograph makes the product look believably perfect. This is how I see most folks working.

The second way is to make something perfectly believable. Not simply Pottery Barn Catalog believable, but really believable in all contexts. This is how most visual effects artists work. It's funny, I spent the first half of my life trying to figure out how to remove imperfections and now I'm spending the second half of my life figuring out how to add them.

Making a material that looks like wallpaper is not hard and can be achieved in a single layer. Making a material that looks like wallpaper that has been hanging in a cheap motel since the 50s is a different task. This is why many folks are looking to integrate with programs like Mari, Substance and Quixel. In addition to those, and Maya, I'm constantly running ZBrush, Mudbox (though less of that now), Vue and now Marvelous Designer, all in an effort to create a photorealistic, ART DIRECTED surface. It has taken me a while to fully realize this in Maxwell (certainly longer than it would in other render packages, like V-Ray or Arnold) but the results are fantastic.
Maybe I'm using the wrong render engine; however, as a photographer and cinematographer, I find the photographic controls and lighting instruments very intuitive.