All posts related to V2
Mihai wrote: Note that when I extracted the map, I used the *unaltered* version as the target mesh (reimported a clean copy with no sculpting). If I had used the sculpted mesh at level 0 to do the map extraction, even this level 0 is changed...
I was aware of that (of course, it happens in ZBrush too), but when I tested the Polyxo files I didn't get the same perfectly smooth texture he had included in them. I think it was because of this issue: I reimported the cube with the normals corrected for Maya, but I subdivided by pressing "d" in ZBrush... probably this is the reason for the "black lines" my own maps were generating... I should take another look.

It's incredible how many little details we have to take into account. I knew it, but I hadn't realized this one when creating the VDM...
By Polyxo
Hi Mihai,
thanks for posting these images. What I see here is a very high-poly basemesh which only at that state can perform the wow-action of growing the
default Mudbox ear on its belly. But that's not entirely the idea of vector displacement as used in ZBrush. Such deformation should actually be possible
with a model of a far lower base polycount. The underlying concept is to provide a high-performance method to transfer the high-res detail 1:1 to an
animatable mesh. One certainly could not animate your puppy.

Yes, I do indeed think the 2D displacement could also greatly profit from an alternative, true subdivision-based scheme.
We are obviously speaking of different desirable levels of detail here. The small detail on his shoulder doesn't look crisp even in Mudbox, and in the Studio render
these structures are completely washed out. The scales at the neck might also have issues, but they are too far away and DOF-blurred already.
Neither area could be used for closeup shots. I doubt that a larger map would help, or that an additional normal map would; it's already included anyway.

What you guys really have to understand:
Artists who use sculpting programs may have worked many hours and days to create a model where every square centimeter was carefully crafted.
If images were used in brushes to move vertices around, they had 16-bit greyscale values; everything was carefully smoothed out.
In the end the modeller sees the full detail load in the 3D viewport. It's not some OpenGL trickery, and it doesn't only appear when pressing the ZBrush render button.
We are looking at many millions of polygons in ZBrush or Mudbox; the detail is explicitly in there. When bringing such a model to the renderer, the expectation
is to replicate exactly this. One can easily have the sculpting app open on a secondary screen and see what's missing in the Maxwell render.

In my understanding one should be able to bring something like this model into a Maxwell host program with 1000 polygons and to profit from the vector-displacement bits in
the map on the nose and everywhere undercuts exist. The combined 2D displacement and normal map were there to capture the high-frequency detail, which
is nicely seen if you scroll down on that page.

Please note that I never intended to do anything to Nurbs.

Last edited by Polyxo on Sat Feb 09, 2013 12:12 pm, edited 4 times in total.
By Polyxo
Mihai wrote: Do you really think at the level you mean (pores etc) it would matter if the underlying tiny geometry is quads or triangles? It's vertices pushed around.
At a subdivision level of less than a millimeter of edge length per face, no, it does not make a difference.
But subdividing a low-poly cage with a triangle-based algorithm vs. a quad-based method makes a tremendous difference.
If Maxwell triangulates right away, it inevitably builds errors into the initial step which it then has to compensate for in the following steps.
What's the productive outcome of this? I suppose from where you sit the productive outcome would be for us (the users) to realize we are off-base, asking for Maxwell to do something it shouldn't/can't...

I reported this problem nearly a year ago with absolutely no response whatsoever (although that is nothing new)... if there were any serious intent to resolve this issue, discussions to that end should have taken place long ago. Threads on this topic (not including this one) have never resolved the issue -- so my original response still stands. ... 97&t=38315 ... 75&t=38662 ... 97&t=38816

I honestly don't know why I even bother to report bugs in Maxwell if I get no responses (you and your plugin have always been responsive to reports, so my opinion on this has nothing to do with how I perceive you in particular JD... but honestly you have little control over this particular situation).

By JDHill
Well, whatever my control over it, my response has involved a visit to the Pixologic store to try and figure out what's really going on. But you're doing a pretty good job of sapping motivation on that front. So let's please just try to keep it technical if we could...whether you think it's going anywhere or not, nobody's forcing you to hit the Submit button.
Fair enough; if this gets resolved I'd be happier than anyone here, since I teach both programs and I'd like to be able to recommend them together. Seeing you get involved gives me some hope -- if you need anything, let me know and I'll try to provide whatever you need.

Just for the record, when you save a vector displacement map out of C4D it works fine in Maxwell, in case anyone's wondering about that. I can dig out the file if necessary.
Well, then it works in C4D too.

We have Studio, Maya, Rhino (with the instances trick), and Cinema4D. All goes well.

JDHill, if you want to contact people at Pixologic, talk to Paul Gaboury; he is the one looking at this "uv map diagnose" issue.
By Mihai
Polyxo, I posted that in response to your request to show a more real-life example instead of just an "academic" single-poly plane. So it's more complex geometry, to show that vector displacement works fine here as well. That basemesh is not super heavy and could certainly be rigged and animated. But anyway, regarding your comments about lack of detail and animation, you may or may not be aware that the current trend is to use fewer and fewer actual maps to re-displace a piece of geometry, and instead just use the geometry directly. Why?

- because imagine the resolution of the map for the kinds of detail you are looking for. Again I repeat that it's silly to expect vector displacement to be so different from regular displacement when both use a map of finite pixels to define an area of possibly infinite detail when zooming in on a part. What is it that makes you think 100 pixels in a VD texture, mapped over an area of your model that covers 2000 pixels in your render, will magically give you clean, perfect detail, compared to 100 pixels in a 2D texture? Marketing?

- there are workflows to transfer an animation from a basemesh to a higher-res render mesh, loading it at render time.
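The map-resolution point in the first bullet is simple arithmetic. As a minimal, purely illustrative sketch (no real renderer involved; the function name is my own): if a patch of the model is covered by 100 texels in the displacement map but by 2000 pixels in the final render, each texel is stretched over about 20 render pixels, and no map type, scalar or vector, can restore the missing frequency content.

```python
# Illustrative texel-density check for the argument above: a map texel
# stretched over many render pixels cannot carry per-pixel detail.
def render_pixels_per_texel(map_texels_across: int,
                            render_pixels_across: int) -> float:
    """How many render pixels each displacement-map texel must cover."""
    return render_pixels_across / map_texels_across

# The example from the post: 100 texels spanning 2000 render pixels.
print(render_pixels_per_texel(100, 2000))  # 20.0
```

The ratio is the same whether those 100 texels store scalar heights or 3-component vectors; a vector map just moves each equally coarse sample in an arbitrary direction.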

Both these things make your approach to thinking about displacement more academic than what I demonstrated. I repeat that getting stuck on something like this is always counterproductive.

The easy solution to this showstopper would be to instead apply your displacement at a level 1 or 2 subdivision of the mesh. Honestly, big freaking deal. Animate your low-poly mesh, and before rendering, subdivide it twice. After this level, the smoothing created by Catmull-Clark subdivision is negligible. By the way, there are other subdivision methods besides this one; is it guaranteed that ZBrush, Mudbox, 3D-Coat or X use exactly the same subdivision method? Because if not, it might be impossible for your renderer to match the look exactly, since those first two levels of subdivision affect the look the most.

Of course there is always room for improvement, but you make it sound like it's completely useless, and that attitude is a waste of time for everyone; we try to keep it to a minimum on this forum. Both parties can be equally accused of being stubborn and finding excuses -- shocking, isn't it?
By Polyxo
Hi Mihai,
You seem to think that I expect some unrealistic magic to happen, and you find it kind of annoying to be confronted with silly wishes.
I just wonder where you find me putting these forward.

I expected the renderer to apply smooth subdivision at rendertime, and I have written that quite a few times already.
Without subdivision one cannot make displacement from sculpting apps work, neither 2D nor VD.

Maxwell already needs quite a bit of geometry to work with. There are different ways to deal with this.
One could -- as you suggest -- subdivide to some hundred thousand polygons already in the sculpting app.
This gives the user a static solution, and thinking of e.g. a complex character, with this method one would have to apply this sort of base subdivision
to all of its components. Considering body, cloth, and maybe some accessories, and the fact that such a set at full resolution in ZBrush can total
tens of millions of polygons, with this technique one would likely end up with a million polygons or more just in the basemesh.

That technique has quite a few disadvantages. The model comes into the Maxwell host pretty heavy, resulting in longer load times and,
depending on machine specs, in weaker realtime response of the host program. Most 3D apps can deal with such loads nowadays, just not
as easily as specialized sculpting apps. Frozen to medium res, one is also no longer able to do minor tweaks using the host program's
UV workspace, for instance; such work should, for good reasons, be done on the low-res version. And if one finds that yet another subdivision level
is needed, because the displacement still doesn't look good and cannot be further tweaked by Maxwell's means... well, then one has to go back to the sculpting app,
subdivide there, reload all model components, reapply materials, reposition in the scene -- the whole shebang. You already described required workarounds
for animations... All in all that strategy is quite unattractive.

Then there's a second option: one can bring a very low-res version of the model into a subdivision-surfaces-enabled host and apply a Catmull-Clark-based
subdivision modifier configured so that it only gets evaluated at rendertime, just before the geometry is sent to the renderer. Judging from
Reaversword's image I assume this is how it is hooked up in Maya, and it will work from elsewhere too.

That way one can work with a lightweight model in the viewport, with all of its advantages, but get a lot more geometry sent to the renderer.
Pretty neat, also because the mesh is no longer static. If one feels that the renderer could still use some more base geometry, one could losslessly
quad-subdivide once more.

The problem is that there's only a handful of Maxwell hosts where one has this option at one's disposal -- I'm just guessing here -- but the list should consist
of 3DSMax, Maya, Cinema, Modo, Lightwave, Softimage and Houdini. Maybe this list is even too long already.

There's nothing like that option in all the other officially supported 3D programs, including the Studio interface. All these hosts are comparatively crippled in terms
of rendering displacement on geometry output by sculpting apps, even if their platform generally allows dealing with meshes.

To me the most complete and fuss-free solution to this problem would be to introduce a consistently subdivision-based scheme, as Renderman
seems to have. That way one would not need to deal with subdivision modifiers in any supported package.
Most professionally sculpted geometry used in film probably gets output with that engine or compliant engines; that scheme seems very
well prepared for heavy displacement involving great alteration of the overall shape.

The second-best solution, as I see it, would be to at least offer a quad-based subdivision as a pre-process to Maxwell's triangle-based micropolygon displacement.
It should be easy to add, and it should be made available on all platforms, without exception.
By Mihai
Ok, now it feels like we are getting somewhere. I wrote that I think your expectations are impractical because:
Polyxo wrote: We are looking at many millions of polygons in ZBrush or Mudbox; the detail is explicitly in there. When bringing such a model to the renderer, the expectation
is to replicate exactly this.

Understand that in most cases you can't. Not in Renderman, not in anything. Because that detail, represented by millions of polygons over a tiny area, is converted into pixels over a UV area; in most cases one pixel ends up representing thousands of polygons. Try this: sculpt on some default character with good UV maps, zoom in on an area and sculpt lots of detail there. Now export this map as VD or whatever and apply it as a sculpt to the same unaltered basemesh. Did you get exactly the same result? No render engine was used, and it's the sculpting app's "fault" if the result is not the same -- thus it is useless...
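Mihai's round-trip test (sculpt, bake the map, reapply it to the clean basemesh, compare) can be made quantitative by measuring per-vertex error between the original sculpt and the redisplaced mesh. A hypothetical sketch with made-up data, assuming both meshes were exported with matching vertex count and ordering:

```python
import math

# Hypothetical round-trip check: compare the original sculpted vertices
# against the mesh rebuilt by reapplying the baked displacement map.
# Assumes both exports share vertex count and vertex ordering.
def max_roundtrip_error(sculpted, redisplaced):
    """Largest per-vertex distance between the two meshes."""
    return max(math.dist(a, b) for a, b in zip(sculpted, redisplaced))

# Toy data: one vertex reconstructed perfectly, one off by 0.05 units
# (the kind of loss the baking step introduces when texels are too coarse).
orig = [(0.0, 0.0, 1.0), (1.0, 0.0, 1.2)]
rebuilt = [(0.0, 0.0, 1.0), (1.0, 0.0, 1.25)]
print(round(max_roundtrip_error(orig, rebuilt), 3))  # 0.05
```

If this error is already nonzero with no renderer in the loop, the renderer cannot be expected to reproduce the sculpt exactly either, which is the point of the paragraph above.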

This is the reason the trend is going towards using the heavy geometry in the renderer directly instead of relying on the conversion from polys to pixels -- not because people enjoy making life awkward for themselves.

Regarding CAD apps, my understanding is that in most cases people have modelled something with Nurbs to which they would now like to apply some displacement. Those Nurbs would first be converted into triangles by the host app, so a preliminary subdivision step would not be so useful. If instead you mean you've modelled some polygon object in a poly modeller and then imported that into your CAD app, the question would be: why? You have no practical UV tools in a CAD app, you have very limited animation (since we were on the subject of animation), and you have no ability to smooth-subdivide a few steps before sending the geometry to the renderer.

Just saying that, practically speaking: displacement without smooth subdivision is not a disaster, and displacement with smooth subdivision is not a panacea.
Guys, the ONLY way to get maps working perfectly is, really, SIMPLE.

Next Limit should build its own vector displacement/displacement/normal map creator (and Chaos Group for V-Ray, and Mental Images for Mental Ray/iRay, etc.).

I know it's more work, but if you think about it deeply and seriously, it is the only way to forget about ZBrush/Mudbox settings and Maya/Max/XSI/etc. settings: just a map created specifically for the render engine, with no software in the middle of the workflow. It is the render engine that should read and correctly interpret maps -- just import the hi-res and low-res models and create the map specifically for that engine.

I understand the Next Limit team could think: ugh, man, no, not more work! But if you look at it with a cool head, this is the only real way to get maps working and adapting perfectly to a render engine.

All this time spent talking between companies, trying to reach agreement, collecting information to try to adapt the render engine... in the end it is a waste of time and doesn't work properly.

So, people of Next Limit, I know it would be more work, and there are lots of things to do, but do you really think there is another way to get exact results? Who's going to understand the Maxwell Render engine better than you?
WARNING: I'm not requesting anything. This industry thinks maps should be made by sculpting software, and in my opinion that is wrong.
We have what we have, and we do the best we can with it. But maybe it is not the most accurate way. Maybe we should have a "standard", some kind of agreement for the CAD/3D-app world -- like the World Wide Web Consortium (W3C) for web pages, but adapted to our "world" -- but...
By Polyxo
I believe that to fully picture the problem one needs to leave UV space for a moment -- as important as it is.
As soon as one subdivides and smoothes a mesh, one alters its shape quite drastically; it essentially gets much closer to the shape it had in the source program.
That also means that the normals stored in the map match better to begin with, and that less microvertex refinement is needed to reach a sufficient result.

I think we are generally on the same page concerning throwing more geometry at the renderer.
If you bring a basemesh with 1000 polygons into Softimage, then apply 4 levels of smooth CC subdivision with a modifier and send
that output to Maxwell, it will evaluate a quarter-million polygons already in UV space and further dice them with its triangle-based methods.
Or are you saying that one cannot do this in Softimage?
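The quarter-million figure checks out: Catmull-Clark subdivision splits every quad into four per level, so the face count grows by a factor of 4 per step, and four levels on a 1000-quad cage yield 256,000 faces. A minimal sketch of that arithmetic:

```python
# Catmull-Clark subdivision turns each quad into 4 quads per level,
# so the face count grows by 4x per subdivision step.
def cc_face_count(base_quads: int, levels: int) -> int:
    return base_quads * 4 ** levels

# The Softimage example from the post: 1000-quad cage, 4 levels.
print(cc_face_count(1000, 4))  # 256000 -- the "quarter-million" figure
```

The same function shows why Mihai's "subdivide twice before rendering" suggestion is cheap: two levels only multiply the count by 16.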

Bringing Nurbs into play in this discussion does not make sense at all. No advanced use of displacement is possible over this geometry type.
One can displace a single Nurbs surface, yes, or some primitive with a projector and some simple maps.

As soon as a Nurbs model has a certain level of complexity -- say, any piece of industrial design, or a car -- it is a polysurface. Polysurfaces represent the whole UV space
in every single one of their sub-surfaces, even in a tiny fillet. That is so in every Nurbs package one can buy. No problem when you want to assign a car-paint shader.
But whenever one needs a somewhat more complex, structured surface appearance, and especially if one wants to work with displacement, one needs to export
the model and unwrap UVs. There's no way around it. At that point we start talking about a (quad-dominant) mesh.
Displacement can seamlessly be applied then, yes, but one would indeed not profit from quad-based subdivision. For most use cases one will get along, but if
one wanted to apply vector displacement to that mesh, originally modelled in Nurbs, one would have to perform a (mesh) retopology, in quite the same way as one would
do with any sort of non-displacement-optimized mesh created outside of the CAD world.

All supported CAD applications can also load and render meshes; it is commonplace, especially for items which are traditionally hard to model in Nurbs.
Limiting the engine to just dealing with stuff created in Nurbs would be nonsense.
Given that a mesh has a loop-based nature, there is no conceptual reason I can see which would forbid rendering it with smooth
quad subdivision applied at rendertime. Results would match those one can get in Maya and such. That feature is just not currently available.
