All posts related to V2
#364854
JDHill wrote: The reason your above example fails is: you attempt to combine subdivision and displacement into one operation, when Maxwell only supports displacement. If you want a good result, you need to include the subdivision part in your mesh, and only the displacement part in your displacement map. Your modeling application may or may not support this separation: it may be necessary to bake the subdivided sphere to a mesh, and then paint your displacement on that.
This entirely defeats the purpose of displacement -- if there is no savings on geometry, then why not just save the sculpted mesh and call it a day... having a fully subdivided object and then applying vector displacement saves no resources at all.
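To put rough numbers on that trade-off, here is a back-of-the-envelope sketch (illustrative only, assuming Catmull-Clark-style 4x face growth per subdivision level, not anything measured from Maxwell):

```python
# Illustrative only: face-count growth under 4x-per-level subdivision,
# to show what "savings on geometry" means in this discussion.
base_faces = 6  # e.g. a cube
for level in range(7):
    print(f"level {level}: {base_faces * 4 ** level} faces")
# If that subdivision has to be baked into the exported mesh, the scene file
# carries the full count (level 6 of a cube: 24,576 faces; a 1,000-face base
# would be ~4 million); render-time subdivision would let the file store only
# the base faces and generate the rest on demand.
```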

That's not a feature of the render engine, it's a problem to be solved.

Best,
Jason.
By Polyxo
#364855
JDHill wrote: How many methods do you know for dividing a triangle into similar sub-triangles? We have already discussed this in another thread. The question is not specific to 3D applications, we are only talking about simple geometry.
I am familiar with a few, but assuming the numbers Reaversword piled up earlier in this thread are correct, I can say it's no scheme I know.
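For reference, the most common scheme I know of is the 1-to-4 midpoint split, where connecting the three edge midpoints yields four sub-triangles similar to the original. A minimal sketch, just to pin down what "similar sub-triangles" can mean here:

```python
# 1-to-4 midpoint split: each level multiplies the triangle count by four,
# and every sub-triangle is similar to the original.
def split_triangle(a, b, c):
    mid = lambda p, q: tuple((pi + qi) / 2 for pi, qi in zip(p, q))
    ab, bc, ca = mid(a, b), mid(b, c), mid(c, a)
    return [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]

print(split_triangle((0.0, 0.0), (1.0, 0.0), (0.0, 1.0)))
```

If the numbers quoted earlier don't follow that 4-per-level pattern, then whatever Maxwell does is indeed something else.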
The reason your above example fails is: you attempt to combine subdivision and displacement into one operation, when Maxwell only supports displacement.
Well, it may be that these two concepts are differentiated programmatically.
There's nothing like that in a sculpting workflow - I believe even Reaversword will agree here.
The displacement is explicitly in the mesh - it's not an effect one can turn on and off. There isn't
even a way to save the un-deformed subdivided mesh and the displacement information separately.

Essentially that means that with the current implementation one might be able to grow an ear from a plane, or produce some other examples of merely academic nature.
But one cannot use vector displacement as a production tool. A user who has created a character or any sort of true 3D model with Mudbox, Zbrush or any other suitable package, and has exported a low-poly mesh along with vector displacement, will not come to any sort of useful result in Maxwell.
Sorry, but one cannot put that differently.
Last edited by Polyxo on Fri Feb 08, 2013 6:41 pm, edited 1 time in total.
By JDHill
#364858
Guys, I was only breaking down why the given example must fail, not making any comment on what should or shouldn't be. And I don't think it is precise to say this completely defeats the purpose of displacement; the base mesh may still consist of only a fraction of the total polycount of the render-time displaced mesh. Whether that serves your specific needs may or may not be the case, but it is not of zero value -- when considered more along the lines of normal mapping, the feature can be considered useful. Wishing that one would be able to go from a cube to a full sculpture using a single map is a feature request; that requires both a map and an internal subdivision, and the lack of the latter cannot be said to be a problem or indicator of anything currently being broken. It is a request for a specific feature: render-time (SubD) subdivision. For this, the engine and the file format would need to deal with quads natively.
By Polyxo
#364859
JDHill wrote: Whether that serves your specific needs may or may not be the case, but it is not of zero value -- when considered more along the lines of normal mapping, the feature can be considered useful

Sorry but no, really.
One can only output vector displacement with programs designed for the whole pipeline, where one works with model-specific sets of textures. No program designed for the creation of generic, tileable maps (CrazyBump or such) gives you VD, and I have never seen anybody try to do anything along these lines with Zbrush, Mudbox or 3DCoat. It would even mean having to trick these apps, as there's no out-of-the-box way to get VD maps to tile.
The workflow you suggest would essentially mean buying a Porsche but only ever using first gear.
JDHill wrote: Wishing that one would be able to go from a cube to a full sculpture using a single map is a feature request; that requires both a map and an internal subdivision, and the lack of the latter cannot be said to be a problem or indicator of anything currently being broken. It is a request for a specific feature: render-time (SubD) subdivision. For this, the engine and the file format would need to deal with quads natively.
Well, I frankly expected that if one actually offers VD in a high-end engine, such a feature request would no longer be necessary.
Quad subdivision is the only logical consequence.
#364860
Dancing in the grey areas between explicit and implicit is not very good customer relations -- it reeks a bit of "bait and switch".

As in: "Vector Displacement, yeah we've got that."... The implicit part of that statement is the intended use of Vector Displacement is honored and supported.

The explicit truth is more like: "Yeah, we've got something that allows you to load Vector Displacement maps and get a result, but it won't necessarily be the result you expect/want/need.".

The second statement is far more truthful and also a lot less marketable. Over the years I've found that several "features" of Maxwell play in the grey area between implicit and explicit -- we have x feature, sure... just don't try to use it unless xyz criteria are in play.

Best,
Jason.
By JDHill
#364861
Yeah, well, just hold that judgment, as I am second-guessing some of my conclusions above. Points will lie somewhere in UV space, such that a vector map should be usable to displace them to some other place. That concept on its own doesn't involve quads or subdivision. If you have a displacement map which assumes a subdivided base mesh, this will not work; if not, it should. The confusion of terms is still certainly problematic, though, since "subdivision 1" in one place will have a very different meaning in the other.
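As a rough illustration of that idea - a conceptual sketch only, not Maxwell's implementation; `sample_vdm` here is a hypothetical stand-in for whatever looks up the map:

```python
import numpy as np

def displace(positions, uvs, sample_vdm, scale=1.0):
    """Conceptual vector displacement: every vertex has a UV coordinate,
    the map returns a 3D offset there, and the vertex is moved by it.
    Nothing in this step requires quads or subdivision; only the vertices
    that already exist get displaced."""
    out = np.array(positions, dtype=float)
    for i, (u, v) in enumerate(uvs):
        out[i] += scale * np.asarray(sample_vdm(u, v), dtype=float)
    return out
```

Whether the tessellation that produces those sample points matches what the sculpting app assumed when it baked the map is the separate question being argued here.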
By Mihai
#364864
Polyxo wrote: Essentially that means that with the current implementation one might be able to grow an ear from a plane, or produce some other examples of merely academic nature. But one cannot use vector displacement as a production tool. A user who has created a character or any sort of true 3D model with Mudbox, Zbrush or any other suitable package, and has exported a low-poly mesh along with vector displacement, will not come to any sort of useful result in Maxwell.
Sorry, but one cannot put that differently.
As you seem to be very sure of your conclusions, I have to ask:

- what experience do you have with non-NURBS modelling?
- what experience do you have with applying a displacement map (2D or vector displacement) to a base polygon mesh?
- what experience do you have working with displacement in other renderers?

If you have almost none, a few things to keep in mind:

- you can read the gazillion posts in Zbrush/Mudbox forums regarding problems and artifacts with even 2D displacement.

- there is no prerequisite that vector displacement works by subdividing a mesh in the way you are accustomed to thinking about subdivision surfaces in a modelling application. One might ask: if that were so, why is it not necessary for 2D displacement as well?

- vector displacement maps are not magical. A vector displacement map is a 2D displacement map with the addition of direction (see the sketch after this list). There is nothing inherent to it that says "if only the renderer could use it properly, I'd always get 100% of what I see in the Zbrush/Mudbox viewport". It's a mistake to expect that. First of all, you will always be limited by the pixels covering a certain area of your model in how accurately a displacement can be described - and this alone has *nothing* to do with the method of displacement subdivision used by a render engine.

- following that, the usual workflow is that if you start with a very low-poly base mesh (such as in your example of starting from a cube), you export at least the first or even the second level of subdivision from the modeller and use that as the base mesh for displacement. 2D or vector displacements will usually be more accurate this way and you save a little bit of memory.

- you are stuck on "non-quad" subdivision when there is no reason to be. I can show you very nice examples of vector displacement done in Maxwell - will you accept them? There is nothing academic about the examples on the docs page, for example; it's a pretty extreme example. Accept also the fact that at render time everything is turned into triangles anyway, regardless of whether the initial stage of subdivision was done using Catmull-Clark or Sammy Davis Jr or... SO, you may see artifacts even then, IF you don't use enough subdivisions.
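To make the "2D map plus direction" point concrete, here is a hedged sketch of the difference - not any particular renderer's code, and real tools differ in how they define the tangent frame:

```python
import numpy as np

def displace_scalar(p, n, h):
    """Classic 2D displacement: push the point along its normal by a height."""
    return np.asarray(p, dtype=float) + h * np.asarray(n, dtype=float)

def displace_vector(p, tbn, d):
    """Vector displacement: the map stores a full 3D offset (assumed here to be
    tangent-space), so the point can also move sideways, allowing undercuts."""
    return np.asarray(p, dtype=float) + np.asarray(tbn, dtype=float) @ np.asarray(d, dtype=float)

# A flat point with its normal along +Z:
p, n = (0.0, 0.0, 0.0), (0.0, 0.0, 1.0)
tbn = np.eye(3)  # tangent, bitangent, normal as columns
print(displace_scalar(p, n, 0.2))                   # moves straight up only
print(displace_vector(p, tbn, (0.1, -0.05, 0.2)))   # moves up and sideways
```

Either way, the offsets are only sampled where the tessellation puts vertices, so the texel density over a given patch of the model limits the detail regardless of how the renderer subdivides.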
#364865
Well, Polyxo, I've been playing with your files.

The best I can get in Maya is this:

Image
Image

BUT it is not exact - take a look at your stroke in Zbrush. The stroke in the render is deformed.

The only way to get it working has been to use subdivision (pressing 3 in Maya) on the cube; the other ways result in an exploded cube.

The low-poly cube had its normals flipped in Maya, so I reversed them first (I don't know if this is an issue with the Rhino .obj exporter, if the original cube was created there).

More things..

Is it me, or is Zbrush producing TANGENT vector displacements with the "vd Tangent" button OFF and WORLD ones with "vd Tangent" ON? Is this function reversed in your Zbrush 4R5 as well?!
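For what it's worth, the difference between the two modes comes down to whether the stored offset needs a per-point frame before it is applied - a generic sketch, since how Zbrush's "vd Tangent" flag maps onto this is exactly what seems unclear:

```python
import numpy as np

def apply_world_vdm(p, d):
    # World/object-space map: the stored offset is used as-is.
    return np.asarray(p, dtype=float) + np.asarray(d, dtype=float)

def apply_tangent_vdm(p, t, b, n, d):
    # Tangent-space map: the offset is expressed in the local (T, B, N) frame,
    # so it has to be rotated into world space at each point before use.
    t, b, n, d = (np.asarray(x, dtype=float) for x in (t, b, n, d))
    return np.asarray(p, dtype=float) + d[0] * t + d[1] * b + d[2] * n

# Feeding a tangent-space map through the world-space path (or vice versa)
# gives a plausible-looking but wrong result, which is why a reversed toggle
# is so hard to spot.
```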

Another thing I can't explain: Polyxo, I can't reproduce the VDM .exr texture you supplied in the file! I can create VDMs, of course, but I've tested many ways and none of them produced exactly the same map. To make your map it should be enough to just open your .Zbr file and create it, right? Mine are more contrasted, and although I get almost exactly the same result, I still see a thin black line where the cube borders are, whereas your file appears totally clean...


On the other hand, Maxwell as a render engine has no subdivision method, only displacement, although Maxwell respects the subdivision applied to a polygonal geometry (or at least in Maya it does). V-Ray does have subdivision in the render engine itself. And yes, I understand that subdivision (usually applied to the complete object) is a different method from displacement (which dynamically generates more geometry where it is needed).

Well, I don't know how far this pushes the vector displacement method; I mean, it isn't clear to me what exactly should happen when the displacement reaches the cube borders. I imagine every face of the cube should "explode" and every exploded face should match its neighbour's exploded face... but that is exactly what you're talking about with subdivision methods (not displacement). For example, when you subdivide the cube in Maya, the corners of the cube are pulled in. In Zbrush, by contrast, the corners stay in place and it is the faces that grow outward.
But we still have a lot of parameters to take into account: whether we subdivide the mesh in Zbrush with "smt" on, whether we do it with "Suv" (for the UV map) on or not, whether we export the maps with "vd SUV" or "vd SNormals" on or off, whether we choose the "smooth" option of the Displacement node of the Maxwell material... it's just crazy, and we still need to do a lot of tests... not to mention whether I should leave the object's normals hard or soft in Maya (and what about XSI? Max? Rhino? Modo?)...

I agree with Mihai that vector displacement maps are "not magic". It's clear that in theory we should be able to turn a cube into a perfect sphere without subdividing it, but in practice, with so many geometry-smoothing schemes, vector interpretations, scales, etc., it is really hard for the user. The companies should collaborate with each other and take their time over each feature, but that is a utopia that is not going to happen, except maybe between Mudbox and Max or Maya, and only because they share a company...

So maybe we should simply try to help the displacement technique by supplying it with a low-resolution mesh that is closer to the high-poly one, minimizing the error thresholds, and accept that there is always going to be some marginal error.

Yes, we can envy the way displacement works in, for example, Crytek's CryEngine 3. I suppose they took the software used to make the VDMs and programmed exactly the right handling of those maps into CryEngine 3. The most a user can do now is study their tools and find the settings that give the smallest marginal error... it's a shame, but it's what we have right now. :(


Edit: Phew! I was trying to export the sphere at subdivision level 3 from Zbrush as the "low poly", to get the map working with a smaller error threshold, and when I opened it in Maya the mesh turned out to be three different meshes making up the sphere! The cube is OK, and the cube at Zbrush subdivision level 2 too... if Zbrush does this often, we're in trouble...
By Polyxo
#364866
Hey Mihai,
somehow I feel treated like a schoolboy here...
To answer your questions: I have used all sorts of non-NURBS techniques for long enough to qualify, for about as long as I have used Maxwell.
I have modelled with NURBS for more than a decade. And no, I no longer use other engines, since I bought Maxwell back in the beta in 2005.

You mention constant problems with setting up displacement in all sorts of forums.
That is the nature of things: the concept of traditional displacement is complex. One has to understand how that stuff works in general, and then there are still tons of opportunities to mess up the details. So people have problems.
It does not mean that Zbrush displacement sucks - otherwise all the major film studios would be stepping on Pixologic's toes.

You might have a look at ZBrushCentral, or at basically every blockbuster.
There you see all flavours of Zbrush displacement rendered with several engines, in great beauty.

But Zbrush and Mudbox clearly expect quad-based subdivision downstream. Triangles won't do.
Just check the Renderman Docs.

To me, the examples in the Maxwell documentation therefore really do have only academic value.
Cool images for documentation, no question! But nobody outputs maps of that kind - mushrooms or strange thorns growing out of a single plane - except as proofs of concept. The underlying idea is to use the technique on full 3D models.

Show me a relevant work example:
a low-poly character or some mechanical object, as a close-up, with serious high-frequency detail (skin-pore level) and undercuts justifying the use of VD.
Base mesh of up to 1000 faces, rendered without applying any prior subdivision inside your host program, ideally from Studio.

PS: I am aware that in the end everything is triangles.
But it makes a world of difference whether one triangulates at subdivision step two of ten or at step ten of ten.
By Mihai
#364867
Polyxo wrote: But Zbrush and Mudbox clearly expect quad-based subdivision downstream. Triangles won't do.
So then 2D displacement is in trouble now as well? Or are you talking only about 3D displacement? Maybe we are confusing two "types" of subdivision here, if you expect displacement alone to turn a cube into a sphere:

do you mean this:
Image

or this:
Image

both of these are "subdivided".

Do you really think at the level you mean (pores etc) it would matter if the underlying tiny geometry is quads or triangles? It's vertices pushed around.
#364868
C'mon guys, I know a little friendly needling between us can be fun occasionally (always in good cheer, of course), testing each other's knowledge and such, but nobody here is a schoolboy. Each of us knows something the others don't, and we can all learn from each other; if that fails, forums lose all meaning.

Let's see. When Zbrush finds a triangle, it subdivides it into 3 four-sided polygons (with 3 edges running from a new central vertex to the midpoints of the sides). As for displacement tessellation... well, I don't know! I understand Maxwell ultimately uses triangles; for V-Ray I still don't know, but I would bet on yes... in any case, it is surely an ordered mesh, with each quad converted into 2 triangles.
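A minimal sketch of that triangle-to-three-quads split, as I understand the description (the exact rule Zbrush uses internally is an assumption here):

```python
# Add a vertex at the centroid and one at each edge midpoint,
# then form one quad per original corner: 1 triangle -> 3 quads.
def tri_to_quads(a, b, c):
    avg = lambda *pts: tuple(sum(x) / len(pts) for x in zip(*pts))
    centroid = avg(a, b, c)
    ab, bc, ca = avg(a, b), avg(b, c), avg(c, a)
    return [(a, ab, centroid, ca),
            (b, bc, centroid, ab),
            (c, ca, centroid, bc)]

print(tri_to_quads((0.0, 0.0), (1.0, 0.0), (0.0, 1.0)))
```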

If you want to look at Nvidia's tessellation interactively, you can check out the "Endless City" demo, or just watch this video (go to 2:45):

https://www.youtube.com/watch?v=sQQpCd_vvGU

Although, well, it's a real-time graphics-card demo... as usual, there are probably lots of tessellation techniques out there, so who knows...
By Mihai
#364870
I don't know how much...production value this image has but here goes:

Image

An example of 3D displacement on a more complex mesh. There is also some finer displacement, but I don't think I can extract any more than that from the texture - it's 4096 resolution but used for the entire body, so the area of the texture covering the shoulder is relatively small. In any case, it wouldn't be efficient to try to do that fine detail with displacement alone; a normal map is used to add the extra fine grain.

The mesh is just the default character mesh from Mudbox, exported at level 2 so I don't have to use too much subdivision in Maxwell. Note that when I extracted the map, I used the *unaltered* version as the target mesh (a reimported clean copy with no sculpting). If I had used the sculpted mesh at level 0 for the map extraction - even this level 0 is changed compared to the unsculpted, reimported version - I would have had to export it and use that as my mesh in Studio; otherwise the result would be wrong. Just mentioning this because it is often an overlooked point.

I think the displacement works.....or not? He's got a freaking ear growing out of his chest isn't that enough for you? :D
By Mihai
#364871
There's another thing regarding Rhino and displacement I'm not clear on at all.

If you model in NURBS, that gets tessellated at export depending on the detail level you choose there - nothing to do with Maxwell or the plugin, it's Rhino's tessellation. Which, as in SolidWorks, often gives you funky triangles running all over the place.

How do you expect Maxwell's displacement - Catmull-Clark subdivision or not - to fix this and give a nice clean quad mesh based on that chaotic mess of triangles?
I understand this is what you are expecting, or not?

In fact, if you do have this mess of triangles and you want to apply some arbitrary 3D displacement map - let's say screws or something - what does it matter whether Maxwell subdivides those triangles into quads and so on...?
By JDHill
#364873
The Rhino mesher is not a factor here, because Rhino deals with both meshes and NURBS. So if Holger brings something from Zbrush into Rhino or Studio, there's no difference; he's rendering the same mesh. When it does come to meshing NURBS in Rhino, you have a whole set of parameters to control how it is done; in this respect SW cannot be compared to Rhino - they are in different universes, SW having only its "Image Quality" slider.