All posts related to V2
#364900
Polyxo, it's much better to use a host app that does this INTERACTIVELY in OpenGL; then the user can dial in the right kind of subdiv algo that suits the geo. At the same time I tune my auto-bevel (in C4D). This sort of thing needs realtime feedback; it's not a good idea to let Maxwell handle this at rendertime - very bad idea. My host is not bogged down, because these effects are disabled in the viewport when working on other things.

Bottom line is that users need to upgrade their workflow/host app if they want to do more advanced work. I don't care about people who use Studio as their sole host, because that's not what Studio is for.
By Polyxo
#364901
eric nixon wrote:Polyxo, it's much better to use a host app that does this INTERACTIVELY in OpenGL; then the user can dial in the right kind of subdiv algo that suits the geo. At the same time I tune my auto-bevel (in C4D). This sort of thing needs realtime feedback; it's not a good idea to let Maxwell handle this at rendertime - very bad idea. My host is not bogged down, because these effects are disabled in the viewport when working on other things.

Bottom line is that users need to upgrade their workflow/host app if they want to do more advanced work. I don't care about people who use Studio as their sole host, because that's not what Studio is for.
Thanks Eric,
I have a decent knowledge of several CG packages and scan the market carefully, but I find none of these apps convincing enough to serve as my
host program for Maxwell. For me, as a product designer who mainly designs rather than visualizes things, your preferred program C4D, for instance,
certainly was no "upgrade" - quite the contrary.
For everything non-NURBS I prefer specialized standalone programs (SubDs, sculpting + UVs). I really like this tailored setup, and all is set when I import
third-party geometry into Rhino (I never use Studio). All that's missing is a useful mechanism to subdivide at rendertime. I could get along
without ever seeing the effect in the viewport, as long as one could dial in its effect separately from micropolygon displacement.
Seeing what's going on in Fire would do perfectly.

@ Mihai: My personal interest was not character animation.
I just wanted to give you another reason why it makes sense to import low-poly basemeshes and to subdivide later.
Some other users might have that interest though - and I felt you wanted to convince me that it was perfectly fine to bring in pre-subdivided basemeshes.
By JDHill
#364902
Not knowing ZBrush, I can't tell you guys how good or bad this is, but here's what I have so far, using VDispDiagnosic.ZPR:


ZBrush 4R5:
  • Image

Studio 2.7.20:
  • Image

Maxwell 2.7.20:
  • Image
By Mihai
#364905
Polyxo wrote:Mihai,
I believe to fully picture the problem one needs to leave the UV-Space for a moment - as important as it is.
If you bring a basemesh with 1000 polygons into Softimage, then apply 4 levels of smooth CC subdivision with a modifier and then send
that output to Maxwell, it will evaluate a quarter-million polygons already in UV space and further dice them with its triangle-based methods.
Or are you saying that one can not do this in Softimage?
Given that the mesh has a loop-based nature, there is no conceptual reason I can see that would forbid rendering it with rendertime-applied smooth
quad subdivisions. The results would match those one can get in Maya or similar. That feature is just not currently available.
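The polygon-count arithmetic in the quote is easy to check with a few lines (a minimal sketch; the helper name is illustrative, not from any post or SDK): each Catmull-Clark level splits every quad into four.

```python
# Quad count after n levels of Catmull-Clark subdivision:
# each quad splits into 4, so count = base * 4**n.
def quads_after_subdivision(base_quads: int, levels: int) -> int:
    return base_quads * 4 ** levels

# The quoted example: a 1000-polygon basemesh with 4 smooth levels.
print(quads_after_subdivision(1000, 4))  # → 256000, i.e. a quarter-million
```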
Maybe I still haven't made clear what I meant - when your sculpt app creates a disp map from your sculpt, it's like a resampling process. Imagine it like MP3 compression, or trying to match a perfectly smooth curve using a grid of pixels. When you say:

evaluate a quarter-million of polygons already in the UV-space and further dice it with its triangle based methods

there must be enough pixels in that texture, over that specific UV area of the model, to accurately reconstruct that detail! It makes no difference whether you first subdivide your mesh once or twice so the overall shapes/normals match better (as I mentioned, the biggest difference in outline is between the first and second level of subdivision; after that it's negligible), or whether you subdivide it in the renderer first. And this small step is really what you seem to be stuck on, saying Maxwell can't possibly match other renderers' detail because of this lack of subdivision. You will find that it is a struggle to get the kind of sharp detail that Maxwell's displacement allows in other renderers without a very heavy memory or render-time hit. Three examples I made a long time ago when we first introduced displacement:
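The "enough pixels over that UV area" requirement can be turned into a rough back-of-the-envelope estimate (a hedged sketch with made-up example numbers, not a Maxwell formula): assume roughly two texels are needed across the smallest feature you want to reconstruct, sampling-theorem style, and scale by how much of the UV square the model actually occupies.

```python
import math

# Rough estimate of the square map size needed to resolve a feature of a
# given width. `uv_coverage` is the (assumed) fraction of the UV square
# the surface occupies; ~2 texels per feature is a rule-of-thumb minimum.
def required_map_size(surface_width_cm: float,
                      smallest_feature_cm: float,
                      uv_coverage: float = 1.0,
                      texels_per_feature: int = 2) -> int:
    features_across = surface_width_cm / smallest_feature_cm
    texels = features_across * texels_per_feature / math.sqrt(uv_coverage)
    # round up to the next power of two, as texture sizes usually are
    return 2 ** math.ceil(math.log2(texels))

# e.g. a 60 cm wide surface with 0.5 mm detail filling half the UV square:
print(required_map_size(60, 0.05, uv_coverage=0.5))  # → 4096 (a 4k map)
```

Halving the smallest feature size quadruples the texel count, which is why pore-level detail quickly becomes impractical, exactly as described below.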

Image

Image

Image

Of these I find the tire the most impressive, because that kind of sharp displacement detail is really difficult to get. Everything on the rubber, including the text on the side, is displacement. I had to use a huge texture, of course. And if I zoomed in just a small amount on the tire tread, I would see problems in the details - not because of the engine, but because now there aren't enough pixels in the displacement texture to give me further detail.
By Mihai
#364906
Reaversword wrote:Guys, the ONLY way to get maps working perfectly is really SIMPLE.
Next Limit should do its own Vector Displacement/Displacement/Normals maps creator (and Chaosgroup for their Vray, and Mental Images for their MentalRay/iRay), etc..
I think that wouldn't be such a good approach because:

- people would be pretty unhappy with this extra step/app to load models and create maps from them when their sculpt app has this functionality already
- a renderer would still have to be compatible with the maps created by the sculpting apps, so it would be a lot of work for "nothing", because:
a. the map extraction process, while it can be complex, has the purpose of outputting simple color values in a map - black/white, up/down, or three channels describing direction for vector displacement maps. There aren't hundreds of ways for the renderer to interpret them.
b. Try to forget all those nice ZBrush/Mudbox viewport screenshot "renders" and instead try to find renders of that detail and see how often it matches perfectly. Personally I think displacement is useful for things that can't be well simulated with normal maps, so large to medium/medium-small detail. Smaller than that... I see no point in trying to extract skin pores from a displacement texture and being disappointed that you can't - or you can, but the map size and displacement settings would be impractical even for today's computers.
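The two map types in point (a) differ only in how the renderer turns pixel values into offsets. A minimal sketch of that interpretation (function names are illustrative, not Maxwell's API): a scalar map pushes the vertex along its normal, while a vector map decodes a full 3D offset from the RGB channels.

```python
# Scalar displacement: height in [0,1]; values above `mid` push the
# point outward along its normal, values below pull it in.
def displace_scalar(p, n, height, scale=1.0, mid=0.5):
    d = (height - mid) * scale
    return tuple(pi + ni * d for pi, ni in zip(p, n))

# Vector displacement: each RGB channel in [0,1] encodes a signed
# offset along one axis, independent of the surface normal.
def displace_vector(p, rgb, scale=1.0, mid=0.5):
    return tuple(pi + (c - mid) * scale for pi, c in zip(p, rgb))

print(displace_scalar((0, 0, 0), (0, 0, 1), 1.0, scale=2.0))   # → (0.0, 0.0, 1.0)
print(displace_vector((0, 0, 0), (1.0, 0.5, 0.5), scale=2.0))  # → (1.0, 0.0, 0.0)
```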
By Polyxo
#364912
Mihai,
I know that one needs enough pixels in the texture to displace properly. On the other hand, these textures should not get too large, as one needs to store them
in memory. That would prolong export time heftily and use a lot of memory. One might deal with that by giving important areas more UV space, by using UV tiles,
or by using separate UV sets. It's still very helpful to add some initial quad subdivision to avoid the renderer having to do the serious heavy lifting.
To get this optional extra subdivision, the most attractive option currently is to display a low-polygon representation in the viewport and to use HyperNURBS, TurboSmooth,
or comparable parametric subdivision modifiers - given they are available in the host software.

These are nice pictures.
But one has to face the fact that one could not get the same result on certain geometry with Maxwell Studio, Rhino, SolidWorks, ArchiCAD, Bonzai, formZ, MicroStation
and SketchUp - and, if the solidThinking plugin is still in development, there too. One might be able to replicate that tire with a dense cylinder primitive with one extracted end cap,
maybe. Those entwining ropes should also work with some dense, deformed cylinders.
But one would already run into problems with all the mentioned packages if one had just received that female torso section with 200 polygons from someone else, along with the maps.
In case a CAD user had purchased a cool low-poly but subdivision-optimized model at TurboSquid or similar, she'd be screwed even though it comes with all required maps.

There's a classic and already very old 2D displacement example here.
When (non-pretessellated) displacement was first introduced in Maxwell quite some years ago, having close to no relevant experience, I tried for hours and hours to replicate that head with
Maxwell from Rhino. I know today that I was simply doomed to fail by working with that low-res cage. The source files have become unavailable - maybe one of the readers still has them
and can upload them for you to try.

One can say that this is not your cup of tea and that one should rather switch to a "proper" host program.
Or do something about it.
I have said more than I ever wanted to say on this topic. It's a new week and I'm out for now.

Holger

PS: I find it somehow odd that you are obviously happily willing to fake small detail without actual deformation. Yes, I realize that there's no reasonable way around it
nowadays. Still, it's remarkable that NL seems unwilling to compromise on any light- and material-simulation issues but accepts geometry-detail limits defined
by the entertainment industry. I'd rather throw my model with all its actual detail into my virtual photostudio. Currently that would grind the
software to a standstill, but hopefully it won't stay that way forever.
#364928
Mihai wrote:
Reaversword wrote:Guys, the ONLY way to get maps working perfectly is really SIMPLE.
Next Limit should do its own Vector Displacement/Displacement/Normals maps creator (and Chaosgroup for their Vray, and Mental Images for their MentalRay/iRay), etc..
I think that wouldn't be such a good approach because:

- people would be pretty unhappy with this extra step/app to load models and create maps from them when their sculpt app has this functionality already
- a renderer would still have to be compatible with the maps created by the sculpting apps, so it would be a lot of work for "nothing", because:
a. the map extraction process, while it can be complex, has the purpose of outputting simple color values in a map - black/white, up/down, or three channels describing direction for vector displacement maps. There aren't hundreds of ways for the renderer to interpret them.
b. Try to forget all those nice ZBrush/Mudbox viewport screenshot "renders" and instead try to find renders of that detail and see how often it matches perfectly. Personally I think displacement is useful for things that can't be well simulated with normal maps, so large to medium/medium-small detail. Smaller than that... I see no point in trying to extract skin pores from a displacement texture and being disappointed that you can't - or you can, but the map size and displacement settings would be impractical even for today's computers.
Well, it depends. People working at this level aren't going to be afraid of, or complain about, Maxwell (or another render engine) having its own tool for making perfect, customized maps. An artist who has invested three days crafting a model will be grateful to have a perfect map creator, I can assure you.

There ARE hundreds of ways to interpret the map. Has the high-poly model been subdivided with smooth geometry or not? And with UV smoothing? Will the low poly have smoothed normals or not? Will the map be tangent-space or world-space? Which of the 96 XYZ combinations (world + normals) is the correct one? Should the Maxwell displacement node have the smooth option activated?
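One plausible reading of the "96 XYZ combinations" figure (a hedged reconstruction, not stated in the thread): 3! orderings of the XYZ channels times 2^3 sign flips gives 48 axis conventions, doubled again for tangent- vs. world-space maps. Enumerating them:

```python
from itertools import permutations, product

orderings = list(permutations("XYZ"))      # 6 possible channel orders
signs = list(product((+1, -1), repeat=3))  # 8 possible sign conventions
spaces = ("tangent", "world")              # 2 coordinate spaces

conventions = [(sp, o, sg) for sp in spaces for o in orderings for sg in signs]
print(len(conventions))  # → 96
```

Any mismatch between the convention the sculpt app wrote and the one the renderer assumes produces displacement that looks subtly (or wildly) wrong, which is exactly why these settings matter.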

As you can see, there really are. Of course, as a user you need to know what you're doing and why, but a cube should be able to turn into a sphere (with each face "exploded out", and each exploded-out face's borders matching its neighbouring faces perfectly, using an 8k map if necessary). As I said, Mihai, I agree with you that this is pushing the technique, but it would be possible. I don't believe that you don't believe it.

And of course, having a map creator for the engine doesn't imply the engine would stop working with maps from other software.

But please, remember, this is not any kind of request, just an opinion.
By Jozvex
#365026
I hesitate to step into this thread, haha! :lol:

What about the new render extensions scheme in Maxwell - could someone theoretically write a CC subdivision extension that could be tagged onto a mesh as a modifier that evaluates before Maxwell's displacement?
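The idea boils down to modifier ordering. A toy sketch of the concept (all names hypothetical - this is not the Maxwell SDK): a mesh carries an ordered list of modifiers, and a subdivision extension registered ahead of displacement simply runs earlier in that list at render/export time.

```python
# Toy modifier-stack model: modifiers are evaluated in list order.
class Mesh:
    def __init__(self, quads):
        self.quads = quads
        self.modifiers = []

def subdivide(mesh):
    mesh.quads *= 4   # stand-in for one Catmull-Clark level
    return mesh

def displace(mesh):
    return mesh       # stand-in for the displacement step

m = Mesh(quads=1000)
m.modifiers = [subdivide, subdivide, displace]  # subdivision runs first
for mod in m.modifiers:
    m = mod(m)
print(m.quads)  # → 16000
```

Whether the real extension API exposes such ordering guarantees is exactly the open question the post raises.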

*quickly runs away*
