All posts related to V2
By micheloupatrick
#330378
Bubbaloo wrote:Oops, I came here looking for Maxwell interactive preview info. I must have been redirected. :lol:
The problem is that we don't have much to talk about, as we haven't had a single piece of info since the announcement (please, Next Limit, throw us a bone; even a screenshot would do)!
By max3d
#330379
SandroS wrote:
max3d wrote:Maybe there are much better settings which I just couldn't find
You think? I really didn't want my first post here to be negative, but some posts just can't be ignored. I have both Thea and Maxwell and can honestly say that they are both outstanding engines, with different strengths and weaknesses.

for the record, to see clear water and caustics in your example:

1. switch to Unbiased mode (MC engine)
2. check the 'Enable Caustics' box
3. increase the tracing depth to something around 20

Why have these settings? So that you can change the quality/speed ratio depending on how fast you need feedback (setup vs. final render).
Hi SandroS,

To be clear, I didn't want to comment on either renderer, only on the preview functionality. Furthermore, I don't consider your post negative; quite the opposite. Immediately after posting I of course contacted Thea, and after an email back with suggestions I sent them two new files reflecting two possible extremes, both of which cause the water to render properly and produce the caustics. If they agree that these are realistic results given the time and settings constraints, I will update my earlier post.

I fully understand why there would be production vs setup render settings. The confusion was caused by them being in totally different locations.

Max.
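Why SandroS's third step (a trace depth of around 20) matters for a glass of water can be sketched in a few lines. This is a hypothetical Python illustration, not Thea's actual code:

```python
# Hypothetical illustration (nothing to do with Thea's real code): why a
# shallow ray-depth budget turns glass-with-water black. A camera ray
# through a filled glass crosses several dielectric boundaries, and each
# refraction event consumes one bounce of the budget; once the budget is
# spent, the tracer terminates the ray and returns black.

def path_survives(interface_crossings, max_depth):
    """True if a refracted path with that many boundary crossings
    fits within the tracer's bounce budget."""
    return interface_crossings <= max_depth

# A glass of water seen edge-on easily produces 8+ crossings
# (outer wall, glass/water boundary, inner wall, and back out),
# and caustic light paths add more on top.
CROSSINGS = 8

for depth in (4, 8, 20):
    verdict = "clear water" if path_survives(CROSSINGS, depth) else "black/opaque water"
    print(f"max trace depth {depth:2d}: {verdict}")
```

The crossing count is a rough, assumed number; the point is only that the depth setting must exceed it with headroom for caustic paths.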
By max3d
#330380
Bubbaloo wrote:Oops, I came here looking for Maxwell interactive preview info. I must have been redirected. :lol:
Yeah, me too, but after ten pages it became clear that there was nothing to even speculate about, so it turned into a general preview / interactive rendering topic. It would be good to have some sort of benchmark for it anyway, so that when NL is done the testing methodology is already there. The playing field may not be ready yet, but when it is we'll have a yardstick waiting :)
By Richard
#330810
Thanks for keeping this thread so active, Max! Although I have absolutely NO idea what you are talking about most of the time (all of it, in fact), I really do appreciate such in-depth interactions that keep developers on their toes!

Ok, it is another Monday here, and for some reason I just have this gut feeling that today is the day!
By max3d
#330905
Richard wrote:Thanks for keeping this thread so active, Max! Although I have absolutely NO idea what you are talking about most of the time (all of it, in fact), I really do appreciate such in-depth interactions that keep developers on their toes!

Ok, it is another Monday here, and for some reason I just have this gut feeling that today is the day!
I'm sorry it's incomprehensible to you. I try to write things down in an understandable, not too technical way, but I will try to improve. I'm aware this is not a forum for hardware or software experts, and I have tried to put most of it in layman's terms.

To show my goodwill, I'll apply the adage that a picture is worth a thousand words. Straight from the GPU Technology Conference: Maxwell will be directly implemented in Nvidia's hardware:
Image

As you can see, it's unavoidable. Solutions converge even more, and in a completely different way than I was expecting :)
By Richard
#330931
Wow Max, if we get GPU rendering for Maxwell, using Maxwell, by Maxwell, things look like they'll rip in 2013 (assuming that the earth doesn't split in 2012). Like Brodes, I thought you may have been suggesting it would be 2013 until MR went GPU; I got confused again!

I can half imagine, though, considering the price of Tesla cards, that Maxwell (wouldn't you think they would have picked a different name there?) may well be expensive!
By max3d
#330938
Richard wrote:Wow Max, if we get GPU rendering for Maxwell, using Maxwell, by Maxwell, things look like they'll rip in 2013 (assuming that the earth doesn't split in 2012). Like Brodes, I thought you may have been suggesting it would be 2013 until MR went GPU; I got confused again!

I can half imagine, though, considering the price of Tesla cards, that Maxwell (wouldn't you think they would have picked a different name there?) may well be expensive!
It will be built in, so the price of Maxwell Render will no longer be important. $200 will buy you Maxwell, the Polaroid version. Remember, you heard it here first. Another design win for Maxwell :)
By brodie_geers
#330939
max3d wrote:
Richard wrote:Wow Max, if we get GPU rendering for Maxwell, using Maxwell, by Maxwell, things look like they'll rip in 2013 (assuming that the earth doesn't split in 2012). Like Brodes, I thought you may have been suggesting it would be 2013 until MR went GPU; I got confused again!

I can half imagine, though, considering the price of Tesla cards, that Maxwell (wouldn't you think they would have picked a different name there?) may well be expensive!
It will be built in, so the price of Maxwell Render will no longer be important. $200 will buy you Maxwell, the Polaroid version. Remember, you heard it here first. Another design win for Maxwell :)
Sounds cool. If we shake our monitor, will it render faster?

-brodie
By max3d
#330954
brodie_geers wrote:
max3d wrote:
Richard wrote:Wow Max, if we get GPU rendering for Maxwell, using Maxwell, by Maxwell, things look like they'll rip in 2013 (assuming that the earth doesn't split in 2012). Like Brodes, I thought you may have been suggesting it would be 2013 until MR went GPU; I got confused again!

I can half imagine, though, considering the price of Tesla cards, that Maxwell (wouldn't you think they would have picked a different name there?) may well be expensive!
It will be built in, so the price of Maxwell Render will no longer be important. $200 will buy you Maxwell, the Polaroid version. Remember, you heard it here first. Another design win for Maxwell :)
Sounds cool. If we shake our monitor, will it render faster?

-brodie
Shake was discontinued after being taken over by Apple, so that will no longer work. Nothing real, actually.

To be serious again: I have a large collection of preview renderers, some on CPU, some on GPU, and nothing real (pun intended) on hybrid, as the CPU is just too slow in those implementations. From what I have seen, most of them seem to be at least usable on current hardware and a huge improvement to the CG workflow. How useful they are will, however, depend on the actual users. I do most of my testing with low- to mid-range poly models and simple textures, and I try to avoid custom materials, as they would make it more difficult to compare speeds and usability.

However, Maxwell users will only be interested in their own preview, and they will use high-poly, texture-rich models with complex Maxwell materials. The premises in my testing work are:
- preview rendering will be most useful in complex cases, as every experienced user already knows how his model will behave under the lighting conditions and materials he often uses, etc.
- ergo, you will want to concentrate on the spots where you expect problems and sort those out, to be certain you're on the right track
- convergence to the final render output is not important while setting up the scene, as it's not a big deal to push the final render button instead of just letting the preview run on, BUT you do want to be able to preview the end result, otherwise it makes no sense. So a preview renderer in which you can enable/disable certain features to speed things up would be fine, since the user knows what he is concentrating on, but it should be completely clear and predictable to the user what the difference will be. Noise and fireflies, for instance, are in my view acceptable, as it's easy to imagine how an image will look without the noise. So the preview renderer could even produce more noise, no matter how much time you give it, if that is what it takes to get the more or less instant feedback you are looking for.

These are my premises, and that's why I tested Thea above with the kind of model and material I did, using the time frames 15 s, 2 m 35 s, and "long..." as measuring points.

I would be very interested in what others expect from a preview renderer, and whether they agree with my premises or have completely different demands. Maybe they really want fully interactive handling of complex scenes, or are willing to sit for more than 2.5 minutes to get proper feedback, etc. I'm all ears.
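A minimal sketch of how such a benchmark could quantify those measuring points, assuming the usual Monte Carlo behaviour that noise falls off roughly as one over the square root of the sample count. The pass rate and checkpoints below are made-up numbers for illustration, not measurements of any engine:

```python
import math

def expected_noise(samples, base_noise=1.0):
    """Relative noise level after `samples` progressive passes
    (standard Monte Carlo 1/sqrt(N) behaviour)."""
    return base_noise / math.sqrt(samples)

def samples_for_noise(target, base_noise=1.0):
    """Passes needed to reach a target relative noise level."""
    return math.ceil((base_noise / target) ** 2)

# The measuring points from the post, expressed as pass budgets,
# assuming (purely for illustration) 10 passes per second:
for label, seconds in (("setup glance", 15), ("problem-spot check", 155)):
    passes = seconds * 10
    print(f"{label}: {passes} passes, noise ~ {expected_noise(passes):.3f}")
```

The 1/sqrt(N) law is also why "wait longer" pays off so poorly: halving the noise costs four times the render time, which is exactly the trade-off a preview benchmark has to expose.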

Max.

* I didn't update the Thea post, as I'm still waiting for support to clarify things. I know how I can get caustics etc. with different rendering technologies, but the model I use is something they don't like, and I don't want to revert to the simpler case they provided. The intersection of glass with another material is, to me, exactly the kind of thing you would want to use your preview for. I have made loads of new renderings with the suggested settings, but the results, although different, are not very satisfactory. I still hope for further clarification of some issues in the produced images, which are still there after hours of cooking. I always try to be very supportive of new start-ups, so I will wait a bit longer before updating the images.
By Richard
#330958
Max, I agree with you entirely!

The speed of preview is the single biggest issue for me. I actually have little issue with long render times; overnight is generally fine for me (sure, I'd like it quicker), but in testing I want fast feedback. No matter what PC grunt you have, it will likely never make up for the man-hours one loses in testing!

The other point here is one's ability to make changes on the fly. I don't know about you, but I find my brain gets stuffed up with stops and starts: what was the last setting I changed, and by how much? Thinking quickly and fluidly (e.g. SketchUp as a design tool) really does help one maintain focus and produce results!
By max3d
#330980
Richard wrote:Max, I agree with you entirely!

The speed of preview is the single biggest issue for me. I actually have little issue with long render times; overnight is generally fine for me (sure, I'd like it quicker), but in testing I want fast feedback. No matter what PC grunt you have, it will likely never make up for the man-hours one loses in testing!

The other point here is one's ability to make changes on the fly. I don't know about you, but I find my brain gets stuffed up with stops and starts: what was the last setting I changed, and by how much? Thinking quickly and fluidly (e.g. SketchUp as a design tool) really does help one maintain focus and produce results!
Hi Richard,

Thanks for the input. Does that mean you agree with all my considerations?

Feel free to adjust if you wish. I took, for instance, about 30 seconds for my first timing, as I felt that was okay for me. If I'm rotating the scene I get near-instant updates with hardly any detail, mostly noise, but I don't mind, as I know the scene and get enough feedback this way. When I drop in a new material or adjust its settings, 30 seconds is about the limit at which I decide yes or no.

My next step is zooming in on possible problem spots, and I'm willing to spend about 2.5 minutes to see if they already show up, but all these numbers are just arbitrary. Maybe you and others have another threshold for hard-to-judge cases, where you would be willing to wait 15 minutes to see if it works the way you want. I don't know. There are no benchmarks for this kind of thing, and my intention is to develop one.

It's not easy, as you have to define the 'interactive' part of rendering and attach numbers to it. Realistically, we are still very far away from truly interactive in the sense of getting a fully rendered model at a 30 Hz refresh rate, so it's about compromises for the years to come. I posted my experiences with iray in another topic about CUDA here. Have a look at it, because for me the possibility of a brush / magnifier with which I can select small parts of my rendering and concentrate all GPU power there is the best invention since sliced bread (actually I don't like sliced bread, but you get the drift).

If people don't know what I'm talking about, I can show a preview window with that brush enabled.
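A rough sketch of what such a brush could do under the hood: skew the per-frame sample budget toward the selected region so it converges first. The function and numbers below are illustrative assumptions, not iray's or Maxwell's actual implementation:

```python
def sample_allocation(width, height, brush_rect, budget, focus=0.9):
    """Split a per-frame sample `budget`: a `focus` fraction goes to the
    brushed rectangle, the rest is spread over the remaining pixels.
    Returns (samples per pixel inside, samples per pixel outside)."""
    x0, y0, x1, y1 = brush_rect
    inside = max(0, x1 - x0) * max(0, y1 - y0)
    outside = width * height - inside
    per_inside = int(budget * focus) // max(inside, 1)
    per_outside = int(budget * (1 - focus)) // max(outside, 1)
    return per_inside, per_outside

# A 1000x1000 preview, a 100x100 brush, 10 million samples this frame:
inside, outside = sample_allocation(1000, 1000, (450, 450, 550, 550), 10_000_000)
print(f"{inside} samples/px inside the brush vs {outside} elsewhere")
```

With these made-up numbers the brushed region gets hundreds of samples per pixel while the rest of the frame stays at a noisy single sample, which is precisely the "concentrate all GPU power there" behaviour described above.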

At the moment I don't know how fast Maxwell's preview renderer will be or what features it will have, but if they read this: please put that in if possible!

Max
By Richard
#330983
Max

Yes mate, I do agree with all you're saying! Doing pretty much just single-shot arch-viz work, what I want most is a way to visualise real-time shadow and sky effects, and hopefully HDRI location / effect. Emitter lighting is pretty much handled perfectly now by Multilight.

As an aside, the unfortunate thing currently is that ML doesn't have the ability to adjust the levels of the individual channels, which for arch-viz work renders HDRI fairly useless unless you're up for hours of trial and error, since in most cases one can't use separate BG / illumination images and independently adjust their levels. This is one area where I think Thea will overtake MR for arch viz, as an LDRI can be used for the background, saving any excessive work in post-pro.
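The per-channel re-mixing described above is possible in principle because light transport is linear in emitter power: keep one HDR buffer per light source (or HDRI channel) and re-mix them with independent gains after rendering, with no re-render. A toy one-pixel sketch; the channel names are invented for illustration and are not Maxwell's API:

```python
# Light transport is linear in emitter power, which is what makes a
# Multilight-style workflow possible: store per-light radiance buffers
# and composite them with arbitrary gains in post, without re-rendering.

def composite(channels, gains):
    """Weighted sum of per-light radiance buffers (single floats here)."""
    return sum(gains[name] * value for name, value in channels.items())

channels = {"hdri_sky": 0.8, "emitter_a": 0.3, "emitter_b": 0.1}

# Original mix, then the sky dimmed and emitter A boosted, no re-render:
print(composite(channels, {"hdri_sky": 1.0, "emitter_a": 1.0, "emitter_b": 1.0}))
print(composite(channels, {"hdri_sky": 0.25, "emitter_a": 2.0, "emitter_b": 1.0}))
```

Setting a gain to zero is the "turn off emitters before testing" case, which is why it can be done instantly once the buffers are separated.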

Modo, as I understand it, has had real-time HDRI lighting for some time and would certainly serve this option well, but it's yet another workflow to learn. It must be a brilliant time saver for locating the HDRI's primary light where one prefers!

Shadows are the other tester for me. I like to use clipped trees to cast shadows onto larger, bland areas of a building, to add some artistry and soften the visual impact. Being able to turn off emitters prior to testing would help greatly to speed this up, as renders clean very quickly with emitters off and ML not required for testing. But going through a massive list of objects to find and hide emitters is too time-consuming to consider doing manually in Studio. I can handle this quickly in my host app, but like most things in Studio, things aren't ever meant to be easy!

Materials probably come in last!

I do understand what you are suggesting regarding a region brush / magnifier; mate, that would be a great idea!

The times you suggest as suitable for being happy with a result are still IMHO rather high: 2.5 min to 15 min, whoa! Again, this is where the likes of Thea may gain an edge (just a shame the UI is such a shocker), as its biased engine can give very fast and at the same time fairly representative previews of the likely unbiased result.

Shaderlight is soon to be released for SU, and although the renders are FAR from usable for commercial work, it does give some hint of what I hope will be the future of interactive results, using HDRI in particular. If its method of setting IBL locations and intensities, at least in proportion to each other channel, works out, it might well prove a simple preview that helps with that single aspect of testing for correct location!