By Max
#399349
Hello everyone.
I have been extensively testing Maxwell V5 in demo mode and I have some questions; it would be nice to get an answer or some insight.
I've tested V5 on a scene of 8.3 million polygons. My workstation is an overclocked Intel 990X Extreme running two GeForce GTX 1080s with 8 GB of VRAM each, for a total of 16 GB.
I understand it's not the latest machine.

That said, I also have a Redshift licence. In case anyone isn't aware, Redshift uses out-of-core technology when sending data to the GPU cards; without it, the GPU memory would fill up much more quickly.

Now, needless to say, I was a bit disappointed that Maxwell V5 did not even start rendering this scene; it just shuts down (in GPU mode), most likely because it ran out of memory on the GPU cards.

My main gripe is that I have never liked GPU engines in general, because I think they are more of a headache and an expensive way of doing things. It would be amazing if everything worked perfectly, but sadly the memory limitation is a showstopper, although I didn't think I would hit the memory limit on an 8.3 million polygon scene with just a couple of 8K textures.
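For perspective, here is a rough back-of-the-envelope sketch of what that raw data alone might occupy. The per-vertex and texture layouts are assumptions for illustration, not Maxwell's actual internal formats, and a real engine adds acceleration structures and working buffers on top:

```python
# Rough estimate of raw scene data on the GPU.
# Layouts below are illustrative assumptions, not Maxwell internals.

def mesh_bytes(polygons, verts_per_poly=3, bytes_per_vertex=32):
    """Triangle mesh: position + normal + UV at ~32 bytes per vertex."""
    return polygons * verts_per_poly * bytes_per_vertex

def texture_bytes(width, height, channels=4, bytes_per_channel=1,
                  mipmaps=True):
    """Uncompressed texture; a full mip chain adds roughly one third."""
    base = width * height * channels * bytes_per_channel
    return base * 4 // 3 if mipmaps else base

total = mesh_bytes(8_300_000)           # the 8.3 million polygon scene
total += 2 * texture_bytes(8192, 8192)  # a couple of 8K textures

print(f"raw scene data: ~{total / 2**30:.1f} GiB")  # ~1.4 GiB
```

Under those assumptions the meshes and textures come to roughly 1.4 GiB, well under an 8 GB card, which is exactly why hitting the limit here is surprising.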

So, back to my question: what about CPU mode?
I know I sound weird asking this, but I would love to understand whether the Maxwell CPU engine is still being developed, fixed and made faster, or whether all the effort is going into the GPU system. Because frankly, as far as I can tell, unless you are rendering fairly small scenes, a GPU engine without out-of-core technology will pretty much run out of memory, unless you spend a fortune linking multiple cards with lots of memory.

So is Maxwell CPU at its peak and unable to be improved further, especially in terms of speed, feedback and interactive FIRE? (In a scene with 8 million polygons and textures, the FIRE preview is very slow.)

While I love Maxwell's quality and the material manager, and I think the quality is still superb and unmatched, I just feel this engine is a huge step back in productivity and workflow speed.

Just to add that I've tested Maxwell 3.2 and Maxwell 4 in CPU mode and it is basically the same. Actually, Maxwell 3.2 performs far better than V4 and V5 in voxelization, scene export, and the time between pressing Render in Maxwell Studio and the Maxwell Render app actually starting to render (I still have to understand why V4 and V5 are so slow here compared to V3).


P.S. I just tried removing all the textures, using only a default grey Lambertian shader and a physical sky, and GPU rendering still doesn't start; it just crashes. The console says it used 3940 MB after voxelization, not even half of a single one of my GPUs.
Something is definitely going wrong.
By Mark Bell
#399351
Just a stab in the dark here: have you tried changing your Textures Resolution to, say, 256x256 in the File/Preferences/Viewport Rendering tab? When we set it higher, the file would crash in GPU mode, but not when it was set back to 256x256.

I sort of agree with your other comment about not giving up on CPU development, at least until hardware is cheap enough to fill the box with enough GPUs to make it affordable. It may be that a combination of both works best, and that Maxwell needs to do a hardware check first to know what percentage of GPU/CPU to use in renders for optimum quality AND speed, which would be different for each user's setup.
By Max
#399353
Mark Bell wrote:
Tue Feb 11, 2020 11:58 am
Just a stab in the dark here: have you tried changing your Textures Resolution to, say, 256x256 in the File/Preferences/Viewport Rendering tab? When we set it higher, the file would crash in GPU mode, but not when it was set back to 256x256.

I sort of agree with your other comment about not giving up on CPU development, at least until hardware is cheap enough to fill the box with enough GPUs to make it affordable. It may be that a combination of both works best, and that Maxwell needs to do a hardware check first to know what percentage of GPU/CPU to use in renders for optimum quality AND speed, which would be different for each user's setup.
I've removed every texture in the scene and it's just a flat Lambertian shader.
I've located the issue: it's the curtains, which apparently have too many polygons, even though Redshift and every other render engine renders them on the fly just fine.
Maxwell's CPU mode also renders them fine. It's really weird, because I do not exceed the memory limit.
It feels like every time I give Maxwell Render a chance, I'm a beta tester, lol.
By CDRDA
#399354
Max wrote:
I've tested V5 on a scene of 8.3 million polygons. My workstation is an overclocked Intel 990X Extreme running two GeForce GTX 1080s with 8 GB of VRAM each, for a total of 16 GB.
With non-Quadro cards I'm pretty sure the total memory available is that of the GPU with the lowest amount of memory, not the sum. Therefore you only have 8 GB available. This certainly used to be the case.

I have had similar issues on my RTX 2080 Ti with 11 GB. I can render a 1200x800 image with no problem, with a few instanced reference trees and a couple of 8K textures, but when I up it to 2400x1600 the GPU engine says no way. No chance of a high-res final visual. Hopefully with the new update I can at least render out my draft images on the GPU, because the speed improvements over CPU are impressive.
By Max
#399355
CDRDA wrote:
Tue Feb 11, 2020 1:23 pm
I've tested V5 on a scene of 8.3 million polygons. My workstation is an overclocked Intel 990X Extreme running two GeForce GTX 1080s with 8 GB of VRAM each, for a total of 16 GB.
With non-Quadro cards I'm pretty sure the total memory available is that of the GPU with the lowest amount of memory, not the sum. Therefore you only have 8 GB available. This certainly used to be the case.

I have had similar issues on my RTX 2080 Ti with 11 GB. I can render a 1200x800 image with no problem, with a few instanced reference trees and a couple of 8K textures, but when I up it to 2400x1600 the GPU engine says no way. No chance of a high-res final visual. Hopefully with the new update I can at least render out my draft images on the GPU, because the speed improvements over CPU are impressive.
Even if that's the case and only one GPU is used, the memory used after voxelization (according to the Maxwell console) is 3980 MB, so it shouldn't crash at all.
#399356
The memory shown after voxelization is not the total amount. That figure does not include textures, the image buffers, or the intermediate memory needed during rendering. Final image resolution is another thing to take into account.
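To get a feel for how much the image buffers alone can grow with resolution, here is a rough sketch; the channel count, precision and number of buffers are assumptions for illustration, not Maxwell's actual buffer layout:

```python
# Rough estimate of image-buffer memory at two output resolutions.
# Buffer count and pixel format are illustrative assumptions.

def buffers_mib(width, height, buffer_count=3,
                channels=4, bytes_per_channel=4):
    """A few full-frame buffers stored as 32-bit float RGBA."""
    one_buffer = width * height * channels * bytes_per_channel
    return one_buffer * buffer_count / 2**20

print(f"1200x800:  ~{buffers_mib(1200, 800):.0f} MiB")   # ~44 MiB
print(f"2400x1600: ~{buffers_mib(2400, 1600):.0f} MiB")  # ~176 MiB
```

Quadrupling the pixel count quadruples the buffer memory, and every extra render channel (alpha, object/material ID, and so on) multiplies it further.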

The memory does not add up on multi-GPU setups, on any card type. It works more like a network, with each card acting as a node.

Anyway, memory itself is not the only barrier; the engine can give errors on the number of vertices even when they fit in memory. I test with a 32 GB card and have scenes failing too. The good news is that we are addressing all these problems in the mid term (I mean, we have people working on this; not for 5.1, but we hope to improve the behaviour in some 5.x release).
#399357
Hello everybody,

That's not good news. We just bought 3 Maxwell V5 licenses (as upgrades) and then wanted to purchase the necessary hardware (Threadripper 3990X + 3x 2080 Ti). If I interpret your posts correctly, we will not be able to produce professional renderings (at the necessary resolution) with this hardware. Is there an official statement from Maxwell about your experience yet?

Best,
Alex
#399358
Many of our marketing images are done with the GPU engine, some of them even on an RTX 2060. We are using Maxwell GPU in production on other internal products.

Memory is a bottleneck, as noted in the requirements:
https://nextlimitsupport.atlassian.net/ ... GPU+engine

And we are actively improving this right now. We are trying to run shorter dev cycles, so while "after 5.1" can read as very far away, it may be sooner than expected.
By dmeyer
#399361
CDRDA wrote:
Tue Feb 11, 2020 1:23 pm
I've tested V5 on a scene of 8.3 million polygons. My workstation is an overclocked Intel 990X Extreme running two GeForce GTX 1080s with 8 GB of VRAM each, for a total of 16 GB.
With non-Quadro cards I'm pretty sure the total memory available is that of the GPU with the lowest amount of memory, not the sum. Therefore you only have 8 GB available. This certainly used to be the case.

I have had similar issues on my RTX 2080 Ti with 11 GB. I can render a 1200x800 image with no problem, with a few instanced reference trees and a couple of 8K textures, but when I up it to 2400x1600 the GPU engine says no way. No chance of a high-res final visual. Hopefully with the new update I can at least render out my draft images on the GPU, because the speed improvements over CPU are impressive.
It isn't really a Quadro vs GeForce limitation. The GPU memory limit is what it is, unless the application explicitly supports out-of-core memory and/or NVLink. To my knowledge, Maxwell GPU does not support either of these yet.

You can monitor total GPU memory usage with GPU-Z. The Maxwell console would not be a good indicator of total GPU memory usage, as you may have other applications claiming some of that memory.
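If you want a scriptable cross-check alongside GPU-Z, something like this works on any machine with the NVIDIA driver installed; it is just a small sketch that shells out to nvidia-smi:

```python
# Poll per-GPU memory via nvidia-smi (ships with the NVIDIA driver).
import subprocess

def gpu_memory():
    """Return a list of (index, used_mib, total_mib), one per GPU."""
    out = subprocess.check_output(
        ["nvidia-smi",
         "--query-gpu=index,memory.used,memory.total",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    return [tuple(int(v) for v in line.split(", "))
            for line in out.strip().splitlines()]

for idx, used, total in gpu_memory():
    print(f"GPU {idx}: {used} / {total} MiB in use")
```

Run it while Maxwell is voxelizing and rendering to see what the cards are actually holding, including memory claimed by other applications.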
#399381
Alexander Kuhnert wrote:
Tue Feb 11, 2020 2:35 pm
Hello everybody,

That's not good news. We just bought 3 Maxwell V5 licenses (as upgrades) and then wanted to purchase the necessary hardware (Threadripper 3990X + 3x 2080 Ti). If I interpret your posts correctly, we will not be able to produce professional renderings (at the necessary resolution) with this hardware. Is there an official statement from Maxwell about your experience yet?

Best,
Alex
Actually, I'm working on 5 different kinds of setups:

1) Dual Xeon 2697 v3 + 3x Titan X 12 GB
No problems at all with small/medium projects.

2) Dual Xeon 2698 v3 + 3x Titan X 12 GB
No problems at all with small and medium projects.

3) 7700K @ 5.2 GHz + 2x 1080 Ti
I got some problems on high-poly projects using GPU rendering. Overall this is the worst system to render on.

4) Threadripper 2990WX + 2x 1080 Ti
I mainly use the CPU for rendering plus FIRE, and it works like a charm, even at higher output resolutions.
The 1080 Ti still gives some problems with very complex scenes; using the CPU is a far better solution in that case.

5) HP ZBook 17 G6 with a Xeon 2286M and a Quadro RTX 5000
On the portability side, a Quadro with 16 GB of memory is actually the best solution I have found for fast rendering and fast previewing, even on complex projects.

Compared to the systems listed above, only the Threadripper system works better for me.


I'll soon receive a 3990X to test alongside a Quadro RTX 6000, so I can find out how much that beast of a CPU could improve our workflow.

Overall, I think a 3990X wouldn't give you any problems, and even previewing the project using FIRE should be as fast as using a Quadro GPU.

In my opinion, I would drop the 3x 2080 Ti and consider buying a single Quadro RTX 6000 instead, spending less and even saving some cash.

The extra memory would also help with larger projects.

Considering the good experience I've had working with the Quadro RTX 5000 (mobile), I would even consider the desktop Quadro RTX 5000 (to save even more cash overall).


Cheers 😉🍺
#399382
Hi Matteo,

Thank you for your detailed answer and the tests!

A dual Quadro RTX 5000 setup would then probably be a real alternative to the 3x 2080 Ti.

We are simply looking for a reliable system to replace our small in-house render farm (15 clients, iMacs and Mac minis) in the near future. We are still very convinced by the realism of Maxwell and would therefore like to continue working productively with it. When I look at the supported features of the GPU engine, though, there are unfortunately still a few missing properties that are essential for us (SSS, Thin SSS, Dispersion). Given the open technical questions (which graphics card is really suitable for production) and the missing features of the GPU engine, we are simply not convinced enough to convert our system and unfortunately have to postpone the decision.

Best,
Alex
#399383
Actually, I use GPU FIRE only for fast previews of the scene, to keep the texturing workflow under control.

For the final renders I'll use the 2990WX and, soon, the 3990X Threadripper.

Plus, by using multiple machines as nodes you can drastically reduce the rendering time of any kind of project.

As for the missing GPU rendering features, I'm more confident this time with Maxwell V5 than I was in the past with V4; we have to hope the developers will finally deliver a finished product to their customers.

But actually, I've seen little difference between CPU and GPU renders in the final result.


The difference between a Quadro and a normal RTX card is how much of a larger project's data you can load into GPU memory.

For sure, 3x RTX 2080 Ti generally have far superior raw rendering power than a single Quadro RTX 5000, 6000 or even 8000, but with complex scenes you could find yourself in a bottleneck situation where the project can't be loaded into GPU memory.

Memory doesn't stack.


But if you plan to purchase a 3990X, I doubt you would have any kind of problem with rendering times.

If you base your rendering workflow on the GPU, it would be better to focus on buying 2x Quadro RTX 5000 or 2x Quadro RTX 6000, but in that case downgrading the CPU to a less costly 3970X Threadripper.
