By Max
#399349
Hello everyone.
I have been testing Maxwell V5 extensively in demo mode and have some questions; it would be nice to get an answer or some insight.
I've tested V5 on a scene made of 8.3 million polygons. My workstation is an overclocked Intel 990X Extreme running 2x GeForce GTX 1080 cards with 8 GB of RAM each, for a total of 16 GB.
I understand it's not the latest machine.

That said, I also have a Redshift licence, which, in case anyone isn't aware, uses out-of-core technology when sending data to the GPU cards; without it, the GPU memory would fill up much more quickly.
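(In case the term is unfamiliar: out-of-core rendering keeps the full scene in host RAM or on disk and streams only device-sized slices to the GPU on demand. Below is a minimal, purely conceptual Python sketch of the idea; the names, sizes and structure are invented for illustration and this is not how Redshift actually implements it.)

```python
# Conceptual sketch of out-of-core processing: the full data set stays in
# host memory, and only slices that fit a fixed per-upload budget are ever
# "resident" at once. Everything here is a toy stand-in, not a real engine.
import numpy as np

CHUNK_BYTES = 64 * 1024**2   # stream the scene in 64 MB slices ("VRAM budget")

def render_pass(chunk: np.ndarray) -> float:
    """Stand-in for a GPU kernel; here it just reduces the slice."""
    return float(chunk.sum())

def render_out_of_core(scene: np.ndarray) -> float:
    """Visit the scene in device-sized slices instead of uploading it whole."""
    items_per_chunk = CHUNK_BYTES // scene.itemsize
    total = 0.0
    for start in range(0, scene.size, items_per_chunk):
        chunk = scene[start:start + items_per_chunk]  # "upload" one slice
        total += render_pass(chunk)                   # run the "kernel"
    return total                                      # slice is then evicted

# A data set several times larger than the per-upload budget still renders:
scene = np.ones(5 * CHUNK_BYTES // 4, dtype=np.float32)  # 5 chunks of float32
print(render_out_of_core(scene))
```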

Now, needless to say, I was a bit disappointed that Maxwell V5 did not even start rendering this scene; it just shuts down (in GPU mode), most likely because it ran out of memory on the GPU cards.

My main gripe is that I have never liked GPU engines in general, because I think they are more of a headache and an expensive way of doing things. It would be amazing if everything worked perfectly, but sadly the memory limitation is a showstopper, although I didn't think I would hit the memory limit on an 8.3 million polygon scene with just a couple of 8K textures.
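For a sense of scale, here is a rough back-of-envelope estimate. It assumes uncompressed RGBA8 storage for the 8K textures and an assumed ~100 bytes per triangle for vertex data plus acceleration-structure overhead; Maxwell's real internal footprint is not documented here, so treat the numbers as an order-of-magnitude sketch only.

```python
# Back-of-envelope GPU memory estimate for the scene described above.
# bytes_per_triangle is an assumption (positions + normals + UVs + BVH
# overhead); Maxwell's real acceleration structures may differ a lot.
polys = 8_300_000
bytes_per_triangle = 100                 # assumed per-triangle footprint
geometry = polys * bytes_per_triangle

tex_count = 2
texture = tex_count * 8192 * 8192 * 4    # two 8K RGBA8 textures, uncompressed

total_gb = (geometry + texture) / 1024**3
print(f"geometry ≈ {geometry / 1024**3:.2f} GB, "
      f"textures ≈ {texture / 1024**3:.2f} GB, total ≈ {total_gb:.2f} GB")
# → roughly 0.77 + 0.50 ≈ 1.27 GB, comfortably under an 8 GB card,
#   which is why the out-of-memory failure is surprising.
```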

So, back to my question: what about CPU mode?
I know I sound weird asking this, but I would love to understand whether Maxwell CPU is still being developed, maintained, fixed and made faster, or whether all the effort is going into this GPU system. Because frankly, as far as I can tell, unless you are rendering very small data sets, a GPU engine without out-of-core technology will pretty much run out of memory, unless you spend a fortune linking multiple cards with lots of memory.

So is Maxwell CPU at its best and unable to be improved further, especially in terms of speed, feedback and interactive FIRE? (In a scene with 8 million polygons and textures, the FIRE preview is very slow.)

While I love Maxwell's quality and the material manager, and I think the quality is still superb and unmatched, I just feel this engine is a huge step back in productivity and workflow speed.

Just to add that I've tested Maxwell 3.2 and Maxwell 4 in CPU mode and it is basically the same. Actually, Maxwell 3.2 performs far better than V4 and V5 on voxelization, scene export, and the time spent between pressing Render in Maxwell Studio and Maxwell Render actually starting to render (I still have to understand why V4 and V5 are so slow compared to V3).


P.S. I just tried removing all the textures, using just a default grey Lambertian shader and a physical sky, and GPU rendering still doesn't start; it just crashes. The console says it used 3940 MB after voxelization, not even half of a single GPU I have.
Something is definitely going wrong.
By Mark Bell
#399351
Just a stab in the dark here: have you tried changing the texture resolution to, say, 256x256 in the File > Preferences > Viewport Rendering tab? When we set it higher, the file would crash in GPU mode, but not when it was set back to 256x256.

I sort of agree with your other comment about not giving up on CPU development until hardware is cheap enough to fill the box with enough GPUs to make it affordable. It may be that a combination of both works best, and Maxwell needs to do a hardware check first to know what percentage of GPU/CPU to use in renders for optimum quality AND speed, which would be different for each user's setup.
By Max
#399353
Mark Bell wrote:
Tue Feb 11, 2020 11:58 am
Just a stab in the dark here: have you tried changing the texture resolution to, say, 256x256 in the File > Preferences > Viewport Rendering tab? When we set it higher, the file would crash in GPU mode, but not when it was set back to 256x256.

I sort of agree with your other comment about not giving up on CPU development until hardware is cheap enough to fill the box with enough GPUs to make it affordable. It may be that a combination of both works best, and Maxwell needs to do a hardware check first to know what percentage of GPU/CPU to use in renders for optimum quality AND speed, which would be different for each user's setup.
I've removed every texture in the scene; it's just a flat Lambertian shader.
I've located the issue: it's the curtains, which apparently have too many polygons, even though Redshift and every other render engine renders them on the fly just fine.
Maxwell CPU mode also renders them fine. It's really weird, because I do not exceed the memory limit.
Feels like every time I give Maxwell Render a chance, I'm a beta tester, lol.
By CDRDA
#399354
Max wrote:
I've tested V5 on a scene made of 8.3 million polygons. My workstation is an overclocked Intel 990X Extreme running 2x GeForce GTX 1080 cards with 8 GB of RAM each, for a total of 16 GB.
With non-Quadro cards, I'm pretty sure the memory available is that of the GPU with the lowest amount of memory, not the sum; therefore you only have 8 GB available. This certainly used to be the case.

I have had similar issues on my RTX 2080 Ti with 11 GB. I can render a 1200x800 image with no problem, with a few instanced reference trees and a couple of 8K textures, but when I up it to 2400x1600 the GPU engine says no way. No chance of a high-res final visual. Hopefully with the new update I can at least render out my draft images on the GPU, because the speed improvements over CPU are impressive.
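A plausible explanation, sketched below, is that the pixel count quadruples when you double each dimension, and a progressive engine holds several full-resolution float buffers per pixel on the device. The buffer and channel counts in this estimate are assumptions, not Maxwell's documented internals.

```python
# Why 1200x800 fits but 2400x1600 may not: pixel count quadruples, and the
# engine keeps several full-float image buffers. The buffer count here
# (beauty + accumulation + a few render channels) is an assumption.
def framebuffer_gb(width, height, buffers=6, channels=4, bytes_per_channel=4):
    return width * height * buffers * channels * bytes_per_channel / 1024**3

for w, h in [(1200, 800), (2400, 1600)]:
    print(f"{w}x{h}: ~{framebuffer_gb(w, h):.2f} GB of image buffers")
# 1200x800  -> ~0.09 GB
# 2400x1600 -> ~0.34 GB (4x), on top of geometry, textures and the
#              intermediate working memory, which also grows with resolution
```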
By Max
#399355
CDRDA wrote:
Tue Feb 11, 2020 1:23 pm
I've tested V5 on a scene made of 8.3 million polygons. My workstation is an overclocked Intel 990X Extreme running 2x GeForce GTX 1080 cards with 8 GB of RAM each, for a total of 16 GB.
With non-Quadro cards, I'm pretty sure the memory available is that of the GPU with the lowest amount of memory, not the sum; therefore you only have 8 GB available. This certainly used to be the case.

I have had similar issues on my RTX 2080 Ti with 11 GB. I can render a 1200x800 image with no problem, with a few instanced reference trees and a couple of 8K textures, but when I up it to 2400x1600 the GPU engine says no way. No chance of a high-res final visual. Hopefully with the new update I can at least render out my draft images on the GPU, because the speed improvements over CPU are impressive.
Even if that's the case and only one GPU's memory counts, the amount used after voxelization (according to the Maxwell console) is 3980 MB, so it shouldn't crash at all.
#399356
The memory shown after voxelization is not the total amount. It does not include textures, the image buffers, or the intermediate memory needed during render. The final image resolution is another thing to take into account.

The memory does not add up on multi-GPU setups, on any card type. It works more like a network render, with each card acting as a node.
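(This "each card is a node" behaviour is easy to verify yourself: querying each device individually shows separate memory pools that are never summed. A small sketch using pynvml, NVIDIA's NVML Python bindings, installable as nvidia-ml-py:)

```python
# Each GPU reports its own memory pool; nothing is pooled or summed across
# cards. A scene must fit each card it renders on: two 8 GB cards do not
# behave like one 16 GB card.
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        if isinstance(name, bytes):          # older pynvml returns bytes
            name = name.decode()
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        print(f"GPU {i} ({name}): {mem.free / 1024**2:.0f} MiB free "
              f"of {mem.total / 1024**2:.0f} MiB")
finally:
    pynvml.nvmlShutdown()
```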

Anyway, memory itself is not the only barrier; the engine can give errors on the number of vertices even when they would otherwise fit. I test with a 32 GB card and have scenes failing too. The good news is that we are addressing all of these problems in the mid term (I mean, we have people working on this; not for 5.1, but we hope to improve the behaviour in some 5.x release).
#399357
Hello everybody,

That's not good news. We just bought three Maxwell V5 licenses (as upgrades) and then wanted to purchase the necessary hardware (Threadripper 3990X + 3x 2080 Ti). If I am interpreting your posts correctly, we will not be able to produce professional renderings (at the necessary resolution) with this hardware. Is there already an official statement from Maxwell about your experience?

Best,
Alex
#399358
Many of our marketing images are done with the GPU engine, some of them even on an RTX 2060. We are using Maxwell GPU in production on other internal products.

Memory is a bottleneck, as noted in the requirements:
https://nextlimitsupport.atlassian.net/ ... GPU+engine

And we are actively working on improving this right now. We are trying to move to shorter dev cycles, so while "after 5.1" can read as very far away, it may be sooner than expected.
By dmeyer
#399361
CDRDA wrote:
Tue Feb 11, 2020 1:23 pm
I've tested V5 on a scene made of 8.3 million polygons. My workstation is an overclocked Intel 990X Extreme running 2x GeForce GTX 1080 cards with 8 GB of RAM each, for a total of 16 GB.
With non-Quadro cards, I'm pretty sure the memory available is that of the GPU with the lowest amount of memory, not the sum; therefore you only have 8 GB available. This certainly used to be the case.

I have had similar issues on my RTX 2080 Ti with 11 GB. I can render a 1200x800 image with no problem, with a few instanced reference trees and a couple of 8K textures, but when I up it to 2400x1600 the GPU engine says no way. No chance of a high-res final visual. Hopefully with the new update I can at least render out my draft images on the GPU, because the speed improvements over CPU are impressive.
It isn't really a Quadro vs. GeForce limitation. The GPU memory limit is what it is, unless the application explicitly supports out-of-core memory and/or NVLink; to my knowledge, Maxwell GPU does not support either of these yet.

You can monitor total GPU memory usage with GPU-Z. The Maxwell console would not be a good indicator of total GPU memory usage, as other applications may be claiming some of that memory.
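(GPU-Z is Windows-only; on any platform with NVIDIA drivers you can get a similar running picture by polling nvidia-smi. A small sketch follows; the query flags are standard nvidia-smi options, and the once-per-second loop is just an arbitrary choice:)

```python
# Minimal sketch: poll nvidia-smi once a second to watch GPU memory while a
# render runs. The parsing assumes nvidia-smi's default CSV output format.
import subprocess
import time

QUERY = ["nvidia-smi",
         "--query-gpu=index,memory.used,memory.total",
         "--format=csv,noheader,nounits"]

def sample():
    out = subprocess.run(QUERY, capture_output=True, text=True, check=True)
    for line in out.stdout.strip().splitlines():
        idx, used, total = (field.strip() for field in line.split(","))
        print(f"GPU {idx}: {used} MiB / {total} MiB")

if __name__ == "__main__":
    while True:
        sample()
        time.sleep(1.0)
```

Watching this while Maxwell voxelizes and starts sampling would show whether another process is already holding memory at the moment the engine aborts.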