- Tue Feb 11, 2020 11:18 am
#399349
Hello everyone.
I have been testing Maxwell V5 extensively in demo mode, and I have some questions it would be nice to get an answer or some insight on.
I've tested V5 on a scene of 8.3 million polygons. My workstation is an Intel 990X Extreme running 2x GeForce 1080 cards with 8 GB of RAM each, for a total of 16 GB.
I understand it's not the latest machine.
That said, I also have a Redshift licence, which (I don't know if everyone is aware) uses out-of-core technology when sending data to the GPU cards; without it, the GPU memory would fill up much more quickly.
Now, needless to say, I was a bit disappointed that Maxwell V5 did not even start rendering this scene in GPU mode; it just shuts down, most likely because it ran out of memory on the GPU cards.
My main gripe is that I have never liked GPU engines in general, because I think they are a headache and an expensive way of doing things. It would be amazing if everything worked perfectly, but sadly the memory limitation is a showstopper, although I didn't think I would hit the memory limit on an 8.3-million-polygon scene with just a couple of 8K textures.
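For what it's worth, here is a rough back-of-envelope estimate of what that scene's raw data might occupy on the GPU. The per-vertex and per-texel sizes are assumptions for illustration (an indexed triangle mesh with 32-byte vertices and uncompressed 8-bit RGBA textures), not Maxwell's actual internal formats, and acceleration structures would add overhead on top:

```python
# Rough GPU memory estimate for an 8.3M-polygon scene with two 8K textures.
# Assumed layout (not Maxwell's real format): indexed triangle mesh with
# ~2 triangles per shared vertex, 32-byte vertices (position, normal, UV
# as 32-bit floats), and uncompressed 8-bit RGBA textures.

MiB = 1024 ** 2

def mesh_bytes(triangles, bytes_per_vertex=32, bytes_per_index=4):
    vertices = triangles / 2          # typical shared-vertex ratio
    indices = triangles * 3           # 3 indices per triangle
    return vertices * bytes_per_vertex + indices * bytes_per_index

def texture_bytes(width, height, channels=4, bytes_per_channel=1):
    return width * height * channels * bytes_per_channel

geometry = mesh_bytes(8_300_000)
textures = 2 * texture_bytes(8192, 8192)   # "a couple of 8K textures"

print(f"geometry : {geometry / MiB:,.0f} MiB")
print(f"textures : {textures / MiB:,.0f} MiB")
print(f"total    : {(geometry + textures) / MiB:,.0f} MiB")
```

Under those assumptions the raw data comes to well under 1 GiB, so even allowing a few times that for acceleration structures and buffers, an 8 GB card shouldn't be anywhere near full.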
So, back to my question: what about CPU mode?
I know it sounds odd to ask, but I would love to know whether Maxwell CPU is still being developed, fixed, and made faster, or whether all the effort is going into the GPU engine. Because frankly, as far as I can tell, unless you are rendering very small datasets, a GPU engine without out-of-core technology will pretty much always run out of memory, unless you spend a fortune linking multiple cards with lots of memory.
So is Maxwell CPU at its best, with no further improvement possible? Especially in terms of speed, feedback, and interactive FIRE (in a scene with 8 million polygons and textures, the FIRE preview is very slow).
While I love Maxwell's quality and the material manager (the quality is still superb and unmatched), I feel this engine is a huge step back in productivity and workflow speed.
Just to add that I've tested Maxwell 3.2 and Maxwell 4 in CPU mode, and it is basically the same. Actually, Maxwell 3.2 performs far better than V4 and V5 in voxelization, scene export, and the time between pressing Render in Maxwell Studio and Maxwell Render actually starting to render (I still have to understand why V4 and V5 are so slow compared to V3).
P.S. I just tried removing all the textures, using only a default grey Lambertian shader with a physical sky, and GPU rendering still doesn't start; it just crashes. The console says it used 3940 MB after voxelization, not even half the memory of a single one of my GPUs.
Something is definitely going wrong.