When I was using Maxwell, my nodes always had the same amount of RAM as my main workstation. When a job takes 20GB of memory, you can't expect it to take less on a node.
Personally, I think Maxwell needs much better memory optimization (and a lot of other improvements...)
Just an example:
A simple sphere with 400 polys, subdivided 8 times so that it ends up at 51,118,080 polys, takes nearly 20GB of memory,
not to mention the insanely long pre-processing times.
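For a rough sanity check of where that poly count comes from: assuming each subdivision level splits every quad into 4 (Catmull-Clark style), and that the mesh gets triangulated at the end, you land in the same ballpark as the log. This is just my estimate of the growth, not Maxwell's actual algorithm:

```python
# Face-count growth under uniform 1-to-4 subdivision (an assumption,
# not necessarily what Maxwell's SubdivisionModifier does internally).
def subdivided_faces(base_faces: int, levels: int) -> int:
    """Face count after `levels` uniform 1-to-4 subdivisions."""
    return base_faces * 4 ** levels

quads = subdivided_faces(400, 8)  # 400 * 4^8 = 26,214,400 quads
tris = quads * 2                  # roughly 2 triangles per quad if triangulated
print(quads, tris)                # 26214400 52428800
```

~52M triangles is close to the 51,118,080 the log reports, so the growth itself is expected; the question is why it costs so much memory and time.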
Code:
[06/March/2020 19:47:55] Memory used before preprocessing: 1139 Mb
[06/March/2020 19:47:56] [Extension SubdivisionModifier] Executing Mesh Modifier Extension :"pSphere1_1"
[06/March/2020 19:54:36] Memory used after preprocessing: 14145 Mb
- 6m41s just for the subdivision
[06/March/2020 19:54:36] Starting voxelization.
[06/March/2020 19:57:59] Maximum Memory used during voxelization: 22120 Mb
[06/March/2020 19:57:59] Voxelization done.
[06/March/2020 19:57:59] Memory used after voxelization: 20271 Mb
[06/March/2020 19:58:00] Start Rendering
- 3m24s just for voxelization
So that's 10 minutes of waiting before the first pixel appears on screen.
To put this in perspective: in another engine the same scene took only a few seconds to prepare, and memory usage was 2,800 MB.
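Put another way, here is a back-of-the-envelope cost per polygon, assuming the log's "Mb" values are MiB and using the 51,118,080 poly count from above (my arithmetic, not anything the engines report):

```python
# Rough memory cost per polygon, assuming logged "Mb" means MiB.
POLYS = 51_118_080  # subdivided poly count from the log

def bytes_per_poly(mem_mib: float, polys: int) -> float:
    """Average bytes of memory per polygon."""
    return mem_mib * 1024 * 1024 / polys

maxwell = bytes_per_poly(20271, POLYS)  # memory after voxelization
other = bytes_per_poly(2800, POLYS)     # the other engine
print(round(maxwell), round(other))     # roughly 416 vs 57 bytes per poly
```

Roughly 400+ bytes per polygon versus under 60 in the other engine, which is why the same scene fits comfortably in one case and blows past 20GB in the other.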
I know that Maxwell stores geometry in voxels, so it's probably not entirely fair to compare it with a different engine. But it's 2020, and there hasn't been any major improvement to the CPU engine since V4 (remember when V4 was released and was slower than V3? Yeah, that was fixed later...)